Hi @tiann. Thanks for the great project, I had great fun playing around with it.

This PR mainly tries to further minimize the possible delays caused by KernelSU hooking. There are three major changes:

- Processes with 0 < UID < 2000 are rejected outright, before the allow_list is consulted. I don't see any need for such processes to be interested in root, and this allows returning early instead of going through a more expensive lookup. If this change is expected to break anything, let me know and I'll remove it.
- A page-sized (4 KiB) bitmap is added, giving O(1) lookup for UID <= 32767. This speeds up `ksu_is_allow_uid()` by about 4.8x at the cost of 4 KiB of memory; IMHO, a good trade-off. Most notably, it brings the previous 99.999th-percentile latency down from a worrying milliseconds scale to a microseconds scale.
- For UID > 32767, another page-sized (4 KiB) sequential array is used to cache the allow_list.

Compared to the previous PR #557, this new approach gives another nice 25% performance boost on average, and a 63-96% boost in worst cases. Benchmark results are available at https://docs.google.com/spreadsheets/d/1w_tO1zRLPNMFRer49pL1TQfL6ndEhilRrDU1XFIcWXY/edit?usp=sharing

Thanks!

---------

Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
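For illustration, here is a minimal userspace sketch of the lookup scheme described above: a 4 KiB bitmap covering UIDs 0..32767 for the O(1) fast path, a page-sized sequential array as the fallback cache for larger UIDs, and the early rejection of 0 < UID < 2000. All names here (`allow_uid`, `is_allow_uid`, the size macros) are hypothetical and do not claim to match the actual KernelSU implementation, which also needs locking and kernel allocation APIs.

```c
#include <stdbool.h>
#include <stdint.h>

/* One 4 KiB page = 4096 * 8 = 32768 bits, covering UIDs 0..32767. */
#define BITMAP_UID_MAX 32768u
/* One 4 KiB page of 32-bit UIDs for the overflow cache. */
#define OVERFLOW_CACHE_MAX 1024

static uint8_t allow_bitmap[BITMAP_UID_MAX / 8]; /* O(1) fast path */
static uint32_t overflow_uids[OVERFLOW_CACHE_MAX]; /* UIDs > 32767 */
static int overflow_count;

/* Record a UID as allowed (sketch; no locking or persistence). */
static void allow_uid(uint32_t uid)
{
	if (uid < BITMAP_UID_MAX)
		allow_bitmap[uid / 8] |= (uint8_t)(1u << (uid % 8));
	else if (overflow_count < OVERFLOW_CACHE_MAX)
		overflow_uids[overflow_count++] = uid;
}

static bool is_allow_uid(uint32_t uid)
{
	/* Early exit: system processes (0 < UID < 2000) are rejected
	 * before any allow-list lookup. */
	if (uid > 0 && uid < 2000)
		return false;

	/* Fast path: single bit test for UID <= 32767. */
	if (uid < BITMAP_UID_MAX)
		return allow_bitmap[uid / 8] & (1u << (uid % 8));

	/* Slow path: linear scan of the page-sized overflow cache. */
	for (int i = 0; i < overflow_count; i++)
		if (overflow_uids[i] == uid)
			return true;
	return false;
}
```

The bitmap trades a fixed 4 KiB for constant-time lookups over the UID range where virtually all Android app UIDs (10000+) fall; only the rare UID above 32767 pays for the linear scan.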
28 lines
677 B
C
#ifndef __KSU_H_ALLOWLIST
#define __KSU_H_ALLOWLIST

#include "linux/types.h"
#include "ksu.h"

void ksu_allowlist_init(void);
void ksu_allowlist_exit(void);

bool ksu_load_allow_list(void);
void ksu_show_allow_list(void);

bool __ksu_is_allow_uid(uid_t uid);
#define ksu_is_allow_uid(uid) unlikely(__ksu_is_allow_uid(uid))

bool ksu_get_allow_list(int *array, int *length, bool allow);

void ksu_prune_allowlist(bool (*is_uid_exist)(uid_t, void *), void *data);

bool ksu_get_app_profile(struct app_profile *);
bool ksu_set_app_profile(struct app_profile *, bool persist);

bool ksu_uid_should_umount(uid_t uid);
struct root_profile *ksu_get_root_profile(uid_t uid);

#endif