Since KernelSU Manager can now be built for 32-bit, there is now this
problematic setup where userspace is 32-bit (armeabi-v7a) and the kernel is
64-bit (aarch64).
On 64-bit kernels with CONFIG_COMPAT=y, 32-bit userspace passes 32-bit pointers.
Without proper casting, these values are interpreted as 64-bit pointers, which
results in invalid or near-null memory accesses.
This patch adds proper compat-mode handling with the following changes:
- introduce a dedicated struct (`sepol_compat_data`) using u32 fields
- use `compat_ptr()` to safely convert 32-bit user pointers to kernel pointers
- add a runtime `ksu_is_compat` flag to dynamically select between struct layouts
This prevents a near-null pointer dereference when handling SELinux
policy updates from 32-bit ksud in a 64-bit kernel.
Truth table:
kernel 32 + ksud 32: struct is u32, no compat_ptr
kernel 64 + ksud 32: struct is u32, use compat_ptr
kernel 64 + ksud 64: struct is u64, no compat_ptr
Preprocessor check:
64BIT=y, COMPAT=y: define both structs, select dynamically at runtime
64BIT=y, COMPAT=n: u64 struct only
64BIT=n: u32 struct only
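Roughly, the runtime selection on a 64-bit kernel looks like the sketch below.
Only `sepol_compat_data`, `compat_ptr()` and `ksu_is_compat` come from this
patch; the 64-bit struct name, the handler name and the field names are
placeholders for illustration.

    #include <linux/compat.h>
    #include <linux/types.h>
    #include <linux/uaccess.h>

    struct sepol_data {            /* placeholder for the existing u64 layout */
            u32 cmd;
            u64 ptr;               /* 64-bit ksud passes a 64-bit user pointer */
    };

    #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
    struct sepol_compat_data {
            u32 cmd;
            u32 ptr;               /* 32-bit ksud passes a 32-bit user pointer */
    };

    static bool ksu_is_compat;     /* set at runtime when the caller is 32-bit */
    #endif

    static int handle_sepol(void __user *uarg)
    {
            void __user *p;

    #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
            if (ksu_is_compat) {
                    struct sepol_compat_data data;

                    if (copy_from_user(&data, uarg, sizeof(data)))
                            return -EFAULT;
                    p = compat_ptr(data.ptr);  /* safe 32->64 pointer widening */
            } else
    #endif
            {
                    struct sepol_data data;

                    if (copy_from_user(&data, uarg, sizeof(data)))
                            return -EFAULT;
                    p = (void __user *)(unsigned long)data.ptr;
            }

            /* p is now a usable __user pointer in either mode */
            return p ? 0 : -EINVAL;
    }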
Co-authored-by: backslashxx <118538522+backslashxx@users.noreply.github.com>
Signed-off-by: backslashxx <118538522+backslashxx@users.noreply.github.com>
This migrates ksud execution decision-making to bprm_check_security.
This requires passing proper argv and envp to a modified _ksud handler
aptly named 'ksu_handle_bprm_ksud'.
Introduces:
int ksu_handle_bprm_ksud(const char *filename, const char *argv1,
const char *envp, size_t envp_len)
which is adapted from:
int ksu_handle_execveat_ksud(int *fd, struct filename **filename_ptr,
struct user_arg_ptr *argv,
struct user_arg_ptr *envp,
int *flags)
ksu_handle_bprm_ksud handles all the decision making: it decides when it is
time to apply_kernelsu_rules, depending on whether it sees "second_stage".
For the LSM hook, it turns out we can pull argv and envp out of mm_struct;
the code here explains how to do it.
The whole argv blob lives between arg_start and arg_end, so we just pull it
out and grab the next entry after the first null terminator (see the sketch
below). As for envp, we pass the pointer and only hunt through it when needed.
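A rough sketch of that extraction, with illustrative names and a plain
copy_from_user (the retry helper is introduced further down); it assumes
arg_start/arg_end are already populated for the mm being inspected:

    #include <linux/kernel.h>
    #include <linux/mm_types.h>
    #include <linux/uaccess.h>

    /* argv lives as one NUL-separated blob in [arg_start, arg_end) */
    static const char *find_argv1(struct mm_struct *mm, char *buf, size_t buflen)
    {
            size_t len, i;

            if (!mm || mm->arg_end <= mm->arg_start)
                    return NULL;

            len = min_t(size_t, mm->arg_end - mm->arg_start, buflen - 1);
            if (copy_from_user(buf, (const void __user *)mm->arg_start, len))
                    return NULL;
            buf[len] = '\0';

            /* argv[0] starts at arg_start; argv[1] follows the first '\0' */
            for (i = 0; i + 1 < len; i++)
                    if (buf[i] == '\0')
                            return buf + i + 1;

            return NULL;    /* no argv[1] present */
    }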
My reasoning for adding a fallback on the usercopy is that on some devices a
fault happens and garbled data gets copied. While writing this, I actually had
to wrap that _nofault copy in a spinlock as a way to mimic
preempt_disable/enable without actually doing it. Per user reports there are no
failed _nofault copies anyway, but we have to have a fallback for resilience.
References:
- old version1 6efcd8193e
- old version2 37d5938e66
- bad usercopy #21
This now provides a small helper function, ksu_copy_from_user_retry, which is
self-explanatory: first we attempt a _nofault copy, and if that fails we retry
with a plain copy.
Along with that, it also provides an inlined copy_from_user_nofault for
kernels < 5.8.
Using strncpy_from_user_nofault was considered, but it won't do, since it only
copies up to the first \0.
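A minimal sketch of that helper, assuming the >= 5.8 copy_from_user_nofault
(older kernels would use the inlined equivalent mentioned above):

    #include <linux/uaccess.h>

    static long ksu_copy_from_user_retry(void *to, const void __user *from,
                                         unsigned long count)
    {
            /* fast path: non-faulting copy */
            long ret = copy_from_user_nofault(to, from, count);

            if (likely(!ret))
                    return 0;

            /* the _nofault copy failed on this device; retry with a plain copy,
             * which returns the number of bytes not copied (0 on success) */
            return copy_from_user(to, from, count);
    }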
devlog:
16e5dce9e7...16c1f5f52128642e60d7...728de0c571
References:
https://elixir.bootlin.com/linux/v4.14.1/source/include/linux/mm_types.h#L429
https://elixir.bootlin.com/linux/v4.14.1/source/include/linux/lsm_hooks.h
Stale: https://github.com/tiann/KernelSU/pull/2653
Signed-off-by: backslashxx <118538522+backslashxx@users.noreply.github.com>
* On newer kernels, for some reason -Wno-strict-prototypes still does not
  silence the errors or warnings.
* To fix it, we just need to add a void parameter type.
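For illustration (the function name is just a placeholder):

    int ksu_example_fn();     /* -Wstrict-prototypes: declaration is not a prototype */
    int ksu_example_fn(void); /* fixed: explicit void parameter list */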
Signed-off-by: rsuntk <rsuntk@yukiprjkt.my.id>
* The coding format is too messy; reformat to improve readability
and get closer to the Linux kernel coding style.
* While at it, update .clang-format file to linux-mainline state.
context: this is known to many as the `selinux hook` or `4.9 hook`.
Add an is_ksu_transition check, which allows ksud execution under nosuid.
It also eases integration on 3.x kernels that do not have check_nnp_nosuid.
This also adds a `ksu_execveat_hook` check, since this transition is no longer
needed once ksud has run.
Usage:
if (check_ksu_transition(old_tsec, new_tsec))
return 0;
on either check_nnp_nosuid or selinux_bprm_set_creds (after execve sid reset)
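A hypothetical sketch of what that check could look like; the sid comparisons
and the helper names (is_init_sid, is_ksu_domain_sid) are assumptions, only the
ksu_execveat_hook gating and the old/new tsec arguments come from this change:

    static bool check_ksu_transition(const struct task_security_struct *old_tsec,
                                     const struct task_security_struct *new_tsec)
    {
            /* once ksud has run, the transition is no longer needed */
            if (!ksu_execveat_hook)
                    return false;

            /* only allow the init-domain -> ksu-domain transition under nosuid */
            return is_init_sid(old_tsec->sid) && is_ksu_domain_sid(new_tsec->sid);
    }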
reference: dfe003c9fd
taken from:
`allow init exec ksud under nosuid`
- 3df9df42a6
- https://github.com/tiann/KernelSU/pull/166#issue-1565872173
Signed-off-by: backslashxx <118538522+backslashxx@users.noreply.github.com>
Signed-off-by: rsuntk <rsuntk@yukiprjkt.my.id>
1. Replace `do_execveat_common` with `sys_execve` and `sys_execveat`
2. Replace `input_handle_event` with `input_event` and
`input_inject_event`
Tested on android12-5.10-2024-04, android13-5.15-2024-04 and
android14-6.1-2024-04.
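For context, hooking sys_execve with a kprobe looks roughly like the sketch
below; the handler is an empty placeholder, and on some architectures the
exported symbol carries a prefix (e.g. __arm64_sys_execve):

    #include <linux/kprobes.h>
    #include <linux/module.h>

    static int execve_pre(struct kprobe *p, struct pt_regs *regs)
    {
            /* inspect the execve arguments from regs here */
            return 0;
    }

    static struct kprobe execve_kp = {
            .symbol_name = "sys_execve",
            .pre_handler = execve_pre,
    };

    static int __init execve_kp_init(void)
    {
            return register_kprobe(&execve_kp);
    }

    static void __exit execve_kp_exit(void)
    {
            unregister_kprobe(&execve_kp);
    }

    module_init(execve_kp_init);
    module_exit(execve_kp_exit);
    MODULE_LICENSE("GPL");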
input-event-codes.h:
Input: add input-event-codes header file
(f902dd8934)
This landed in 4.4-rc, so 4.4.0 and above have it; older kernels do not.
aio.h:
fs: move struct kiocb to fs.h
(e2e40f2c1e)
Below that version we need to explicitly include aio.h for struct kiocb.
This landed in 4.1-rc, so 4.0 and below should do the include.
uaccess.h and sched.h have been around for a long time, but 4.10 split things
out into include/sched/, and the current ifdef does not include uaccess.h for
versions lower than 4.4. Fix it.
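Put together, the version gating looks roughly like this (the cutoffs follow
the text above; header paths are the standard in-tree ones):

    #include <linux/version.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
    #include <uapi/linux/input-event-codes.h>  /* added in 4.4-rc */
    #else
    #include <uapi/linux/input.h>              /* older kernels keep the codes here */
    #endif

    #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 1, 0)
    #include <linux/aio.h>  /* struct kiocb lived here before moving to fs.h */
    #endif

    /* always pulled in explicitly, including on kernels older than 4.4 */
    #include <linux/uaccess.h>
    #include <linux/sched.h>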
Basic support for the case where init_task.mnt_ns != zygote.mnt_ns (WSA):
just copy the nsproxy and fs pointers to solve #276.
Note that the copy in `apk_sign.c` is not required, but it is suggested for
security (it ensures the checked mnt_ns is the namespace Android is actually
running in, not one created by a user, although many distributions do not have
user namespaces).
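A very rough sketch of the pointer copy: temporarily switch current's nsproxy
and fs to those of the namespace path lookups should happen in, then restore.
Where those pointers come from, reference counting, and locking are all left
out here (the lack of locking is flagged as a review item below).

    static void with_borrowed_ns(struct nsproxy *ns, struct fs_struct *fs,
                                 void (*work)(void))
    {
            struct nsproxy *saved_ns = current->nsproxy;
            struct fs_struct *saved_fs = current->fs;

            current->nsproxy = ns;
            current->fs = fs;

            work();         /* e.g. the check in apk_sign.c */

            current->nsproxy = saved_ns;
            current->fs = saved_fs;
    }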
Tested with the latest release on Win10 19045.3086 (with WSAPatch).
Further review required for:
- [x] Security of this operation (without locking).
- [x] The impact of these modifications on other Android distributions.
Hi @tiann.
Thanks for the great project, I had great fun playing around with it.
This PR mainly tries to further minimize the possible delays caused by
KernelSU hooking.
There are 3 major changes:
- Processes with 0 < UID < 2000 are blocked straight-up before going
through the allow_list.
I don't see any need for such processes to be interested in root, and
this allows returning early before going through a more expensive
lookup.
If there's an expected breakage due to this change, I'll remove it. Let
me know.
- A page-sized (4K) bitmap is added.
This allows O(1) lookup for UID <= 32767.
This speeds up `ksu_is_allow_uid()` by about 4.8x at the cost of 4K of
memory. IMHO, a good trade-off.
Most notably, this brings the 99.999th-percentile result down from a worrying
milliseconds scale to a microseconds scale.
For UID > 32767, another page-sized (4K) sequential array is used to
cache allow_list.
Compared to the previous PR #557, this new approach gives another nice
~25% performance boost on average, and a 63-96% boost in the worst cases.
Benchmark results are available at
https://docs.google.com/spreadsheets/d/1w_tO1zRLPNMFRer49pL1TQfL6ndEhilRrDU1XFIcWXY/edit?usp=sharing
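A minimal sketch of the lookup described above, with illustrative names (the
real function is `ksu_is_allow_uid()`):

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define BITMAP_UID_MAX          32767
    #define HIGH_UID_CACHE_ENTRIES  1024    /* 4 KiB of u32 entries */

    static DECLARE_BITMAP(allow_bitmap, BITMAP_UID_MAX + 1);  /* 4 KiB */
    static u32 high_uid_cache[HIGH_UID_CACHE_ENTRIES];
    static unsigned int high_uid_count;

    static bool example_is_allow_uid(uid_t uid)
    {
            unsigned int i;

            /* processes with 0 < UID < 2000 never get root */
            if (uid > 0 && uid < 2000)
                    return false;

            /* O(1) bitmap lookup for the common range */
            if (uid <= BITMAP_UID_MAX)
                    return test_bit(uid, allow_bitmap);

            /* higher UIDs fall back to the sequential cache */
            for (i = 0; i < high_uid_count; i++)
                    if (high_uid_cache[i] == uid)
                            return true;

            return false;
    }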
Thanks!
---------
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>