unicode: Support characters beyond the first unicode plane

We used 16-bit variables to store the 'character code' everywhere, but
this won't let us represent anything beyond U+FFFF.

This patch changes those variables to a custom type that can be 32 or 16
bits wide depending on the build, and adjusts numerous internal APIs and
data structures to match.  This includes:

 * utf8decode() and friends
 * font manipulation, caching, rendering, and generation
 * on-screen keyboard
 * FAT filesystem (parsing and generating utf16 LFNs)
 * WIN32 simulator platform code
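The custom type is `ucschar_t` (visible in the diff below). A sketch of how such a build-selectable type might be defined; the `UCS_WIDE` configuration macro here is a hypothetical stand-in, not necessarily what the build system actually uses:

```c
#include <stdint.h>

/* Hypothetical config macro; the real build flag may be named differently. */
#ifdef UCS_WIDE
typedef uint32_t ucschar_t;   /* full Unicode range, up to U+10FFFF */
#else
typedef uint16_t ucschar_t;   /* BMP only, U+0000..U+FFFF */
#endif
```

Keeping the 16-bit variant as an option matters on memory-constrained targets, where doubling the width of every cached glyph code would be wasteful.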

Note that this patch doesn't _enable_ >16-bit unicode support; a follow-up
patch will turn that on for appropriate targets.

Appears to work on:

  * hosted Linux, native targets, and the Linux simulator, in both 16- and 32-bit modes

Needs testing on:

  * Windows and macOS simulators (16-bit and 32-bit)

Change-Id: Iba111b27d2433019b6bff937cf1ebd2c4353a0e8
Solomon Peachy 2024-12-17 08:55:21 -05:00
parent 2a88253426
commit a2c10f6189
44 changed files with 476 additions and 330 deletions

@@ -82,7 +82,7 @@ static void kdb_init(void)
     sleep(HZ/10);
 }
-int kbd_input(char* text, int buflen, unsigned short *kbd)
+int kbd_input(char* text, int buflen, ucschar_t *kbd)
 {
     (void)kbd;
     JNIEnv e = *env_ptr;
@@ -107,7 +107,7 @@ int kbd_input(char* text, int buflen, unsigned short *kbd)
     e->DeleteLocalRef(env_ptr, str);
     e->DeleteLocalRef(env_ptr, ok_text);
     e->DeleteLocalRef(env_ptr, cancel_text);
     return !accepted; /* return 0 on success */
 }