forked from len0rd/rockbox
Base scheduler queues off linked lists and do cleanup/consolidation
Abstracts threading from itself a bit and changes the way its queues are handled, with type hiding for those as well. A lot happens here because major brain surgery was already required. Threads may now be on a run queue and a wait queue simultaneously, so an expired timer only has to wake the thread, not remove it from the wait queue, which simplifies the implicit wake handling. List formats change for wait queues: doubly-linked, not circular. The timeout queue is now singly-linked. The run queue is still circular, as before. Adds a better thread slot allocator that can keep a slot marked as used regardless of the thread state, which assists in dumping the special tasks that switch_thread was asked to perform (blocking tasks). Deletes a lot of code, yet surprisingly ends up larger than expected. I'm not minding that for the time being: omelettes and breaking a few eggs and all that.

Change-Id: I0834d7bb16b2aecb2f63b58886eeda6ae4f29d59
Parent: eb63d8b4a2
Commit: 6ed00870ab
20 changed files with 1550 additions and 2057 deletions
@@ -26,10 +26,10 @@
 struct semaphore
 {
-    struct thread_entry *queue;    /* Waiter list */
+    struct __wait_queue queue;     /* Waiter list */
     int volatile count;            /* # of waits remaining before unsignaled */
     int max;                       /* maximum # of waits to remain signaled */
     IF_COP( struct corelock cl; )  /* multiprocessor sync */
 };

 extern void semaphore_init(struct semaphore *s, int max, int start);