Operating Systems Tutoring | ECE344 Operating System Quiz 3 Review Session

Copyright Note

This booklet is only accessible to a restricted audience. It was made by our tutors, and the process was not easy. Help us protect the copyright and do NOT share this booklet with anyone else. Each booklet we distribute carries a unique mark that can be traced back to the individual who received it. Anyone who distributes the booklet without the consent of our tutor is considered to be violating copyright and may face legal consequences.

Scheduler

Recall: What is a Scheduler?

The part of the kernel that decides which thread/process to run next.

Metrics: Batch System vs. Interactive System

Ø CPU Utilization: the percentage of time that the CPU is busy running jobs.

Ø Throughput: the number of jobs completed per unit of time.

Ø Turnaround Time: the total time from a job's arrival to its completion.

?!”#$%#&”$’ = ?#”$$($) + ?*%!($)

Ø Response Time: the time between when a request is submitted and when the first response is produced.
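For example (illustrative numbers, not from the course material): if a job arrives at t = 0, waits 4 ms in the ready queue, and then runs for 6 ms, its turnaround time is T_turnaround = T_running + T_waiting = 6 + 4 = 10 ms, and its response time is 4 ms (assuming the first visible response happens as soon as it starts running).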

Batch System Scheduling

Ø First-in first-out (FIFO): maintain a regular queue in arrival order.

Ø When we need to choose a job to run, always choose the one at the head of the queue.

Ø A running job will not stop until it is accomplished.

Ø When a new job comes, add it to the tail of the queue (see the sketch below).
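A minimal FIFO simulation sketch in Python; the job names, arrival times, and run times are made-up illustrative data, not part of the course material:

```python
# Each job is (name, arrival_time, run_time); the data is illustrative only.
jobs = [("A", 0, 6), ("B", 2, 3), ("C", 4, 1)]

time = 0
for name, arrival, run in sorted(jobs, key=lambda j: j[1]):  # FIFO: arrival order
    start = max(time, arrival)   # the CPU may sit idle until the job arrives
    finish = start + run         # run to completion, no preemption
    print(f"{name}: waiting={start - arrival}, turnaround={finish - arrival}")
    time = finish
```

Because jobs run strictly in arrival order, a long job at the head of the queue makes every later job wait behind it.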

Ø Assume we know the run time of all the jobs.

Ø Shortest Job First: maintain a priority queue (i.e., a heap). The head of the queue will always be the job with the shortest run time.

Ø Every time a new job arrives, push it into the priority queue.

Ø When we need to choose a job to run, always choose the one at the head of the queue.

Ø A running job will not stop until it is accomplished.

Ø When a new job comes, add it to the priority queue (the queue keeps itself ordered by run time); see the sketch below.
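A minimal shortest-job-first sketch using Python's heapq as the priority queue; the job data is made up for illustration:

```python
import heapq

# (name, arrival_time, run_time); illustrative data only.
jobs = sorted([("A", 0, 7), ("B", 1, 2), ("C", 2, 4)], key=lambda j: j[1])

time, i, ready = 0, 0, []                        # ready: min-heap keyed by run time
while i < len(jobs) or ready:
    while i < len(jobs) and jobs[i][1] <= time:  # push every job that has arrived
        name, arrival, run = jobs[i]
        heapq.heappush(ready, (run, arrival, name))
        i += 1
    if not ready:                                # CPU idle: jump to the next arrival
        time = jobs[i][1]
        continue
    run, arrival, name = heapq.heappop(ready)    # head of the heap = shortest job
    time += run                                  # run to completion, no preemption
    print(f"{name}: turnaround={time - arrival}")
```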

Ø Assume we know the run time of all the jobs.

Ø Shortest Remaining Time First: maintain a priority queue (i.e., a heap). The head of the queue will always be the job with the shortest remaining time.

Ø For new jobs: T_remaining = T_total

Ø For jobs that already started: T_remaining = T_total − T_already_run

Ø When the current job is accomplished, or a new job arrives, put all jobs (including, possibly, the current one) into the priority queue, and choose the one with the shortest remaining time to run.

Ø This means that when a new job arrives, if its total time is less than the current job's remaining time, we preempt the current job and switch to the new one (see the sketch below).
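A minimal shortest-remaining-time-first sketch; to keep it short, it advances the clock one tick at a time, and the job data is made up for illustration:

```python
import heapq

# (name, arrival_time, run_time); illustrative data only.
jobs = sorted([("A", 0, 7), ("B", 2, 4), ("C", 4, 1)], key=lambda j: j[1])

time, i, done, ready = 0, 0, 0, []         # ready: min-heap keyed by remaining time
while done < len(jobs):
    while i < len(jobs) and jobs[i][1] <= time:
        name, arrival, run = jobs[i]
        heapq.heappush(ready, (run, arrival, name))  # new job: remaining = total
        i += 1
    if not ready:                          # CPU idle: jump to the next arrival
        time = jobs[i][1]
        continue
    remaining, arrival, name = heapq.heappop(ready)  # shortest remaining time runs
    time += 1
    remaining -= 1
    if remaining == 0:
        done += 1
        print(f"{name}: turnaround={time - arrival}")
    else:                                  # not finished: back into the heap; a
        heapq.heappush(ready, (remaining, arrival, name))  # shorter arrival may preempt it
```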

Interactive System Scheduling

Ø Maintain a regular queue (first-in first-out).

Ø Every job runs for only a very short period (~10 ms), and is switched out when that period expires. This period is called a unit time slice.

Ø When a job arrives, or when it is switched out, put it at the tail of the queue.

Ø Every time a job is switched out, choose the job at the head of the queue to run next.

Ø This is a preemptive FIFO policy (round-robin).

Ø However, the performance of this policy depends on the time slice (see the sketch after this list):

n Small time slice: shorter response time but lower throughput (more time spent on context switches).

n Large time slice: longer response time but higher throughput.
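A minimal round-robin sketch; the quantum length and the jobs in the ready queue are arbitrary assumptions for illustration:

```python
from collections import deque

QUANTUM = 2                                   # time slice in ms (arbitrary choice)
# (name, remaining_time); assume all jobs are already in the ready queue.
ready = deque([("A", 5), ("B", 3), ("C", 1)])

time = 0
while ready:
    name, remaining = ready.popleft()         # head of the FIFO queue runs next
    ran = min(QUANTUM, remaining)             # run for at most one time slice
    time += ran
    remaining -= ran
    if remaining > 0:
        ready.append((name, remaining))       # preempted: back to the tail
    else:
        print(f"{name} finished at t={time}")
```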

Ø Assign a priority level to each job.

Ø Give higher priority to I/O-bound threads, for lower response time.

Ø Give lower priority to CPU-bound threads, but a longer time slice.

Ø Assign a priority level to the thread when it starts.

Ø Maintain a queue for each priority level.

Ø The thread chosen to run will always be the one with the highest priority (in systems where a lower number means higher priority, this is the lowest priority value).

Ø When multiple threads have the same priority, use a round-robin policy among them (see the sketch below).
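A minimal sketch of picking a thread with per-level queues; the level numbering (lower number = higher priority) and the thread names are assumptions made for illustration:

```python
from collections import deque

# One FIFO queue per priority level; in this sketch a lower number means higher priority.
queues = {0: deque(["io_thread_1", "io_thread_2"]),
          2: deque(["cpu_thread_1"])}

def pick_next():
    """Return (level, thread) from the highest-priority non-empty queue."""
    for level in sorted(queues):
        if queues[level]:
            return level, queues[level].popleft()
    return None

level, thread = pick_next()
print(f"run {thread} from priority level {level}")
queues[level].append(thread)    # round robin within a level: back to the tail
```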

Ø Assign a priority level to the thread when it starts.

Ø The currently running thread will always have the highest priority (a newly arriving thread with higher priority preempts the currently running one).

Ø In every time slice (note that this time slice is different from the one discussed above): measure the CPU usage of every thread that ran during this period, and update their priorities:

n At the start of a time slice, each thread has C = 0.

n Periodically, a timer interrupt fires. For the currently running thread, update its counter C:

C = C + 1

n At the end of a time slice, for every thread, update its value P:

P_new = P_old / 2 + C

n A lower P means a higher priority.

Ø For an I/O-bound thread: C will be very low (it voluntarily gives up its CPU time), and it is probably blocked most of the time (which means it is not even in the scheduler queue). This means it will have high priority and always be handled quickly.

Ø For a CPU-bound thread: C will be very high because it uses the CPU for most of the time slice. It will get a lower priority and only be scheduled when higher-priority threads are blocked or finished (see the sketch below).
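A minimal sketch of this priority update. Only the rule P_new = P_old / 2 + C comes from the notes above; the thread names and the tick counts are made-up assumptions:

```python
# Each thread tracks its priority value P (lower = higher priority) and its
# CPU-usage counter C for the current time slice.
threads = {"io_bound": {"P": 0, "C": 0}, "cpu_bound": {"P": 0, "C": 0}}

def timer_interrupt(running):
    threads[running]["C"] += 1              # charge one tick to the running thread

def end_of_time_slice():
    for t in threads.values():
        t["P"] = t["P"] / 2 + t["C"]        # decay the old priority, add recent usage
        t["C"] = 0                          # reset the counter for the next slice

# Assume cpu_bound hogs the CPU for 8 ticks while io_bound runs 1 tick, then blocks.
for _ in range(8):
    timer_interrupt("cpu_bound")
timer_interrupt("io_bound")
end_of_time_slice()
print(threads)    # io_bound ends up with the lower P, i.e. the higher priority
```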

Ø The kernel might maintain multiple queues, one for each priority level.

Ø Some systems might also reset the priorities of all threads every so often, so that low-priority threads do not starve.