Problem Set 0: The Image Processing Pipeline
The purpose of this assignment is to introduce you to Matlab as a tool for manipulating images. For this, you will
build your own version of a very basic image processing pipeline. As we will see later in the class, the image processing
pipeline is the sequence of steps that happens inside a camera, in order to convert a RAW image (roughly speaking, the
values measured by the camera sensor) into a regular 8-bit image that you can display on a computer monitor or print
on paper. There is a “Hints and Information” section at the end of this document that is likely to help. Uniquely to this
assignment, we also provide a solution in the directory of the problem set ZIP file, but you should really try to solve the
assignment on your own before looking at the solution. Note that you still need to upload your own solution on Canvas.
Throughout this problem, you will use the file banana slug.tiff included in the ./data directory of the problem set
ZIP file. This is a RAW image that was captured with a Canon EOS T3 Rebel camera. The original RAW file has been
slightly pre-processed in order to convert it to the .tiff format. At the end of this assignment, the image should look
something like what is shown in Figure 1. The exact result can vary greatly, depending on the choices you make in your implementation.
Figure 1: One possible rendition of the RAW image provided with the assignment.
Initials. Load the image into Matlab. Originally, it will be in the form of a 2D array of unsigned integers. Check and
report how many bits per integer the image has, and what its width and height are. Then, convert the image into a double-
precision array. (See help for functions imread, size, class and double.)
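The assignment itself is meant to be done with the Matlab functions listed above (imread, size, class, double), but the same bookkeeping can be illustrated in NumPy. The sketch below uses a synthetic 16-bit array as a stand-in for the actual RAW file, so the numbers it reports are only for the synthetic example:

```python
import numpy as np

# Stand-in for the RAW file: in practice you would load the .tiff with an
# image-reading library (or imread in Matlab). This synthetic uint16 array
# only illustrates the reporting and conversion steps.
raw = np.random.default_rng(0).integers(0, 2**14, size=(4, 6), dtype=np.uint16)

bits_per_integer = raw.dtype.itemsize * 8   # analogous to class() in Matlab
height, width = raw.shape                   # analogous to size() in Matlab
print(bits_per_integer, width, height)      # 16 bits, width 6, height 4

# Convert to double precision before any arithmetic, so that the later
# shift-and-scale steps are not truncated to integers.
img = raw.astype(np.float64)
print(img.dtype)                            # float64
```

Converting to double precision first matters: integer arithmetic would silently clamp or truncate the shift-and-scale applied in the linearization step.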
Linearization. The resulting 2D array is not a linear function with respect to the true “energy” each pixel receives. It is
possible that it has an offset due to dark noise (intensity values produced even if no light reaches the pixel), and
saturated pixels due to overexposure. Additionally, even though the original data-type of the image was 16 bits, only
14 of those carry meaningful information, meaning that the maximum possible value for pixels is 16383 (that’s 2^14 − 1).
For the provided image file, you can assume the following: All pixels with a value lower than 2047 correspond to pixels that
would be black, were it not for dark noise. All pixels with a value above 15000 are over-exposed pixels. (The values 2047
for the black level and 15000 for saturation are taken from the camera manufacturer.)
Convert the image into a linear array within the range [0, 1]. Do this by applying a linear transform (shift and
scale) to the image, so that the value 2047 is mapped to 0, and the value 15000 is mapped to 1. Then, clip negative
values to 0, and values greater than 1 to 1. (See help for functions min and max.)
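Using the stated black level (2047) and saturation level (15000), the shift-and-scale followed by clipping can be sketched in NumPy as follows; the assignment asks for the equivalent using Matlab's min and max:

```python
import numpy as np

BLACK = 2047        # black level from the camera manufacturer
SATURATION = 15000  # saturation level from the camera manufacturer

def linearize(img):
    """Map BLACK -> 0 and SATURATION -> 1, then clip to [0, 1]."""
    out = (np.asarray(img, dtype=np.float64) - BLACK) / (SATURATION - BLACK)
    return np.clip(out, 0.0, 1.0)  # Matlab equivalent: min(max(out, 0), 1)

# Values below the black level clip to 0; values above saturation clip to 1.
sample = np.array([[1000, 2047], [15000, 16383]], dtype=np.uint16)
print(linearize(sample))  # -> [[0, 0], [1, 1]]
```

Note that the division happens in double precision; applying this transform to the original unsigned-integer array would lose the fractional values that carry all the image content after scaling.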
Identifying the correct Bayer pattern. Most cameras do not capture true RGB images. Instead, they use a process
called mosaicing, where each pixel captures only one of the three color channels. The most common spatial
arrangement of pixels in terms of what color channel they capture is called the Bayer pattern,
which says that in each 2 × 2 neighborhood of pixels, 2 pixels capture green, 1 pixel captures red, and 1 pixel captures
blue measurements (see Figure 2). The same is true for the camera used to capture our RAW image: If you zoom into the
image, you will see the 2 × 2 patches corresponding to the Bayer pattern.
Figure 2: The Bayer pattern.
However, we do not know how the Bayer pattern is positioned relative to our image: If you look at the top-left 2 × 2
image square, it can correspond to any of the four red-green-blue patterns shown in Figure 3.
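One way to reason about the four candidate alignments is to split the mosaic into its four interleaved quarter-resolution sub-images, one per position in the repeating 2 × 2 tile. The NumPy sketch below does this with array slicing; which sub-image is red, green, or blue is exactly the hypothesis you need to test, and the 'rggb' assignment mentioned in the comments is only one hypothetical candidate:

```python
import numpy as np

def bayer_subimages(mosaic):
    """Split a Bayer mosaic into its four quarter-resolution sub-images,
    one for each position in the repeating 2x2 tile."""
    return {
        "top_left": mosaic[0::2, 0::2],
        "top_right": mosaic[0::2, 1::2],
        "bottom_left": mosaic[1::2, 0::2],
        "bottom_right": mosaic[1::2, 1::2],
    }

# Tiny 4x4 example: each entry encodes its tile position as 10*row + col,
# so the slicing can be checked by eye.
demo = np.array([[ 0,  1,  0,  1],
                 [10, 11, 10, 11],
                 [ 0,  1,  0,  1],
                 [10, 11, 10, 11]])
subs = bayer_subimages(demo)
# Under an 'rggb' hypothesis (one of the four candidates), subs["top_left"]
# would be red, subs["top_right"] and subs["bottom_left"] the two greens,
# and subs["bottom_right"] blue.
print(subs["top_left"])   # -> [[0, 0], [0, 0]] (even rows, even columns)
```

Comparing the statistics of the four sub-images (e.g. the two diagonal positions that should both be green under the Bayer pattern) is one way to decide which of the four candidate alignments fits the RAW image.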