
This assignment involves creating a WebGL animation and a vehicle-detection vision system.
M30242 Graphics and Computer Vision

Task One (50%)
Warning: For this task, you are NOT allowed to use any WebGL library that has built-in functions for creating, drawing, or texturing geometric primitives such as spheres, cubes, and so on. You must generate the vertex data of such objects (vertex coordinates, texture coordinates, and normals) and perform the relevant operations, such as texture mapping, lighting, and shading calculations, yourself.
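For reference, a minimal sketch of one common way to meet this requirement is the latitude/longitude sphere construction below. The function name `createSphere`, the band counts, and the texture-coordinate orientation are illustrative assumptions, not part of the specification; you may need to flip the u coordinate depending on the earth image used.

```js
// Sketch: generate positions, normals, texture coordinates and indices for a
// latitude/longitude sphere of the given radius (no primitive library used).
function createSphere(radius, latBands, longBands) {
  const positions = [], normals = [], texCoords = [], indices = [];
  for (let lat = 0; lat <= latBands; lat++) {
    const theta = (lat * Math.PI) / latBands;          // 0..pi, north pole to south pole
    const sinT = Math.sin(theta), cosT = Math.cos(theta);
    for (let lon = 0; lon <= longBands; lon++) {
      const phi = (lon * 2 * Math.PI) / longBands;     // 0..2*pi around the vertical axis
      const x = sinT * Math.cos(phi), y = cosT, z = sinT * Math.sin(phi);
      positions.push(radius * x, radius * y, radius * z);
      normals.push(x, y, z);                           // on a sphere, the normal is the unit position
      texCoords.push(1 - lon / longBands, 1 - lat / latBands);
    }
  }
  for (let lat = 0; lat < latBands; lat++) {           // two triangles per grid quad
    for (let lon = 0; lon < longBands; lon++) {
      const first = lat * (longBands + 1) + lon;
      const second = first + longBands + 1;
      indices.push(first, second, first + 1, second, second + 1, first + 1);
    }
  }
  return { positions, normals, texCoords, indices };
}

// Example: the earth of radius 10, ready to upload with gl.bufferData().
const earth = createSphere(10, 32, 32);
```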
Specification
In this task, you are required to create an animation. The scene consists of the planet earth in the middle, with a satellite orbiting it along a circular orbit in the horizontal plane. The scene is illuminated from the top-right by a directional light at a 60-degree angle to the horizontal plane when viewed from the front.
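One plausible reading of this lighting setup, assuming +x points right and +y points up in the front view, is the direction vector sketched below; the axis convention and the uniform name are assumptions.

```js
// Direction *towards* the light for an N·L diffuse term: 60 degrees above the
// horizontal plane, coming from the top-right in the front view.
const angle = (60 * Math.PI) / 180;
const lightDir = [Math.cos(angle), Math.sin(angle), 0.0];   // [0.5, 0.866, 0], already unit length
// Pass to the fragment shader as a uniform, e.g.
// gl.uniform3fv(uLightDirection, lightDir);
```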
The earth model is a sphere of radius 10, texture-mapped with an earth image. The earth rotates slowly around its own vertical axis. An image of the earth is provided for texturing the sphere.
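A time-based rotation keeps the spin rate independent of frame rate; the sketch below assumes a `drawScene` render function and an arbitrary "slow" rate, neither of which is specified in the brief.

```js
// Sketch: advance the earth's rotation about its vertical (y) axis using elapsed time.
const EARTH_DEG_PER_SECOND = 10;        // assumed "slow" spin rate
let earthAngle = 0, lastTime = 0;

function tick(nowMs) {
  const dt = lastTime ? (nowMs - lastTime) / 1000 : 0;
  lastTime = nowMs;
  earthAngle = (earthAngle + EARTH_DEG_PER_SECOND * dt) % 360;
  drawScene(earthAngle);                // assumed render function
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```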
The satellite consists of a main body, a cube of size 2×2×2, and two “solar panels” attached to two opposite sides of the main body through two connecting “rods”. The rods are cuboids of size 0.2×0.2×0.5 and golden in colour. The solar panels are bluish, thin rectangular objects of size 1×2. For simplicity, we assume that the panels always face upwards. One side of the cube representing the main body of the satellite is black, and this side constantly faces the earth while the satellite orbits. A golden antenna dish of diameter 4 is attached to the black side by a golden rod of size 0.2×0.2×0.4; the antenna also faces the earth.
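One way to satisfy the "black face and dish always face the earth" constraint is a hierarchical transform like the sketch below. It assumes the glMatrix utility is loaded via a script tag (glMatrix only does matrix maths, so it does not create primitives), and that the black face is modelled on the cube's local −x side; both are assumptions, and you can substitute your own matrix code.

```js
// Sketch of the satellite's placement transform.
const { mat4 } = glMatrix;   // assumed global from a <script> include of glMatrix

// Rotating by the orbit angle and then translating out along +x keeps the
// local -x face (and the dish attached to it) pointing at the earth for every
// orbit angle, while the panels keep facing upwards because only a rotation
// about the vertical (y) axis is applied.
function satelliteModelMatrix(orbitAngleRad, orbitRadius) {
  const m = mat4.create();
  mat4.rotateY(m, m, orbitAngleRad);           // revolve about the earth's vertical axis
  mat4.translate(m, m, [orbitRadius, 0, 0]);   // move out to the circular orbit
  return m;                                    // body parts (rods, panels, dish) multiply onto this
}
```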
The animation will be interactive: you should be able to control the radius of the circular orbit (left and right arrow keys) and the speed (up and down arrow keys) of the satellite at runtime. You should also have full viewport/scene-navigation control: translations along the x- (shift plus mouse drag), y- (alt plus mouse drag) and z-directions (mouse wheel), and rotations around the x- and y-axes (plain mouse drag). The translation controls should be independent of the rotation controls.
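A sketch of these controls is given below; the `state` object, step sizes, minimum orbit radius, and the `canvas` variable are all assumptions. Clamping the orbit radius (here to stay outside the earth of radius 10) is one way to prevent control actions from making objects disappear or behave strangely.

```js
// Sketch of the runtime controls; names and step sizes are illustrative.
const state = { orbitRadius: 15, orbitSpeed: 30, tx: 0, ty: 0, tz: 0, rotX: 0, rotY: 0 };

window.addEventListener('keydown', (e) => {
  if (e.key === 'ArrowLeft')  state.orbitRadius = Math.max(12, state.orbitRadius - 0.5); // keep outside the earth
  if (e.key === 'ArrowRight') state.orbitRadius += 0.5;
  if (e.key === 'ArrowUp')    state.orbitSpeed += 5;                      // degrees per second
  if (e.key === 'ArrowDown')  state.orbitSpeed = Math.max(0, state.orbitSpeed - 5);
});

let dragging = false, lastX = 0, lastY = 0;
canvas.addEventListener('mousedown', (e) => { dragging = true; lastX = e.clientX; lastY = e.clientY; });
window.addEventListener('mouseup', () => { dragging = false; });
canvas.addEventListener('mousemove', (e) => {
  if (!dragging) return;
  const dx = e.clientX - lastX, dy = e.clientY - lastY;
  lastX = e.clientX; lastY = e.clientY;
  if (e.shiftKey)      state.tx += dx * 0.05;                             // shift + drag: translate along x
  else if (e.altKey)   state.ty -= dy * 0.05;                             // alt + drag: translate along y
  else { state.rotY += dx * 0.5; state.rotX += dy * 0.5; }                // plain drag: rotate about x and y
});
canvas.addEventListener('wheel', (e) => { e.preventDefault(); state.tz += e.deltaY * 0.01; }); // translate along z
```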
Your application should work with standard browsers on University lab PCs without requiring any special set-up or configuration of software or hardware. The Firefox browser is preferred because textures may not work properly in Google Chrome if its security policy prevents texture files from being loaded locally. Test the animation controls extensively to ensure that no control action causes the system to freeze or crash, or causes any scene object to disappear or behave strangely.
Deliverables
1. The source code of the entire WebGL application and any necessary supporting files, such as libraries and textures. The program code should be suitably commented and come with the necessary instructions for using it.
2. An electronic copy of a short report (no more than 1000 words) that documents the design and implementation decisions, any difficulties or problems encountered, and evidence and/or conclusions from testing and evaluating the application against the specification.

Task Two (50%)
Application Scenario and Conditions
To control the traffic at the entrance of a narrow tunnel, a computer vision system is used to intercept oversized or speeding vehicles. Any vehicle that exceeds 2.5 m in width and/or 30 miles per hour in speed will be diverted or stopped by traffic lights or police officers upon receiving a warning from the system. Fire engines (assumed to be mainly red in colour and to have a width-to-length ratio of approximately 1:3) are the only vehicles exempt from the control.
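Once the image-processing stages have estimated a vehicle's width, speed, dominant colour, and width-to-length ratio, the decision itself is straightforward; the sketch below uses the thresholds from this brief, while the "mainly red" fraction, the ratio tolerance, and the measurement inputs are assumptions.

```js
// Decision sketch: inputs are measurements assumed to come from earlier
// image-processing steps, which are not specified here.
function shouldWarn({ widthMetres, speedMph, redFraction, widthLengthRatio }) {
  const isFireEngine =
    redFraction > 0.5 &&                            // "mainly red" (assumed threshold)
    Math.abs(widthLengthRatio - 1 / 3) < 0.05;      // roughly 1:3 width:length (assumed tolerance)
  if (isFireEngine) return false;                   // fire engines are exempt from the control
  return widthMetres > 2.5 || speedMph > 30;        // oversized and/or speeding
}
```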
The system consists of a video camera fixed directly above the centre of the lane of a straight road, 7 m off the ground. Its optical axis is 30 degrees below the horizontal and points along the lane in the direction of traffic. A diagram showing the camera configuration is given in the Appendix. The camera sensor has a resolution of 640×480 pixels. For simplicity, it is assumed that the sensor pixels are square and that each pixel subtends a view angle of 0.042 degrees. The camera grabs a video frame (an image) of the lane at 0.1 s intervals. It is further assumed that each frame contains only one vehicle. The output frames from the camera will be the only input to the vision system. You should not make any assumptions about the identity of the vehicle contained in a frame before processing it.
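Under this camera geometry, ground distance, speed, and approximate width follow from basic trigonometry. The sketch below assumes image rows are numbered from the top, the principal point is the image centre (row 240), and the same reference point on the vehicle (e.g. its leading edge) is tracked across frames; none of these conventions is fixed by the brief.

```js
// Camera model from the brief: 7 m high, optical axis 30 degrees below the
// horizontal, 640x480 sensor, 0.042 degrees per pixel, frames every 0.1 s.
const H = 7, TILT_DEG = 30, DEG_PER_PIXEL = 0.042, FRAME_DT = 0.1;
const toRad = (deg) => (deg * Math.PI) / 180;

// Ground distance along the road (measured from the point below the camera)
// of the scene point seen at image row `row`.
function groundDistance(row) {
  const depressionDeg = TILT_DEG + (row - 240) * DEG_PER_PIXEL;   // angle below the horizontal
  return H / Math.tan(toRad(depressionDeg));
}

// Speed estimate from the same reference point located in two consecutive frames.
function speedMph(rowPrev, rowCurr) {
  const metresPerSecond = Math.abs(groundDistance(rowCurr) - groundDistance(rowPrev)) / FRAME_DT;
  return metresPerSecond * 2.23694;                               // 1 m/s = 2.23694 mph
}

// Approximate real-world width of an object spanning `pixels` columns at row `row`,
// using the slant range from the camera to that row.
function widthMetres(pixels, row) {
  const slantRange = H / Math.sin(toRad(TILT_DEG + (row - 240) * DEG_PER_PIXEL));
  return 2 * slantRange * Math.tan(toRad((pixels * DEG_PER_PIXEL) / 2));
}
```

For example, the image centre row corresponds to a depression of 30 degrees and therefore a ground distance of about 7 / tan(30°) ≈ 12.1 m.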