21CSL66 CG Lab Manual Search Creators
Created By:
Hanumanthu
Dedicated To...
My Friends
Subject Code: 21CSL66
Subject: COMPUTER GRAPHICS AND IMAGE PROCESSING LABORATORY
Program-01
1. Develop a program to draw a line using Bresenham’s line drawing technique
Program
import turtle

def bresenham_line(x1, y1, x2, y2):
    # ... (full body in the sketch after the explanation below)
    return line_points

# Example usage
turtle.setup(500, 500)
turtle.speed(0)  # Fastest drawing speed
turtle.exitonclick()
Output
Explanation
1. The bresenham_line function takes four arguments: x1, y1, x2, and y2, which
represent the starting and ending points of the line segment.
2. Inside the function, it calculates the deltas (dx and dy) between the starting and
ending points, and determines the step direction (x_step and y_step) for each axis.
3. It initializes the error term (error) and an empty list (line_points) to store the
coordinates of the line points.
4. The function then enters a loop that iterates dx + 1 times, where in each iteration:
5. It appends the current point (x, y) to the line_points list.
6. It updates the error term (error) and adjusts the x and y coordinates according to
Bresenham's line algorithm.
7. After the loop, the function returns the line_points list containing the coordinates of
the line points.
8. In the example usage section, the code sets up a turtle graphics window, defines the
starting and ending points of the line segment (x1, y1, x2, y2), calls the
bresenham_line function to get the line points, and then draws the line segment by
moving the turtle to each point in the line_points list.
9. The turtle.exitonclick() function keeps the graphics window open until the user clicks
on it, allowing the user to view the drawn line.
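A complete, runnable sketch of the program described above follows. It uses the integer-only variant of Bresenham's algorithm for slopes with |dy| <= |dx|, matching steps 1-7; the endpoint values in the example usage are illustrative.

import turtle

def bresenham_line(x1, y1, x2, y2):
    # Deltas and per-axis step directions
    dx = abs(x2 - x1)
    dy = abs(y2 - y1)
    x_step = 1 if x2 >= x1 else -1
    y_step = 1 if y2 >= y1 else -1
    error = 2 * dy - dx  # initial decision variable
    line_points = []
    x, y = x1, y1
    for _ in range(dx + 1):  # one point per column
        line_points.append((x, y))
        if error > 0:  # step in y when the error crosses zero
            y += y_step
            error -= 2 * dx
        error += 2 * dy
        x += x_step
    return line_points

# Example usage
turtle.setup(500, 500)
turtle.speed(0)  # Fastest drawing speed
turtle.penup()
for px, py in bresenham_line(-150, -100, 150, 100):
    turtle.goto(px, py)
    turtle.pendown()
turtle.exitonclick()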
Program-02
2. Develop a program to demonstrate basic geometric operations (translation, rotation, scaling) on a 2D object
Program
import turtle
import math

# Draw a rectangle
draw_rectangle(-200, 0, 100, 50, "blue")

# Draw a circle
draw_circle(100, 100, 50, "red")
Output
Explanation
1. The code starts by importing the necessary modules: turtle for drawing graphics and math for
mathematical operations.
2. A turtle screen is set up with a white background color.
3. A turtle instance t is created, and its speed and pen size are set.
4. Two helper functions, draw_rectangle and draw_circle, are defined to draw rectangles and circles,
respectively. These functions take the coordinates, dimensions (width, height, or radius), and
color as arguments.
5. Three transformation functions are defined: translate, rotate, and scale. These functions take the
coordinates and transformation parameters (translation distances, rotation angle, or scaling
factors) as arguments and move the turtle's position and orientation accordingly.
6. The code then demonstrates the use of these functions by drawing and transforming a rectangle
and a circle.
• A rectangle is drawn at (-200, 0) with a width of 100 and a height of 50 in blue color.
• The rectangle is translated 200 units to the right, and a new rectangle is drawn at (0, 0).
• The rectangle is rotated by 45 degrees, and a new rectangle is drawn.
• The rectangle is scaled by a factor of 2 in both dimensions, and a new rectangle is drawn.
• A circle is drawn at (100, 100) with a radius of 50 in red color.
• The circle is translated 200 units to the right, and a new circle is drawn at (300, 100).
• The circle is rotated by 45 degrees, and a new circle is drawn.
• The circle is scaled by a factor of 2 in both dimensions, and a new circle is drawn at (600,
200).
7. Finally, the turtle.done() function is called to keep the window open until it is closed by the user.
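A runnable sketch consistent with the steps above is given below. The helper bodies are reconstructions rather than the manual's exact code: rotation is expressed through the turtle's heading instead of a separate rotate() function applied to the whole shape, and the translate/scale/rotate_point helpers transform coordinates rather than turtle state.

import turtle
import math

# Screen and turtle setup
screen = turtle.Screen()
screen.bgcolor("white")
t = turtle.Turtle()
t.speed(0)
t.pensize(2)

def draw_rectangle(x, y, width, height, color, heading=0):
    # Rectangle with one corner at (x, y); heading rotates the whole shape
    t.penup()
    t.goto(x, y)
    t.setheading(heading)
    t.pencolor(color)
    t.pendown()
    for _ in range(2):
        t.forward(width)
        t.left(90)
        t.forward(height)
        t.left(90)

def draw_circle(x, y, radius, color):
    # turtle.circle() starts drawing from the bottom of the circle
    t.penup()
    t.goto(x, y - radius)
    t.setheading(0)
    t.pencolor(color)
    t.pendown()
    t.circle(radius)

def translate(x, y, dx, dy):
    return x + dx, y + dy

def rotate_point(x, y, angle_deg, cx=0, cy=0):
    # Rotate (x, y) about (cx, cy) by angle_deg degrees
    a = math.radians(angle_deg)
    return (cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a),
            cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a))

def scale(x, y, sx, sy):
    return x * sx, y * sy

# Rectangle: original, translated, rotated, and scaled copies
draw_rectangle(-200, 0, 100, 50, "blue")
x, y = translate(-200, 0, 200, 0)                  # 200 units right -> (0, 0)
draw_rectangle(x, y, 100, 50, "blue")
draw_rectangle(x, y, 100, 50, "blue", heading=45)  # rotated by 45 degrees
w, h = scale(100, 50, 2, 2)                        # scaled by 2 in both dimensions
draw_rectangle(x, y, w, h, "blue")

# Circle: original, translated, rotated, and scaled copies
draw_circle(100, 100, 50, "red")
cx, cy = translate(100, 100, 200, 0)               # -> (300, 100)
draw_circle(cx, cy, 50, "red")
rx, ry = rotate_point(cx, cy, 45)                  # rotate the centre about the origin
draw_circle(rx, ry, 50, "red")
sx2, sy2 = scale(cx, cy, 2, 2)                     # -> (600, 200), radius doubled
draw_circle(sx2, sy2, 100, "red")

turtle.done()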
Program-03
3. Develop a program to demonstrate basic geometric operations on a 3D object
Program
from vpython import canvas, color

# Create a 3D canvas
scene = canvas(width=800, height=600, background=color.white)
# Draw a cuboid
cuboid = draw_cuboid((-2, 0, 0), 2, 2, 2, color.blue)
# Draw a cylinder
cylinder = draw_cylinder((2, 2, 0), 1, 10, color.red)
Output
Explanation
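A minimal sketch of this program, assuming VPython's box and cylinder primitives and reading the draw_cylinder arguments above as (position, radius, height, color):

from vpython import canvas, box, cylinder, vector, color

# Create a 3D canvas
scene = canvas(width=800, height=600, background=color.white)

def draw_cuboid(pos, length, height, width, clr):
    # VPython's box primitive is positioned by its centre
    return box(pos=vector(*pos), length=length, height=height,
               width=width, color=clr)

def draw_cylinder(pos, radius, height, clr):
    # The axis vector sets the cylinder's direction and length
    return cylinder(pos=vector(*pos), axis=vector(0, height, 0),
                    radius=radius, color=clr)

# Draw a cuboid and a cylinder
cuboid = draw_cuboid((-2, 0, 0), 2, 2, 2, color.blue)
cyl = draw_cylinder((2, 2, 0), 1, 10, color.red)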
Program-04
4. Develop a program to demonstrate 2D transformations on basic objects
Program
# Apply transformations
translated_obj = np.array([np.dot(translation_matrix, [x, y, 1])[:2]
                           for x, y in obj_points], dtype=np.int32)
rotated_obj = np.array([np.dot(rotation_matrix, [x, y, 1])[:2]
                        for x, y in translated_obj], dtype=np.int32)
scaled_obj = np.array([np.dot(scaling_matrix, [x, y, 1])[:2]
                       for x, y in rotated_obj], dtype=np.int32)
Explanation
1. The code starts by importing the necessary libraries: cv2 for OpenCV and numpy for
numerical operations.
2. It defines the dimensions of the canvas (canvas_width and canvas_height) and
creates a blank white canvas using NumPy.
3. The initial object (a square) is defined as an array of four points (obj_points)
representing the vertices of the square.
4. The transformation matrices are defined:
• translation_matrix: A 2x3 matrix for translation.
• rotation_matrix: A rotation matrix obtained using cv2.getRotationMatrix2D
for rotating around a specified center point by a given angle.
• scaling_matrix: A 2x3 matrix for scaling.
5. The transformations are applied to the initial object by performing matrix
multiplication with the transformation matrices:
• translated_obj: The object is translated by applying the translation_matrix.
• rotated_obj: The translated object is rotated by applying the rotation_matrix.
• scaled_obj: The rotated object is scaled by applying the scaling_matrix.
6. The original object and the transformed objects (translated, rotated, and scaled) are
drawn on the canvas using cv2.polylines.
7. The canvas with the drawn objects is displayed using cv2.imshow, and the code waits
for a key press (cv2.waitKey(0)) before closing the window.
8. Finally, all windows are closed using cv2.destroyAllWindows().
9. The resulting output is a window displaying the following:
• The original square (black)
• The translated square (green)
• The rotated square (red)
• The scaled square (blue)
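A complete sketch of the program described above. The canvas size, square coordinates, and transformation parameter values are illustrative assumptions; the drawing colors follow step 9.

import cv2
import numpy as np

canvas_width, canvas_height = 800, 600
canvas = np.full((canvas_height, canvas_width, 3), 255, dtype=np.uint8)

# Initial object: a square given by its four vertices
obj_points = np.array([[100, 100], [200, 100], [200, 200], [100, 200]],
                      dtype=np.int32)

# 2x3 affine matrices (parameter values are illustrative)
translation_matrix = np.float32([[1, 0, 250], [0, 1, 50]])
rotation_matrix = cv2.getRotationMatrix2D((400, 300), 45, 1)
scaling_matrix = np.float32([[1.5, 0, 0], [0, 1.5, 0]])

def apply_affine(matrix, points):
    # Multiply each homogeneous point [x, y, 1] by a 2x3 affine matrix
    return np.array([np.dot(matrix, [x, y, 1])[:2] for x, y in points],
                    dtype=np.int32)

translated_obj = apply_affine(translation_matrix, obj_points)
rotated_obj = apply_affine(rotation_matrix, translated_obj)
scaled_obj = apply_affine(scaling_matrix, rotated_obj)

# Original in black, translated in green, rotated in red, scaled in blue (BGR)
for pts, colour in [(obj_points, (0, 0, 0)), (translated_obj, (0, 255, 0)),
                    (rotated_obj, (0, 0, 255)), (scaled_obj, (255, 0, 0))]:
    cv2.polylines(canvas, [pts.reshape(-1, 1, 2)], True, colour, 2)

cv2.imshow('2D Transformations', canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()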
Program-05
5. Develop a program to demonstrate 3D transformations on basic objects
Program
# Initialize Pygame
pygame.init()
display_width, display_height = 800, 600
pygame.display.set_mode((display_width, display_height), DOUBLEBUF | OPENGL)

# Set up OpenGL
glClearColor(0.0, 0.0, 0.0, 1.0)
glEnable(GL_DEPTH_TEST)
glMatrixMode(GL_PROJECTION)
gluPerspective(45, (display_width / display_height), 0.1, 50.0)
glMatrixMode(GL_MODELVIEW)
edges = np.array([
[0, 1], [1, 2], [2, 3], [3, 0],
[4, 5], [5, 6], [6, 7], [7, 4],
[0, 4], [1, 5], [2, 6], [3, 7]
], dtype=np.uint32)
# Main loop
running = True
angle = 0
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    # Apply transformations
    glLoadIdentity()
    glMultMatrixf(translation_matrix)
    glRotatef(angle, 1, 1, 0)
    glMultMatrixf(rotation_matrix)
    glMultMatrixf(scaling_matrix)
# Quit Pygame
pygame.quit()
Output
Explanation
1. Imports the necessary modules from pygame, OpenGL.GL, OpenGL.GLU, and
numpy.
2. Initializes Pygame and sets up the display with a width of 800 pixels and a height of
600 pixels.
3. Sets up OpenGL by setting the clear color, enabling depth testing, setting up the
projection matrix using gluPerspective, and switching to the ModelView matrix
mode.
4. Defines the vertices of a 3D cube using a NumPy array.
5. Defines the edges of the cube as pairs of vertex indices using a NumPy array.
6. Sets up the transformation matrices for translation, rotation, and scaling:
7. translation_matrix translates the object along the negative z-axis by 5 units.
8. rotation_matrix is initially set to the identity matrix (no rotation).
9. scaling_matrix scales the object by a factor of 1.5 along all axes.
10. Enters the main loop, which runs until the user closes the window.
11. Inside the main loop:
• Handles the Pygame event queue, checking for the QUIT event to exit the loop.
• Clears the color and depth buffers using glClear.
• Applies the transformations:
• Loads the identity matrix using glLoadIdentity.
• Applies the translation matrix using glMultMatrixf.
• Rotates the object around the vector (1, 1, 0) by an angle that increases with
each iteration.
• Applies the rotation matrix using glMultMatrixf.
• Applies the scaling matrix using glMultMatrixf.
• Draws the 3D cube by iterating over the edges and vertices, using
glBegin(GL_LINES) and glVertex3fv.
• Increments the rotation angle for the next iteration.
• Swaps the front and back buffers using pygame.display.flip() to display the
rendered scene.
12. After the main loop ends, the code quits Pygame.
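A complete sketch assembled from the fragments and steps above. The cube vertex coordinates and matrix values are assumptions consistent with the explanation; note that glMultMatrixf reads the flattened array in column-major order, so the translation components are placed in the bottom row of the row-major NumPy array.

import pygame
import numpy as np
from pygame.locals import DOUBLEBUF, OPENGL
from OpenGL.GL import *
from OpenGL.GLU import gluPerspective

# Initialize Pygame and an OpenGL-capable window
pygame.init()
display_width, display_height = 800, 600
pygame.display.set_mode((display_width, display_height), DOUBLEBUF | OPENGL)

# Set up OpenGL
glClearColor(0.0, 0.0, 0.0, 1.0)
glEnable(GL_DEPTH_TEST)
glMatrixMode(GL_PROJECTION)
gluPerspective(45, display_width / display_height, 0.1, 50.0)
glMatrixMode(GL_MODELVIEW)

# Cube vertices and the 12 edges joining them
vertices = np.array([
    [-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
    [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]
], dtype=np.float32)
edges = np.array([
    [0, 1], [1, 2], [2, 3], [3, 0],
    [4, 5], [5, 6], [6, 7], [7, 4],
    [0, 4], [1, 5], [2, 6], [3, 7]
], dtype=np.uint32)

# 4x4 matrices; OpenGL reads the flattened array column-major, so the
# translation components sit in the bottom row of this row-major array
translation_matrix = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0],
                               [0, 0, 1, 0],
                               [0, 0, -5, 1]], dtype=np.float32)
rotation_matrix = np.identity(4, dtype=np.float32)  # no extra rotation
scaling_matrix = np.diag([1.5, 1.5, 1.5, 1.0]).astype(np.float32)

# Main loop
running = True
angle = 0
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    # Apply transformations
    glLoadIdentity()
    glMultMatrixf(translation_matrix)
    glRotatef(angle, 1, 1, 0)
    glMultMatrixf(rotation_matrix)
    glMultMatrixf(scaling_matrix)

    # Draw the cube as a wireframe
    glBegin(GL_LINES)
    for edge in edges:
        for vertex_index in edge:
            glVertex3fv(vertices[vertex_index])
    glEnd()

    angle += 1
    pygame.display.flip()
    pygame.time.wait(10)

# Quit Pygame
pygame.quit()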
Program-06
6. Develop a program to demonstrate animation effects on simple objects
Program
# Initialize Pygame
pygame.init()
screen_width, screen_height = 800, 600
screen = pygame.display.set_mode((screen_width, screen_height))

# Define colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
# Main loop
running = True
clock = pygame.time.Clock()
while running:
    # Handle events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    for obj in objects:
        # Bounce off the window edges by reversing the velocity
        if obj["x"] - obj["radius"] < 0 or obj["x"] + obj["radius"] > screen_width:
            obj["speed_x"] = -obj["speed_x"]
        if obj["y"] - obj["radius"] < 0 or obj["y"] + obj["radius"] > screen_height:
            obj["speed_y"] = -obj["speed_y"]

# Quit Pygame
pygame.quit()
Output
Explanation
1. Imports the required modules: pygame for creating the graphical window and
handling events, and random for generating random values.
2. Initializes Pygame and sets up a window with a width of 800 pixels and a height of
600 pixels.
3. Defines some colors (BLACK, WHITE, RED, GREEN, BLUE) as RGB tuples.
4. Initializes a list called objects to store the properties of each circle object. The
properties include the x and y coordinates, radius, color, and velocities (speed_x and
speed_y).
5. Generates num_objects (set to 10) with random positions, radii, colors, and
velocities, and appends them to the objects list.
6. Enters the main loop, which runs until the user closes the window.
7. Inside the main loop:
8. Handles the Pygame event queue, checking for the QUIT event to exit the loop.
9. Clears the screen by filling it with the WHITE color.
10. Iterates over each object in the objects list:
11. Updates the x and y coordinates of the object based on its velocities.
12. Checks if the object has collided with the edges of the screen. If so, it reverses the corresponding velocity component (x or y) to make the object bounce off the edge.
13. Draws the object (circle) on the screen using pygame.draw.circle with the object's color, position, and radius.
14. Updates the display using pygame.display.flip().
15. Limits the frame rate to 60 frames per second (FPS) using clock.tick(60).
16. After the main loop ends, the code quits Pygame.
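A complete sketch of the animation described above. The radius and speed ranges used when generating the random objects are illustrative assumptions.

import pygame
import random

# Initialize Pygame
pygame.init()
screen_width, screen_height = 800, 600
screen = pygame.display.set_mode((screen_width, screen_height))

# Define colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

# Build the list of circle objects with random properties
num_objects = 10
objects = []
for _ in range(num_objects):
    radius = random.randint(10, 30)
    objects.append({
        "x": random.randint(radius, screen_width - radius),
        "y": random.randint(radius, screen_height - radius),
        "radius": radius,
        "color": random.choice([BLACK, RED, GREEN, BLUE]),
        "speed_x": random.choice([-4, -3, -2, 2, 3, 4]),
        "speed_y": random.choice([-4, -3, -2, 2, 3, 4]),
    })

# Main loop
running = True
clock = pygame.time.Clock()
while running:
    # Handle events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill(WHITE)
    for obj in objects:
        # Move, then bounce off the window edges by reversing the velocity
        obj["x"] += obj["speed_x"]
        obj["y"] += obj["speed_y"]
        if obj["x"] - obj["radius"] < 0 or obj["x"] + obj["radius"] > screen_width:
            obj["speed_x"] = -obj["speed_x"]
        if obj["y"] - obj["radius"] < 0 or obj["y"] + obj["radius"] > screen_height:
            obj["speed_y"] = -obj["speed_y"]
        pygame.draw.circle(screen, obj["color"], (obj["x"], obj["y"]), obj["radius"])

    pygame.display.flip()
    clock.tick(60)  # Cap the frame rate at 60 FPS

# Quit Pygame
pygame.quit()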
Program-07
7. Write a program to read a digital image. Split and display the image into 4 quadrants: up, down, right, and left.
Program
import cv2
import numpy as np
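A minimal sketch of the quadrant split, assuming the image path image/atc.jpg used by the later programs in this manual:

import cv2

# Load the image (path is illustrative)
img = cv2.imread('image/atc.jpg')
height, width = img.shape[:2]
mid_y, mid_x = height // 2, width // 2

# Slice the image into four quadrants with NumPy indexing
top_left = img[:mid_y, :mid_x]
top_right = img[:mid_y, mid_x:]
bottom_left = img[mid_y:, :mid_x]
bottom_right = img[mid_y:, mid_x:]

# Display each quadrant in its own window
for name, quad in [('Top Left', top_left), ('Top Right', top_right),
                   ('Bottom Left', bottom_left), ('Bottom Right', bottom_right)]:
    cv2.imshow(name, quad)
cv2.waitKey(0)
cv2.destroyAllWindows()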
Output
Explanation
1. The image is loaded using cv2.imread(), and its height and width are read from the shape attribute.
2. The midpoints of both axes are computed, and NumPy slicing is used to extract the four quadrants.
3. Each quadrant is displayed in its own window using cv2.imshow(), and the script waits for a key press (cv2.waitKey(0)) before closing all windows with cv2.destroyAllWindows().
Program-08
8. Write a program to show rotation, scaling, and translation of an image
Program
# Apply transformations
rotated_img = cv2.warpAffine(img, rotation_matrix, (width, height))
scaled_img = cv2.warpAffine(img, scaling_matrix, (int(width*1.5), int(height*1.5)))
translated_img = cv2.warpAffine(img, translation_matrix, (width, height))
Output
Explanation
1. The necessary libraries, cv2 (OpenCV) and numpy, are imported.
2. The path to the input image file is specified (image_path). In this case, it's set to
"image/atc.jpg", assuming the image file named "atc.jpg" is located in a directory
named "image" relative to the script's ___location.
3. The image is loaded using cv2.imread().
4. The height and width of the image are obtained from the shape attribute of the image.
5. The transformation matrices for rotation, scaling, and translation are defined:
6. rotation_matrix: Obtained using cv2.getRotationMatrix2D() to rotate the image by
45 degrees around its center.
7. scaling_matrix: A 2x3 NumPy matrix to scale the image by a factor of 1.5 along both
axes.
8. translation_matrix: A 2x3 NumPy matrix to translate the image by (100, 50) pixels.
9. The transformations are applied to the original image using cv2.warpAffine():
10. rotated_img: The image is rotated using the rotation_matrix.
11. scaled_img: The image is scaled using the scaling_matrix.
12. translated_img: The image is translated using the translation_matrix.
13. The original image and the transformed images (rotated, scaled, and translated) are displayed using cv2.imshow().
14. The script waits for a key press (cv2.waitKey(0)) before closing the windows.
15. Finally, all windows are closed using cv2.destroyAllWindows().
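A complete sketch assembling the fragments above; the rotation centre is the image centre and the matrices use the parameter values quoted in steps 5-8.

import cv2
import numpy as np

image_path = 'image/atc.jpg'
img = cv2.imread(image_path)
height, width = img.shape[:2]

# Rotation about the image centre by 45 degrees
rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), 45, 1)
# Scaling by 1.5 along both axes
scaling_matrix = np.float32([[1.5, 0, 0], [0, 1.5, 0]])
# Translation by (100, 50) pixels
translation_matrix = np.float32([[1, 0, 100], [0, 1, 50]])

# Apply transformations
rotated_img = cv2.warpAffine(img, rotation_matrix, (width, height))
scaled_img = cv2.warpAffine(img, scaling_matrix, (int(width * 1.5), int(height * 1.5)))
translated_img = cv2.warpAffine(img, translation_matrix, (width, height))

cv2.imshow('Original', img)
cv2.imshow('Rotated', rotated_img)
cv2.imshow('Scaled', scaled_img)
cv2.imshow('Translated', translated_img)
cv2.waitKey(0)
cv2.destroyAllWindows()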
Program-09
9. Read an image and extract and display low-level features such as edges and textures using filtering techniques.
Program
import cv2
import numpy as np
# Edge detection
edges = cv2.Canny(gray, 100, 200) # Use Canny edge detector
# Texture extraction
kernel = np.ones((5, 5), np.float32) / 25 # Define a 5x5 averaging kernel
texture = cv2.filter2D(gray, -1, kernel) # Apply the averaging filter for texture extraction
Output
Explanation
1. The necessary libraries, cv2 (OpenCV) and numpy, are imported.
2. The path to the input image file is specified (image_path). In this case, it's set to
"image/atc.jpg", assuming the image file named "atc.jpg" is located in a directory
named "image" relative to the script's ___location.
3. The image is loaded using cv2.imread().
4. The image is converted to grayscale using cv2.cvtColor(img,
cv2.COLOR_BGR2GRAY). This step is necessary for edge detection and texture
extraction, as these operations are typically performed on grayscale images.
5. Edge detection is performed on the grayscale image using the Canny edge detector
(cv2.Canny(gray, 100, 200)). The Canny edge detector is a popular algorithm for
edge detection, and the two arguments (100 and 200) are the lower and upper
thresholds for hysteresis.
6. Texture extraction is performed using a simple averaging filter (cv2.filter2D(gray,
-1, kernel)). A 5x5 averaging kernel (kernel = np.ones((5, 5), np.float32) / 25) is
defined, where each element is set to 1/25 (the sum of the kernel elements is 1). This
kernel is applied to the grayscale image using cv2.filter2D(), which performs a 2D
convolution between the image and the kernel. The resulting image (texture) captures
the texture information of the original image.
7. The original image (img), the detected edges (edges), and the extracted texture
(texture) are displayed using cv2.imshow().
8. The script waits for a key press (cv2.waitKey(0)) before closing the windows.
9. Finally, all windows are closed using cv2.destroyAllWindows().
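A complete, runnable sketch of this program, using the path, thresholds, and kernel quoted above:

import cv2
import numpy as np

img = cv2.imread('image/atc.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge detection with the Canny detector (hysteresis thresholds 100 and 200)
edges = cv2.Canny(gray, 100, 200)

# Texture extraction with a 5x5 averaging kernel
kernel = np.ones((5, 5), np.float32) / 25
texture = cv2.filter2D(gray, -1, kernel)

cv2.imshow('Original', img)
cv2.imshow('Edges', edges)
cv2.imshow('Texture', texture)
cv2.waitKey(0)
cv2.destroyAllWindows()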
Program-10
10. Write a program to blur and smooth an image
Program
# Gaussian Blur
gaussian_blur = cv2.GaussianBlur(image, (5, 5), 0)
# Median Blur
median_blur = cv2.medianBlur(image, 5)
# Bilateral Filter
bilateral_filter = cv2.bilateralFilter(image, 9, 75, 75)
Output
Explanation
1. The cv2 library is imported from OpenCV.
2. The image is loaded using cv2.imread('image/atc.jpg'). Make sure to replace
'image/atc.jpg' with the correct path to your image file.
3. Three different types of blurring/smoothing filters are applied to the image:
4. Gaussian Blur: gaussian_blur = cv2.GaussianBlur(image, (5, 5), 0) applies a
Gaussian blur filter to the image. The parameters (5, 5) specify the size of the
Gaussian kernel, and passing 0 for the standard deviation tells OpenCV to compute
it automatically from the kernel size.
5. Median Blur: median_blur = cv2.medianBlur(image, 5) applies a median blur filter
to the image. The parameter 5 specifies the size of the median filter kernel.
6. Bilateral Filter: bilateral_filter = cv2.bilateralFilter(image, 9, 75, 75) applies a
bilateral filter to the image. The parameters 9, 75, and 75 represent the diameter of
the pixel neighborhood, the filter sigma in the color space, and the filter sigma in the
coordinate space, respectively.
7. The original image and the filtered images are displayed using cv2.imshow().
8. The script waits for a key press (cv2.waitKey(0)) before closing the windows.
9. Finally, all windows are closed using cv2.destroyAllWindows().
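A complete, runnable sketch of this program, with the three filters applied using the kernel sizes and sigma values quoted above:

import cv2

image = cv2.imread('image/atc.jpg')

# Gaussian Blur
gaussian_blur = cv2.GaussianBlur(image, (5, 5), 0)
# Median Blur
median_blur = cv2.medianBlur(image, 5)
# Bilateral Filter
bilateral_filter = cv2.bilateralFilter(image, 9, 75, 75)

cv2.imshow('Original', image)
cv2.imshow('Gaussian Blur', gaussian_blur)
cv2.imshow('Median Blur', median_blur)
cv2.imshow('Bilateral Filter', bilateral_filter)
cv2.waitKey(0)
cv2.destroyAllWindows()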
Program-11
11. Write a program to contour an image
Program
# Find contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
cv2.imshow('Contours', contour_image)
Output
Explanation
1. The cv2 library is imported from OpenCV, and the numpy library is imported for
numerical operations.
2. The image is loaded using cv2.imread('image/atc.jpg'). Make sure to replace
'image/atc.jpg' with the correct path to your image file.
3. The image is converted to grayscale using cv2.cvtColor(image,
cv2.COLOR_BGR2GRAY). This step is necessary because contour detection is
often performed on grayscale images.
4. Binary thresholding is applied to the grayscale image using cv2.threshold(gray, 0,
255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU). This operation
converts the grayscale image into a binary image, where pixels are either black or
white, based on a threshold value. The cv2.THRESH_OTSU flag automatically
determines the optimal threshold value using Otsu's method. The
cv2.THRESH_BINARY_INV flag inverts the binary image, so that foreground
objects become white and the background becomes black.
5. The contours are found in the binary image using cv2.findContours(thresh,
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE). The
cv2.RETR_EXTERNAL flag retrieves only the extreme outer contours, and
cv2.CHAIN_APPROX_SIMPLE compresses the contour data by approximating it
with a simplified polygon.
6. A copy of the original image is created using contour_image = image.copy(). This
copy will be used to draw the contours on.
7. The contours are drawn on the contour image using
cv2.drawContours(contour_image, contours, -1, (0, 255, 0), 2). The -1 argument
indicates that all contours should be drawn, the (0, 255, 0) argument specifies the
color (green in this case), and the 2 argument specifies the thickness of the contour
lines.
8. The original image and the contour image are displayed using cv2.imshow().
9. The script waits for a key press (cv2.waitKey(0)) before closing the windows.
10. Finally, all windows are closed using cv2.destroyAllWindows().
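A complete, runnable sketch of this program, assembling the thresholding, contour-finding, and drawing calls quoted above:

import cv2
import numpy as np

image = cv2.imread('image/atc.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu thresholding, inverted so foreground objects become white
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

# Draw all contours in green on a copy of the original image
contour_image = image.copy()
cv2.drawContours(contour_image, contours, -1, (0, 255, 0), 2)

cv2.imshow('Original', image)
cv2.imshow('Contours', contour_image)
cv2.waitKey(0)
cv2.destroyAllWindows()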
Program-12
12. Write a program to detect a face in an image
Output
Explanation
1. This code demonstrates how to perform face detection in an image using OpenCV in
Python. Here's a breakdown of what the code does:
2. The cv2 library is imported from OpenCV.
3. The Haar cascade classifier for face detection is loaded using
cv2.CascadeClassifier(cv2.data.haarcascades +
'haarcascade_frontalface_default.xml'). This classifier is a pre-trained model that can
detect frontal faces in images.
4. The image is loaded using cv2.imread('image/face.jpeg'). Make sure to replace
'image/face.jpeg' with the correct path to your image file containing faces.
5. The image is converted to grayscale using cv2.cvtColor(image,
cv2.COLOR_BGR2GRAY). Face detection is typically performed on grayscale
images.
6. The face_cascade.detectMultiScale method is used to detect faces in the grayscale
image. The parameters scaleFactor=1.1, minNeighbors=5, and minSize=(30, 30)
control the detection process:
7. scaleFactor=1.1 specifies how much the image is scaled down at each step of the
multi-scale search.
8. minNeighbors=5 specifies the minimum number of neighboring rectangles that
should overlap to consider a face detection as valid.
9. minSize=(30, 30) specifies the minimum size of the face to be detected.
10. For each detected face, a rectangle is drawn around it using cv2.rectangle(image, (x,
y), (x + w, y + h), (0, 255, 0), 2). The rectangle coordinates are obtained from the
faces list returned by detectMultiScale. The (0, 255, 0) argument specifies the color
(green in this case), and the 2 argument specifies the thickness of the rectangle lines.
11. The image with the detected faces and rectangles is displayed using
cv2.imshow('Face Detection', image).
12. The script waits for a key press (cv2.waitKey(0)) before closing the window.
13. Finally, the window is closed using cv2.destroyAllWindows().
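A runnable sketch of this program, assembling the classifier, image path, and detection parameters quoted in the steps above:

import cv2

# Pre-trained Haar cascade shipped with OpenCV
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                     'haarcascade_frontalface_default.xml')

image = cv2.imread('image/face.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces in the grayscale image
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(30, 30))

# Draw a green rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow('Face Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()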
THANK YOU
Visit our Official Website: https://searchcreators.org/