
1

ADAS

Mohamed Mostafa Hasan


Mohamed Magdy Ibrahim
Mohamed Hosam Mohamed
Mohamed Abd-Wahab Ali
Ahmed Mohamed Abd El-Aleem
Mohamed Hasan Abd El-Rahman
Ahmed Nasser Ibrahim
Omar Yasser El-Hossiny
Mahmoud Mohamed Mahmoud
Hammad Saleh Soliman

Supervisor: Dr. Mohamed Khairy


Department of Electrical Engineering
Faculty of Engineering, Suez Canal University

July 2024

2
Contents
Abstract 4

Ch1. Introduction 5
1.1 Introduction 6
1.2 Problem Statement 7
1.3 Objectives 10
1.4 Thesis Outline 12

Ch 2. STM32 15
ARM architecture 16
Cortex-A series 18
Nested Vectored Interrupt Controller (NVIC) 20
STM32F103C8T6 Microcontroller 25
GPIO of STM32F103C8T6 28
Timers in STM32F103C8T6 37
I2C in STM32F103C8T6 49

Ch 3. AI Models 55
1. Speed and hump sign detection 56
2. Traffic sign Detection 70
3. Lane Detection 79

Ch4. Mobile Application & Design 93


1) How is the Splash screen built? 95
2) How are the Login and Register screens built? 97
3) Home Screen: 110

Ch5. Testing 127


1. Introduction 128
2. Hardware 128
2.1 COMPONENTS 128
3. Software Components 130
3.1 LIBRARIES USED 130
3.2 CODE EXPLANATION 131
4. Workflow of the Process 139

Ch6. Linux Customization 141


Introduction 142
2. Toolchain Overview 143
3. Setting Up the Toolchain 143
5. Understanding the Toolchain 144
6. Cross Compiling Applications 145
7. Best Practices for Workflow Organization 146
8. Example Workflow 147
9. Troubleshooting 147
Bootloader 148
Overview 148
Bootloader Stages 148
System on Chip (SOC) 149
Operation 149
Kernel Development Documentation 154
2. Compiling the Device Tree 157
Root Filesystem Documentation for Raspberry Pi 4 157
Creating a Custom Root Filesystem with BusyBox 159
Maintaining the Root Filesystem 160
Conclusion 161

3
Abstract
Greetings from the start of a new era in automotive technology
and safety. The “Advanced Driving Assistance System (ADAS)” is a
shining example of innovation in the automobile sector, and it is
the product of the hardworking students of Suez Canal
University's Faculty of Engineering’s Department of Electrical
Engineering.

In a world where driving safety is of utmost importance, ADAS


appears to be a vital remedy for the rising rate of car crashes.
Our proposal, which seeks to greatly lessen the frequency and
severity of frontal crashes, embodies the pinnacle of engineering
brilliance. We are creating more than just a system when we
incorporate advanced features like Forward Collision Warning
(FCW), Blind Spot Warning (BSW), and Lane Keeping Assist (LKA).
Together, these features act as a guardian angel for the driver.

Our voyage through this research has been to develop a low-cost,


real-time driving recording system using cutting-edge sensor
technology and microcontrollers. The combination of these
elements, along with cutting-edge algorithms and machine
learning, opens the door to safer driving by improving both driver
awareness and vehicle safety.

Turning the pages of this book will take you on a tour through the
painstaking process of creating ADAS. Every stage of the process,
from conception to completion, demonstrates our dedication to
safety, innovation, and societal advancement. We cordially
encourage you to explore the intricacies, difficulties, and
achievements that characterize our project.
This book should be seen as a tribute to the abilities of young
engineers who have the guts to dream large and put in endless
effort to make those dreams come true.

4
Ch1. Introduction

5
1.1 Introduction

How does ADAS work?


Automobiles are the foundation of the next generation of mobile-
connected devices, with rapid advances being made in
autonomous vehicles. Autonomous application solutions are
partitioned into various chips, called systems on a chip (SoCs).
These chips connect sensors to actuators through interfaces and
high-performance electronic controller units (ECUs).
Self-driving cars use a variety of these applications and
technologies to gain 360-degree vision, both near (in the vehicle’s
immediate vicinity) and far. That means hardware designs are
using more advanced process nodes to meet ever-higher
performance targets while simultaneously reducing demands on
power and footprint.

6
1.2 Problem Statement

Despite significant advancements in vehicle technology, road


traffic accidents remain a leading cause of injury and death
worldwide. Human error is a major contributing factor in the
majority of these accidents, highlighting the need for systems
that can assist drivers in making safer decisions and responding
more effectively to potential hazards. Traditional vehicle safety
features, while effective to some extent, are not sufficient to
address the complexities of modern driving environments that
include varied road conditions, unpredictable weather, and
diverse traffic patterns. Furthermore, existing Advanced Driver
Assistance Systems (ADAS) often suffer from limitations in sensor
accuracy, data processing capabilities, and user interface design,
which can hinder their effectiveness and adoption.

The problem is compounded by the growing demand for vehicles


that are not only safer but also provide enhanced driving comfort
and convenience. Additionally, ensuring the privacy and security
of data collected by ADAS remains a significant challenge, as does
the need to comply with evolving automotive safety standards
and regulations. There is also a pressing need to develop cost-
effective ADAS solutions that can be integrated into a wide range
of vehicle models to make advanced safety features accessible to
a broader segment of consumers. Addressing these issues
requires a comprehensive approach that includes the
development of robust algorithms, the optimization of sensor
fusion, and the creation of user-friendly interfaces, all while
ensuring compliance with safety standards and protecting data
privacy.

7
Moreover, as the automotive industry moves towards fully
autonomous driving, the integration of ADAS technologies serves
as a critical stepping stone. The transition from driver-assisted to
fully autonomous vehicles demands seamless interoperability
between various systems and high reliability under all driving
conditions. This progression necessitates rigorous research and
development to overcome technical challenges, such as real-time
processing of vast amounts of data, ensuring system redundancy,
and enhancing machine learning models for predictive analytics.
Thus, this project aims to not only address the immediate
challenges of current ADAS implementations but also to lay the
groundwork for the future of autonomous transportation,
ultimately contributing to safer roads and more efficient traffic
management.

The advancement of ADAS technology also brings forth ethical


and social considerations that need to be addressed. The
deployment of these systems raises questions about the
responsibility and accountability in cases of system failure or
accidents involving ADAS-equipped vehicles. Public acceptance
and trust in ADAS and autonomous driving technologies are
crucial for their widespread adoption. Therefore, this project will
also focus on developing transparent and explainable AI models
to ensure that the decision-making processes of ADAS are
understandable to users and regulators alike. Engaging with
stakeholders, including policymakers, industry experts, and the
general public, will be essential to address these concerns and
build confidence in the technology.

8
Optimally combining sensor data:
- Autonomous and ADAS vehicles require highly accurate sensing, with both
low false positive and false negative rates. Cost is also a factor, so these
requirements cannot generally be achieved with a single type of sensor;
multiple types of sensors need to be used together. Joining the sensor outputs
is a challenge addressed by a sensor fusion system; however, the few solutions
available on the market generally fail to combine the sensor streams in a way
that accounts for the strengths and weaknesses of the different sensors.

Ability for a vehicle to monitor and react to collision situations from the rear:
– Forward collision warning systems are the more common ones. A
front-facing camera monitors the traffic ahead and reacts or generates
warnings for the driver to respond to. But if a vehicle without a warning
system is approaching on a collision path from the rear, there is nothing the
driver can do. This is where the concept of a rear-facing collision warning
system becomes relevant. The need for this has come up most in the
commercial truck and bus domains.

Low-cost electrification components:
– Can low-cost inverters, motors, DC-DC converters, and batteries be
produced for off-highway automotive applications? Today these are normally
sourced from other countries, such as China and Taiwan.

9
1.3 Objectives
The primary objective of this thesis is to enhance vehicle safety
by developing and integrating Advanced Driver Assistance
Systems (ADAS) technologies that reduce the likelihood of
accidents through real-time alerts and automated interventions
for drivers. Another significant objective is to improve driver
assistance by creating and implementing features such as lane-
keeping assistance, adaptive cruise control, and automated
parking, thus reducing driver workload and improving
convenience. This project aims to investigate and optimize the
fusion of data from multiple sensors, including cameras, LiDAR,
and radar, to enhance the accuracy and reliability of ADAS
functionalities. Additionally, robust algorithms for object
detection, recognition, and classification will be designed and
validated to operate effectively under diverse environmental
conditions and complex driving scenarios.

Enhancing the user interface and experience of ADAS


technologies is another critical objective, ensuring intuitive and
effective interaction between the driver and the system. The
performance of ADAS components and the overall system will be
extensively tested and evaluated under varied real-world driving
conditions to ensure reliability and performance. This thesis also
aims to ensure that the developed ADAS solutions comply with
current automotive safety standards and regulations while
contributing to the development of future standards. Addressing
data privacy and security is paramount, and measures will be
implemented to protect the data collected and processed by
ADAS systems. Exploring the integration of ADAS technologies as
foundational elements for fully autonomous driving systems is
another key objective, bridging the gap between assisted and
autonomous driving. Finally, developing cost-effective ADAS
solutions to make advanced driver assistance technologies
accessible to a broader range of vehicle models and consumers is

10
an important objective of this project.

In addition to these core objectives, the project aims to advance


the field of machine learning and artificial intelligence as applied
to ADAS. By developing advanced predictive models and
enhancing real-time data processing capabilities, the project
seeks to improve the responsiveness and adaptability of ADAS
technologies. This includes creating algorithms that can learn and
adapt to individual driver behaviors and preferences, thereby
providing a more personalized driving experience. Furthermore,
the project will explore the use of deep learning techniques to
enhance the accuracy of object detection and classification,
particularly in challenging conditions such as low light or adverse
weather.

11
1.4 Thesis Outline

Chapter 1: Introduction
The introductory chapter sets the stage for the thesis by
providing an overview of the motivation, objectives, and
significance of the project. It outlines the challenges in the
current state of vehicle safety and driver assistance technologies
and explains the need for advanced ADAS solutions. The chapter
also introduces the key research questions and objectives that
the thesis aims to address.

Chapter 2: STM32
This chapter provides an in-depth exploration of the STM32
microcontroller used in the ADAS project. It covers the
architecture and features of the STM32, explaining why it was
selected for this project. The chapter details the integration of
the STM32 with various sensors and the role it plays in processing
data for ADAS functionalities. Additionally, it discusses the
development environment, programming techniques, and
challenges encountered during the implementation.

Chapter 3: AI Models
This chapter focuses on the development and implementation of
AI models used in the ADAS project. It covers the design and
training of models for lane detection, speed sign detection, and
lights detection. The chapter discusses the datasets used, the
preprocessing techniques applied, and the evaluation metrics
employed to assess model performance. Additionally, it provides
insights into the challenges faced and solutions devised during
the development of these AI models.

12
Chapter 4: Mobile Application Design and Development
This chapter presents the design and development of the mobile
application that interfaces with the ADAS system. It discusses the
use of Figma for designing the app's user interface and the
development process for creating a seamless user experience.
The chapter also explains how the mobile app interacts with the
ADAS system, providing real-time alerts and information to the
driver.

Chapter 5: Testing
Chapter 5 presents testing of the ADAS system. It includes the
methodologies used for testing the system under various
conditions, the metrics for assessing performance, and the
results obtained from both simulated and real-world
environments. The chapter discusses the system's effectiveness
in improving vehicle safety and driver assistance, as well as its
compliance with relevant standards and regulations.

Chapter 6: Linux Customization


This chapter delves into the customization of the Linux operating
system for the ADAS project. It covers the selection and
configuration of the Linux distribution, kernel customization, and
the optimization of system performance to meet the specific
needs of ADAS applications. The chapter also discusses the
development and integration of custom drivers, security
enhancements, and the implementation of real-time processing
capabilities.

13
14
Ch 2. STM32

15
ARM architecture
ARM architecture stands out as a pivotal force in modern computing due to
its unique blend of efficiency, scalability, and versatility. Its RISC design
philosophy ensures high performance and low power consumption, making
ARM processors ideal for a wide range of applications, from smartphones
and tablets to embedded systems and high-performance servers. The
extensive ecosystem and widespread adoption by major tech companies
underscore ARM's critical role in the evolution of digital technology. As
ARM continues to evolve, with advancements seen in the latest ARMv9, it
is poised to further cement its dominance in both existing and emerging
markets, driving innovation and efficiency in the tech industry.

ARM Cortex types


The ARM Cortex series is divided into several types, each optimized for
different application needs ranging from microcontrollers and real-time
systems to high-performance applications. Here’s a detailed breakdown of
the various ARM Cortex types:

Cortex-M series
The ARM Cortex-M comprises a collection of 32-bit RISC ARM processor
cores licensed by ARM Limited. These cores are specifically designed for
cost-effective and energy-efficient integrated circuits, widely used in
billions of consumer devices. While commonly found in microcontroller
chips, they are also integrated into various other chip types. The Cortex-M
family encompasses Cortex-M0, Cortex-M0+, Cortex-M1, Cortex-M3,
Cortex-M4, Cortex-M7, Cortex-M23, Cortex-M33, Cortex-M35P, Cortex-
M52, Cortex-M55, and Cortex-M85. Some of these cores offer a floating-
point unit (FPU) option, available for Cortex-M4, M7, M33, M35P, M52,
M55, and M85 cores, denoted as "Cortex-MxF" when the FPU is
integrated.

16
Cortex-M3
Architecture and Classification:
• Microarchitecture: ARMv7-M
• Instruction Set: Thumb-1, Thumb-2, Saturated (some), Divide
Key Features:
• ARMv7-M Architecture: Provides enhanced performance and
efficiency.
• Pipeline: 3-stage pipeline with branch speculation.
• Instruction Set Compatibility:
o Thumb-1 (entire).
o Thumb-2 (entire).
o Saturated arithmetic support.
o 32-bit hardware integer multiply with 32-bit or 64-bit result
(signed or unsigned, with add or subtract after multiply).
o 32-bit hardware integer divide (2–12 cycles).
• Interrupt Handling:
o Supports 1 to 240 interrupts, plus Non-Maskable Interrupt
(NMI).
o 12 cycle interrupt latency.
• Integrated Sleep Modes: Power-saving modes for energy-efficient
operation.
• Silicon Options:
o Optional Memory Protection Unit (MPU) with up to 8 regions.
Chips Utilizing Cortex-M3 Core:
1. ABOV: AC33Mx128, AC33Mx064
2. Actel/Microsemi/Microchip: SmartFusion, SmartFusion 2 (FPGA)
3. Analog Devices: ADUCM360, ADUCM361, ADUCM3029
4. Holtek: HT32F
5. Infineon: TLE9860, TLE987x

17
6. Microchip (Atmel): SAM 3A, 3N, 3S, 3U, 3X
7. NXP: LPC1300, LPC1700, LPC1800
8. ON: Q32M210
9. Realtek: RTL8710
10. Silicon Labs: Precision32
11. Silicon Labs (Energy Micro): EFM32 Tiny, Gecko, Leopard, Giant
12. ST: STM32 F1, F2, L1, W
13. TDK-Micronas: HVC4223F
14. Texas Instruments: F28, LM3, TMS470, OMAP 4, SimpleLink Wireless MCUs (CC1310 Sub-GHz and CC2650 BLE+Zigbee+6LoWPAN)
15. Toshiba: TX03

Cortex-A series
The Cortex-A series, designed by ARM Holdings, offers high-performance,
power-efficient processors commonly used in smartphones, tablets, and
embedded systems. With scalability, multicore support, and ARMv8
architecture for 64-bit computing, they provide customization options and
integrate with other components like GPUs. Security features such as
TrustZone technology ensure data protection. Overall, Cortex-A processors
strike a balance between performance and efficiency, making them suitable
for various applications across industries.

1.Cortex-A5, A7, A8, A9:


• Key Features: Range from low to mid performance, used in a
variety of consumer electronics and embedded systems.
• Use Cases: Smartphones, tablets, low-power consumer electronics.
• Technical Specs: ARMv7-A architecture, 32-bit cores, varying
levels of performance and power efficiency.
2. Cortex-A15, A17:
• Key Features: Higher performance, suitable for more demanding
applications.

18
• Use Cases: Advanced smartphones, tablets, network devices.
• Technical Specs: ARMv7-A architecture, 32-bit cores, out-of-order
execution, high performance.
3. Cortex-A53, A55:
• Key Features: Based on ARMv8-A architecture, supporting 64-bit
computing.
• Use Cases: Modern smartphones, tablets, high-end embedded
systems.
• Technical Specs: ARMv8-A architecture, 32/64-bit cores,
big.LITTLE configuration support for energy efficiency.
4. Cortex-A57, A72, A73:
• Key Features: High performance for advanced computing and
multimedia applications.
• Use Cases: High-end smartphones, tablets, computing devices.
• Technical Specs: ARMv8-A architecture, 64-bit cores, improved
performance and power efficiency.
• The Raspberry Pi 4 used in this project is based on the Cortex-A72.
In our project, we have chosen to implement the Cortex-M3 core due to its
well-balanced performance and versatility in embedded systems. Below is
an overview of the key characteristics of the Cortex-M3 core:
Microarchitecture
• ARMv7-M: The Cortex-M3 core is based on the ARMv7-M
microarchitecture, offering a modern and efficient foundation for
embedded applications.
Instruction Set
• Thumb-1, Thumb-2: The Cortex-M3 core supports the Thumb
instruction set architecture, providing efficient code execution and
improved code density, which is essential for embedded systems with
limited memory resources.
Advanced Instruction Support
• Saturated Arithmetic: Efficient handling of signal processing tasks
ensuring predictable behavior in mathematical operations.

19
• Divide: Hardware support for integer division operations, enhancing
computational capabilities.
Integrated Sleep Modes
• Power-efficient sleep modes optimize energy consumption,
extending battery life in battery-powered devices and reducing
overall power consumption in energy-conscious applications.
Memory Protection and Security Features
• Optional Memory Protection Unit (MPU): Provides hardware
support for memory protection, allowing definition of memory access
permissions to enhance system security.
• TrustZone Technology: Hardware-enforced isolation between secure
and non-secure code and data is offered by the newer ARMv8-M cores
(such as the Cortex-M23/M33); the Cortex-M3 itself relies on the MPU
and its privileged/unprivileged execution modes for protection.

Interrupt Handling
• Supports a configurable number of interrupts (1 to 240), enabling
efficient event-driven programming and real-time responsiveness.
Wide Range of Supported Chips and Development Boards
• The Cortex-M3 core is implemented in various microcontrollers from
different manufacturers, providing flexibility in selecting the right
chip for our project. Popular development boards like the STM32
series from STMicroelectronics offer Cortex-M3-based options,
providing a convenient platform for prototyping and development.

Nested Vectored Interrupt Controller (NVIC)


The Cortex-M3 processor incorporates the Nested Vectored Interrupt
Controller (NVIC), a key component responsible for managing interrupts in
the system. The NVIC plays a crucial role in ensuring the timely handling
of interrupts, which is essential for real-time embedded applications. Here's
an overview of the NVIC and its features:

20
NVIC functional description
The NVIC supports up to 240 interrupts each with up to 256 levels of
priority. You can change the priority of an interrupt dynamically. The
NVIC and the processor core interface are closely
coupled, to enable low latency interrupt processing and efficient processing
of late arriving interrupts. The NVIC maintains knowledge of the stacked,
or nested, interrupts to enable tail-chaining of interrupts. You can only fully
access the NVIC from privileged mode, but you can cause interrupts to
enter a pending state in user mode if you enable this capability using the
Configuration Control Register. Any other user mode access causes a bus
fault. You can access all NVIC registers using byte, halfword, and word
accesses unless otherwise stated. NVIC registers are located within the
SCS. All NVIC registers and system debug registers are little-endian
regardless of the endianness state of the processor.
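
To make the register description above concrete, the short C sketch below shows how the NVIC is typically driven through the CMSIS core functions. It assumes the "stm32f10x.h" device header and the default startup file; the chosen interrupts (I2C1 event and TIM2 update), the priority values and the handler body are illustrative only, not the project's exact code.

#include "stm32f10x.h"

void nvic_setup(void)
{
    /* Priorities can be changed dynamically; a lower number is more urgent. */
    NVIC_SetPriority(I2C1_EV_IRQn, 1);
    NVIC_SetPriority(TIM2_IRQn, 3);

    NVIC_EnableIRQ(I2C1_EV_IRQn);
    NVIC_EnableIRQ(TIM2_IRQn);
}

/* The NVIC vectors to the handler named in the startup file; the peripheral's
 * own flag must still be cleared, otherwise a level interrupt re-enters the
 * pending state on return from the ISR. */
void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_UIF) {
        TIM2->SR &= ~TIM_SR_UIF;   /* clear the update interrupt flag */
        /* ... application work ... */
    }
}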

Level versus pulse interrupts


The processor supports both level and pulse interrupts. A level interrupt is
held asserted until it is cleared by the ISR accessing the device. A pulse
interrupt is a variant of an edge model. For level interrupts, if the signal is
not deasserted before the return from the interrupt routine, the interrupt
again enters the pending state and re-activates. This is particularly useful
for FIFO and buffer-based devices because it ensures that they drain either
by a single ISR or by repeated invocations, with no extra work. This means
that the device holds the signal in assert until the
device is empty. A pulse interrupt can be reasserted during the ISR so that
the interrupt can be in the pending state and active at the same time. The
application design must ensure that a second pulse does not arrive before
the first pulse is activated. The second entry to the pending state has no
effect because it is already in that state. However, if the interrupt is asserted
for at least one cycle, the NVIC latches the pend bit. When the ISR
activates, the pend bit is cleared. If the interrupt asserts again while it is
activated, it can latch the pend bit again. Pulse interrupts are mostly used
for external signals and for rate or repeat signals.

21
Software interfaces:

22
SysTick Timer in Cortex-M3
The SysTick timer is a core component of the Cortex-M3 processor,
designed to provide timing and scheduling capabilities in embedded
systems. It offers a straightforward and efficient way to generate periodic
interrupts and measure time intervals. Here's an overview of the SysTick
timer and its features:
Timer Functionality
• The SysTick timer is a 24-bit down-counter that decrements at a
configurable frequency, typically based on the processor clock
frequency. Once the counter reaches zero, it generates an interrupt,
allowing the processor to execute a specific interrupt service routine
(ISR).
System Tick Interrupt
• The interrupt generated by the SysTick timer is commonly referred to
as the "System Tick Interrupt." It serves various purposes, such as
implementing real-time operating system (RTOS) tick handlers,
scheduling tasks, measuring time intervals, and triggering periodic
system maintenance tasks.
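
A minimal sketch of the usual SysTick setup follows, assuming a 72 MHz core clock and the CMSIS SysTick_Config() helper; the 1 ms tick period and the delay helper are illustrative choices, not the project's exact code.

#include "stm32f10x.h"

volatile uint32_t tick_ms = 0;

void systick_init(void)
{
    /* 72 000 000 / 1000 = 72 000 counts -> one interrupt every 1 ms
     * (the value fits easily in the 24-bit reload register). */
    SysTick_Config(SystemCoreClock / 1000u);
}

void SysTick_Handler(void)
{
    tick_ms++;                        /* the "system tick" */
}

void delay_ms(uint32_t ms)
{
    uint32_t start = tick_ms;
    while ((tick_ms - start) < ms) {  /* busy-wait on the tick counter */
    }
}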

23
Software interfaces:

24
STM32F103C8T6 Microcontroller
The STM32F103xx medium-density performance line family incorporates
the high-performance Arm® Cortex®-M3 32-bit RISC core operating at a
72 MHz frequency, high-speed embedded memories (Flash memory up to
128 Kbytes and SRAM up to 20 Kbytes), and an extensive range of
enhanced I/Os and peripherals connected to two APB buses. All devices
offer two 12-bit ADCs, three general purpose 16-bit timers plus one PWM
timer, as well as standard and advanced communication interfaces: up to
two I2Cs and SPIs, three USARTs, a USB and a CAN.

The devices operate from a 2.0 to 3.6 V power supply. They are available in
both the –40 to +85°C temperature range and the –40 to +105 °C extended
temperature range. A comprehensive set of power-saving modes allows the
design of low-power applications.

The STM32F103xx medium-density performance line family includes


devices in six different package types: from 36 pins to 100 pins. Depending
on the device chosen, different sets of peripherals are included; the
description below gives an overview of the complete range of peripherals
proposed in this family.

These features make the STM32F103xx medium-density performance line


microcontroller family suitable for a wide range of applications such as
motor drives, application control, medical and handheld equipment, PC and
gaming peripherals, GPS platforms, industrial applications, PLCs, inverters,
printers, scanners, alarm systems, video intercoms, and HVACs.

25
Reset and clock control (RCC) of STM32F103C8T6

Clocks
Three different clock sources can be used to drive the system clock
(SYSCLK):
HSI oscillator clock
HSE oscillator clock
PLL clock
HSI clock
The HSI clock signal is generated from an internal 8 MHz RC oscillator and
can be used directly as a system clock or divided by 2 to be used as PLL input.
The HSI RC oscillator has the advantage of providing a clock source at low
cost (no external components). It also has a faster startup time than the HSE
crystal oscillator; however, even with calibration its frequency is less accurate
than an external crystal oscillator or ceramic resonator.

External crystal/ceramic resonator (HSE crystal)


The 4 to 16 MHz external oscillator has the advantage of producing a very
accurate rate on the main clock. On the STM32F103C8T6 board used here,
the HSE crystal is 8 MHz.

PLLs are used to derive different clock frequencies from an external crystal
or oscillator, providing the necessary clock signals for the CPU,
peripherals, and other system components. For example, the
STM32F103C8T6 microcontroller uses a PLL to generate its system clock
from an external 8 MHz crystal, which can be multiplied to achieve higher
clock frequencies, such as 72 MHz.

Software interfaces:

27
RCC configuration for this application:

This configuration selects the PLL as the clock source for the
microcontroller. The input to the PLL is the 8 MHz HSE, multiplied
by 9 to produce an output clock of 72 MHz. The AHB prescaler is 1, so
any peripheral connected to the AHB bus runs at 72 MHz. The
APB low-speed prescaler (APB1) is 4, so any peripheral connected to the
APB1 bus has an 18 MHz clock. The APB high-speed prescaler (APB2)
is 1, so any peripheral connected to the APB2 bus has a 72 MHz clock.
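
As an illustration of that clock tree, the register-level sketch below brings the system up to 72 MHz from the 8 MHz HSE. It assumes the CMSIS device header; the actual project used generated initialization code, so the helper function and its name are a hypothetical example.

#include "stm32f10x.h"

void rcc_config_72mhz(void)
{
    /* Start the 8 MHz external crystal and wait until it is stable. */
    RCC->CR |= RCC_CR_HSEON;
    while (!(RCC->CR & RCC_CR_HSERDY)) { }

    /* Two flash wait states are required above 48 MHz. */
    FLASH->ACR |= FLASH_ACR_LATENCY_2;

    /* AHB /1 (72 MHz), APB2 /1 (72 MHz), APB1 /4 (18 MHz). */
    RCC->CFGR |= RCC_CFGR_HPRE_DIV1 | RCC_CFGR_PPRE2_DIV1 | RCC_CFGR_PPRE1_DIV4;

    /* PLL source = HSE, multiplier = 9 -> 8 MHz x 9 = 72 MHz. */
    RCC->CFGR |= RCC_CFGR_PLLSRC | RCC_CFGR_PLLMULL9;
    RCC->CR   |= RCC_CR_PLLON;
    while (!(RCC->CR & RCC_CR_PLLRDY)) { }

    /* Switch SYSCLK to the PLL and wait for the switch to take effect. */
    RCC->CFGR |= RCC_CFGR_SW_PLL;
    while ((RCC->CFGR & RCC_CFGR_SWS) != RCC_CFGR_SWS_PLL) { }

    SystemCoreClockUpdate();   /* keep the CMSIS SystemCoreClock variable in sync */
}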

GPIO of STM32F103C8T6
The STM32F103C8T6 microcontroller features versatile General Purpose
Input/Output (GPIO) ports, providing extensive capabilities for interfacing
with various peripherals and external devices. Here’s a detailed overview of
the GPIO functionalities for this specific microcontroller:
GPIO Overview:
• Total GPIO Pins: 37 (arranged in ports A, B, and C)
• Port Availability:
o Port A: PA0 to PA15
o Port B: PB0 to PB15
o Port C: PC13 to PC15

28
Key Features:
1. Port Modes:
o Input Modes:
▪ Analog Input
▪ Floating Input (default)
▪ Input with pull-up
▪ Input with pull-down
o Output Modes:
▪ Push-Pull
▪ Open-Drain

29
Input configuration
When the I/O Port is programmed as Input:
➢ The Output Buffer is disabled
➢ The Schmitt Trigger Input is activated
➢ The weak pull-up and pull-down resistors are activated or not
depending on input configuration (pull-up, pull-down or floating):
➢ The data present on the I/O pin is sampled into the Input Data
register every APB2 clock cycle
➢ A read access to the Input Data register obtains the I/O State.

30
Output configuration
When the I/O Port is programmed as Output:
➢ The Output Buffer is enabled:
➢ Open Drain Mode: A “0” in the Output register activates the N-
MOS while a “1” in the Output register leaves the port in Hi-Z (the
P-MOS is never activated)
➢ Push-Pull Mode: A “0” in the Output register activates the N-MOS
while a “1” in the Output register activates the P-MOS
➢ The Schmitt Trigger Input is activated.
➢ The weak pull-up and pull-down resistors are disabled.
➢ The data present on the I/O pin is sampled into the Input Data
register every APB2 clock cycle
➢ A read access to the Input Data register gets the I/O state in open
drain mode
➢ A read access to the Output Data register gets the last written value in
Push-Pull mode

31
Alternate Function Modes:
When the I/O Port is programmed as Alternate Function:
➢ The Output Buffer is turned on in Open Drain or Push-Pull
configuration
➢ The Output Buffer is driven by the signal coming from the peripheral
(alternate function out)
➢ The Schmitt Trigger Input is activated
➢ The weak pull-up and pull-down resistors are disabled.
➢ The data present on the I/O pin is sampled into the Input Data
register every APB2 clock cycle
➢ A read access to the Input Data register gets the I/O state in open
drain mode
➢ A read access to the Output Data register gets the last written value in
Push-Pull mode

32
2. Speed Configuration:
o Output Speed: Configurable for 2 MHz, 10 MHz, and 50
MHz.
3. Interrupts:
o External interrupt/event lines are available for all GPIO pins,
supporting both edge and level-triggered interrupts.
4. Lock Mechanism:
o GPIO pins can be locked to prevent further changes to the
configuration.
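
The sketch below shows how the input and output modes described above map onto the F1-style CRL/CRH configuration registers. The pin choices (PC13 as a push-pull output, PA0 as a pull-up input) are only examples, and the CMSIS device header is assumed.

#include "stm32f10x.h"

void gpio_setup(void)
{
    /* GPIO ports are clocked from the APB2 bus. */
    RCC->APB2ENR |= RCC_APB2ENR_IOPAEN | RCC_APB2ENR_IOPCEN;

    /* PC13: general-purpose push-pull output, 2 MHz (MODE = 10, CNF = 00). */
    GPIOC->CRH &= ~(GPIO_CRH_MODE13 | GPIO_CRH_CNF13);
    GPIOC->CRH |=  GPIO_CRH_MODE13_1;

    /* PA0: input with pull-up (MODE = 00, CNF = 10, ODR bit selects pull-up). */
    GPIOA->CRL &= ~(GPIO_CRL_MODE0 | GPIO_CRL_CNF0);
    GPIOA->CRL |=  GPIO_CRL_CNF0_1;
    GPIOA->ODR |=  GPIO_ODR_ODR0;      /* 1 = pull-up, 0 = pull-down */

    GPIOC->BSRR = GPIO_BSRR_BR13;      /* drive PC13 low */
}

int read_button(void)
{
    /* The Input Data Register reflects the sampled pin state. */
    return (GPIOA->IDR & GPIO_IDR_IDR0) != 0;
}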

33
GPIO configurations for device peripherals:

GPIO configurations provided for the TIM1 and TIM8 advanced timers:

➢ Input Capture and Output Compare Channels: The GPIO pins


associated with these channels can be configured either for input
(floating for capture, push-pull for output) or output (push-pull).
➢ Complementary Output Channel: This indicates the availability of
complementary outputs for certain channels, which are configured as
alternate function push-pull.
➢ Break Input and External Trigger Input: These GPIO pins are
configured as input floating, suitable for receiving external signals
for break functionality and triggering the timer operations.

GPIO configurations provided for the general-purpose timers TIM2, TIM3,


TIM4, and TIM5:

➢ Input Capture and Output Compare Channels: The GPIO pins


associated with these channels can be configured either for input
(floating for capture, push-pull for output) or output (push-pull).
➢ External Trigger Input: This GPIO pin is configured as input
floating, suitable for receiving external signals to trigger the timer
operations.

34
GPIO configurations provided for the I2C:

➢ Optimal Configuration for I2C: The GPIO pins designated for I2C
clock (SCL) and data (SDA) lines are set to alternate function open
drain.
➢ Open Drain Benefit: This configuration is ideal for I2C communication
because it allows multiple devices to share the same bus without causing
conflicts. Each device can pull the line low, while a pull-up resistor
ensures the line goes high when no device is actively pulling it low.
➢ Alternate Function Capability: These GPIO pins can alternate
between general-purpose input/output and the specialized function
required for I2C communication, enhancing flexibility in device
utilization.
➢ Application-Specific Design: Implementing these configurations
ensures reliable and efficient communication between your device and
other I2C-compatible peripherals, such as sensors, EEPROMs, or other
integrated circuits.

Power Considerations:
• Each GPIO pin can sink or source a specific amount of current, and
care should be taken not to exceed these ratings to avoid damage.
• The power consumption of GPIO pins in different modes (input,
output, alternate function) should be considered, especially in low-
power applications.

35
Software interfaces:

36
Timers in STM32F103C8T6
The STM32F103C8T6 contains one advanced-control timer (TIM1) and the
general-purpose timers TIM2, TIM3, and TIM4.
Timers consist of a 16-bit auto-reload counter driven by a programmable
pre-scaler. It may be used for a variety of purposes, including measuring the
pulse lengths of input signals (input capture) or generating output
waveforms (output compare, PWM, complementary PWM with dead-time
insertion). Pulse lengths and waveform periods can be modulated from a
few microseconds to several
milliseconds using the timer pre-scaler and the RCC clock controller pre-
scalers.
The advanced-control (TIM1 and TIM8) and general-purpose (TIMx)
timers are completely independent, and do not share any resources.

TIMx main features


General-purpose TIMx timer features include:
➢ 16-bit up, down, up/down auto-reload counter.
➢ 16-bit programmable prescaler used to divide (also "on the fly") the
counter clock frequency by any factor between 1 and 65536.
➢ Up to 4 independent channels for:
– Input capture
– Output compare
– PWM generation (edge- and center-aligned modes)
– One-pulse mode output
➢ Synchronization circuit to control the timer with external signals and to
interconnect several timers.
➢ Interrupt/DMA generation on the following events:
– Update: counter overflow/underflow, counter initialization (by software or
internal/external trigger)
– Trigger event (counter start, stop, initialization or count by
internal/external trigger)
– Input capture
– Output compare
➢ Supports incremental (quadrature) encoder and Hall-sensor circuitry for
positioning purposes
➢ Trigger input for external clock or cycle-by-cycle current management
37
Timer-x block diagram

Timer modes

➢ Up counting
➢ Down counting
➢ Center-aligned mode (up/down counting)

38
Up counting mode
In up counting mode, the counter counts from 0 to the auto-reload value
(content of the TIMx_ARR register), then restarts from 0 and generates a
counter overflow event. When an update event occurs, all the registers are
updated and the update flag (UIF bit in TIMx_SR register) is set (depending
on the URS bit):
The buffer of the prescaler is reloaded with the preload value (content of
the TIMx_PSC register)
The auto-reload shadow register is updated with the preload value
(TIMx_ARR)
The following figures show some examples of the counter behavior for
different clock frequencies when TIMx_ARR=0x36.
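
As a worked example of the relation update rate = f_CK / ((PSC + 1) × (ARR + 1)), the sketch below sets TIM2 for a 1 s update interrupt. With the APB1 prescaler of 4 chosen earlier, the timer kernel clock is doubled to 2 × 18 MHz = 36 MHz; register and IRQ names come from the CMSIS device header, and the values are illustrative rather than the project's exact ones.

#include "stm32f10x.h"

void tim2_init_1s(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;   /* clock the timer            */

    TIM2->PSC  = 3600 - 1;                /* 36 MHz / 3600 = 10 kHz     */
    TIM2->ARR  = 10000 - 1;               /* 10 kHz / 10000 = 1 Hz      */
    TIM2->EGR  = TIM_EGR_UG;              /* load PSC/ARR immediately   */
    TIM2->DIER |= TIM_DIER_UIE;           /* interrupt on update event  */
    TIM2->CR1  |= TIM_CR1_CEN;            /* start counting up          */

    NVIC_EnableIRQ(TIM2_IRQn);
}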

39
Down counting mode
In downcounting mode, the counter counts from the auto-reload value
(content of the TIMx_ARR register) down to 0, then restarts from the auto-
reload value and generates a counter underflow event. When an update
event occurs, all the registers are updated and the update flag (UIF bit in
TIMx_SR register) is set (depending on the URS bit):
The buffer of the prescaler is reloaded with the preload value (content of
the TIMx_PSC register).
The auto-reload active register is updated with the preload value (content
of the TIMx_ARR register). Note that the auto-reload is updated before the
counter is reloaded, so that the next period is the expected one.
The following figures show some examples of the counter behavior for
different clock frequencies when TIMx_ARR=0x36.

40
Center-aligned mode (up/down counting)
In center-aligned mode, the counter counts from 0 to the auto-reload value
(content of the TIMx_ARR register) – 1, generates a counter overflow
event, then counts from the auto reload value down to 1 and generates a
counter underflow event. Then it restarts counting from 0. When an update
event occurs, all the registers are updated and the update flag (UIF bit in
TIMx_SR register) is set (depending on the URS bit):
The buffer of the pre-scaler is reloaded with the preload value (content of
the TIMx_PSC register).
The auto-reload active register is updated with the preload value (content
of the
TIMx_ARR register). Note that if the update source is a counter overflow,
the auto reload is updated before the counter is reloaded, so that the next
period is the expected one (the counter is loaded with the new value).
The following figures show some examples of the counter behavior for
different clock frequencies.

41
Capture/compare channels
Each Capture/Compare channel (see Figure 125) is built around a
capture/compare register (including a shadow register), an input stage for
capture (with digital filter, multiplexing and pre-scaler) and an output stage
(with comparator and output control). The input stage samples the
corresponding TIx input to generate a filtered signal TIxF. Then, an edge
detector with polarity selection generates a signal (TIxFPx) which can be
used as trigger input by the slave mode controller or as the capture
command. It is pre-scaled before the capture register (ICxPS).

42
The capture/compare block is made of one preload register and one shadow
register. Write and read always access the preload register.
In capture mode, captures are actually done in the shadow register, which is
copied into the preload register.
In compare mode, the content of the preload register is copied into the
shadow register which is compared to the counter.

Input capture mode


In Input capture mode, the Capture/Compare registers (TIMx_CCRx) are
used to latch the value of the counter after a transition detected by the
corresponding ICx signal. When a capture occurs, the corresponding
CCXIF flag (TIMx_SR register) is set and an interrupt or a DMA request
can be sent if they are enabled. If a capture occurs while the CCxIF flag
was already high, then the over-capture flag CCxOF (TIMx_SR register) is
set. CCxIF can be cleared by software by writing it to 0 or by reading the
captured data stored in the TIMx_CCRx register. CCxOF is cleared when
written to 0.

43
The following example shows how to capture the counter value in
TIMx_CCR1 when TI1 input rises. To do this, use the following procedure:
Select the active input: TIMx_CCR1 must be linked to the TI1 input, so
write the CC1S bits to 01 in the TIMx_CCMR1 register. As soon as CC1S
becomes different from 00, the channel is configured in input and the
TIMx_CCR1 register becomes read-only.
Program the needed input filter duration with respect to the signal
connected to the timer (by programming the ICxF bits in the
TIMx_CCMRx register if the input is one of the TIx inputs). Let’s imagine
that, when toggling, the input signal is not stable for at most five
internal clock cycles. We must program a filter duration longer than these
five clock cycles. We can validate a transition on TI1 when eight
consecutive samples with the new level have been detected (sampled at
fDTS frequency). Then write IC1F bits to 0011 in the TIMx_CCMR1
register.
Select the edge of the active transition on the TI1 channel by writing the
CC1P bit to 0 in the TIMx_CCER register (rising edge in this case).
Program the input prescaler. In our example, we wish the capture to be
performed at each valid transition, so the prescaler is disabled (write IC1PS
bits to 00 in the TIMx_CCMR1 register).
Enable capture from the counter into the capture register by setting the
CC1E bit in the TIMx_CCER register.
If needed, enable the related interrupt request by setting the CC1IE bit in
the
TIMx_DIER register, and/or the DMA request by setting the CC1DE bit in
the
TIMx_DIER register. When an input capture occurs:
The TIMx_CCR1 register gets the value of the counter on the active
transition.
CC1IF flag is set (interrupt flag). CC1OF is also set if at least two
consecutive captures occurred whereas the flag was not cleared.
An interrupt is generated depending on the CC1IE bit.
A DMA request is generated depending on the CC1DE bit.
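
A register-level C sketch of the procedure above is given below for TIM2 channel 1; the CMSIS device header is assumed, and the clock and pin setup (TI1 on PA0) is omitted for brevity.

#include "stm32f10x.h"

void tim2_ic1_init(void)
{
    /* CC1S = 01: channel 1 configured as input, mapped on TI1. */
    TIM2->CCMR1 = (TIM2->CCMR1 & ~TIM_CCMR1_CC1S) | TIM_CCMR1_CC1S_0;

    /* IC1F = 0011: a transition is valid after 8 consecutive samples. */
    TIM2->CCMR1 |= TIM_CCMR1_IC1F_0 | TIM_CCMR1_IC1F_1;

    /* IC1PSC = 00 (capture every edge), CC1P = 0 (rising edge). */
    TIM2->CCMR1 &= ~TIM_CCMR1_IC1PSC;
    TIM2->CCER  &= ~TIM_CCER_CC1P;

    /* Enable the capture, its interrupt, and finally the counter. */
    TIM2->CCER |= TIM_CCER_CC1E;
    TIM2->DIER |= TIM_DIER_CC1IE;
    TIM2->CR1  |= TIM_CR1_CEN;
    NVIC_EnableIRQ(TIM2_IRQn);
}

void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_CC1IF) {
        /* Reading CCR1 returns the latched counter value and clears CC1IF. */
        uint16_t captured = (uint16_t)TIM2->CCR1;
        (void)captured;
    }
}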

44
PWM input mode
This mode is a particular case of input capture mode. The procedure is the
same except:
Two ICx signals are mapped on the same TIx input.
These 2 ICx signals are active on edges with opposite polarity.
One of the two TIxFP signals is selected as trigger input and the slave
mode controller is configured in reset mode.
For example, the user can measure the period (in TIMx_CCR1 register) and
the duty cycle (in TIMx_CCR2 register) of the PWM applied on TI1 using
the following procedure (depending on CK_INT frequency and prescaler
value):
Select the active input for TIMx_CCR1: write the CC1S bits to 01 in the
TIMx_CCMR1 register (TI1 selected).
Select the active polarity for TI1FP1 (used both for capture in
TIMx_CCR1 and counter clear): write the CC1P to ‘0’ (active on rising
edge).
Select the active input for TIMx_CCR2: write the CC2S bits to 10 in the
TIMx_CCMR1 register (TI1 selected).
Select the active polarity for TI1FP2 (used for capture in TIMx_CCR2):
write the CC2P bit to ‘1’ (active on falling edge).
Select the valid trigger input: write the TS bits to 101 in the
TIMx_SMCR register (TI1FP1 selected).
Configure the slave mode controller in reset mode: write the SMS bits to
100 in the TIMx_SMCR register.
Enable the captures: write the CC1E and CC2E bits to ‘1 in the
TIMx_CCER register.
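
The same steps expressed in C are sketched below for TIM2, again assuming the CMSIS device header; after one full input period, CCR1 holds the period and CCR2 the high time, so the duty cycle is approximately CCR2 / (CCR1 + 1). The values are illustrative, not the project's exact configuration.

#include "stm32f10x.h"

void tim2_pwm_input_init(void)
{
    /* CC1S = 01 (IC1 on TI1) and CC2S = 10 (IC2 also on TI1). */
    TIM2->CCMR1 = TIM_CCMR1_CC1S_0 | TIM_CCMR1_CC2S_1;

    /* TI1FP1 rising edge (CC1P = 0), TI1FP2 falling edge (CC2P = 1). */
    TIM2->CCER &= ~TIM_CCER_CC1P;
    TIM2->CCER |=  TIM_CCER_CC2P;

    /* TS = 101: TI1FP1 is the trigger; SMS = 100: slave controller in reset mode. */
    TIM2->SMCR = TIM_SMCR_TS_2 | TIM_SMCR_TS_0 | TIM_SMCR_SMS_2;

    /* Enable both captures and start the counter. */
    TIM2->CCER |= TIM_CCER_CC1E | TIM_CCER_CC2E;
    TIM2->CR1  |= TIM_CR1_CEN;
}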

45
PWM mode
Pulse width modulation mode allows generating a signal with a frequency
determined by the value of the TIMx_ARR register and a duty cycle
determined by the value of the TIMx_CCRx register.
The PWM mode can be selected independently on each channel (one PWM
per OCx output) by writing 110 (PWM mode 1) or ‘111 (PWM mode 2) in
the OCxM bits in the TIMx_CCMRx register. The user must enable the
corresponding preload register by setting the OCxPE bit in the
TIMx_CCMRx register, and eventually the auto-reload preload register by
setting the ARPE bit in the TIMx_CR1 register.
As the preload registers are transferred to the shadow registers only when
an update event occurs, before starting the counter, the user has to initialize
all the registers by setting the UG bit in the TIMx_EGR register.
OCx polarity is software programmable using the CCxP bit in the
TIMx_CCER register. It can be programmed as active high or active low.
OCx output is enabled by the CCxE bit in the TIMx_CCER register. Refer
to the TIMx_CCERx register description for more details. In PWM mode (1
or 2), TIMx_CNT and TIMx_CCRx are always compared to determine
whether TIMx_CCRx ≤ TIMx_CNT or TIMx_CNT ≤ TIMx_CCRx
(depending on the direction of the counter). However, to comply with the
ETRF (OCREF can be cleared by an external event through the ETR signal
until the next PWM period), the OCREF signal is asserted only:
When the result of the comparison changes, or
When the output compare mode (OCxM bits in TIMx_CCMRx register)
switches from the “frozen” configuration (no comparison, OCxM=‘000) to
one of the PWM modes (OCxM=‘110 or ‘111). This forces the PWM by
software while the timer is running.

PWM edge-aligned mode


Up counting configuration
Up counting is active when the DIR bit in the TIMx_CR1 register is low.
Refer to Up-counting mode. In the following example, we consider PWM
mode 1. The reference PWM signal OCxREF is high as long as TIMx_CNT
< TIMx_CCRx; otherwise it becomes low. If the compare value in TIMx_CCRx is
greater than the auto-reload value (in TIMx_ARR) then OCxREF is held at
'1'. If the compare value is 0 then OCxREF is held at '0'. Figure 130 shows
some edge-aligned PWM waveforms in an example where TIMx_ARR=8.
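
The following sketch configures edge-aligned PWM mode 1 on TIM3 channel 1 according to the description above. The 1 kHz / 25 % numbers are illustrative; the CMSIS device header is assumed, a 36 MHz timer clock is taken from the earlier APB1 setting, and the alternate-function pin setup is omitted.

#include "stm32f10x.h"

void tim3_pwm_init(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM3EN;

    TIM3->PSC  = 36 - 1;                    /* 36 MHz / 36 = 1 MHz tick     */
    TIM3->ARR  = 1000 - 1;                  /* 1 MHz / 1000 = 1 kHz PWM     */
    TIM3->CCR1 = 250;                       /* 250 / 1000 = 25 % duty cycle */

    /* OC1M = 110 (PWM mode 1) with the CCR1 preload enabled. */
    TIM3->CCMR1 |= TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1 | TIM_CCMR1_OC1PE;
    TIM3->CR1   |= TIM_CR1_ARPE;            /* auto-reload preload          */
    TIM3->EGR    = TIM_EGR_UG;              /* transfer the preloads now    */

    TIM3->CCER |= TIM_CCER_CC1E;            /* enable the OC1 output        */
    TIM3->CR1  |= TIM_CR1_CEN;              /* start the counter            */
}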

46
Software interfaces:

47
48
I2C in STM32F103C8T6
I2C (inter-integrated circuit) bus Interface serves as an interface between
the microcontroller and the serial I2C bus. It provides multimaster
capability, and controls all I2C bus-specific sequencing, protocol,
arbitration and timing. It supports the standard mode (Sm, up to 100 kHz)
and the fast mode (Fm, up to 400 kHz).
It may be used for a variety of purposes, including CRC generation and
verification, SMBus (system management bus) and PMBus (power
management bus).
Depending on specific device implementation DMA capability can be
available to reduce CPU load.

I2C main features


➢ Parallel-bus/I2C protocol converter
➢ Multimaster capability: the same interface can act as Master or Slave
➢ I2C Master features:
o Clock generation
o Start and Stop generation
➢ I2C Slave features:
o Programmable I2C Address detection
o Dual Addressing Capability to acknowledge 2 slave addresses
o Stop bit detection
➢ Generation and detection of 7-bit/10-bit addressing and General Call
➢ Supports different communication speeds:
o Standard Speed (up to 100 kHz)
o Fast Speed (up to 400 kHz)
➢ Analog noise filter
➢ Status flags:
o Transmitter/Receiver mode flag

Mode selection
The interface can operate in one of the four following modes:
➢ Slave transmitter

49
➢ Slave receiver
➢ Master transmitter
➢ Master receiver
By default, it operates in slave mode. The interface automatically switches
from slave to master, after it generates a START condition and from master
to slave, if an arbitration loss or a Stop generation occurs, allowing
multimaster capability.

I2C slave mode


By default the I2C interface operates in Slave mode. To switch from default
Slave mode to Master mode a Start condition generation is needed.
The peripheral input clock must be programmed in the I2C_CR2 register in
order to generate correct timings. The peripheral input clock frequency
must be at least:

50
➢ 2 MHz in Sm mode
➢ 4 MHz in Fm mode
As soon as a start condition is detected, the address is received from the
SDA line and sent to the shift register. Then it is compared with the address
of the interface (OAR1) and with OAR2 (if ENDUAL=1) or the General
Call address (if ENGC = 1).
Header or address not matched: the interface ignores it and waits for
another Start condition.
Header matched (10-bit mode only): the interface generates an
acknowledge pulse if the ACK bit is set and waits for the 8-bit slave
address. Address matched: the interface generates in sequence:
➢ An acknowledge pulse if the ACK bit is set
➢ The ADDR bit is set by hardware and an interrupt is generated if the
ITEVFEN bit is set.
➢ If ENDUAL=1, the software has to read the DUALF bit to check
which slave address has been acknowledged.
In 10-bit mode, after receiving the address sequence the slave is always in
Receiver mode. It will enter Transmitter mode on receiving a repeated Start
condition followed by the header sequence with matching address bits and
the least significant bit set (11110xx1). The TRA bit indicates whether the
slave is in Receiver or Transmitter mode.

Slave transmitter
Following the address reception and after clearing ADDR, the slave sends
bytes from the DR register to the SDA line via the internal shift register.
The slave stretches SCL low until ADDR is cleared and DR filled with the
data to be sent .
When the acknowledge pulse is received:
➢ The TxE bit is set by hardware with an interrupt if the ITEVFEN and
the ITBUFEN bits are set.
If TxE is set and some data were not written in the I2C_DR register before
the end of the next data transmission, the BTF bit is set and the interface
waits until BTF is cleared by a read to I2C_SR1 followed by a write to the
I2C_DR register, stretching SCL low.

51
Slave receiver
Following the address reception and after clearing ADDR, the slave
receives bytes from the SDA line into the DR register via the internal shift
register. After each byte the interface generates in sequence:
➢ An acknowledge pulse if the ACK bit is set.
➢ The RxNE bit is set by hardware and an interrupt is generated if the
ITEVFEN and ITBUFEN bit is set.
If RxNE is set and the data in the DR register is not read before the end of
the next data reception, the BTF bit is set and the interface waits until BTF
is cleared by a read from I2C_SR1 followed by a read from the I2C_DR
register, stretching SCL low.

52
➢ The EV1 event stretches SCL low until the end of the corresponding
software sequence.
➢ The EV2 software sequence must be completed before the end of the
current byte transfer
➢ After checking the SR1 register content, the user should perform the
complete clearing sequence for each flag found set.
Thus, for ADDR and STOPF flags, the following sequence is required
inside the I2C interrupt routine: READ SR1
if (ADDR == 1) {READ SR1; READ SR2} if (STOPF == 1) {READ
SR1; WRITE CR1}
The purpose is to make sure that both ADDR and STOPF flags are cleared if both
are found set.
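
Rendered as C, the clearing sequence above looks roughly like the event interrupt handler below (CMSIS device header assumed; the handler name is the one expected by the standard startup file, and the data handling is only a placeholder).

#include "stm32f10x.h"

void I2C1_EV_IRQHandler(void)
{
    uint32_t sr1 = I2C1->SR1;             /* READ SR1 */

    if (sr1 & I2C_SR1_ADDR) {             /* own address matched */
        (void)I2C1->SR1;                  /* READ SR1 ...             */
        (void)I2C1->SR2;                  /* ... READ SR2 clears ADDR */
    }

    if (sr1 & I2C_SR1_STOPF) {            /* STOP detected in slave mode */
        (void)I2C1->SR1;                  /* READ SR1 ...                */
        I2C1->CR1 |= I2C_CR1_PE;          /* ... WRITE CR1 clears STOPF  */
    }

    if (sr1 & I2C_SR1_RXNE) {             /* a data byte has arrived */
        uint8_t data = (uint8_t)I2C1->DR;
        (void)data;                       /* hand off to the application */
    }
}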
Software interfaces:

53
I2C1 ISR

54
Ch 3. AI Models

55
1. Speed and hump sign detection

Model used (YOLOv5)


YOLO (You Only Look Once) is a state-of-the-art, real-time object detection
model. The architecture is very fast and is able to process images at 45 frames
per second. Let's start with an overview!

Overview:

Other modern detection models like DPM (Deformable Parts Model) or
R-CNN re-purpose classifiers to perform detection. These systems use a classifier
for an object and evaluate it at various locations and scales on a test image.
This gives the models complex pipelines which are slow and hard to
optimize.

The authors of YOLO re-framed the object detection problem as a single
regression, straight from image pixels to bounding box coordinates and object class
probabilities. The architecture used is a single convolutional network:

YOLO Architecture: Simultaneously predicts bounding boxes and class


probabilities for these boxes

56
The network has 24 convolutional layers followed by 2 fully connected layers.
Instead of inception modules (used by GoogLeNet), 1 × 1 reduction layers are
used followed by 3 × 3 convolutional layers. Fast YOLO uses a neural network
with fewer convolutional layers (9 instead of 24) and fewer filters in those
layers. Other than the size of the network, all training and testing parameters
are the same between YOLO and Fast YOLO.
YOLO trains on full images and directly optimizes detection performance,
making it a unified model.

YOLO Benefits

1. It’s extremely fast

2. Global reasoning about the image when making predictions

3. Model learns generalizable representation of objects

Limitations of YOLO

YOLO is a fast and simple model but it does have its own limitations:

1. The model imposes a spatial constraint which limits the number of nearby
objects the model can predict.

2. Model struggles with small objects which appear in groups, like flocks of birds.

3. Since the model learns bounding boxes from the training annotations, it struggles
to generalize to objects in new or unusual aspect ratios or configurations.

57
4. During training, the loss function only approximates detection performance; it
treats errors in small bounding boxes the same as errors in large bounding
boxes.

Dataset
The dataset comprises 2,457 images, featuring 17 different speed limit
signs and a hump sign, totaling 18 distinct classes. Of these, 1,965 samples
are designated for training, while the remaining 492 samples are reserved
for evaluating the model's generalization and performance on unseen
data.

16 plotted samples with their corresponding boundary box and label

58
Code

Extracting the YOLO label text files from the XML annotation files:
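
The original listing is reproduced as screenshots in the printed report; a minimal Python sketch of the same idea is shown here. It converts one Pascal VOC-style XML annotation into a YOLO label file; the class list and paths are placeholders, not the project's actual values.

import os
import xml.etree.ElementTree as ET

CLASSES = ['speed_20', 'speed_30', 'hump']   # example subset of the 18 classes

def voc_to_yolo(xml_path, out_dir):
    root = ET.parse(xml_path).getroot()
    w = float(root.find('size/width').text)
    h = float(root.find('size/height').text)

    lines = []
    for obj in root.findall('object'):
        cls_id = CLASSES.index(obj.find('name').text)
        box = obj.find('bndbox')
        xmin, ymin = float(box.find('xmin').text), float(box.find('ymin').text)
        xmax, ymax = float(box.find('xmax').text), float(box.find('ymax').text)
        # YOLO format: class x_center y_center width height, all normalized
        xc, yc = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f"{cls_id} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}")

    name = os.path.splitext(os.path.basename(xml_path))[0] + '.txt'
    with open(os.path.join(out_dir, name), 'w') as f:
        f.write('\n'.join(lines))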

Model training
import os
os.chdir('/content/drive/MyDrive/yolo_training')

!ls

!git clone https://github.com/ultralytics/yolov5.git

os.chdir('yolov5')
!ls

!pip install -r requirements.txt

!python train.py --data data.yaml --weights yolov5s.pt --batch-size 8 --name Model --img 640 --epochs 50

!python export.py --weights runs/train/Model/weights/best.pt --include onnx --simplify --opset 12

#model summary:

Predict with yolo (class)


import cv2
import numpy as np
import os
import yaml
from yaml.loader import SafeLoader

class YOLO_Pred():
def __init__(self,onnx_model,data_yaml):
# load YAML
with open(data_yaml,mode='r') as f:
data_yaml = yaml.load(f,Loader=SafeLoader)
self.labels = data_yaml['names']

64
self.nc = data_yaml['nc']
# load YOLO model
self.yolo = cv2.dnn.readNetFromONNX(onnx_model)
self.yolo.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
self.yolo.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

    def predictions(self, image):

row, col, d = image.shape


# get the YOLO prediction from the image
# step-1 convert image into square image (array)
max_rc = max(row,col)
input_image = np.zeros((max_rc,max_rc,3),dtype=np.uint8)
input_image[0:row,0:col] = image
# step-2: get prediction from square array
INPUT_WH_YOLO = 640
blob = cv2.dnn.blobFromImage(input_image, 1/255, (INPUT_WH_YOLO, INPUT_WH_YOLO), swapRB=True, crop=False)
self.yolo.setInput(blob)
preds = self.yolo.forward() # detection or prediction from YOLO

# Non Maximum Supression


# step-1: filter detection based on confidence (0.4) and probability score (0.25)
detections = preds[0]
boxes = []
confidences = []
classes = []

# width and height of the image (input_image)


image_w, image_h = input_image.shape[:2]
x_factor = image_w/INPUT_WH_YOLO
y_factor = image_h/INPUT_WH_YOLO

for i in range(len(detections)):
row = detections[i]
confidence = row[4] # confidence of detection an object
if confidence > 0.4:
class_score = row[5:].max() # maximum probability among the class scores
class_id = row[5:].argmax() # get the index position at which max probabilty occur

if class_score > 0.25:


cx, cy, w, h = row[0:4]
# construct bounding from four values
# left, top, width and height
left = int((cx - 0.5*w)*x_factor)
top = int((cy - 0.5*h)*y_factor)
width = int(w*x_factor)
height = int(h*y_factor)

box = np.array([left,top,width,height])

65
# append values into the list
confidences.append(confidence)
boxes.append(box)
classes.append(class_id)

# clean
boxes_np = np.array(boxes).tolist()
confidences_np = np.array(confidences).tolist()

# NMS
index = cv2.dnn.NMSBoxes(boxes_np, confidences_np, 0.25, 0.45)
if len(index) > 0:
index = np.array(index).flatten()
else:
index = []

# Draw the Bounding


for ind in index:
# extract bounding box
x,y,w,h = boxes_np[ind]
bb_conf = int(confidences_np[ind]*100)
classes_id = classes[ind]
class_name = self.labels[classes_id]

text = f'{class_name}: {bb_conf}%'

cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),2)
cv2.rectangle(image,(x,y-30),(x+w,y),(255,255,255),-1)

cv2.putText(image, text, (x, y-10), cv2.FONT_HERSHEY_PLAIN, 0.7, (0,0,0), 1)
return image

Object detection metrics

Let's start by discussing some metrics that are not only important to
YOLOv5 but are broadly applicable across different object detection
models.
• Intersection over Union (IoU): IoU is a measure that quantifies the
overlap between a predicted bounding box and a ground truth
bounding box. It plays a fundamental role in evaluating the accuracy
of object localization.

66
In general, an IoU above 50% is considered a good prediction (a minimal IoU computation is sketched after this list).
• Average Precision (AP):
- Precision: The ratio of true positive detections to the total
number of detections. Precision = TP / (TP + FP)
- Recall: The ratio of true positive detections to the total number
of actual objects. Recall = TP / (TP + FN). High recall indicates that
the model misses fewer positive instances.
AP computes the area under the precision-recall curve, providing a
single value that encapsulates the model's precision and recall
performance.
• Mean Average Precision (mAP): mAP extends the concept of AP by
calculating the average AP values across multiple object classes. This
is useful in multi-class object detection scenarios to provide a
comprehensive evaluation of the model's performance.
• Precision and Recall: Precision quantifies the proportion of true
positives among all positive predictions, assessing the model's
capability to avoid false positives. On the other hand, Recall
calculates the proportion of true positives among all actual
positives, measuring the model's ability to detect all instances of a
class.
• F1 Score: The F1 Score is the harmonic mean of precision and recall,
providing a balanced assessment of a model's performance while
considering both false positives and false negatives.
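
To make these definitions concrete, the short Python sketch below computes IoU for a pair of boxes in (x, y, width, height) format, and precision, recall, and F1 from illustrative true-positive, false-positive, and false-negative counts. The boxes and counts are made-up examples, not measurements from our models.

def iou(box_a, box_b):
    # boxes are (x, y, w, h); convert to corner coordinates
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou((50, 50, 100, 100), (75, 75, 100, 100)))   # about 0.39, below the 0.5 threshold

# illustrative counts only
TP, FP, FN = 80, 10, 20
precision = TP / (TP + FP)                            # 0.89
recall = TP / (TP + FN)                               # 0.80
f1 = 2 * precision * recall / (precision + recall)    # 0.84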

67
Model predictions

A sample of 25 images with their class detection and accuracy.

68
model evaluation

Confusion matrix
A confusion matrix is a tool used to evaluate the performance of
classification models. In the context of YOLOv5, which is an object
detection model, the confusion matrix helps to understand how well
the model is performing in terms of correctly detecting and classifying
objects within an image.

Data set Distribution

69
Traffic sign Detection
Traffic light recognition (TLR) is an integral part of any intelligent vehicle, which must
function within the existing infrastructure. Pedestrian and sign detection have recently
seen great improvements due to the introduction of learning-based detectors using
integral channel features. A similar push has not yet been seen for the detection
sub-problem of TLR, where detection is still dominated by methods based on heuristic models.
Recognition of traffic lights (TLs) is an integral part of Driver Assistance Systems (DAS)
in the transitional period between manually controlled cars and a fully autonomous network
of cars. Currently, the focus of research in computer vision systems for vehicles is divided
in two. Major industrial research groups, such as Daimler and Google, are investing heavily
in autonomous vehicles and attempt to build computer-vision-based systems for the existing
infrastructure. Other research, carried out by academic institutions such as the LISA lab at
UC San Diego and LaRA at ParisTech, targets DAS, which is already available to consumers in
some high-end models. Existing commercial DAS capabilities include warning of impending
collisions, emergency braking, automatic lane changing, keeping the advertised speed limit,
and adaptive cruise control. For all parts of DAS, the urban environment poses many
challenges, especially for systems that rely on computer vision. One of the most important
challenges here is detecting and recognizing TLs at intersections. Ideally, TLs would
communicate both visually and over radio; however, this requires investment in
infrastructure, which is usually not a high priority.
Traffic light detection is the process of identifying and classifying traffic
lights in an environment, typically using computer vision techniques. This
capability is crucial for applications such as autonomous vehicles and
intelligent transportation systems. Traffic light detection involves
recognizing the presence of traffic lights, determining their state (red,
yellow, green), and potentially tracking their changes over time.
Key Components and Processes
Image Acquisition

70
Images or video frames are captured using cameras mounted on vehicles
or infrastructure. These images serve as the input for the traffic light
detection system.
Preprocessing
Preprocessing steps are applied to enhance the quality of the images and
make the detection process more efficient. This may include resizing,
normalization, and noise reduction.
Object Detection
Object detection algorithms, such as YOLO (You Only Look Once), Faster R-
CNN, or SSD (Single Shot MultiBox Detector), are used to detect the
presence of traffic lights in the images. These algorithms output bounding
boxes around detected traffic lights along with confidence scores.
Classification
Once traffic lights are detected, they need to be classified into different
states (red, yellow, green). This can be done using color-based
classification, deep learning-based classification, or a combination of both.
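
As a rough illustration of the color-based approach, the sketch below classifies a cropped traffic-light region by counting the pixels that fall inside HSV color ranges. The ranges here are assumptions and would need tuning for the actual camera and lighting conditions.

import cv2

def classify_light(crop_bgr):
    # convert the cropped traffic-light image to HSV and mask each colour range
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    masks = {
        'red':    cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)),
        'yellow': cv2.inRange(hsv, (20, 100, 100), (35, 255, 255)),
        'green':  cv2.inRange(hsv, (45, 100, 100), (90, 255, 255)),
    }
    # the state with the largest number of matching pixels wins
    return max(masks, key=lambda state: cv2.countNonZero(masks[state]))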
Tracking (Optional)
In dynamic scenarios, tracking algorithms can be used to follow the
detected traffic lights across consecutive frames. This helps in maintaining
consistent detection and handling occlusions or temporary visibility issues.
Decision Making
The detected and classified traffic light states are used to make decisions,
such as whether to stop, proceed, or prepare to stop. In autonomous
vehicles, this information is crucial for navigation and ensuring safety at
intersections.
Applications
Autonomous Vehicles
Traffic light detection is a critical component of the perception
system in autonomous vehicles. It allows the vehicle to understand
and respond to traffic signals, ensuring compliance with traffic rules
and safe navigation through intersections.

71
Driver Assistance Systems
Advanced driver assistance systems (ADAS) use traffic light detection to
provide warnings or recommendations to human drivers. For example, a
system might alert the driver when a red light is detected, helping to
prevent accidents.
Traffic Management
Traffic light detection systems can be used in smart city infrastructure to
monitor and manage traffic flow. By analyzing traffic light states and traffic
patterns, these systems can optimize traffic signal timings and reduce
congestion.
Challenges
Lighting Conditions
Traffic light detection systems must handle varying lighting conditions,
including different times of day, weather conditions, and shadows.
Occlusions
Traffic lights can be partially or fully occluded by other vehicles, buildings,
or foliage, making detection more challenging.
Variability in Traffic Lights
Traffic lights come in different shapes, sizes, and colors, and may be
mounted in various configurations. Detection systems must be robust to
this variability.
Real-time Processing
For applications in autonomous vehicles and driver assistance systems,
traffic light detection must be performed in real-time with low latency to
ensure timely decision-making.
By leveraging advanced computer vision and machine learning techniques,
traffic light detection systems can achieve high accuracy and reliability,
contributing to safer and more efficient transportation systems.

72
Dataset
The dataset comprises 1000 images, featuring 3 different classes: red,
yellow, and green. 800 samples are designated for training, while the
remaining 200 samples are reserved for evaluating the model's
generalization and performance on unseen data.

Eight plotted samples with their corresponding bounding boxes and labels.

Data set Distribution

73
Model prediction

74
Model evaluation
Confusion matrix

F1 score
It is the harmonic mean of precision and recall, and provides a single
measure of a model's accuracy that balances both the precision (the
accuracy of the positive predictions) and the recall (the ability of the
model to find all the
positive instances).
Here's how you calculate
the F1 score:
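
F1 = 2 × (Precision × Recall) / (Precision + Recall)

For example, with Precision = 0.9 and Recall = 0.8, F1 = 2 × 0.72 / 1.7 ≈ 0.85.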

75
labels correlogram
A labels correlogram is a visualization that shows the correlation or co-
occurrence frequency of different object classes within the same images. It
is typically represented as a heatmap, where each cell (i, j) represents the
co-occurrence frequency or correlation between class i and class j.

76
Precision-confidence curve

Precision recall curve

77
Recall confidence curve

Results

78
2.Lane Detection
History of Lane Detection:
Introduction: Lane detection is a crucial aspect of autonomous driving and
advanced driver assistance systems (ADAS). The technology has evolved
significantly over the years, moving from simple, rule-based methods to
sophisticated, machine learning-driven approaches. This brief history
highlights the key milestones and advancements in lane detection
technology.
Early Developments:
• 1970s - 1980s:
- The concept of lane detection began in the 1970s with research focusing
on basic image processing techniques. Early systems relied on edge
detection and simple
heuristics to identify lane markings.
- In the 1980s, more advanced algorithms using Hough Transform were
introduced.
These methods could detect straight lines in an image, which was useful
for identifying lane boundaries on straight roads.
Advancements in the 1990s:
• Improved Algorithms:
- The 1990s saw the development of more robust algorithms capable of
handling curved lanes. Techniques such as the Hough Transform were
refined to detect parabolic shapes, allowing for the detection of curved
lane markings.
- Research in this period also began to explore the use of Kalman filters for
lane tracking, improving the system's ability to predict and follow lane
positions over time.

2000s:
• Introduction of Machine Learning:
- With the advent of machine learning, lane detection systems began to
incorporate classifiers such as Support Vector Machines (SVMs) to
distinguish between lane markings and other road features.
- The use of image processing techniques combined with machine learning
algorithms allowed for more accurate and reliable lane detection under
various conditions.
• Integration with ADAS:
- Lane detection technology was increasingly integrated into commercial
ADAS, providing features such as lane departure warnings and lane-
keeping assistance.

79
- These systems combined sensor data from cameras and other sources,
such as lidar and radar, to enhance detection accuracy and reliability.

2010s:
• Deep Learning Revolution:
- The 2010s marked a significant leap forward with the introduction of
deep learning techniques. Convolutional Neural Networks (CNNs) became
the standard for image-based tasks, including lane detection.
- Networks such as AlexNet, VGGNet, and later more specialized
architectures like LaneNet and SCNN (Spatial CNN) were developed to
identify lane markings with high precision.
- Deep learning models trained on large datasets could handle diverse and
challenging road conditions, including variations in lighting, weather, and
road quality.
• Real-time Performance:
-Advances in hardware, particularly GPUs, allowed for the deployment of
realtime lane detection systems in commercial vehicles.
-Companies like Tesla, Google (Waymo), and other automotive
manufacturers began incorporating advanced lane detection systems into
their autonomous driving technologies.

2020s and Beyond:

• Current Trends and Future Directions:


- Current research focuses on improving the robustness and
generalizability of lane detection systems. Efforts are being
made to enhance performance in complex urban environments
with occlusions, varying lane markings, and construction zones.

- Multi-sensor fusion techniques are becoming more prevalent,


combining data from cameras, lidar, radar, and GPS to create a
comprehensive understanding of the driving environment.
- The use of synthetic data and simulation environments for
training and testing lane detection models is increasing,
allowing for safer and more efficient development cycles.

80
Conclusion: Lane detection technology has come a long way from its
early beginnings with simple edge detection techniques to the
sophisticated deep learning-based systems of today. As the field
continues to evolve, the focus remains on improving accuracy, reliability,
and real-time performance to support the advancement of autonomous
driving and enhance road safety.

This brief history outlines the key milestones in lane detection, providing
an overview of how the technology has progressed over the decades. It
highlights the critical role of research and development in advancing lane
detection capabilities, contributing to the broader field of autonomous
driving.

Steps of making lane detection:


1. Capture frames using OpenCV's video capture
In OpenCV, the function used to capture video is cv2.VideoCapture(). This
function can be used to capture video from a file, a camera, or a video
stream.

Usage of cv2.VideoCapture()

1. Capturing Video from a File:


o To capture video from a file, pass the file path as an argument
to cv2.VideoCapture().

cap = cv2.VideoCapture('your_video_file.mp4')
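
A complete capture loop, using the placeholder file name above, typically looks like this:

import cv2

cap = cv2.VideoCapture('your_video_file.mp4')   # or 0 for the default camera
while cap.isOpened():
    ret, frame = cap.read()        # ret is False when no more frames can be read
    if not ret:
        break
    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()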

81
2. Convert to grayscale

The function used to convert an image from color to grayscale in OpenCV


is cv2.cvtColor(). This function can be used with various color conversion
codes, but for converting a color image to grayscale, the code
cv2.COLOR_BGR2GRAY is used.

Example of Converting an Image from Color to Grayscale

Here's a simple example to demonstrate how to use cv2.cvtColor() to


convert a color image to grayscale:

import cv2

# Load the color image


image = cv2.imread('your_image_file.jpg')

# Convert the image to grayscale


gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Display the original and grayscale images


cv2.imshow('Original Image', image)
cv2.imshow('Grayscale Image', gray_image)

# Wait for a key press and close the windows


cv2.waitKey(0)
cv2.destroyAllWindows()

82
3. Reduce the noise (Gaussian blur)

The function used to apply a Gaussian blur in OpenCV is


cv2.GaussianBlur(). This function smooths an image by convolving it with a
Gaussian kernel, which helps in reducing noise and detail.

Syntax of cv2.GaussianBlur()

cv2.GaussianBlur(src, ksize, sigmaX[, dst[, sigmaY[, borderType]]])

• src: Input image (it should be 8-bit or 32-bit floating point).


• ksize: Gaussian kernel size. It should be a tuple of two odd integers,
representing the width and height of the kernel.
• sigmaX: Gaussian kernel standard deviation in the X direction.
• dst (optional): Output image of the same size and type as src.
• sigmaY (optional): Gaussian kernel standard deviation in the Y
direction. If it is zero, it is set to the same value as sigmaX.
• borderType (optional): Pixel extrapolation method.
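
A typical call, assuming the grayscale image produced in the previous step, might be:

# 5x5 kernel; sigmaX = 0 lets OpenCV derive the standard deviation from the kernel size
blurred = cv2.GaussianBlur(gray_image, (5, 5), 0)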

4. Edge detection using the Canny detector

Why Canny? Because the algorithm includes a Gaussian-blur (noise reduction) stage as its first step.

Edge detection is a fundamental technique in image processing and


computer vision used to identify boundaries within images. Specifically,
Canny edge detection is a popular and effective method for detecting a
wide range of edges in an image. Here's a detailed overview of edge
detection and the Canny edge detection technique:

Edge Detection

Edge detection algorithms aim to identify points in an image where the


brightness changes sharply. These points often correspond to boundaries
between different objects or regions in the image. Edges typically
represent significant changes in intensity, color, or texture and are crucial
for various image processing tasks, including object recognition,
segmentation, and feature extraction.

83
Canny Edge Detection

The Canny edge detection algorithm was developed by John F. Canny in


1986 and is widely used due to its effectiveness and robustness. It involves
several stages to detect edges accurately while minimizing false positives:

1. Noise Reduction (Gaussian Blur):


o Before detecting edges, the image is smoothed using
Gaussian blur to reduce noise and unwanted details. This
helps to ensure that the detected edges are not due to noise.
o Function: cv2.GaussianBlur(image, (kernel_size, kernel_size),
sigmaX)
2. Gradient Calculation:
o The intensity gradients (both magnitude and direction) of the
image are calculated using Sobel kernels in both the
horizontal and vertical directions. These gradients help to
identify regions of the image where intensity changes rapidly.
o Functions: cv2.Sobel(image, ddepth, dx, dy, ksize)
3. Non-maximum Suppression:
o This step aims to thin out the edges to one-pixel wide lines. It
involves iterating through the gradient magnitude matrix to
suppress (remove) pixels that are not considered to be part of
an edge (non-maximum suppression).
o Concept: Suppress all the gradient values except the local
maxima, which indicates the sharpest change in intensity.
4. Double Thresholding:
o The gradient magnitudes are further analyzed by applying two
thresholds (high and low):

84
▪ Strong edges: Pixels with gradient values above the
high threshold are marked as strong edges.
▪ Weak edges: Pixels with gradient values between the
low and high thresholds are marked as weak edges.
o Function: cv2.Canny(image, threshold1, threshold2)

5. Edge Tracking by Hysteresis:


o Finally, edges are tracked by suppressing weak edges that are
not connected to strong edges. This ensures that only
meaningful edges are detected and reduces noise further.
o Concept: Edge pixels above the high threshold are considered
strong edges. Then, weak edge pixels connected to strong
edges are considered part of the edge.
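
Putting the stages together, a minimal per-frame pipeline could look like the following; the thresholds 50 and 150 are illustrative and should be tuned for the actual footage.

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)   # low and high hysteresis thresholds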

85
5. Finding the lane-line region of interest
To define a region of interest (ROI) in the context of lane detection, you
typically want to focus on the area of the image where the lanes are
expected to be. This helps reduce processing time and avoid detecting
unwanted edges outside of the lanes. Here’s how you can modify the lane
detection code to include a region of interest:
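
The code referred to here appeared as a figure in the original layout. A minimal sketch, assuming the edge image from the Canny step and a triangular region covering the road area in front of the car (the vertices are assumptions), is:

import cv2
import numpy as np

def region_of_interest(edges):
    height, width = edges.shape[:2]
    mask = np.zeros_like(edges)
    # triangle roughly covering the road ahead; adjust the vertices for the camera mounting
    polygon = np.array([[(0, height), (width, height), (width // 2, int(height * 0.55))]], np.int32)
    cv2.fillPoly(mask, polygon, 255)
    return cv2.bitwise_and(edges, mask)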

86
6. Finding lane lines using the Hough transform

The Hough Transform is a powerful technique used in image processing


and computer vision to detect simple geometric shapes, particularly lines
and circles, in images that may be obscured by noise or other features.
Here's an explanation suitable for your graduation book:

Hough Transform in Image Processing

The Hough Transform, introduced by Paul Hough in 1962 and later refined
by Richard Duda and Peter Hart in 1972 for line detection, is a method to
detect shapes within an image, particularly straight lines and circles. It
operates by converting the problem of detecting these shapes from the
image space (x, y coordinates) to a parameter space (commonly known as
Hough space).

Key Concepts:

1. Parameter Space Representation:


o In the Hough Transform, each point in the image space is
transformed into a curve or point in the parameter space,
representing potential shapes. For lines, each point (x, y) in
the image space can be represented as a line in the parameter
space (ρ, θ), where:
▪ ρ (rho): Distance from the origin to the closest point on
the line.
▪ θ (theta): Angle between the x-axis and the normal to
the line.
2. Accumulation:
o The process involves voting or accumulation, where each
edge point in the image space votes for potential lines it could
be part of in the parameter space. The intersection of these
curves or peaks in the parameter space indicates the presence
of a line in the image.
3. Line Detection:
o To detect lines using the Hough Transform:
▪ Convert the image to grayscale and apply edge
detection (e.g., Canny edge detection).

87
▪ Create an accumulator (matrix) in the parameter space.
▪ For each edge pixel, compute possible lines (ρ, θ) and
increment corresponding cells in the accumulator.
▪ Identify peaks in the accumulator matrix, which
indicate the parameters of detected lines.
4. Circle Detection:
o For circle detection, a modified Hough Transform is used
where each point (x, y) in the image space votes for possible
circles (x_c, y_c, r) in the parameter space, where (x_c, y_c) is
the center of the circle and r is the radius.

Applications:

• Lane Detection: Used in autonomous vehicles to detect lane


markings on roads.
• Medical Imaging: Detecting lines in X-ray images or circles in MRI
scans.
• Industrial Applications: Inspecting parts for defects, detecting
straight lines in manufactured items.

Advantages:

• Robustness: Can detect shapes even when they are partially


obscured or noisy.
• Versatility: Applicable to a wide range of geometric shapes beyond
lines and circles with appropriate modifications.

Limitations:

• Computational Intensity: Requires significant computational


resources, especially for large images or complex shapes.
• Parameter Sensitivity: Performance can be affected by parameter
selection, such as threshold values.

Working Principle

1. Line Detection (Standard Hough Transform for Lines):

88
o Parameterization: Each point (x, y) in the image space can be
represented as a line in the Hough parameter space (ρ, θ),
where:
▪ ρ (rho): Distance from the origin (0, 0) to the closest
point on the line.
▪ θ (theta): Angle between the x-axis and the normal to
the line from the origin.
o Steps:
▪ Convert the image to grayscale and detect edges (e.g.,
using the Canny edge detector).
▪ Initialize an accumulator (matrix) in the (ρ, θ)
parameter space to zero.
▪ For each edge point (x, y) in the image:
▪ Compute possible lines (ρ, θ) that pass through
(x, y).
▪ Increment corresponding cells in the
accumulator.
▪ Peaks in the accumulator indicate lines in the image.
o Accumulator Space: Each cell (ρ, θ) in the accumulator
accumulates votes from edge points, where a higher vote
count indicates a higher likelihood of a line passing through
that point.
o Thresholding: After accumulation, thresholding is often
applied to select peaks in the accumulator matrix, which
correspond to lines detected in the image
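
In OpenCV, the probabilistic variant cv2.HoughLinesP is commonly used for lane detection because it returns line segments directly. The parameter values below are illustrative starting points, not tuned settings from our system; masked_edges is assumed to be the ROI-masked edge image and numpy is assumed to be imported as np.

lines = cv2.HoughLinesP(
    masked_edges,          # edge image restricted to the region of interest
    rho=2,                 # distance resolution of the accumulator, in pixels
    theta=np.pi / 180,     # angular resolution of the accumulator, in radians
    threshold=50,          # minimum number of votes to accept a line
    minLineLength=40,      # reject segments shorter than this
    maxLineGap=100,        # maximum gap between segments to link them
)
# each entry of lines is [[x1, y1, x2, y2]]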

89
A drawback of the Cartesian (x, y) slope-intercept representation is that vertical lines have an infinite slope; the polar (ρ, θ) parameterization avoids this problem.

90
7. Calculate the average of the detected lines to form the right and left lane lines,
and issue the command to go right or left.
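
One common way to do this, sketched below under the assumption that lines comes from cv2.HoughLinesP, is to separate the segments by the sign of their slope and average each group into a single left and a single right lane line.

import numpy as np

def average_lane_lines(lines):
    # split Hough segments into left/right groups by slope sign and average each group
    left, right = [], []
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if x2 == x1:
            continue                          # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        (left if slope < 0 else right).append((slope, intercept))
    average = lambda group: np.mean(group, axis=0) if group else None
    return average(left), average(right)      # (slope, intercept) per side, or None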

91
8. Get the command to go right or left

Two midpoints are computed: the first is the midpoint between the two drawn lane lines,
and the second is the midpoint of the frame. Comparing them yields the command telling
the car to steer right or left.
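
A minimal decision rule, assuming the x-positions of the left and right lane lines at the bottom of the frame have already been computed, might look like this:

def steering_command(left_x, right_x, frame_width, tolerance=20):
    # compare the lane centre with the frame centre and return a steering command
    lane_mid = (left_x + right_x) / 2
    frame_mid = frame_width / 2
    offset = lane_mid - frame_mid
    if offset > tolerance:
        return 'right'       # lane centre lies to the right of the car
    if offset < -tolerance:
        return 'left'
    return 'straight'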

92
Ch4. Mobile Application &
Design

93
The main aim of this application is to control the car, manage its features, and display the state of the car.

This application is built with the Flutter framework, which uses the Dart language and is supported by Google.

Features of this application:
1) This application is built with MVVM Architecture pattern
MVVM (Model-View-ViewModel) is a software architectural pattern used
primarily in developing user interfaces.

The main components of MVVM are:

1. Model:

• Represents the data and the business logic of the application.


2. View:

• Represents the UI of the application.


• It displays the data and sends user interactions (events) to the
ViewModel.
3. ViewModel:

• Acts as an intermediary between the View and the Model.


• It holds the presentation logic and state.

2) This app is connected to a Firebase server, which we will discuss later.

We use a state-management solution called Cubit, which is responsible for changing the state of the UI.

This app consists of:
1. splash screen.
2. login and register screens.

3. home screen.

94
1) How is the splash screen built?

It is very simple:
1. Choose an image.
2. Add its relative path to the pubspec.yaml file.
3. Place it in the screen with an appealing layout.
4. Choose a background color.
5. Hold this screen for five seconds, then navigate to the next screen.

95
code:

import 'package:adas_app/constants.dart';
import 'package:adas_app/core/functions/add_remove_user_to_shared_preference.dart';
import 'package:adas_app/core/utils/app_images.dart';
import 'package:adas_app/features/auth/presentation/views/login_view.dart';
import 'package:adas_app/features/home/presentation/views/home_view.dart';
import 'package:flutter/material.dart';

class SplashViewBody extends StatefulWidget {


const SplashViewBody({super.key});

@override
State<SplashViewBody> createState() => _SplashViewBodyState();
}

class _SplashViewBodyState extends State<SplashViewBody> {

@override
void initState() {
Future.delayed(const Duration(seconds: 5), () async {
if (await checkCurrUser()) {
// ignore: use_build_context_synchronously
Navigator.pushReplacement(
context, MaterialPageRoute(builder: (context) => const HomeView()));
} else {
// ignore: use_build_context_synchronously
Navigator.pushReplacement(context,
MaterialPageRoute(builder: (context) => const LoginView()));
}
});
super.initState();
}

@override
Widget build(BuildContext context) {
return Center(
child: Container(
decoration: const BoxDecoration(
color: kBackgroundColor,
image: DecorationImage(image: AssetImage(Assets.imagesAdas2)),
),
),
);
}
}

96
2) How are the Login and Register screens built?

97
Flow of these two screens:
1) If the user does not have an account, they press the register button to navigate to the
register screen and create an account.

2) After creating a new account, the user is asked to verify it.

3) After verifying the account, the user can navigate to the home screen, where they can
control the car.

4) The login screen shows an error if the email or password is wrong, and asks the user
to enter it again.

5) If the user creates a new account with a weak password, a warning is shown.

6) If the user signs in with the correct email, the next time they open the app it takes
them directly to the home screen, because they are already logged in (unless they sign out).

Note: the login and register screens are always displayed in portrait orientation.

100
code of Login screen:
import 'package:adas_app/core/utils/app_style.dart';
import 'package:adas_app/features/auth/presentation/manager/login_cubit/login_cubit.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/dont_have_email_button.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/login_email_form.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/login_password_form.dart';
import 'package:adas_app/features/home/presentation/views/home_view.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:gap/gap.dart';
import 'package:modal_progress_hud_nsn/modal_progress_hud_nsn.dart';
import 'auth_logo.dart';
import 'login_button.dart';
import 'login_with_google.dart';
import 'package:awesome_dialog/awesome_dialog.dart';

class LoginViewBody extends StatefulWidget {


const LoginViewBody({super.key});

@override
State<LoginViewBody> createState() => _LoginViewBodyState();
}

class _LoginViewBodyState extends State<LoginViewBody> {


GlobalKey<FormState> formKey = GlobalKey();
@override
Widget build(BuildContext context) {
bool isLoading = false ;
return BlocConsumer<LoginCubit, LoginState>(
listener: (context, state) {
if (state is LoginSuccess) {
Navigator.pushAndRemoveUntil(
context,
MaterialPageRoute(builder: (context) => const HomeView()),
(route) => false);
} else if (state is LoginFailure) {
isLoading = false;
showErrorDialog(context, state.errMessage);
}
else if ( state is LoginLoading) {
isLoading = true;
}else if (state is LoginInitial) {
isLoading = false;
}
},
builder: (context, state) {
return ModalProgressHUD(
inAsyncCall: isLoading,

child: Padding(

padding: const EdgeInsets.symmetric(horizontal: 30),
child: Form(
key: formKey,
child: CustomScrollView(
slivers: [
SliverFillRemaining(
hasScrollBody: false,
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
const Gap(30),
const Align(
alignment: Alignment.topRight,
child: DontHaveEmailButton(),
),
const Center(child: AuthLogo()),
const Gap(35),
const Text('Login', style: AppStyle.bold50),
const Gap(70),
const LoginEmailForm(),
const Gap(40),
const LoginPasswordForm(),
const Gap(40),
const Spacer(),
LoginButton(formKey: formKey),
const Gap(40),
const LoginWithGoogle(),
const Gap(70),
],
),
),
],
),
),
),
);
},
);
}

void showErrorDialog(BuildContext context, String errorMessage) {


AwesomeDialog(
context: context,
dialogType: DialogType.error,
dialogBackgroundColor: Colors.blue[50],
animType: AnimType.rightSlide,
title: 'Error',
titleTextStyle: AppStyle.semiBold40.copyWith(color: Colors.black),

desc: errorMessage,
padding: const EdgeInsets.all(20.0),
dialogBorderRadius: BorderRadius.circular(50),
descTextStyle: AppStyle.medium28,
btnOkOnPress: (){}
).show();
}

Code of Login State:


part of 'login_cubit.dart';

abstract class LoginState {}

class LoginInitial extends LoginState {}


class LoginLoading extends LoginState {}
class LoginSuccess extends LoginState {}
class EmailNotVerified extends LoginState {}
class LoginFailure extends LoginState {
final String errMessage;
LoginFailure(this.errMessage);
}

Code of Login_cubit:
import 'package:adas_app/core/functions/add_remove_user_to_shared_preference.dart';

import 'package:adas_app/core/utils/app_style.dart';
import 'package:awesome_dialog/awesome_dialog.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:google_sign_in/google_sign_in.dart';
part 'login_state.dart';

class LoginCubit extends Cubit<LoginState> {


LoginCubit() : super(LoginInitial());

String _emailAddress = '';
String _password = '';
set emailAddress(String emailAddress) => _emailAddress = emailAddress;
set password(String password) => _password = password;
Future<void> login(context) async {
emit(LoginLoading());
try {
final credential = await FirebaseAuth.instance.signInWithEmailAndPassword(
email: _emailAddress, password: _password);
if (credential.user!.emailVerified) {
emit(LoginSuccess());
await addUserToSharedPreferences();
}else {
__showVerifiedDialog(context: context, credential: credential);
}
} on FirebaseAuthException catch (e) {
if (e.code == 'invalid-credential') {
emit(LoginFailure('Email Not Found'));
} else if (e.code == 'invalid-email') {
emit(LoginFailure(
'invalid email , email should be: email_name@*****.com'));
} else if (e.code == 'network-request-failed') {
emit(LoginFailure('please check your internet connection'));
} else {
emit(LoginFailure('unknown input'));
}
} catch (e) {
emit(LoginFailure(e.toString()));
}
}

code of Register Screen:

import 'package:adas_app/features/auth/presentation/manager/register_cubit/register_cubit.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/auth_logo.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_button.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_confirm_password_form.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_email_form.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_password_form.dart';
import 'package:adas_app/features/home/presentation/views/home_view.dart';
import 'package:awesome_dialog/awesome_dialog.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:gap/gap.dart';
import 'package:modal_progress_hud_nsn/modal_progress_hud_nsn.dart';

import '../../../../../core/utils/app_style.dart';

class RegisterViewBody extends StatefulWidget {


const RegisterViewBody({super.key});

@override
State<RegisterViewBody> createState() => _RegisterViewBodyState();
}

class _RegisterViewBodyState extends State<RegisterViewBody> {


GlobalKey<FormState> formKey = GlobalKey();
@override
Widget build(BuildContext context) {
bool isLoading = false ;
return BlocConsumer<RegisterCubit, RegisterState>(
listener: (context, state) {
if (state is RegisterSuccess) {
isLoading = false ;
Navigator.pushAndRemoveUntil(
context,
MaterialPageRoute(builder: (context) => const HomeView()),
(route) => false);
} else if (state is RegisterFailure) {
isLoading =false ;
showErrorDialog(context, state.errMessage);
} else if (state is RegisterLoading){
isLoading = true ;
}
},
builder: (context, state) {
return ModalProgressHUD(
inAsyncCall: isLoading,
child: Padding(
padding: const EdgeInsets.symmetric(horizontal: 30),
child: Form(
key: formKey,

child: CustomScrollView(
slivers: [
SliverFillRemaining(
hasScrollBody: false,
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
const Gap(30),
const Center(child: AuthLogo()),
const Gap(35),
const Text('Register', style: AppStyle.bold50),
const Gap(45),
const RegisterEmailForm(),
const Gap(30),
const RegisterPasswordForm(),
const Gap(30),
const RegisterConfirmPasswordForm(),
const Gap(30),
const Spacer(),
RegisterButton(formKey: formKey),
const Gap(70),
],
),
),
],
),
),
),
);
},
);
}

void showErrorDialog(BuildContext context, String errorMessage) {


AwesomeDialog(
context: context,
dialogType: DialogType.error,
dialogBackgroundColor: Colors.blue[50],
animType: AnimType.rightSlide,
title: 'Error',
titleTextStyle: AppStyle.semiBold40.copyWith(color: Colors.black),
desc: errorMessage,
padding: const EdgeInsets.all(20.0),
dialogBorderRadius: BorderRadius.circular(50),
descTextStyle: AppStyle.medium28,
btnOkOnPress: () {},
).show();
}

}


Code of Register State:


part of 'register_cubit.dart';

abstract class RegisterState {}

class RegisterInitial extends RegisterState {}

class RegisterLoading extends RegisterState {}

class RegisterSuccess extends RegisterState {}

class RegisterFailure extends RegisterState {


final String errMessage;

RegisterFailure(this.errMessage);
}

Code of Register Cubit:

import 'package:adas_app/core/functions/add_remove_user_to_shared_preference.dart';
import 'package:adas_app/core/utils/app_style.dart';
import 'package:awesome_dialog/awesome_dialog.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

part 'register_state.dart';

class RegisterCubit extends Cubit<RegisterState> {


RegisterCubit() : super(RegisterInitial());
String _emailAddress = '', _password = '', _confirmPassword = '';

set emailAddress(String emailAddress) => _emailAddress = emailAddress;


set password(String password) => _password = password;
set confirmPassword(String confirmPassword) =>
_confirmPassword = confirmPassword;
Future<void> register(context) async {
emit(RegisterLoading());
if (_password != _confirmPassword) {
emit(RegisterFailure('Password does not match.'));

return;
}
try {
final credential = await FirebaseAuth.instance.createUserWithEmailAndPassword(
email: _emailAddress, password: _password);
await FirebaseAuth.instance.currentUser!.sendEmailVerification();
if (credential.user!.emailVerified) {
emit(RegisterSuccess());
await addUserToSharedPreferences();
}else {
__showVerifiedDialog(context: context, credential: credential);
}

} on FirebaseAuthException catch (e) {


if (e.code == 'invalid-email') {
emit(RegisterFailure(
'Invalid email , email should be : email_name@*****.com'));
debugPrint(e.code);
} else if (e.code == 'weak-password') {
emit(RegisterFailure('The password provided is too weak'));
debugPrint(e.code);
} else if (e.code == 'email-already-in-use') {
emit(RegisterFailure('This email is already exist'));
debugPrint(e.code);
} else if (e.code == 'network-request-failed') {
emit(RegisterFailure('please check your internet connection'));
debugPrint(e.code);
} else {
emit(RegisterFailure('unknown input'));
debugPrint(e.code);
}
} catch (e) {
emit(RegisterFailure(e.toString()));

}
}

void __showVerifiedDialog({required BuildContext context , required UserCredential


credential} ) {
AwesomeDialog(
context: context,
dialogType: DialogType.info,
dialogBackgroundColor: Colors.blue[50],
animType: AnimType.rightSlide,
title: 'Verification Email Sent',
titleTextStyle: AppStyle.semiBold40.copyWith(color: Colors.black),
desc: "please check your email for verification ",
padding: const EdgeInsets.all(20.0),
dialogBorderRadius: BorderRadius.circular(50),
descTextStyle: AppStyle.medium28,
btnOkOnPress: () {
if ( credential.user!.emailVerified) {
emit(RegisterSuccess()) ;
addUserToSharedPreferences();
}else {
__showVerifiedDialog(context: context, credential: credential);
}
},
buttonsTextStyle: AppStyle.medium28,
btnOkColor: Colors.green,
btnCancelOnPress: (){}
).show();
}

109
3) Home Screen:

• This screen was designed to always be in landscape orientation.

We'll divide this screen into three parts: left, middle, and right.

1) Left part of home screen:

• There are two arrows, left and right, which are used to control the direction of the
car's movement.
• The four icons above control four features of the car: FCW (forward collision warning),
traffic sign recognition, lane detection, and blind spot.
• When FCW is on, the car detects whether anything is in front of or near it and brakes
automatically.
• When Traffic Sign Recognition is on, the car can recognize road signs such as the bump
(speed-hump) sign and the maximum-speed sign.
• When the Blind Spot feature is enabled, the car can detect nearby vehicles and display
a warning in the application.

• Battery shows the current charge of the car.


The menu icon contains the user's email and the sign-out feature:

111
2) Middle part of home screen:

• There is a car image.

• A five-second warning with sound is displayed on the screen if another car is in front
of or near our vehicle.

• If there is a bump sign on the road, a bump warning is displayed on the screen.

• If another car is behind and near our car, a warning is displayed on the screen behind
the car image for five seconds, with a warning sound.

• If the blind spot module sends a warning to the app, a warning is shown on the left or
right side of the car for five seconds, with a warning sound.

• If the traffic light feature is on, the car detects the traffic light state: red,
yellow, or green.

3) Right part of home screen:


• There are a brake icon and a speed icon.

• There are P, D, and R icons, which control the car's motion.

• There is a speedometer, which displays the current speed of the car.

• Max speed displays the maximum speed of the road, if and only if Traffic Sign
Recognition is on.

114
code of home screen UI:
import 'package:adas_app/features/home/presentation/views/widgets/backgrond_decoration.dart';
import 'package:adas_app/features/home/presentation/views/widgets/backward_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/break_feature.dart';
import 'package:adas_app/features/home/presentation/views/widgets/car_battery.dart';
import 'package:adas_app/features/home/presentation/views/widgets/car_controll_icons.dart';
import 'package:adas_app/features/home/presentation/views/widgets/car_image.dart';
import 'package:adas_app/features/home/presentation/views/widgets/custom_button_for_rpn.dart';
import 'package:adas_app/features/home/presentation/views/widgets/custom_drawer.dart';
import 'package:adas_app/features/home/presentation/views/widgets/custom_menu_icon.dart';
import 'package:adas_app/features/home/presentation/views/widgets/forward_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/left_button.dart';
import 'package:adas_app/features/home/presentation/views/widgets/left_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/max_speed.dart';
import 'package:adas_app/features/home/presentation/views/widgets/right_button.dart';
import 'package:adas_app/features/home/presentation/views/widgets/right_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/speed_feature.dart';
import 'package:adas_app/features/home/presentation/views/widgets/speedometer.dart';
import 'package:adas_app/features/home/presentation/views/widgets/forward_bump_warning.dart';
import 'package:flutter/material.dart';
import 'package:gap/gap.dart';

class HomeBody extends StatefulWidget {


const HomeBody({super.key});

@override
State<HomeBody> createState() => _HomeBodyState();
}

class _HomeBodyState extends State<HomeBody> {


GlobalKey<ScaffoldState> scaffoldKey = GlobalKey();
@override
Widget build(BuildContext context) {
var height = MediaQuery.of(context).size.height;
debugPrint('height: $height');
var width = MediaQuery.of(context).size.width;
debugPrint('width: $width');
return Scaffold(
key: scaffoldKey,
drawer: const CustomDrawer(),
body: BackGroundDecoration(
child: Padding(
padding:
const EdgeInsets.only(right: 30.0, left: 30, bottom: 50, top: 20),

child: Row(
children: [
Expanded(
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
CustomMenuIcon(scaffoldKey: scaffoldKey),
const Gap(20),
const CarBattery(),
const Spacer(),
const CarControllIcons(),
const Spacer(),
const Row(
children: [LeftButton(), Gap(60), RightButton()],
)
],
),
),
const Expanded(
child: Column(
mainAxisSize: MainAxisSize.min,
mainAxisAlignment: MainAxisAlignment.center,
children: [
Row(
mainAxisSize: MainAxisSize.min,
mainAxisAlignment: MainAxisAlignment.center,
children: [
Expanded(child: LeftWarning()),
Column(
mainAxisSize: MainAxisSize.min,
children: [
Stack(
children: [
ForwardBumpWarning(),
ForwardWarning(),
],
),
CarImage(),
BackwardWarning(),
],
),
Expanded(child: RightWarning()),
],
)
],
),
),
const Expanded(

child: Column(
crossAxisAlignment: CrossAxisAlignment.end,
children: [
Gap(30),
Row(
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
CustomSpeedoMeter(),
Gap(15),
MaxSpeed(),
],
),
Spacer(),
CustomButtonRPN(),
Gap(70),
Row(
mainAxisSize: MainAxisSize.min,
children: [
BreakFeature(),
Gap(20),
SpeedFeature(),
],
),
],
),
)
],
),
),
),
);
}
}

117
Firebase and FireStore:

Firebase and Firestore are components of Google’s cloud-based backend-


as-a-service (BaaS) platform, designed to help developers build and scale
applications quickly and efficiently. Here’s a brief overview of each:

Firebase

Firebase is a comprehensive platform that provides a suite of tools and


services for app development, including:

1. Authentication: Simplifies user authentication using email,


password, or third-party providers like Google, Facebook, and
Twitter.
2. Realtime Database: A NoSQL cloud database that allows data to be
synced in real-time across all clients.
3. Cloud Firestore: A flexible, scalable database for mobile, web, and
server development.
4. Cloud Storage: A robust, scalable solution for storing user-generated
content like photos and videos.
5. Hosting: Fast, secure, and reliable hosting for web apps, static and
dynamic content, and microservices.
6. Cloud Functions: Serverless functions that let you run backend code
in response to events triggered by Firebase features and HTTPS
requests.
7. Analytics: Free, unlimited reporting for up to 500 distinct events,
enabling detailed insights into user behavior.
8. Messaging: Tools like Firebase Cloud Messaging (FCM) for sending
notifications and messages to users across platforms.

Firestore

Firestore, or Firebase Cloud Firestore, is Firebase’s modern database


offering, designed to address the limitations of the Realtime Database. It is
a NoSQL document database that allows you to store, sync, and query data
for your mobile and web apps. Key features include:

118
1. Document-Oriented: Data is stored in documents (key-value pairs)
which are grouped into collections. This makes the structure more
intuitive and easier to manage than traditional SQL databases.
2. Real-time and Offline Support: Like the Realtime Database,
Firestore provides real-time updates and offline support, ensuring
that your app remains responsive even without a network
connection.
3. Scalability: Designed to scale automatically with the growth of your
application, handling large datasets and high traffic loads without
compromising performance.
4. Powerful Querying: Supports complex queries, including range
queries and array-contains operations, with indexing automatically
managed by Firestore.
5. Security Rules: Fine-grained security rules can be defined to control
access to the data, making it highly secure.
6. Seamless Integration: Integrates well with other Firebase services,
such as Authentication, Analytics, and Cloud Functions, providing a
cohesive development experience.

119
Logic code of the app features:

1) Traffic Sign Recognition:

class TrafficSignRecCubit extends Cubit<TrafficSignRecState> {


TrafficSignRecCubit() : super(TrafficSignRecInitial()) {
_trafficSignRecRealTime();
}
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);
bool isSelected = false;
Future<void> select(context) async {
try {
isSelected = !isSelected;
await adas
.doc(kTrafficSignRecDoc)
.set({kIsSelectedKey: isSelected}, SetOptions(merge: true));
if (isSelected == false) {
emit(TrafficSignRecOff());
}
} catch (e) {
showSnackBar(context, 'trafficSignRecCubit error : $e');
debugPrint('trafficSignRecCubit error : $e');
}
}

_trafficSignRecRealTime() {
adas.doc(kTrafficSignRecDoc).snapshots().listen((event) {
Map<String, dynamic> data = event.data() as Map<String, dynamic>;
if (data[kIsSelectedKey] == true) {
emit(TrafficSignRecOn(
maxSpeed: data[kMaxSpeedKey],
isBump: data[kIsBumpKey],
));
}
}).onError((error) {
debugPrint("trafficSignRecCubit error realtime : $error");
emit(TrafficSignRecFailure());
});
}

void setTrafficSignRecToDefault() {
adas.doc(kTrafficSignRecDoc).update({
kIsSelectedKey: false,
kIsBumpKey: false,
kMaxSpeedKey: 0,
});
}
}

120
2) lane detection:

class LaneDedectionCubit extends Cubit<LaneDedectionState> {


LaneDedectionCubit() : super(LaneDedectionOff());
bool isSelected = false;
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);
var defaultdata = FirebaseFirestore.instance
.collection(kAdasCollection)
.doc(kLaneDedectionDoc)
.set({kIsSelectedKey: false});
Future<void> select(context) async {
isSelected = !isSelected;
try {
await adas.doc(kLaneDedectionDoc).set({kIsSelectedKey: isSelected});
if (isSelected) {
emit(LaneDedectionOn());
} else {
emit(LaneDedectionOff());
}
} catch (e) {
showSnackBar(context, 'laneDedectionCubit error : $e');
debugPrint('laneDedectionCubit error : $e');
}
}

void setLaneDetectionToDefault() {
adas.doc(kLaneDedectionDoc).update({kIsSelectedKey: false});
}
}

3) FCW:

class FCWCubit extends Cubit<FCWState> {


FCWCubit() : super(FCWOff());
final defaultState = FirebaseFirestore.instance
.collection(kAdasCollection)
.doc(kFCWDoc)
.set({kIsSelectedKey: false});
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);
bool isSelected = false;
Future<void> select(context, {bool? isCancelled}) async {
try {

if (isCancelled == true) {
if (isSelected) {
isSelected = false;
await adas.doc(kFCWDoc).set({kIsSelectedKey: false});
emit(FCWOff());
}
return;
}
isSelected = !isSelected;
if (isSelected) {
await adas.doc(kFCWDoc).set({kIsSelectedKey: isSelected});
emit(FCWOn());
} else {
await adas.doc(kFCWDoc).set({kIsSelectedKey: isSelected});
emit(FCWOff());
}
} catch (e) {
showSnackBar(context, "autoDriveCubit error : $e");
}
}

void setFCWToDefault() {
adas.doc(kFCWDoc).set({kIsSelectedKey: false});
}

4) Blind Spot:
class BlindSpotCubit extends Cubit<BlindSpotState> {
BlindSpotCubit() : super(BlindSpotInitial()) {
__blindSpotRealTimeWarning();
}
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);
void setBlindSpot(bool isSelected) {
try {
adas.doc(kBlindSpotDoc).set(
{
kIsSelectedKey: isSelected,
},
SetOptions(merge: true),
);
} catch (e) {
emit(BlindSpotFailure(error: e.toString()));
}

}

void turnOffRightWarning() {
try {
adas
.doc(kBlindSpotDoc)
.set({kRightWarningKey: false}, SetOptions(merge: true));
} catch (e) {
emit(BlindSpotFailure(error: e.toString()));
}
}

void turnOffLeftWarning() {
try {
adas
.doc(kBlindSpotDoc)
.set({kLeftWarningKey: false}, SetOptions(merge: true));
} catch (e) {
emit(BlindSpotFailure(error: e.toString()));
}
}

void __blindSpotRealTimeWarning() {
adas.doc(kBlindSpotDoc).snapshots().listen((event) {
Map<String, dynamic> data = event.data() as Map<String, dynamic>;
if (data[kIsSelectedKey] == true) {
if (data[kRightWarningKey] == true) {
emit(BlindSpotRightWarningOn());
}
if (data[kLeftWarningKey] == true) {
emit(BlindSpotLeftWarningOn());
}
}
}).onError((error) => emit(BlindSpotFailure(error: error.toString())));
}

void setBlindSpotToDefault() {
adas.doc(kBlindSpotDoc).update({
kIsSelectedKey: false,
kRightWarningKey: false,
kLeftWarningKey: false,
});
}
}

123
5) Car Control (speed, break , left , right , P , D , R ):

class CarControll {
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);
Future<void> carState(context, {required String carState}) async {
try {
await adas.doc(kCarStateKey).set(
{
kCarStateKey: carState,
},

);
} catch (e) {
showSnackBar(context, 'car state error : $e');
}
}

Future<void> right(context, {required bool isRight}) async {


try {
await adas.doc(kRightKey).set({
kRightKey: isRight,
});
} catch (e) {
showSnackBar(context, "right error : $e");
}
}

Future<void> left(context, {required bool isLeft}) async {


try {
await adas.doc(kLeftKey).set({
kLeftKey: isLeft,
});
} catch (e) {
showSnackBar(context, "left error : $e");
}
}

Future<void> speed(context, {required bool isspeed}) async {


try {
await adas.doc(kSpeedKey).set({
kSpeedKey: isspeed,
});
} catch (e) {
showSnackBar(context, "speed error : $e");
}
}

Future<void> breaK(context, {required bool isbreak}) async {

try {
await adas.doc(kBreakKey).set({
kBreakKey: isbreak,
});
} catch (e) {
showSnackBar(context, "break error : $e");
}
}
}

6) Forward and Backward Warning:

class ForwardBackwardWarningCubit extends Cubit<ForwardBackwardWarningState> {


ForwardBackwardWarningCubit() : super(ForwardBackwardWarningInitial()) {
__forwardAndBackWardRealTimeWarning();
}
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);

cancelForwardWarning() {
try {
adas.doc(kForwardBackwardWarningDoc).update({kForwardWarningKey: false});
} catch (e) {
emit(ForwardBackwardWarningFailure(error: e.toString()));
}
}

cancelBackwardWarning() {
try {
adas.doc(kForwardBackwardWarningDoc).update({kBackwardWarningKey: false});
} catch (e) {
emit(ForwardBackwardWarningFailure(error: e.toString()));
}
}

__forwardAndBackWardRealTimeWarning() {
adas.doc(kForwardBackwardWarningDoc).snapshots().listen((event) {
Map<String, dynamic> data = event.data() as Map<String, dynamic>;
if (data[kForwardWarningKey] == true) {
emit(ForwardWarningOn());
}
if (data[kBackwardWarningKey] == true) {
emit(BackwardWarningOn());
}

}).onError((error) =>
emit(ForwardBackwardWarningFailure(error: error.toString())));
}
}

125
7) Get current speed of car:

class GetSpeedCubit extends Cubit<GetSpeedState> {
  GetSpeedCubit() : super(GetSpeedInitial()) {
    __getSpeedRealTime();
  }
  CollectionReference adas =
      FirebaseFirestore.instance.collection(kAdasCollection);
  void __getSpeedRealTime() {
    adas.doc(kGetSpeedDoc).snapshots().listen((event) {
      Map<String, dynamic> data = event.data() as Map<String, dynamic>;
      emit(GetSpeedSuccess(currentSpeed: data[kCurrentSpeedKey]));
    }).onError((error) => emit(GetSpeedFailure(error: error.toString())));
  }
}

8) get current battery of car:


class GetSpeedCubit extends Cubit<GetSpeedState> {
GetSpeedCubit() : super(GetSpeedInitial()) {
__getSpeedRealTime();
}
CollectionReference adas =
FirebaseFirestore.instance.collection(kAdasCollection);
void __getSpeedRealTime() {
adas.doc(kGetSpeedDoc).snapshots().listen((event) {
Map<String, dynamic> data = event.data() as Map<String, dynamic>;
emit(GetSpeedSuccess(currentSpeed: data[kCurrentSpeedKey]));
}).onError((error) => emit(GetSpeedFailure(error: error.toString())));
}
}

126
Ch5. Testing

127
1. Introduction
1.1 PURPOSE OF THE DOCUMENTATION
The purpose of this documentation is to provide a comprehensive
guide for understanding, setting up, and operating the test car used in our
collision avoidance application project. This document aims to serve
multiple audiences, including developers, researchers, and students, by
detailing the hardware and software components, assembly instructions,
and operational procedures. By offering clear, step-by-step guidance and
thorough explanations of the system's functionality, it is intended to make
the test car straightforward to reproduce, operate, and extend.

1.2 THE ROLE OF THE TEST CAR IN THE PROJECT


The test car plays a crucial role in our collision avoidance application
by serving as a controlled, predictable platform for evaluating and fine-
tuning our algorithms. The test car, equipped with an ESP32
microcontroller and motor control system, mimics real-world driving
scenarios in a safe and manageable environment. By simulating various
driving conditions and potential collision scenarios, the test car allows us
to rigorously test and validate the performance of our collision avoidance
system before deployment in actual vehicles.

2. Hardware

2.1 COMPONENTS
2.1.1 ESP32 Microcontroller

The ESP32 is a powerful microcontroller developed by Espressif


Systems. It features a dual-core processor with integrated Wi-Fi and
Bluetooth capabilities, making it an excellent choice for IoT applications.
The ESP32's high processing power, combined with its extensive GPIO pins
and support for various communication protocols, allows for seamless
integration with other hardware components. In this project, the ESP32
serves as the central control unit, processing input from the WebSocket
server and controlling the car's motors based on user commands.

128
2.1.2 Motor Driver

A motor driver is essential for interfacing the microcontroller with
the DC motors. In this project we used the L298N motor driver. It receives
control signals from the ESP32 and provides the necessary current and
voltage to drive the motors. The L298N is an H-bridge driver, which allows
bi-directional control of the motors, enabling the car to move forward,
move backward, and turn left or right. It also provides speed control
through PWM (Pulse Width Modulation) signals, allowing precise movement
adjustments.

2.1.3 DC Motors

The DC motors are the primary actuators that move the test car.
These motors convert electrical energy into mechanical motion. The
project uses two DC motors, each connected to one of the car's wheels. By
controlling the speed and direction of these motors, the test car can
perform various maneuvers, such as moving forward, reversing, and
turning. The motors are driven by the motor driver based on commands
from the ESP32, ensuring coordinated and controlled movement.

2.1.4 Power Supply

A stable and reliable power supply is crucial for the operation of the
test car. The power supply provides the necessary voltage and current to
the ESP32, motor driver, and DC motors. Typically, a rechargeable battery
pack is used to power the test car, offering portability and ease of use. We
used two batteries (3v each) as the voltage rating of the power supply
should match the requirements of the motors and the microcontroller to
ensure efficient and safe operation.

2.1.5 Wi-Fi Module (built-in ESP32)

The built-in Wi-Fi module in the ESP32 is a key component for


enabling wireless communication between the test car and the user's
control interface. This module allows the ESP32 to connect to a Wi-Fi
network, creating a local access point for communication. Through this
connection, commands can be sent from a web-based interface to the
ESP32 via WebSocket, enabling real-time control of the car. The Wi-Fi

capability also facilitates easy updates and debugging, making the ESP32 a
versatile and powerful choice for the project.

3. Software Components
3.1 LIBRARIES USED

3.1.1 Arduino.h
Purpose:
The Arduino.h library is the core library for Arduino development,
providing the essential functions and definitions for programming
microcontrollers in the Arduino ecosystem.

Usage in the Project:

Pin Configuration: Used to set the motor control pins as outputs in the
setUpPinModes function.

Motor Control: Used in the processCarMovement function to control the


direction and speed of the car's motors.

3.1.2 WiFi.h
Purpose: The WiFi.h library enables the ESP32 microcontroller to connect
to Wi-Fi networks and act as a Wi-Fi access point or station.

Usage in the Project: Access Point Setup: WiFi.softAP(ssid, password) sets


up the ESP32 as a Wi-Fi access point to allow the client to connect and
control the car.

3.1.3 AsyncTCP.h
Purpose: The AsyncTCP.h library provides asynchronous TCP
communication, which is essential for non-blocking network operations. It
is often used in conjunction with the ESPAsyncWebServer library.

Usage in the Project: Foundation for Async Web Server: The AsyncTCP.h
library is required for the ESPAsyncWebServer to function, providing the
underlying TCP capabilities.

130
3.1.4 ESPAsyncWebServer.h
Purpose: The ESPAsyncWebServer.h library provides an asynchronous web
server for the ESP32, enabling efficient handling of HTTP requests and
WebSocket communication.

Usage in the Project:


HTTP Server: Handles HTTP requests to serve the control interface and
handle 404 errors.

WebSocket Communication: Manages the WebSocket connection to


receive control commands from the client and send responses.

3.2 CODE EXPLANATION
3.2.1 Global
AsyncWebServer server(80);

This line creates an instance of the AsyncWebServer class,


which is used to handle HTTP requests and responses.

AsyncWebServer: This is a class provided by the


ESPAsyncWebServer library. It is used to create an
asynchronous web server that can handle multiple requests
simultaneously without blocking the main program
execution.

server: This is the name of the instance of the AsyncWebServer class.

(80): This specifies the port number on which the server will listen for
incoming HTTP requests. Port 80 is the default port for HTTP traffic.

By creating this instance, you are setting up an HTTP server that listens on
port 80, ready to handle incoming HTTP requests from clients (e.g., web
browsers).

AsyncWebSocket ws("/ws");

This line creates an instance of the AsyncWebSocket class, which is used to
handle WebSocket connections.

AsyncWebSocket: This is a class provided by the ESPAsyncWebServer
library. It is used to create a WebSocket server that can handle real-time
communication between the server and clients.

ws: This is the name of the instance of the AsyncWebSocket class.

("/ws"): This specifies the URL path for the WebSocket endpoint. Clients
will connect to this WebSocket server using the URL ws://<server_ip>/ws.

By creating this instance, you are setting up a WebSocket server that
listens on the /ws endpoint, ready to handle WebSocket connections from
clients.

3.2.2 Setup function


The setup function is called once when the ESP32 is powered up or after a
hard reset. It initializes the hardware and sets up the network and web
server so that the ESP32 microcontroller can function as a Wi-Fi access
point and a WebSocket server. Here's a detailed breakdown of each part:

setUpPinModes()
This function is called to configure the GPIO
pins of the ESP32 that control the motor
drivers. It sets the pin modes to OUTPUT so that they can be used to send
signals to the motors.

Serial.begin(115200)
This initializes the serial communication at a baud rate of 115200 bits per
second. It is used for debugging purposes, allowing you to print messages
to the serial monitor.

Setting Up the Wi-Fi Access Point
WiFi.softAP(ssid, password): This function sets up the ESP32 as a Wi-Fi
access point with the SSID (network name) and password specified by the
ssid and password variables.

IPAddress IP = WiFi.softAPIP(): Retrieves the IP address assigned to the
ESP32 in access point mode.

Serial.print and Serial.println: Print the IP address to the serial monitor for
debugging purposes.

Setting Up the HTTP Server


server.on("/", HTTP_GET, handleRoot): This sets up an HTTP GET route for
the root URL ("/"). When a client makes a GET request to the root URL, the
handleRoot function is called to handle the request.

server.onNotFound(handleNotFound): This sets up a handler for requests
to URLs that do not match any defined routes. The handleNotFound
function is called to handle 404 (Not Found) errors.

ws.onEvent(onWebSocketEvent);

This line registers an event handler for the WebSocket server.

onEvent: This method of the AsyncWebSocket class is used to register a
callback function that will handle various WebSocket events (such as
connection, disconnection, and message reception).

onWebSocketEvent: This is the name of the callback function that will
handle the WebSocket events. This function will be called whenever a
WebSocket event occurs.

By setting ws.onEvent(onWebSocketEvent);, you are specifying that the
onWebSocketEvent function should be called to handle events related to
the WebSocket.

server.addHandler(&ws);

This line adds the WebSocket handler to the HTTP server.

addHandler(&ws): This method of the AsyncWebServer class adds the
WebSocket server (represented by ws) to the HTTP server.

By adding the WebSocket handler to the HTTP server, you are integrating
WebSocket functionality into the web server. This allows the server to
handle WebSocket connections on the specified path (in this case, "/ws").

server.begin();

This line starts the HTTP server.

begin(): This method of the AsyncWebServer class starts the server,
making it ready to accept incoming HTTP and WebSocket requests.

By calling server.begin();, you are starting the web server, enabling it to
listen for and respond to HTTP requests as well as WebSocket connections.
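The full setup() routine was shown as a screenshot in the original report; a
minimal sketch that follows the description above (assuming the ssid,
password, handleRoot, handleNotFound and onWebSocketEvent names used
throughout this chapter) is:

void setup()
{
  setUpPinModes();                      // configure the motor-driver pins as outputs
  Serial.begin(115200);                 // serial output for debugging

  WiFi.softAP(ssid, password);          // start the ESP32 access point
  IPAddress IP = WiFi.softAPIP();       // IP address assigned to the ESP32
  Serial.print("AP IP address: ");
  Serial.println(IP);

  server.on("/", HTTP_GET, handleRoot); // serve the control page
  server.onNotFound(handleNotFound);    // 404 handler

  ws.onEvent(onWebSocketEvent);         // register the WebSocket callback
  server.addHandler(&ws);               // attach the WebSocket endpoint to the server

  server.begin();                       // start listening on port 80
}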

3.2.3 Setup pin modes


This function, setUpPinModes(), configures specific pins on the ESP32 as
outputs and initializes them to a low state.

pinMode: Configures the motor control pins as output pins.

digitalWrite: Sets the initial state of these output pins to LOW.

This setup ensures that the motor control pins are ready for use and are
initially turned off. In the functions, we use defined names (constants) to
refer to specific pin numbers. This approach improves code readability and
maintainability.
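A minimal sketch of setUpPinModes(), assuming hypothetical pin-name
constants for the two motor-driver channels (the real project defines its own
constant names and pin numbers):

// Hypothetical pin assignments for illustration only.
#define MOTOR1_PIN1 16
#define MOTOR1_PIN2 17
#define MOTOR2_PIN1 18
#define MOTOR2_PIN2 19

void setUpPinModes()
{
  pinMode(MOTOR1_PIN1, OUTPUT);
  pinMode(MOTOR1_PIN2, OUTPUT);
  pinMode(MOTOR2_PIN1, OUTPUT);
  pinMode(MOTOR2_PIN2, OUTPUT);

  // Drive all motor inputs LOW so the car stays still after boot.
  digitalWrite(MOTOR1_PIN1, LOW);
  digitalWrite(MOTOR1_PIN2, LOW);
  digitalWrite(MOTOR2_PIN1, LOW);
  digitalWrite(MOTOR2_PIN2, LOW);
}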

3.2.4 Handle root


Purpose: This function handles HTTP GET requests to the root URL ("/") of
the web server.

Parameter: AsyncWebServerRequest *request: A pointer to the request
object that represents the HTTP request.

Action:

• request->send_P(200, "text/html", htmlHomePage);
• send_P is a method of the AsyncWebServerRequest object used to send a response to the client.
• 200 is the HTTP status code indicating the request was successful.
• "text/html" specifies the content type of the response, indicating that the content is HTML.
• htmlHomePage is a pointer to the HTML content stored in program memory (using the PROGMEM keyword).

A minimal sketch of this handler is shown below.
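Assuming htmlHomePage is the PROGMEM string that holds the control page,
the handler can be sketched as:

// Serves the control page stored in flash for GET requests to "/".
void handleRoot(AsyncWebServerRequest *request)
{
  request->send_P(200, "text/html", htmlHomePage);
}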
3.2.5 Handle not found
Purpose: This function handles requests to
URLs that are not found on the server.

Parameter: AsyncWebServerRequest *request: A pointer to the request
object that represents the HTTP request.

Action:

• request->send(404, "text/plain", "File Not Found");
• send is a method of the AsyncWebServerRequest object used to send a response to the client.
• 404 is the HTTP status code indicating that the requested resource was not found.
• "text/plain" specifies the content type of the response, indicating that the content is plain text.
• "File Not Found" is the plain text message sent as the response content.

A corresponding sketch appears below.
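A minimal sketch of the 404 handler:

// Returns a plain-text 404 response for any URL without a registered route.
void handleNotFound(AsyncWebServerRequest *request)
{
  request->send(404, "text/plain", "File Not Found");
}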

3.2.6 Web socket event
The onWebSocketEvent function handles the WebSocket events raised by
the server. The following breakdown describes each part of the function and
its workflow.

Parameters:
• AsyncWebSocket *server: Pointer to the WebSocket server.
• AsyncWebSocketClient *client: Pointer to the WebSocket client triggering the event.
• AwsEventType type: The type of WebSocket event (e.g., connect, disconnect, data).
• void *arg: Additional arguments related to the event.
• uint8_t *data: Pointer to the data received.
• size_t len: Length of the received data.
Event Handling
The function handles different WebSocket events using a switch statement
based on the type parameter.

WS_EVT_CONNECT

Description: This case is triggered when a new WebSocket client connects
to the server.

Actions: Logs a message indicating the client's ID and remote IP address.

WS_EVT_DISCONNECT

Description: This case is triggered when a WebSocket client disconnects
from the server.

Actions:

• Logs a message indicating the client's ID.
• Calls processCarMovement(0) to stop the car's movement when a client disconnects.
WS_EVT_DATA

Description: This case is triggered when the server receives data from a
client.

Actions:

• Casts arg to an AwsFrameInfo pointer to get frame information.
• Checks that the data frame is complete (info->final), is the first frame (info->index == 0), has the correct length (info->len == sizeof(int)), and is binary (info->opcode == WS_BINARY).
• Copies the received data into an integer variable integerValue using memcpy.
• Calls processCarMovement(integerValue) to control the car based on the received integer value.
WS_EVT_PONG and WS_EVT_ERROR

Description: These cases are triggered for pong and error events.

Actions: Currently, no specific actions are taken for these events.
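A condensed sketch of this handler, following the behaviour described above
(the original code was shown as a screenshot; the logging format here is
illustrative):

void onWebSocketEvent(AsyncWebSocket *server, AsyncWebSocketClient *client,
                      AwsEventType type, void *arg, uint8_t *data, size_t len)
{
  switch (type)
  {
    case WS_EVT_CONNECT:
      Serial.printf("Client #%u connected from %s\n",
                    client->id(), client->remoteIP().toString().c_str());
      break;

    case WS_EVT_DISCONNECT:
      Serial.printf("Client #%u disconnected\n", client->id());
      processCarMovement(0);          // 0 = STOP: halt the car when a client drops
      break;

    case WS_EVT_DATA:
    {
      AwsFrameInfo *info = (AwsFrameInfo *)arg;
      if (info->final && info->index == 0 &&
          info->len == sizeof(int) && info->opcode == WS_BINARY)
      {
        int integerValue = 0;
        memcpy(&integerValue, data, sizeof(int));
        processCarMovement(integerValue);
      }
      break;
    }

    case WS_EVT_PONG:
    case WS_EVT_ERROR:
    default:
      break;                          // no action for pong/error events
  }
}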

3.2.7 Process car movement
Purpose:

The processCarMovement function controls the movement of the test car
based on the input value received. It sets the appropriate motor directions
and speeds to move the car in the desired direction (up, down, left, right,
or stop).

Parameters: int inputValue: An integer representing the desired movement
direction of the car. The possible values are defined as constants:

• UP: Move the car forward.
• DOWN: Move the car backward.
• LEFT: Turn the car left.
• RIGHT: Turn the car right.
• STOP: Stop the car.
Switch Statement: The switch statement checks the value of inputValue
and executes the corresponding case to control the car's movement.

• UP: Sets both motors to move forward at the maximum speed.
• DOWN: Sets both motors to move backward at the maximum speed.
• LEFT: Sets motor 2 to move forward at a lower speed and motor 1 to move forward at maximum speed, causing the car to turn left.
• RIGHT: Sets motor 2 to move forward at maximum speed and motor 1 to move forward at a lower speed, causing the car to turn right.
• STOP: Stops both motors by setting their control pins to LOW.

A simplified sketch of this function follows.
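This sketch assumes illustrative command codes and a hypothetical
setMotorSpeeds(motor1, motor2) helper; the real firmware writes the
motor-driver pins (and PWM duty cycles) directly.

// Illustrative command codes; the project defines its own values.
#define STOP  0
#define UP    1
#define DOWN  2
#define LEFT  3
#define RIGHT 4

const int MAX_SPEED  = 255;   // full PWM duty
const int TURN_SPEED = 120;   // reduced duty used on the slower motor while turning

// Placeholder: the real code sets direction pins and PWM for each motor.
// Positive values mean forward, negative backward, zero stop.
void setMotorSpeeds(int motor1Speed, int motor2Speed)
{
  Serial.printf("motor1=%d motor2=%d\n", motor1Speed, motor2Speed);
}

void processCarMovement(int inputValue)
{
  switch (inputValue)
  {
    case UP:    setMotorSpeeds( MAX_SPEED,  MAX_SPEED);  break;  // both forward, full speed
    case DOWN:  setMotorSpeeds(-MAX_SPEED, -MAX_SPEED);  break;  // both backward, full speed
    case LEFT:  setMotorSpeeds( MAX_SPEED,  TURN_SPEED); break;  // motor 1 full, motor 2 slower
    case RIGHT: setMotorSpeeds( TURN_SPEED, MAX_SPEED);  break;  // motor 2 full, motor 1 slower
    case STOP:
    default:    setMotorSpeeds(0, 0);                    break;  // all control pins LOW
  }
}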
3.2.8 Loop
The loop() function in the ESP32 code is responsible for maintaining the
WebSocket connections by cleaning up inactive clients. Here is the
detailed explanation:

Function Purpose: The loop() function is an integral part of the Arduino
framework. It runs continuously after the setup() function has
completed. In this specific code, the loop() function performs
a single task:

ws.cleanupClients();: This method belongs to the AsyncWebSocket class
and is responsible for cleaning up WebSocket clients.

Detailed Explanation: Continuous Execution: The loop() function runs in an
infinite loop, repeatedly executing its contents as fast as the
microcontroller can.

WebSocket Client Cleanup: The cleanupClients() method is called on the
ws object (an instance of AsyncWebSocket).

Purpose: This method scans through all connected WebSocket clients and
performs necessary cleanup tasks. These tasks might include closing
connections that are no longer active or removing clients that have
disconnected.

Importance: Regular cleanup of clients is crucial for maintaining the
stability and performance of the WebSocket server. Without this cleanup,
resources might not be released properly, leading to potential memory
leaks or an increased number of inactive connections.
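Based on the description above, the body of loop() reduces to a single call:

void loop()
{
  ws.cleanupClients();   // drop inactive or disconnected WebSocket clients
}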

4. Workflow of the Process


The workflow of the test car starts when a user connects to the Wi-Fi
network created by the car and continues until the user sends commands
that affect the car's motor motions. Here's a step-by-step explanation:

4.1. Setup Phase


• Powering Up: The car is powered on, and the ESP32 microcontroller
begins executing the setup() function.
• Setting Up Pin Modes: The setUpPinModes() function is called to
configure the GPIO pins connected to the motor driver as output
pins.
• Initializing Serial Communication: Serial communication is initialized
for debugging purposes.
• Setting Up Wi-Fi Access Point

The ESP32 creates a Wi-Fi access point using the provided
SSID and password.
The HTTP server and WebSocket server are initialized and
configured to handle client requests.
4.2. Client Connection
• Connecting to Wi-Fi: The user connects their device to the Wi-Fi
network created by the car.
• Accessing the Control Interface: The user opens a web browser and
navigates to the car's IP address to access the control interface
served by the ESP32.
4.3. User Interaction
• Loading the Web Interface: The HTML page is loaded, initializing the
WebSocket connection to the ESP32
• User Touches a Control: The user touches a control button on the
web interface, triggering the onTouchStartAndEnd function.
4.4. WebSocket Communication
1. Sending a Command: The onTouchStartAndEnd function sends a
WebSocket message containing the integer value representing the
command (e.g., UP, DOWN, LEFT, RIGHT, STOP).
2. Receiving the Command: The ESP32 receives the WebSocket message
and invokes the onWebSocketEvent function.
4.5. Processing the Command
• Interpreting the Command: The processCarMovement function
interprets the integer value and adjusts the motor driver pins
accordingly.
• Controlling the Motors: The appropriate GPIO pins are set to HIGH
or LOW to control the direction and speed of the motors, causing
the car to move as commanded.
4.6. Continuous Operation
• Cleaning Up WebSocket Clients: In the loop() function, the
WebSocket clients are continuously cleaned up to ensure smooth
operation.

Ch6. Linux Customization

Introduction
Building a custom operating system for embedded systems like the Raspberry Pi 4 offers
several significant advantages, including optimizing performance, reducing unnecessary
components, and enhancing security. Customizing the OS allows developers to tailor the
system to meet specific requirements, leading to a more efficient and secure deployment.

Why Customize an OS?


1. Performance Optimization: By including only the necessary components, the system can
run more efficiently, which is critical in resource-constrained environments.
2. Security: Reducing the attack surface by excluding unnecessary software can enhance
the overall security of the system.
3. Size Reduction: A custom OS can be significantly smaller, freeing up valuable storage
space and making updates and maintenance easier.
4. Control and Flexibility: Full control over the OS allows for fine-tuning and customization
that proprietary operating systems do not offer.
5. Support for Specific Hardware: Customizing the OS ensures better compatibility and
performance with specific hardware configurations, like the Raspberry Pi 4.

Key Points for Linux OS Customization


Toolchain: Essential for cross-compiling software for the target architecture.
Bootloader: Responsible for initializing hardware and loading the kernel.
Kernel: The core of the OS, managing hardware, running processes, and providing essential
services.
Root Filesystem: Contains the necessary binaries, libraries, and configuration files to boot and
run the system.

This documentation will clarify the process of setting up a custom OS for the Raspberry Pi 4,
from setting up the toolchain to configuring the bootloader, kernel, and root filesystem.

2. Toolchain Overview
Components
Compiler: Converts source code into executable binaries.
Libraries: Essential libraries for development.
Debugger: Tools for debugging applications.
Build System: Manages the build process.

Workflow
Host Machine: The development machine on which the code is written and cross-compiled.
Target Machine: The embedded device that runs the resulting binaries (here, the Raspberry Pi 4).

3. Setting Up the Toolchain


System Requirements
Host machine specifications
Supported operating systems

Installation Steps
1. Install Essential Tools:

2. Clone, Configure, and Install crosstool-NG:
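The exact commands were shown as screenshots in the original report; a
representative sequence on a Debian/Ubuntu host (the package list is
illustrative, and the repository URL is the upstream crosstool-NG project) is:

# 1. Install essential build tools
sudo apt update
sudo apt install -y build-essential git autoconf bison flex texinfo \
                    help2man gawk libtool-bin libncurses-dev unzip

# 2. Clone, configure, and install crosstool-NG (local install)
git clone https://github.com/crosstool-ng/crosstool-ng.git
cd crosstool-ng
./bootstrap
./configure --enable-local
make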

4.1 Finding a Toolchain
You can either use a pre-built toolchain or compile your own through crosstool-NG. For the
latter, follow these steps:

1. Clean and List Samples:

2. Choose Sample and Configure:

3. Build the Toolchain:

The output path for your toolchain will be ~/x-tools .
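With the locally built ct-ng tool, these three steps correspond roughly to:

./ct-ng distclean        # 1. remove any previous configuration and build artefacts
./ct-ng list-samples     #    list the bundled sample configurations
./ct-ng <sample-name>    # 2. select a sample as the starting configuration
./ct-ng menuconfig       #    optionally fine-tune the configuration
./ct-ng build            # 3. build the toolchain; it is installed under ~/x-tools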

4.2 Building a Toolchain for Raspberry Pi 4


Use crosstool-NG to build a toolchain for Raspberry Pi 4:
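For example, assuming a 64-bit target and the sample name shipped with
recent crosstool-NG releases (run list-samples to confirm the exact name):

./ct-ng aarch64-rpi4-linux-gnu   # select the Raspberry Pi 4 sample
./ct-ng build                    # produces ~/x-tools/aarch64-rpi4-linux-gnu/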

5. Understanding the Toolchain


The toolchain comprises various components, including static and dynamic libraries.
Understanding these components is crucial for effective cross-compilation.

6. Cross Compiling Applications
Learn to cross-compile applications for Raspberry Pi 3 using the configured toolchain.

Compiling Programs
GCC Command:

Running Programs
QEMU:

Debugging Programs
GDB:
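The command screenshots are not reproduced here; representative
invocations, assuming the aarch64-rpi4-linux-gnu toolchain built earlier and
a simple hello.c, are:

# Compile for the target with the cross GCC
aarch64-rpi4-linux-gnu-gcc -o hello hello.c

# Run the target binary on the host under QEMU user-mode emulation
qemu-aarch64 -L ~/x-tools/aarch64-rpi4-linux-gnu/aarch64-rpi4-linux-gnu/sysroot ./hello

# Debug with the cross GDB (if it was enabled in the toolchain configuration)
aarch64-rpi4-linux-gnu-gdb ./hello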

7. Best Practices for Workflow Organization
Setting Up Sysroot
1. Define the sysroot path.

2. Configure the compiler to use the sysroot.

PATH Management
1. Add Toolchain to PATH:

2. Make Alias for Raspberry Pi 3 G++ Compiler:
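Illustrative shell commands for these steps (the paths and the alias name
are assumptions, not taken from the original screenshots):

# Sysroot: define it once and pass it to the compiler
export SYSROOT=~/x-tools/aarch64-rpi4-linux-gnu/aarch64-rpi4-linux-gnu/sysroot
aarch64-rpi4-linux-gnu-gcc --sysroot="$SYSROOT" -o app app.c

# PATH management: make the toolchain binaries reachable
export PATH=~/x-tools/aarch64-rpi4-linux-gnu/bin:$PATH

# Alias for the target G++ compiler
alias rpi-g++='aarch64-rpi4-linux-gnu-g++'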

8. Example Workflow
1. Write Code:

2. Compile Code:

3. Transfer to Target:

4. Run on Target:

./hello
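A representative end-to-end example (the target IP address and user name
are placeholders):

# 1. Write code
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello from the Raspberry Pi!\n"); return 0; }
EOF

# 2. Compile code with the cross-compiler
aarch64-rpi4-linux-gnu-gcc -o hello hello.c

# 3. Transfer to the target
scp hello pi@192.168.1.10:/home/pi/

# 4. Run on the target
ssh pi@192.168.1.10 ./hello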

9. Troubleshooting
Common Issues
Compiler Not Found:
Ensure the toolchain path is correctly added to $PATH.
Execution Errors:
Verify the target architecture compatibility.
How to add new libraries?
Update the sysroot with the new library files and include paths.

Bootloader
Overview
The bootloader is a crucial piece of software responsible for initializing the hardware and
starting the operating system. It is typically stored in non-volatile memory and runs immediately
after the system is powered on.

Bootloader Stages
1. Power On
When the system is powered on, the bootloader process begins. The power-on sequence
involves several stages that ensure the hardware is properly initialized before the operating
system is loaded.

2. First Stage: Initial Booting Sequence


BIOS/ROM: The system starts with the BIOS or ROM code, which performs initial hardware
checks and setup.
CPU: The Central Processing Unit (CPU) starts executing the code from the BIOS/ROM.
Peripherals: Initial setup of essential peripherals.

3. Secondary Program Loader (SPL)


The SPL is responsible for loading the full bootloader into memory. Key tasks performed by the
SPL include:

Initializing DRAM and other essential hardware components.
Copying the bootloader from non-volatile memory (such as NAND or eMMC) to DRAM.
Preparing the system for bootloader execution.

4. Full Bootloader (U-Boot)
Once the SPL has completed its tasks, control is handed over to the full bootloader, typically U-
Boot. The full bootloader performs more comprehensive hardware initialization and prepares the
system to load the operating system. U-Boot tasks include:

Loading and executing the kernel.
Setting up device trees.
Configuring environment variables and boot parameters.

System on Chip (SOC)


The boot process is highly dependent on the architecture and capabilities of the System on Chip
(SOC). The SOC integrates various components, including:

CPU: The central processing unit responsible for executing instructions.


Peripherals: Interfaces for communication with external devices.
DRAM: Dynamic RAM for temporary data storage.
ROM: Read-only memory for storing the bootloader and other critical code.
SRAM: Static RAM for fast, small-scale storage.
Boot Configuration: Determines the boot mode and parameters.

SOC Usage Scenarios


General Purpose: SOCs can be used in general-purpose computing applications such as
personal computers.
Microcontroller: SOCs can be tailored for specific tasks in microcontroller applications.

Operation
The bootloader operates through several key stages:
1. Download Source Code: Obtain the bootloader source code.
2. Configure: Customize the bootloader for the specific SOC and hardware.
3. Compile: Build the bootloader binary using tools such as GCC.
4. Deploy: Flash the bootloader binary to the target device.
5. Execute: The bootloader initializes the hardware and loads the operating system.

Download Source Code
Download the U-Boot source code from the official repository:

Configure U-Boot
Configure U-Boot for your specific board. For example, for an ARM-based board:

Customize Configuration (Optional)


Customize the U-Boot configuration if needed:

Compile U-Boot
Compile the U-Boot bootloader:

Deploy U-Boot
Flash the compiled U-Boot binary to the target device. The method of deployment will depend
on your hardware. Here are a few examples:

For SD Card:
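The original screenshots are summarised by the following representative
commands; the defconfig name assumes a 64-bit Raspberry Pi 4 build, and
the mount path of the SD card's boot partition will differ per system:

# Download the U-Boot source code
git clone https://source.denx.de/u-boot/u-boot.git
cd u-boot

# Configure U-Boot for the board
make CROSS_COMPILE=aarch64-rpi4-linux-gnu- rpi_4_defconfig

# Customize the configuration (optional)
make CROSS_COMPILE=aarch64-rpi4-linux-gnu- menuconfig

# Compile U-Boot
make CROSS_COMPILE=aarch64-rpi4-linux-gnu- -j"$(nproc)"

# Deploy: copy u-boot.bin to the FAT boot partition of the SD card
cp u-boot.bin /media/$USER/boot/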

U-Boot Commands
Common U-Boot Commands:

Print environment variables:

Set environment variables:

Save environment variables:

Boot the kernel:

Load a file over TFTP:

Load a file from an SD card:

3.2 Operations on Device Tree
Device Tree → Binary:

Binary → Device Tree:
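Both conversions are done with the device tree compiler (dtc); the file
names below are the Raspberry Pi 4 ones:

# Device tree source -> binary blob
dtc -I dts -O dtb -o bcm2711-rpi-4-b.dtb bcm2711-rpi-4-b.dts

# Binary blob -> device tree source
dtc -I dtb -O dts -o bcm2711-rpi-4-b.dts bcm2711-rpi-4-b.dtb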

4. U-Boot
U-Boot supports multiple architectures and boards.

Operations

Downloading U-Boot:

Building U-Boot for RASPI3:

Installing U-Boot (SD-CARD Version)

1. Use a partitioning tool to create a FAT16 partition on the SD card.
2. Install U-Boot into the /boot partition.
3. Insert the SD card into your hardware.

4.1 Using U-Boot
4.1.1 Convert zImage into uImage (HOST)

4.1.2 Loading Images from Flash into RAM (inside U-Boot)

4.1.3 Booting Kernel
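Representative commands for these three steps (load and entry addresses
are examples and must match the target's memory map):

# 4.1.1 Convert zImage into uImage on the host
mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 \
        -n "Linux kernel" -d zImage uImage

# 4.1.2 Load images from the SD card into RAM (inside U-Boot)
fatload mmc 0:1 ${kernel_addr_r} uImage
fatload mmc 0:1 ${fdt_addr_r} bcm2711-rpi-4-b.dtb

# 4.1.3 Boot the kernel with the device tree
bootm ${kernel_addr_r} - ${fdt_addr_r}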

Boot Configuration
The boot configuration defines how the bootloader operates and interacts with the hardware. It
includes:

Boot Modes: Various boot modes determine how the bootloader is loaded (e.g., from NAND, SD
card, network).
Device Tree: Describes the hardware components and their configuration to the operating
system.

Kernel Development Documentation
First, ensure the following packages are installed on the system:
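The package list in the original was a screenshot; a typical set on a
Debian/Ubuntu host (illustrative, following common Raspberry Pi
kernel-build guides) is:

sudo apt install -y git bc bison flex libssl-dev make libncurses-dev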

1. Actions to Build Kernel


1.1 Choose Kernel Version
a. From vendor: Raspberry Pi Documentation

b. From the main branch:

1. Visit Kernel.org for main branches (Kernel Archives).


2. Download the version:

3. Extract the downloaded file:
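Representative commands for both options (the kernel version is an
example):

# a. From the vendor (Raspberry Pi kernel tree)
git clone --depth=1 https://github.com/raspberrypi/linux

# b. From kernel.org
# 2. Download the version
wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.6.tar.xz
# 3. Extract the downloaded file
tar -xf linux-6.6.tar.xz
cd linux-6.6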

1.2 Understand Kernel Source Code Structure


Explore the structure of the kernel source code to familiarize yourself with its components.

1.3 Understanding Kernel Config Mechanism
The kernel configuration is a critical step in building the Linux kernel. It determines what
features and drivers will be included. This is typically done using the menuconfig tool, which
provides a menu-driven interface for configuration.

Steps to Configure the Kernel

1. Start menuconfig:

2. Navigate the Menu:
Use the arrow keys to navigate the menu. Press Enter to select a menu item and Esc to
go back. Each item in the menu corresponds to a kernel feature or driver that can be enabled or
disabled.
3. Select Features:
[*] or [M] : Enabled (built-in or as a module)
[ ] : Disabled
4. Save Configuration:
After selecting the desired features, save the configuration to .config by exiting the
menu (usually by selecting Exit and confirming that you want to save).
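For the Raspberry Pi 4, configuration is normally started from the board's
default configuration and then refined with menuconfig, for example (the
defconfig name is from the Raspberry Pi vendor tree, and the cross-compiler
prefix is the one built earlier):

make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- bcm2711_defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- menuconfig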

Example Configuration Options

Processor Type and Features: Choose the specific CPU architecture and processor features.
General Setup: Basic system settings, including kernel compression and initial RAM disk
(initramfs) support.
Device Drivers: Enable or disable drivers for various hardware components like USB, network,
and sound.
File Systems: Select support for different file systems like ext4, XFS, or NTFS.
Networking: Configure network protocols and options, including IPv4, IPv6, and wireless
networking.

1.4 Identify Kernel Version
To check the kernel version:
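From inside the kernel source tree this can be done with:

make kernelversion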

1.5 Compiling Kernel for Specific Target


Most Important Targets:

vmlinux, zImage, uImage, distclean

Most Used targets:
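For a 64-bit Raspberry Pi 4 build, a typical invocation (target names as used
by the arm64 kernel build) is roughly:

make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- Image modules dtbs -j"$(nproc)"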

2. Compiling Device Tree
a. Find device tree for target
Device tree for Raspi4:

b. Compile device tree
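With the kernel tree configured as above, this typically looks like (the
source path is the one used by recent kernels):

# a. Device tree source for the Raspberry Pi 4
ls arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dts

# b. Compile the device trees for the configured target
make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- dtbs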

3. Booting Kernel on QEMU

Root Filesystem Documentation for Raspberry Pi 4


Overview of Root Filesystem
The root filesystem (RFS) is the primary filesystem in a Unix-like operating system. It contains
all the necessary files and directories required for the system to boot and run. For Raspberry Pi
4, the root filesystem is critical for the device's functionality and performance.

Setting Up the Root Filesystem


To set up the root filesystem on a Raspberry Pi 4, follow these steps:

1. Format the SD Card:

2. Create a Mount Point:

3. Mount the Filesystem:

4. Copy Files to the Filesystem:
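Representative commands for these four steps (the device name /dev/sdb2
is an example; verify it before formatting):

# 1. Format the SD card's root partition as ext4
sudo mkfs.ext4 /dev/sdb2

# 2. Create a mount point
sudo mkdir -p /mnt/rfs

# 3. Mount the filesystem
sudo mount /dev/sdb2 /mnt/rfs

# 4. Copy files to the filesystem
sudo cp -a rootfs/* /mnt/rfs/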

Mounting the Root Filesystem


To ensure the root filesystem is mounted correctly during boot, update the /etc/fstab file:
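An illustrative entry (the device name depends on how the SD card
enumerates on the Raspberry Pi):

/dev/mmcblk0p2  /  ext4  defaults  0  1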

This line specifies the device, mount point, filesystem type, mount options, dump, and fsck
order.

Essential Components
A root filesystem typically includes the following directories:
/bin: Essential command binaries
/etc: Configuration files
/home: User home directories
/lib: Shared libraries
/root: Root user's home directory
/sbin: System binaries
/usr: User binaries and libraries

Creating a Custom Root Filesystem with BusyBox
BusyBox combines tiny versions of many common UNIX utilities into a single small executable,
making it ideal for embedded systems like the Raspberry Pi 4.

Steps to Create a Custom Root Filesystem with BusyBox:


1. Download BusyBox:

2. Configure BusyBox:

In the configuration menu, select the desired features and ensure the installation directory is set
to /mnt/rfs .
3. Build and Install BusyBox:

4. Create Necessary Directories:

5. Copy Libraries:

Identify the required libraries for BusyBox and copy them to the root filesystem.
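Representative commands for these five steps (the BusyBox version and the
library-copy step are illustrative; in practice only the libraries BusyBox
actually links against need to be copied):

# 1. Download BusyBox
wget https://busybox.net/downloads/busybox-1.36.1.tar.bz2
tar -xjf busybox-1.36.1.tar.bz2
cd busybox-1.36.1

# 2. Configure BusyBox (set the installation prefix to /mnt/rfs)
make CROSS_COMPILE=aarch64-rpi4-linux-gnu- menuconfig

# 3. Build and install BusyBox
make CROSS_COMPILE=aarch64-rpi4-linux-gnu- -j"$(nproc)"
make CROSS_COMPILE=aarch64-rpi4-linux-gnu- CONFIG_PREFIX=/mnt/rfs install

# 4. Create necessary directories
mkdir -p /mnt/rfs/{proc,sys,dev,etc,lib,tmp}

# 5. Copy the shared libraries BusyBox needs from the toolchain sysroot
cp -a "$SYSROOT"/lib/* /mnt/rfs/lib/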

Once the root filesystem is ready, deploy it to the Raspberry Pi 4:

1. Unmount the Filesystem:
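For example:

sudo umount /mnt/rfs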

2. Insert the SD Card into the Raspberry Pi 4.


3. Power on the Raspberry Pi 4 and ensure it boots correctly.

Maintaining the Root Filesystem


Regular maintenance of the root filesystem is essential for optimal performance. Here are some
tips:

Regular Updates:

Filesystem Check:

Backup Important Data:
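Typical examples of each task (the update command assumes a
Debian-based userland such as Raspberry Pi OS; the partition name is an
example):

# Regular updates
sudo apt update && sudo apt full-upgrade

# Filesystem check (run on an unmounted partition)
sudo fsck.ext4 -f /dev/mmcblk0p2

# Backup important data
sudo tar -czf rootfs-backup.tar.gz -C / etc home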

Conclusion
Customizing an operating system for the Raspberry Pi 4 allows for a tailored and optimized
computing environment. This documentation has outlined the comprehensive process for
achieving this customization, including:

1. Setting Up the Development Environment: Establishing a robust development
environment with all necessary tools and dependencies.
2. Selecting a Base OS: Choosing an appropriate base operating system such as Raspbian,
Ubuntu, or a minimal Linux distribution.
3. Customizing the Kernel: Modifying the kernel to include or exclude features according to
specific requirements.
4. Building Custom Packages: Creating and integrating custom software packages and
drivers.
5. Configuring Boot Settings: Adjusting bootloader and startup configurations for optimal
performance.
6. Enhancing Security: Implementing security measures including user permissions, firewall
settings, and encryption.
7. Optimizing Performance: Fine-tuning system performance through techniques like
overclocking and memory management.
8. Creating a Custom Image: Packaging the customized OS into a distributable image for
easy deployment.
9. Testing and Debugging: Conducting thorough testing of the custom OS on the Raspberry
Pi 4, identifying issues, and performing necessary debugging.

By following these steps, the project successfully demonstrates the capability to create a highly
efficient and specialized operating system that maximizes the potential of the Raspberry Pi 4.
This customized OS can be utilized for various applications, ranging from educational projects
to specific practical implementations, showcasing the versatility and power of the Raspberry Pi 4
platform.

