Grad Book
ADAS
Suez Canal University
July 2024
Contents
Abstract
Ch 1. Introduction
1.1 Introduction
1.2 Problem Statement
1.3 Objectives
1.4 Thesis Outline
Ch 2. STM32
ARM architecture
Cortex-A series
Nested Vectored Interrupt Controller (NVIC)
STM32F103C8T6 Microcontroller
GPIO of STM32F103C8T6
Timers in STM32F103C8T6
I2C in STM32F103C8T6
Ch 3. AI Models
1. Speed-Hump Sign Detection
2. Traffic Light Detection
3. Lane Detection
Ch 4. Mobile Application & Design
Ch 5. Testing
Abstract
Greetings from the start of a new era in automotive technology
and safety. The “Advanced Driving Assistance System (ADAS)” is a
shining example of innovation in the automobile sector, and it is
the product of the hardworking students of Suez Canal
University's Faculty of Engineering’s Department of Electrical
Engineering.
Turning the pages of this book will take you on a tour through the
painstaking process of creating ADAS. Every stage of the process,
from conception to completion, demonstrates our dedication to
safety, innovation, and societal advancement. We cordially
encourage you to explore the intricacies, difficulties, and
achievements that characterize our project.
This book should be seen as a tribute to the abilities of young
engineers who have the guts to dream large and put in endless
effort to make those dreams come true.
Ch 1. Introduction
1.1 Introduction
1.2 Problem Statement
Moreover, as the automotive industry moves towards fully
autonomous driving, the integration of ADAS technologies serves
as a critical stepping stone. The transition from driver-assisted to
fully autonomous vehicles demands seamless interoperability
between various systems and high reliability under all driving
conditions. This progression necessitates rigorous research and
development to overcome technical challenges, such as real-time
processing of vast amounts of data, ensuring system redundancy,
and enhancing machine learning models for predictive analytics.
Thus, this project aims to not only address the immediate
challenges of current ADAS implementations but also to lay the
groundwork for the future of autonomous transportation,
ultimately contributing to safer roads and more efficient traffic
management.
Optimally combining sensor data:
- Autonomous and ADAS-equipped vehicles require highly accurate sensing, with both low false-positive and low false-negative rates. Cost is also a factor, so these requirements cannot generally be met with a single type of sensor; multiple sensor types need to be used together. Combining the sensor outputs is a challenge addressed by a sensor-fusion system; however, the few solutions available on the market generally fail to combine the sensor streams in a way that accounts for the strengths and weaknesses of the different sensors.
1.3 Objectives
The primary objective of this thesis is to enhance vehicle safety
by developing and integrating Advanced Driver Assistance
Systems (ADAS) technologies that reduce the likelihood of
accidents through real-time alerts and automated interventions
for drivers. Another significant objective is to improve driver
assistance by creating and implementing features such as lane-
keeping assistance, adaptive cruise control, and automated
parking, thus reducing driver workload and improving
convenience. This project aims to investigate and optimize the
fusion of data from multiple sensors, including cameras, LiDAR,
and radar, to enhance the accuracy and reliability of ADAS
functionalities. Additionally, robust algorithms for object
detection, recognition, and classification will be designed and
validated to operate effectively under diverse environmental
conditions and complex driving scenarios.
Finally, developing a mobile application that provides the driver with real-time alerts and vehicle status is an important objective of this project.
1.4 Thesis Outline
Chapter 1: Introduction
The introductory chapter sets the stage for the thesis by
providing an overview of the motivation, objectives, and
significance of the project. It outlines the challenges in the
current state of vehicle safety and driver assistance technologies
and explains the need for advanced ADAS solutions. The chapter
also introduces the key research questions and objectives that
the thesis aims to address.
Chapter 2: STM32
This chapter provides an in-depth exploration of the STM32
microcontroller used in the ADAS project. It covers the
architecture and features of the STM32, explaining why it was
selected for this project. The chapter details the integration of
the STM32 with various sensors and the role it plays in processing
data for ADAS functionalities. Additionally, it discusses the
development environment, programming techniques, and
challenges encountered during the implementation.
Chapter 3: AI Models
This chapter focuses on the development and implementation of
AI models used in the ADAS project. It covers the design and
training of models for lane detection, speed sign detection, and
lights detection. The chapter discusses the datasets used, the
preprocessing techniques applied, and the evaluation metrics
employed to assess model performance. Additionally, it provides
insights into the challenges faced and solutions devised during
the development of these AI models.
Chapter 4: Mobile Application Design and Development
This chapter presents the design and development of the mobile
application that interfaces with the ADAS system. It discusses the
use of Figma for designing the app's user interface and the
development process for creating a seamless user experience.
The chapter also explains how the mobile app interacts with the
ADAS system, providing real-time alerts and information to the
driver.
Chapter 5: Testing
Chapter 5 presents testing of the ADAS system. It includes the
methodologies used for testing the system under various
conditions, the metrics for assessing performance, and the
results obtained from both simulated and real-world
environments. The chapter discusses the system's effectiveness
in improving vehicle safety and driver assistance, as well as its
compliance with relevant standards and regulations.
Ch 2. STM32
ARM architecture
ARM architecture stands out as a pivotal force in modern computing due to
its unique blend of efficiency, scalability, and versatility. Its RISC design
philosophy ensures high performance and low power consumption, making
ARM processors ideal for a wide range of applications, from smartphones
and tablets to embedded systems and high-performance servers. The
extensive ecosystem and widespread adoption by major tech companies
underscore ARM's critical role in the evolution of digital technology. As
ARM continues to evolve, with advancements seen in the latest ARMv9, it
is poised to further cement its dominance in both existing and emerging
markets, driving innovation and efficiency in the tech industry.
Cortex-M series
The ARM Cortex-M comprises a collection of 32-bit RISC ARM processor
cores licensed by ARM Limited. These cores are specifically designed for
cost-effective and energy-efficient integrated circuits, widely used in
billions of consumer devices. While commonly found in microcontroller
chips, they are also integrated into various other chip types. The Cortex-M
family encompasses Cortex-M0, Cortex-M0+, Cortex-M1, Cortex-M3,
Cortex-M4, Cortex-M7, Cortex-M23, Cortex-M33, Cortex-M35P, Cortex-
M52, Cortex-M55, and Cortex-M85. Some of these cores offer a floating-
point unit (FPU) option, available for Cortex-M4, M7, M33, M35P, M52,
M55, and M85 cores, denoted as "Cortex-MxF" when the FPU is
integrated.
Cortex-M3
Architecture and Classification:
• Microarchitecture: ARMv7-M
• Instruction Set: Thumb-1, Thumb-2, Saturated (some), Divide
Key Features:
• ARMv7-M Architecture: Provides enhanced performance and
efficiency.
• Pipeline: 3-stage pipeline with branch speculation.
• Instruction Set Compatibility:
o Thumb-1 (entire).
o Thumb-2 (entire).
o Saturated arithmetic support.
o 32-bit hardware integer multiply with 32-bit or 64-bit result
(signed or unsigned, with add or subtract after multiply).
o 32-bit hardware integer divide (2–12 cycles).
• Interrupt Handling:
o Supports 1 to 240 interrupts, plus Non-Maskable Interrupt
(NMI).
o 12 cycle interrupt latency.
• Integrated Sleep Modes: Power-saving modes for energy-efficient
operation.
• Silicon Options:
o Optional Memory Protection Unit (MPU) with up to 8 regions.
Chips Utilizing Cortex-M3 Core:
1. ABOV: AC33Mx128, AC33Mx064
2. Actel/Microsemi/Microchip: SmartFusion, SmartFusion 2 (FPGA)
3. Analog Devices: ADUCM360, ADUCM361, ADUCM3029
4. Holtek: HT32F
5. Infineon: TLE9860, TLE987x
6. Microchip (Atmel): SAM 3A, 3N, 3S, 3U, 3X
7. NXP: LPC1300, LPC1700, LPC1800
8. ON: Q32M210
9. Realtek: RTL8710
10. Silicon Labs: Precision32
11. Silicon Labs (Energy Micro): EFM32 Tiny, Gecko, Leopard, Giant
12. ST: STM32 F1, F2, L1, W
13. TDK-Micronas: HVC4223F
14. Texas Instruments: F28, LM3, TMS470, OMAP 4, SimpleLink Wireless MCUs (CC1310 Sub-GHz and CC2650 BLE+Zigbee+6LoWPAN)
15. Toshiba: TX03
Cortex-A series
The Cortex-A series, designed by ARM Holdings, offers high-performance,
power-efficient processors commonly used in smartphones, tablets, and
embedded systems. With scalability, multicore support, and ARMv8
architecture for 64-bit computing, they provide customization options and
integrate with other components like GPUs. Security features such as
TrustZone technology ensure data protection. Overall, Cortex-A processors
strike a balance between performance and efficiency, making them suitable
for various applications across industries.
• Use Cases: Advanced smartphones, tablets, network devices.
• Technical Specs: ARMv7-A architecture, 32-bit cores, out-of-order
execution, high performance.
3. Cortex-A53, A55:
• Key Features: Based on ARMv8-A architecture, supporting 64-bit
computing.
• Use Cases: Modern smartphones, tablets, high-end embedded
systems.
• Technical Specs: ARMv8-A architecture, 32/64-bit cores,
big.LITTLE configuration support for energy efficiency.
4. Cortex-A57, A72, A73:
• Key Features: High performance for advanced computing and
multimedia applications.
• Use Cases: High-end smartphones, tablets, computing devices.
• Technical Specs: ARMv8-A architecture, 64-bit cores, improved
performance and power efficiency.
• The Raspberry Pi 4 used in this project is based on the Cortex-A72.
In our project, we have chosen to implement the Cortex-M3 core due to its
well-balanced performance and versatility in embedded systems. Below is
an overview of the key characteristics of the Cortex-M3 core:
Microarchitecture
• ARMv7-M: The Cortex-M3 core is based on the ARMv7-M
microarchitecture, offering a modern and efficient foundation for
embedded applications.
Instruction Set
• Thumb-1, Thumb-2: The Cortex-M3 core supports the Thumb
instruction set architecture, providing efficient code execution and
improved code density, which is essential for embedded systems with
limited memory resources.
Advanced Instruction Support
• Saturated Arithmetic: Efficient handling of signal processing tasks
ensuring predictable behavior in mathematical operations.
• Divide: Hardware support for integer division operations, enhancing
computational capabilities.
Integrated Sleep Modes
• Power-efficient sleep modes optimize energy consumption,
extending battery life in battery-powered devices and reducing
overall power consumption in energy-conscious applications.
Memory Protection and Security Features
• Optional Memory Protection Unit (MPU): Provides hardware
support for memory protection, allowing definition of memory access
permissions to enhance system security.
• TrustZone Technology: hardware-enforced isolation between secure and non-secure code and data, enhancing system security in IoT and other sensitive applications, is offered on newer Cortex-M cores such as the Cortex-M23/M33; the Cortex-M3 itself relies on the optional MPU and its privilege levels for protection.
Interrupt Handling
• Supports a configurable number of interrupts (1 to 240), enabling
efficient event-driven programming and real-time responsiveness.
Wide Range of Supported Chips and Development Boards
• The Cortex-M3 core is implemented in various microcontrollers from
different manufacturers, providing flexibility in selecting the right
chip for our project. Popular development boards like the STM32
series from STMicroelectronics offer Cortex-M3-based options,
providing a convenient platform for prototyping and development.
NVIC functional description
The NVIC supports up to 240 interrupts each with up to 256 levels of
priority. You can change the priority of an interrupt dynamically. The
NVIC and the processor core interface are closely
coupled, to enable low latency interrupt processing and efficient processing
of late arriving interrupts. The NVIC maintains knowledge of the stacked,
or nested, interrupts to enable tail-chaining of interrupts. You can only fully
access the NVIC from privileged mode, but you can cause interrupts to
enter a pending state in user mode if you enable this capability using the
Configuration Control Register. Any other user mode access causes a bus
fault. You can access all NVIC registers using byte, halfword, and word
accesses unless otherwise stated. NVIC registers are located within the
SCS. All NVIC registers and system debug registers are little-endian
regardless of the endianness state of the processor.
Software interfaces:
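Below is a minimal CMSIS-style sketch of this interface (assuming the standard stm32f10x.h device header; EXTI line 0 is just an example interrupt source):

#include "stm32f10x.h"

void nvic_example(void)
{
    /* set the priority of the EXTI line-0 interrupt; a lower value means
       a higher priority (STM32F1 parts implement 4 priority bits) */
    NVIC_SetPriority(EXTI0_IRQn, 5);
    /* enable the interrupt in the NVIC */
    NVIC_EnableIRQ(EXTI0_IRQn);
    /* software can also force an interrupt into the pending state */
    NVIC_SetPendingIRQ(EXTI0_IRQn);
}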
SysTick Timer in Cortex-M3
The SysTick timer is a core component of the Cortex-M3 processor,
designed to provide timing and scheduling capabilities in embedded
systems. It offers a straightforward and efficient way to generate periodic
interrupts and measure time intervals. Here's an overview of the SysTick
timer and its features:
Timer Functionality
• The SysTick timer is a 24-bit down-counter that decrements at a
configurable frequency, typically based on the processor clock
frequency. Once the counter reaches zero, it generates an interrupt,
allowing the processor to execute a specific interrupt service routine
(ISR).
System Tick Interrupt
• The interrupt generated by the SysTick timer is commonly referred to
as the "System Tick Interrupt." It serves various purposes, such as
implementing real-time operating system (RTOS) tick handlers,
scheduling tasks, measuring time intervals, and triggering periodic
system maintenance tasks.
Software interfaces:
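A minimal sketch using the CMSIS SysTick_Config() helper, assuming a 72 MHz core clock (the 1 ms tick period is an example choice):

#include "stm32f10x.h"

volatile uint32_t tick_ms = 0;

void SysTick_Handler(void)   /* runs each time the down-counter hits zero */
{
    tick_ms++;
}

int main(void)
{
    /* 72 000 000 / 1000 = 72 000 clocks per interrupt -> a 1 ms tick;
       the reload value must fit in the 24-bit counter (max 16 777 215) */
    SysTick_Config(SystemCoreClock / 1000);

    while (1) {
        /* tick_ms increments once per millisecond */
    }
}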
STM32F103C8T6 Microcontroller
The STM32F103xx medium-density performance line family incorporates
the high-performance Arm® Cortex®-M3 32-bit RISC core operating at a
72 MHz frequency, high-speed embedded memories (Flash memory up to
128 Kbytes and SRAM up to 20 Kbytes), and an extensive range of
enhanced I/Os and peripherals connected to two APB buses. All devices
offer two 12-bit ADCs, three general-purpose 16-bit timers plus one PWM timer, as well as standard and advanced communication interfaces: up to two I2Cs and SPIs, three USARTs, a USB and a CAN.
The devices operate from a 2.0 to 3.6 V power supply. They are available in both the –40 to +85 °C temperature range and the –40 to +105 °C extended temperature range. A comprehensive set of power-saving modes allows the design of low-power applications.
Reset and clock control (RCC) of the STM32F103C8T6
Clocks
Three different clock sources can be used to drive the system clock
(SYSCLK):
HSI oscillator clock
HSE oscillator clock
PLL clock
HSI clock
The HSI clock signal is generated from an internal 8 MHz RC oscillator and can be used directly as the system clock, or divided by 2 to be used as the PLL input.
The HSI RC oscillator has the advantage of providing a clock source at low cost (no external components). It also has a faster startup time than the HSE crystal oscillator; however, even with calibration, the frequency is less accurate than an external crystal oscillator or ceramic resonator.
PLLs are used to derive different clock frequencies from an external crystal
or oscillator, providing the necessary clock signals for the CPU,
peripherals, and other system components. For example, the
STM32F103C8T6 microcontroller uses a PLL to generate its system clock
from an external 8 MHz crystal, which can be multiplied to achieve higher
clock frequencies, such as 72 MHz.
RCC configuration for this application:
This configuration selects the PLL as the clock source for the microcontroller. The PLL input is the 8 MHz HSE clock, multiplied by 9 to produce an output clock of 72 MHz. The AHB pre-scaler is 1, meaning that any peripheral connected to the AHB bus runs at 72 MHz. The APB low-speed pre-scaler (APB1) is 4, so any peripheral connected to the APB1 bus has an 18 MHz clock. The APB high-speed pre-scaler (APB2) is 1, so any peripheral connected to the APB2 bus has a 72 MHz clock.
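A register-level sketch of exactly this configuration, assuming the CMSIS stm32f10x.h register definitions and reset-default register values:

#include "stm32f10x.h"

void clock_config_72mhz(void)
{
    /* start the external 8 MHz crystal oscillator and wait until ready */
    RCC->CR |= RCC_CR_HSEON;
    while (!(RCC->CR & RCC_CR_HSERDY)) { }

    /* two flash wait states are required for SYSCLK above 48 MHz */
    FLASH->ACR |= FLASH_ACR_LATENCY_2;

    /* PLL source = HSE, multiplier = 9: 8 MHz x 9 = 72 MHz;
       AHB /1 = 72 MHz, APB1 /4 = 18 MHz, APB2 /1 = 72 MHz */
    RCC->CFGR |= RCC_CFGR_PLLSRC | RCC_CFGR_PLLMULL9
               | RCC_CFGR_HPRE_DIV1 | RCC_CFGR_PPRE1_DIV4 | RCC_CFGR_PPRE2_DIV1;

    /* enable the PLL, wait for lock, then switch SYSCLK to the PLL */
    RCC->CR |= RCC_CR_PLLON;
    while (!(RCC->CR & RCC_CR_PLLRDY)) { }
    RCC->CFGR = (RCC->CFGR & ~RCC_CFGR_SW) | RCC_CFGR_SW_PLL;
    while ((RCC->CFGR & RCC_CFGR_SWS) != RCC_CFGR_SWS_PLL) { }
}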
GPIO of STM32F103C8T6
The STM32F103C8T6 microcontroller features versatile General Purpose
Input/Output (GPIO) ports, providing extensive capabilities for interfacing
with various peripherals and external devices. Here’s a detailed overview of
the GPIO functionalities for this specific microcontroller:
GPIO Overview:
• Total GPIO Pins: 37 (arranged in ports A, B, C, and D)
• Port Availability:
o Port A: PA0 to PA15
o Port B: PB0 to PB15
o Port C: PC13 to PC15
o Port D: PD0 to PD1 (the OSC_IN/OSC_OUT pins, usable as GPIO)
Key Features:
1. Port Modes:
o Input Modes:
▪ Analog Input
▪ Floating Input (default)
▪ Input with pull-up
▪ Input with pull-down
o Output Modes:
▪ Push-Pull
▪ Open-Drain
Input configuration
When the I/O Port is programmed as Input:
➢ The Output Buffer is disabled
➢ The Schmitt Trigger Input is activated
➢ The weak pull-up and pull-down resistors are activated or not
depending on input configuration (pull-up, pull-down or floating):
➢ The data present on the I/O pin is sampled into the Input Data
register every APB2 clock cycle
➢ A read access to the Input Data register obtains the I/O State.
Output configuration
When the I/O Port is programmed as Output:
➢ The Output Buffer is enabled:
➢ Open Drain Mode: A “0” in the Output register activates the N-
MOS while a “1” in the Output register leaves the port in Hi-Z (the
P-MOS is never activated)
➢ Push-Pull Mode: A “0” in the Output register activates the N-MOS
while a “1” in the Output register activates the P-MOS
➢ The Schmitt Trigger Input is activated.
➢ The weak pull-up and pull-down resistors are disabled.
➢ The data present on the I/O pin is sampled into the Input Data
register every APB2 clock cycle
➢ A read access to the Input Data register gets the I/O state in open
drain mode
➢ A read access to the Output Data register gets the last written value in
Push-Pull mode
Alternate Function Modes:
When the I/O Port is programmed as Alternate Function:
➢ The Output Buffer is turned on in Open Drain or Push-Pull
configuration
➢ The Output Buffer is driven by the signal coming from the peripheral
(alternate function out)
➢ The Schmitt Trigger Input is activated
➢ The weak pull-up and pull-down resistors are disabled.
➢ The data present on the I/O pin is sampled into the Input Data
register every APB2 clock cycle
➢ A read access to the Input Data register gets the I/O state in open
drain mode
➢ A read access to the Output Data register gets the last written value in
Push-Pull mode
2. Speed Configuration:
o Output Speed: Configurable for 2 MHz, 10 MHz, and 50
MHz.
3. Interrupts:
o External interrupt/event lines are available for all GPIO pins, supporting rising-edge, falling-edge, or both-edge triggering.
4. Lock Mechanism:
o GPIO pins can be locked to prevent further changes to the
configuration.
GPIO configurations for device peripherals:
GPIO configurations provided for the TIM1 and TIM8 advanced timers:
GPIO configurations provided for the I2C:
➢ Optimal Configuration for I2C: The GPIO pins designated for I2C
clock (SCL) and data (SDA) lines are set to alternate function open
drain.
➢ Open Drain Benefit: This configuration is ideal for I2C communication
because it allows multiple devices to share the same bus without causing
conflicts. Each device can pull the line low, while a pull-up resistor
ensures the line goes high when no device is actively pulling it low.
➢ Alternate Function Capability: These GPIO pins can alternate
between general-purpose input/output and the specialized function
required for I2C communication, enhancing flexibility in device
utilization.
➢ Application-Specific Design: Implementing these configurations
ensures reliable and efficient communication between your device and
other I2C-compatible peripherals, such as sensors, EEPROMs, or other
integrated circuits.
Power Considerations:
• Each GPIO pin can sink or source a specific amount of current, and
care should be taken not to exceed these ratings to avoid damage.
• The power consumption of GPIO pins in different modes (input,
output, alternate function) should be considered, especially in low-
power applications.
Software interfaces:
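A minimal register-level sketch (CMSIS definitions; PC13 as an output and PA0 as an input are example pins):

#include "stm32f10x.h"

void gpio_example(void)
{
    /* enable the peripheral clocks of ports A and C */
    RCC->APB2ENR |= RCC_APB2ENR_IOPAEN | RCC_APB2ENR_IOPCEN;

    /* PC13 as 2 MHz push-pull output: MODE13 = 10, CNF13 = 00
       (pins 8..15 live in CRH, four configuration bits per pin) */
    GPIOC->CRH = (GPIOC->CRH & ~(0xFu << 20)) | (0x2u << 20);

    /* PA0 as input with pull-up: MODE0 = 00, CNF0 = 10, ODR0 = 1 */
    GPIOA->CRL = (GPIOA->CRL & ~0xFu) | 0x8u;
    GPIOA->ODR |= 1u << 0;

    GPIOC->BSRR = 1u << 13;            /* drive PC13 high (atomic set) */
    GPIOC->BRR  = 1u << 13;            /* drive PC13 low (atomic reset) */

    if (GPIOA->IDR & (1u << 0)) {
        /* PA0 reads high (the pull-up holds the line when undriven) */
    }
}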
Timers in STM32F103C8T6
The STM32F103C8T6 contains one advanced-control timer (TIM1) and three general-purpose timers (TIM2, TIM3 and TIM4).
Timers consist of a 16-bit auto-reload counter driven by a programmable pre-scaler. They may be used for a variety of purposes, including measuring the pulse lengths of input signals (input capture) or generating output waveforms (output compare, PWM, complementary PWM with dead-time insertion). Pulse lengths and waveform periods can be modulated from a few microseconds to several milliseconds using the timer pre-scaler and the RCC clock controller pre-scalers.
The advanced-control (TIM1 and TIM8) and general-purpose (TIMx) timers are completely independent and do not share any resources.
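As a quick sanity check of the arithmetic involved (the values here are illustrative, not taken from the project code), the update (overflow) rate of a timer is:

f_update = f_CK / ((PSC + 1) × (ARR + 1))

For example, a 72 MHz timer clock with PSC = 71 and ARR = 999 gives 72 000 000 / (72 × 1000) = 1 kHz, i.e. a 1 ms period.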
Timer-x block diagram
Timer modes
➢ Up counting
➢ Down counting
➢ Center-aligned mode (up/down counting)
Up counting mode
In up counting mode, the counter counts from 0 to the auto-reload value
(content of the TIMx_ARR register), then restarts from 0 and generates a
counter overflow event. When an update event occurs, all the registers are
updated and the update flag (UIF bit in TIMx_SR register) is set (depending
on the URS bit):
The buffer of the prescaler is reloaded with the preload value (content of
the TIMx_PSC register)
The auto-reload shadow register is updated with the preload value
(TIMx_ARR)
The following figures show some examples of the counter behavior for
different clock frequencies when TIMx_ARR=0x36.
Down counting mode
In downcounting mode, the counter counts from the auto-reload value
(content of the TIMx_ARR register) down to 0, then restarts from the auto-
reload value and generates a counter underflow event. When an update
event occurs, all the registers are updated and the update flag (UIF bit in
TIMx_SR register) is set (depending on the URS bit):
The buffer of the prescaler is reloaded with the preload value (content of
the TIMx_PSC register).
The auto-reload active register is updated with the preload value (content
of the TIMx_ARR register). Note that the auto-reload is updated before the
counter is reloaded, so that the next period is the expected one.
The following figures show some examples of the counter behavior for
different clock frequencies when TIMx_ARR=0x36.
Center-aligned mode (up/down counting)
In center-aligned mode, the counter counts from 0 to the auto-reload value
(content of the TIMx_ARR register) – 1, generates a counter overflow
event, then counts from the auto reload value down to 1 and generates a
counter underflow event. Then it restarts counting from 0. When an update
event occurs, all the registers are updated and the update flag (UIF bit in
TIMx_SR register) is set (depending on the URS bit):
The buffer of the pre-scaler is reloaded with the preload value (content of
the TIMx_PSC register).
The auto-reload active register is updated with the preload value (content
of the
TIMx_ARR register). Note that if the update source is a counter overflow,
the auto reload is updated before the counter is reloaded, so that the next
period is the expected one (the counter is loaded with the new value).
The following figures show some examples of the counter behavior for
different clock frequencies.
Capture/compare channels
Each capture/compare channel is built around a capture/compare register (including a shadow register), an input stage for
capture (with digital filter, multiplexing and pre-scaler) and an output stage
(with comparator and output control). The input stage samples the
corresponding TIx input to generate a filtered signal TIxF. Then, an edge
detector with polarity selection generates a signal (TIxFPx) which can be
used as trigger input by the slave mode controller or as the capture
command. It is pre-scaled before the capture register (ICxPS).
The capture/compare block is made of one preload register and one shadow
register. Write and read always access the preload register.
In capture mode, captures are actually done in the shadow register, which is
copied into the preload register.
In compare mode, the content of the preload register is copied into the
shadow register which is compared to the counter.
The following example shows how to capture the counter value in
TIMx_CCR1 when TI1 input rises. To do this, use the following procedure:
Select the active input: TIMx_CCR1 must be linked to the TI1 input, so
write the CC1S bits to 01 in the TIMx_CCMR1 register. As soon as CC1S
becomes different from 00, the channel is configured in input and the
TIMx_CCR1 register becomes read-only.
Program the needed input filter duration with respect to the signal connected to the timer (by programming the ICxF bits in the TIMx_CCMRx register if the input is one of the TIx inputs). Let's imagine that, when toggling, the input signal is not stable for at most five internal clock cycles. We must therefore program a filter duration longer than these five clock cycles. We can validate a transition on TI1 when eight consecutive samples with the new level have been detected (sampled at the fDTS frequency). To do this, write the IC1F bits to 0011 in the TIMx_CCMR1 register.
Select the edge of the active transition on the TI1 channel by writing the
CC1P bit to 0 in the TIMx_CCER register (rising edge in this case).
Program the input prescaler. In our example, we wish the capture to be
performed at each valid transition, so the prescaler is disabled (write IC1PS
bits to 00 in the TIMx_CCMR1 register).
Enable capture from the counter into the capture register by setting the
CC1E bit in the TIMx_CCER register.
If needed, enable the related interrupt request by setting the CC1IE bit in
the
TIMx_DIER register, and/or the DMA request by setting the CC1DE bit in
the
TIMx_DIER register. When an input capture occurs:
The TIMx_CCR1 register gets the value of the counter on the active
transition.
CC1IF flag is set (interrupt flag). CC1OF is also set if at least two
consecutive captures occurred whereas the flag was not cleared.
An interrupt is generated depending on the CC1IE bit.
A DMA request is generated depending on the CC1DE bit.
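The procedure above maps directly onto a handful of register writes; below is a sketch for TIM2 channel 1 (TIM2 is an assumed example timer, CMSIS register definitions assumed):

#include "stm32f10x.h"

void tim2_ch1_capture_init(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;                /* clock the timer */

    /* 1. CC1 channel as input, IC1 mapped on TI1 (CC1S = 01) */
    TIM2->CCMR1 = (TIM2->CCMR1 & ~TIM_CCMR1_CC1S) | TIM_CCMR1_CC1S_0;
    /* 2. input filter: IC1F = 0011 (8 consecutive samples at fDTS) */
    TIM2->CCMR1 |= 0x3u << 4;
    /* 3. rising edge: CC1P = 0 */
    TIM2->CCER &= ~TIM_CCER_CC1P;
    /* 4. no input pre-scaler: capture at every valid transition */
    TIM2->CCMR1 &= ~TIM_CCMR1_IC1PSC;
    /* 5. enable capture into TIM2_CCR1 */
    TIM2->CCER |= TIM_CCER_CC1E;
    /* 6. enable the capture interrupt, then start the counter */
    TIM2->DIER |= TIM_DIER_CC1IE;
    TIM2->CR1  |= TIM_CR1_CEN;
}

void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_CC1IF) {
        uint16_t captured = TIM2->CCR1;   /* reading CCR1 clears CC1IF */
        (void)captured;                   /* timestamp of the rising edge */
    }
}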
PWM input mode
This mode is a particular case of input capture mode. The procedure is the
same except:
Two ICx signals are mapped on the same TIx input.
These 2 ICx signals are active on edges with opposite polarity.
One of the two TIxFP signals is selected as trigger input and the slave
mode controller is configured in reset mode.
For example, the user can measure the period (in TIMx_CCR1 register) and
the duty cycle (in TIMx_CCR2 register) of the PWM applied on TI1 using
the following procedure (depending on CK_INT frequency and prescaler
value):
Select the active input for TIMx_CCR1: write the CC1S bits to 01 in the
TIMx_CCMR1 register (TI1 selected).
Select the active polarity for TI1FP1 (used both for capture in
TIMx_CCR1 and counter clear): write the CC1P to ‘0’ (active on rising
edge).
Select the active input for TIMx_CCR2: write the CC2S bits to 10 in the
TIMx_CCMR1 register (TI1 selected).
Select the active polarity for TI1FP2 (used for capture in TIMx_CCR2):
write the CC2P bit to ‘1’ (active on falling edge).
Select the valid trigger input: write the TS bits to 101 in the
TIMx_SMCR register (TI1FP1 selected).
Configure the slave mode controller in reset mode: write the SMS bits to
100 in the TIMx_SMCR register.
Enable the captures: write the CC1E and CC2E bits to ‘1 in the
TIMx_CCER register.
PWM mode
Pulse width modulation mode allows generating a signal with a frequency
determined by the value of the TIMx_ARR register and a duty cycle
determined by the value of the TIMx_CCRx register.
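Numerically (with illustrative values): f_PWM = f_CK_CNT / (ARR + 1) and duty cycle = TIMx_CCRx / (ARR + 1). For example, a 1 MHz counter clock with ARR = 999 and CCRx = 250 produces a 1 kHz PWM signal at a 25% duty cycle.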
The PWM mode can be selected independently on each channel (one PWM per OCx output) by writing '110' (PWM mode 1) or '111' (PWM mode 2) in the OCxM bits in the TIMx_CCMRx register. The user must enable the corresponding preload register by setting the OCxPE bit in the TIMx_CCMRx register, and eventually the auto-reload preload register by setting the ARPE bit in the TIMx_CR1 register.
As the preload registers are transferred to the shadow registers only when
an update event occurs, before starting the counter, the user has to initialize
all the registers by setting the UG bit in the TIMx_EGR register.
OCx polarity is software programmable using the CCxP bit in the
TIMx_CCER register. It can be programmed as active high or active low.
OCx output is enabled by the CCxE bit in the TIMx_CCER register. Refer to the TIMx_CCERx register description for more details. In PWM mode (1 or 2), TIMx_CNT and TIMx_CCRx are always compared to determine whether TIMx_CCRx ≤ TIMx_CNT or TIMx_CNT ≤ TIMx_CCRx (depending on the direction of the counter). However, to comply with the ETRF (OCREF can be cleared by an external event through the ETR signal until the next PWM period), the OCREF signal is asserted only:
When the result of the comparison changes, or
When the output compare mode (OCxM bits in the TIMx_CCMRx register) switches from the "frozen" configuration (no comparison, OCxM = '000') to one of the PWM modes (OCxM = '110' or '111'). This allows the PWM to be forced by software while the timer is running.
Software interfaces:
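A sketch of the sequence above for TIM2 channel 1 on PA0 (TIM2/PA0 and the PSC/ARR/CCR values are illustrative choices, CMSIS register definitions assumed):

#include "stm32f10x.h"

void tim2_ch1_pwm_init(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;
    RCC->APB2ENR |= RCC_APB2ENR_IOPAEN;

    /* PA0 (TIM2_CH1) as 50 MHz alternate-function push-pull:
       MODE0 = 11, CNF0 = 10 */
    GPIOA->CRL = (GPIOA->CRL & ~0xFu) | 0xBu;

    /* TIM2 is clocked at 36 MHz in this design: the APB1 pre-scaler is
       not 1, so timers on APB1 receive twice the 18 MHz bus clock */
    TIM2->PSC  = 35;         /* 36 MHz / (35 + 1)  = 1 MHz counter clock */
    TIM2->ARR  = 999;        /* 1 MHz  / (999 + 1) = 1 kHz PWM frequency */
    TIM2->CCR1 = 250;        /* 250 / 1000 = 25 % duty cycle */

    /* PWM mode 1 (OC1M = 110) with the CCR1 preload enabled */
    TIM2->CCMR1 |= TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1 | TIM_CCMR1_OC1PE;
    TIM2->CR1   |= TIM_CR1_ARPE;     /* auto-reload preload */
    TIM2->EGR   |= TIM_EGR_UG;       /* load the preload registers */

    TIM2->CCER  |= TIM_CCER_CC1E;    /* enable the OC1 output */
    TIM2->CR1   |= TIM_CR1_CEN;      /* start the counter */
}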
I2C in STM32F103C8T6
I2C (inter-integrated circuit) bus Interface serves as an interface between
the microcontroller and the serial I2C bus. It provides multimaster
capability, and controls all I2C bus-specific sequencing, protocol,
arbitration and timing. It supports standard mode (Sm, up to 100 kHz) and fast mode (Fm, up to 400 kHz).
It may be used for a variety of purposes, including CRC generation and
verification, SMBus (system management bus) and PMBus (power
management bus).
Depending on the specific device implementation, DMA capability can be available to reduce CPU overhead.
Mode selection
The interface can operate in one of the four following modes:
➢ Slave transmitter
➢ Slave receiver
➢ Master transmitter
➢ Master receiver
By default, it operates in slave mode. The interface automatically switches
from slave to master, after it generates a START condition and from master
to slave, if an arbitration loss or a Stop generation occurs, allowing
multimaster capability.
To operate correctly, the peripheral input clock frequency must be at least:
➢ 2 MHz in Sm mode
➢ 4 MHz in Fm mode
As soon as a start condition is detected, the address is received from the
SDA line and sent to the shift register. Then it is compared with the address
of the interface (OAR1) and with OAR2 (if ENDUAL=1) or the General
Call address (if ENGC = 1).
Header or address not matched: the interface ignores it and waits for
another Start condition.
Header matched (10-bit mode only): the interface generates an
acknowledge pulse if the ACK bit is set and waits for the 8-bit slave
address. Address matched: the interface generates in sequence:
➢ An acknowledge pulse if the ACK bit is set
➢ The ADDR bit is set by hardware and an interrupt is generated if the
ITEVFEN bit is set.
➢ If ENDUAL=1, the software has to read the DUALF bit to check
which slave address has been acknowledged.
In 10-bit mode, after receiving the address sequence the slave is always in
Receiver mode. It will enter Transmitter mode on receiving a repeated Start
condition followed by the header sequence with matching address bits and
the least significant bit set (11110xx1). The TRA bit indicates whether the
slave is in Receiver or Transmitter mode.
Slave transmitter
Following the address reception and after clearing ADDR, the slave sends
bytes from the DR register to the SDA line via the internal shift register.
The slave stretches SCL low until ADDR is cleared and DR is filled with the data to be sent.
When the acknowledge pulse is received:
➢ The TxE bit is set by hardware with an interrupt if the ITEVFEN and
the ITBUFEN bits are set.
If TxE is set and some data were not written in the I2C_DR register before
the end of the next data transmission, the BTF bit is set and the interface
waits until BTF is cleared by a read to I2C_SR1 followed by a write to the
I2C_DR register, stretching SCL low.
Slave receiver
Following the address reception and after clearing ADDR, the slave
receives bytes from the SDA line into the DR register via the internal shift
register. After each byte the interface generates in sequence:
➢ An acknowledge pulse if the ACK bit is set.
➢ The RxNE bit is set by hardware and an interrupt is generated if the
ITEVFEN and ITBUFEN bit is set.
If RxNE is set and the data in the DR register is not read before the end of
the next data reception, the BTF bit is set and the interface waits until BTF
is cleared by a read from I2C_SR1 followed by a read from the I2C_DR
register, stretching SCL low.
➢ The EV1 event stretches SCL low until the end of the corresponding
software sequence.
➢ The EV2 software sequence must be completed before the end of the
current byte transfer
➢ After checking the SR1 register content, the user should perform the
complete clearing sequence for each flag found set.
Thus, for the ADDR and STOPF flags, the following sequence is required inside the I2C interrupt routine:
READ SR1
if (ADDR == 1) {READ SR1; READ SR2}
if (STOPF == 1) {READ SR1; WRITE CR1}
The purpose is to make sure that both ADDR and STOPF flags are cleared if both
are found set.
Software interfaces:
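A sketch of a possible I2C1 setup (100 kHz standard mode from the 18 MHz APB1 clock of this design) and of the ADDR/STOPF clearing sequence above inside the event ISR; CMSIS register definitions are assumed, and SCL/SDA on PB6/PB7 is the assumed board wiring:

#include "stm32f10x.h"

void i2c1_init(void)
{
    RCC->APB2ENR |= RCC_APB2ENR_IOPBEN;
    RCC->APB1ENR |= RCC_APB1ENR_I2C1EN;

    /* PB6 = SCL, PB7 = SDA: 50 MHz alternate-function open-drain
       (MODE = 11, CNF = 11 for each pin) */
    GPIOB->CRL |= (0xFu << 24) | (0xFu << 28);

    I2C1->CR2   = 18;          /* peripheral input clock = 18 MHz */
    I2C1->CCR   = 90;          /* 18 MHz / (2 x 100 kHz) = 90 -> 100 kHz SCL */
    I2C1->TRISE = 19;          /* max rise time in Sm mode: FREQ + 1 */
    I2C1->CR1  |= I2C_CR1_PE;  /* enable the peripheral */
}

/* I2C1 event ISR: implements the ADDR/STOPF clearing sequence above */
void I2C1_EV_IRQHandler(void)
{
    uint16_t sr1 = I2C1->SR1;             /* READ SR1 */

    if (sr1 & I2C_SR1_ADDR) {
        (void)I2C1->SR1;                  /* READ SR1 ...            */
        (void)I2C1->SR2;                  /* ... READ SR2 clears ADDR */
    }
    if (sr1 & I2C_SR1_STOPF) {
        (void)I2C1->SR1;                  /* READ SR1 ...              */
        I2C1->CR1 |= I2C_CR1_PE;          /* ... WRITE CR1 clears STOPF */
    }
}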
Ch 3. AI Models
1. Speed-Hump Sign Detection
Overview:
The network has 24 convolutional layers followed by 2 fully connected layers. Instead of the inception modules used by GoogLeNet, 1 × 1 reduction layers are used, followed by 3 × 3 convolutional layers. Fast YOLO uses a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers. Other than the size of the network, all training and testing parameters are the same between YOLO and Fast YOLO.
YOLO Benefits
YOLO trains on full images and directly optimizes detection performance, making it a unified model.
Limitations of YOLO
YOLO is a fast and simple model, but it does have its own limitations:
1. The model imposes a spatial constraint which limits the number of nearby
objects the model can predict.
2. Model struggles with small objects which appear in groups, like flocks of birds.
3. Since annotations are used during training, the model struggles to generalize
objects in new or unusual aspect ratios or configurations.
4. During training, the loss function only approximates detection performance and treats errors the same in small and large bounding boxes, even though a small error affects the IoU of a small box far more than that of a large box.
Dataset
The dataset comprises 2,457 images, featuring 17 different speed limit
signs and a hump sign, totaling 18 distinct classes. Of these, 1,965 samples
are designated for training, while the remaining 492 samples are reserved
for evaluating the model's generalization and performance on unseen
data.
Model training
import os
os.chdir('/content/drive/MyDrive/yolo_training')
!ls
os.chdir('yolov5')
!ls
# Model prediction class:
import cv2
import numpy as np
import yaml
from yaml.loader import SafeLoader

class YOLO_Pred():
    def __init__(self, onnx_model, data_yaml):
        # load the class names and class count from the data YAML
        with open(data_yaml, mode='r') as f:
            data_yaml = yaml.load(f, Loader=SafeLoader)
        self.labels = data_yaml['names']
        self.nc = data_yaml['nc']
        # load the exported YOLO model with OpenCV's DNN module
        self.yolo = cv2.dnn.readNetFromONNX(onnx_model)
        self.yolo.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
        self.yolo.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

    def predictions(self, image):
        # pad the frame into a square and form a 640x640 input blob
        # (640 is the YOLOv5 default input size)
        img_h, img_w = image.shape[:2]
        max_rc = max(img_h, img_w)
        input_image = np.zeros((max_rc, max_rc, 3), dtype=np.uint8)
        input_image[0:img_h, 0:img_w] = image
        INPUT_WH_YOLO = 640
        blob = cv2.dnn.blobFromImage(input_image, 1 / 255.0,
                                     (INPUT_WH_YOLO, INPUT_WH_YOLO),
                                     swapRB=True, crop=False)
        self.yolo.setInput(blob)
        detections = self.yolo.forward()[0]
        boxes, confidences, classes = [], [], []
        # scale factors from the network input back to the padded image
        x_factor = input_image.shape[1] / INPUT_WH_YOLO
        y_factor = input_image.shape[0] / INPUT_WH_YOLO
        for i in range(len(detections)):
            row = detections[i]
            confidence = row[4]  # confidence of detecting an object
            if confidence > 0.4:
                class_score = row[5:].max()  # maximum class probability
                class_id = row[5:].argmax() # index at which the max probability occurs
                if class_score > 0.25:
                    cx, cy, w, h = row[0:4]
                    # convert the center/size format to a corner box
                    left = int((cx - 0.5 * w) * x_factor)
                    top = int((cy - 0.5 * h) * y_factor)
                    width = int(w * x_factor)
                    height = int(h * y_factor)
                    box = np.array([left, top, width, height])
                    # append values into the lists
                    confidences.append(float(confidence))
                    boxes.append(box)
                    classes.append(class_id)
        # clean
        boxes_np = np.array(boxes).tolist()
        confidences_np = np.array(confidences).tolist()
        # NMS: suppress overlapping boxes
        index = cv2.dnn.NMSBoxes(boxes_np, confidences_np, 0.25, 0.45)
        index = np.array(index).flatten() if len(index) > 0 else []
        for ind in index:
            x, y, w, h = boxes_np[ind]
            text = f'{self.labels[classes[ind]]}: {int(confidences_np[ind] * 100)}%'
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.rectangle(image, (x, y - 30), (x + w, y), (255, 255, 255), -1)
            cv2.putText(image, text, (x, y - 10),
                        cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 0, 0), 1)
        return image
Let's start by discussing some metrics that are not only important to
YOLOv5 but are broadly applicable across different object detection
models.
• Intersection over Union (IoU): IoU is a measure that quantifies the
overlap between a predicted bounding box and a ground truth
bounding box. It plays a fundamental role in evaluating the accuracy
of object localization.
In general, IoU > 50% is considered a good prediction.
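For reference, the computation behind the metric:

IoU = Area of Overlap / Area of Union

For example (illustrative numbers), if a predicted box and its ground-truth box share 60 px² of overlap and jointly cover 100 px², then IoU = 60 / 100 = 0.6, which passes the 50% bar above.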
• Average Precision (AP):
- Precision: The ratio of true positive detections to the total
number of detections. Precision = TP / (TP + FP)
- Recall: The ratio of true positive detections to the total number
of actual objects. Recall = TP / (TP + FN). High recall indicates that
the model misses fewer positive instances.
AP computes the area under the precision-recall curve, providing a
single value that encapsulates the model's precision and recall
performance.
• Mean Average Precision (mAP): mAP extends the concept of AP by
calculating the average AP values across multiple object classes. This
is useful in multi-class object detection scenarios to provide a
comprehensive evaluation of the model's performance.
• Precision and Recall: Precision quantifies the proportion of true
positives among all positive predictions, assessing the model's
capability to avoid false positives. On the other hand, Recall
calculates the proportion of true positives among all actual
positives, measuring the model's ability to detect all instances of a
class.
• F1 Score: The F1 Score is the harmonic mean of precision and recall,
providing a balanced assessment of a model's performance while
considering both false positives and false negatives.
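As a worked example (with illustrative numbers): F1 = 2 × (Precision × Recall) / (Precision + Recall), so Precision = 0.80 and Recall = 0.60 give F1 = 2 × 0.48 / 1.40 ≈ 0.69.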
Model predictions
Model evaluation
Confusion matrix
A confusion matrix is a tool used to evaluate the performance of
classification models. In the context of YOLOv5, which is an object
detection model, the confusion matrix helps to understand how well
the model is performing in terms of correctly detecting and classifying
objects within an image.
2. Traffic Light Detection
Traffic light recognition (TLR) is an integral part of any intelligent vehicle, which must function in the existing infrastructure. Pedestrian and sign detection have recently seen great improvements due to the introduction of learning-based detectors using integral channel features. A similar push has not yet been seen for the detection sub-problem of TLR, where detection is dominated by methods based on heuristic models.
Recognition of traffic lights (TLs) is an integral part of Driver Assistance Systems (DAS) in the transitional period between manually controlled cars and a fully autonomous network of cars. Currently, the focus of research in computer vision systems for vehicles is divided in two. Major industrial research groups, such as Daimler and Google, are investing heavily in autonomous vehicles and attempting to build computer-vision-based systems for the existing infrastructure. Other research, done by academic institutions such as the LISA lab at UC San Diego and LaRA at ParisTech, targets DAS, which is already available to consumers in some high-end models. Existing commercial DAS capabilities include warning of impending collisions, emergency braking, automatic lane changing, keeping the advertised speed limit, and adaptive cruise control. For all parts of DAS the urban environment poses many challenges, especially to systems that rely on computer vision. One of the most important challenges here is detecting and recognizing TLs at intersections. Ideally, TLs should be able to communicate both visually and over radio; however, this requires investment in infrastructure, something that is usually not a high priority.
Traffic light detection is the process of identifying and classifying traffic
lights in an environment, typically using computer vision techniques. This
capability is crucial for applications such as autonomous vehicles and
intelligent transportation systems. Traffic light detection involves
recognizing the presence of traffic lights, determining their state (red,
yellow, green), and potentially tracking their changes over time.
Key Components and Processes
Image Acquisition
Images or video frames are captured using cameras mounted on vehicles
or infrastructure. These images serve as the input for the traffic light
detection system.
Preprocessing
Preprocessing steps are applied to enhance the quality of the images and
make the detection process more efficient. This may include resizing,
normalization, and noise reduction.
Object Detection
Object detection algorithms, such as YOLO (You Only Look Once), Faster R-
CNN, or SSD (Single Shot MultiBox Detector), are used to detect the
presence of traffic lights in the images. These algorithms output bounding
boxes around detected traffic lights along with confidence scores.
Classification
Once traffic lights are detected, they need to be classified into different
states (red, yellow, green). This can be done using color-based
classification, deep learning-based classification, or a combination of both.
Tracking (Optional)
In dynamic scenarios, tracking algorithms can be used to follow the
detected traffic lights across consecutive frames. This helps in maintaining
consistent detection and handling occlusions or temporary visibility issues.
Decision Making
The detected and classified traffic light states are used to make decisions,
such as whether to stop, proceed, or prepare to stop. In autonomous
vehicles, this information is crucial for navigation and ensuring safety at
intersections.
Applications
Autonomous Vehicles
Traffic light detection is a critical component of the perception
system in autonomous vehicles. It allows the vehicle to understand
and respond to traffic signals, ensuring compliance with traffic rules
and safe navigation through intersections.
Driver Assistance Systems
Advanced driver assistance systems (ADAS) use traffic light detection to
provide warnings or recommendations to human drivers. For example, a
system might alert the driver when a red light is detected, helping to
prevent accidents.
Traffic Management
Traffic light detection systems can be used in smart city infrastructure to
monitor and manage traffic flow. By analyzing traffic light states and traffic
patterns, these systems can optimize traffic signal timings and reduce
congestion.
Challenges
Lighting Conditions
Traffic light detection systems must handle varying lighting conditions,
including different times of day, weather conditions, and shadows.
Occlusions
Traffic lights can be partially or fully occluded by other vehicles, buildings,
or foliage, making detection more challenging.
Variability in Traffic Lights
Traffic lights come in different shapes, sizes, and colors, and may be
mounted in various configurations. Detection systems must be robust to
this variability.
Real-time Processing
For applications in autonomous vehicles and driver assistance systems,
traffic light detection must be performed in real-time with low latency to
ensure timely decision-making.
By leveraging advanced computer vision and machine learning techniques,
traffic light detection systems can achieve high accuracy and reliability,
contributing to safer and more efficient transportation systems.
Dataset
The dataset comprises 1,000 images across three classes: red, yellow and green. 800 samples are designated for training, while the remaining 200 samples are reserved for evaluating the model's generalization and performance on unseen data.
Model prediction
Model evaluation
Confusion matrix
F1 score
It is the harmonic mean of precision and recall, and provides a single measure of a model's accuracy that balances both the precision (the accuracy of the positive predictions) and the recall (the ability of the model to find all the positive instances). It is calculated as:
F1 = 2 × (Precision × Recall) / (Precision + Recall)
labels correlogram
A labels correlogram is a visualization that shows the correlation or co-
occurrence frequency of different object classes within the same images. It
is typically represented as a heatmap, where each cell (i, j) represents the
co-occurrence frequency or correlation between class i and class j.
Precision-confidence curve
Recall-confidence curve
Results
3. Lane Detection
History of Lane Detection:
Introduction: Lane detection is a crucial aspect of autonomous driving and
advanced driver assistance systems (ADAS). The technology has evolved
significantly over the years, moving from simple, rule-based methods to
sophisticated, machine learning-driven approaches. This brief history
highlights the key milestones and advancements in lane detection
technology.
Early Developments:
• 1970s - 1980s:
- The concept of lane detection began in the 1970s with research focusing
on basic image processing techniques. Early systems relied on edge
detection and simple
heuristics to identify lane markings.
- In the 1980s, more advanced algorithms using Hough Transform were
introduced.
These methods could detect straight lines in an image, which was useful
for identifying lane boundaries on straight roads.
Advancements in the 1990s:
• Improved Algorithms:
- The 1990s saw the development of more robust algorithms capable of
handling curved lanes. Techniques such as the Hough Transform were
refined to detect parabolic shapes, allowing for the detection of curved
lane markings.
- Research in this period also began to explore the use of Kalman filters for
lane tracking, improving the system's ability to predict and follow lane
positions over time.
2000s:
• Introduction of Machine Learning:
- With the advent of machine learning, lane detection systems began to
incorporate classifiers such as Support Vector Machines (SVMs) to
distinguish between lane markings and other road features.
- The use of image processing techniques combined with machine learning
algorithms allowed for more accurate and reliable lane detection under
various conditions.
• Integration with ADAS:
- Lane detection technology was increasingly integrated into commercial
ADAS, providing features such as lane departure warnings and lane-
keeping assistance.
- These systems combined sensor data from cameras and other sources,
such as lidar and radar, to enhance detection accuracy and reliability.
2010s:
• Deep Learning Revolution:
- The 2010s marked a significant leap forward with the introduction of
deep learning techniques. Convolutional Neural Networks (CNNs) became
the standard for image-based tasks, including lane detection.
- Networks such as AlexNet, VGGNet, and later more specialized
architectures like LaneNet and SCNN (Spatial CNN) were developed to
identify lane markings with high precision.
- Deep learning models trained on large datasets could handle diverse and
challenging road conditions, including variations in lighting, weather, and
road quality.
• Real-time Performance:
- Advances in hardware, particularly GPUs, allowed for the deployment of real-time lane detection systems in commercial vehicles.
-Companies like Tesla, Google (Waymo), and other automotive
manufacturers began incorporating advanced lane detection systems into
their autonomous driving technologies.
Conclusion: Lane detection technology has come a long way from its
early beginnings with simple edge detection techniques to the
sophisticated deep learning-based systems of today. As the field
continues to evolve, the focus remains on improving accuracy, reliability,
and real-time performance to support the advancement of autonomous
driving and enhance road safety.
This brief history outlines the key milestones in lane detection, providing
an overview of how the technology has progressed over the decades. It
highlights the critical role of research and development in advancing lane
detection capabilities, contributing to the broader field of autonomous
driving.
1. Read the video
Usage of cv2.VideoCapture():
cap = cv2.VideoCapture('your_video_file.mp4')
ret, frame = cap.read()  # ret is False when no frame could be read
2. Convert to grayscale
import cv2
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
3. Reduce the noise (Gaussian blur)
Syntax of cv2.GaussianBlur():
cv2.GaussianBlur(src, ksize, sigmaX[, dst[, sigmaY[, borderType]]])
blur = cv2.GaussianBlur(gray, (5, 5), 0)
4. Edge detection
Canny Edge Detection
▪ Strong edges: pixels with gradient values above the high threshold are marked as strong edges.
▪ Weak edges: pixels with gradient values between the low and high thresholds are marked as weak edges.
o Function: cv2.Canny(image, threshold1, threshold2), e.g. edges = cv2.Canny(blur, 50, 150)
5. Finding the lane-lines region of interest
To define a region of interest (ROI) in the context of lane detection, you
typically want to focus on the area of the image where the lanes are
expected to be. This helps reduce processing time and avoid detecting
unwanted edges outside of the lanes. Here’s how you can modify the lane
detection code to include a region of interest:
6. Finding lane lines using the Hough transform
The Hough Transform, introduced by Paul Hough in 1962 and later refined
by Richard Duda and Peter Hart in 1972 for line detection, is a method to
detect shapes within an image, particularly straight lines and circles. It
operates by converting the problem of detecting these shapes from the
image space (x, y coordinates) to a parameter space (commonly known as
Hough space).
Key Concepts:
▪ Create an accumulator (matrix) in the parameter space.
▪ For each edge pixel, compute possible lines (ρ, θ) and
increment corresponding cells in the accumulator.
▪ Identify peaks in the accumulator matrix, which
indicate the parameters of detected lines.
4. Circle Detection:
o For circle detection, a modified Hough Transform is used
where each point (x, y) in the image space votes for possible
circles (x_c, y_c, r) in the parameter space, where (x_c, y_c) is
the center of the circle and r is the radius.
Working Principle
o Parameterization: Each point (x, y) in the image space can be
represented as a line in the Hough parameter space (ρ, θ),
where:
▪ ρ (rho): Distance from the origin (0, 0) to the closest
point on the line.
▪ θ (theta): Angle between the x-axis and the normal to
the line from the origin.
o Steps:
▪ Convert the image to grayscale and detect edges (e.g.,
using the Canny edge detector).
▪ Initialize an accumulator (matrix) in the (ρ, θ)
parameter space to zero.
▪ For each edge point (x, y) in the image:
▪ Compute possible lines (ρ, θ) that pass through
(x, y).
▪ Increment corresponding cells in the
accumulator.
▪ Peaks in the accumulator indicate lines in the image.
o Accumulator Space: Each cell (ρ, θ) in the accumulator
accumulates votes from edge points, where a higher vote
count indicates a higher likelihood of a line passing through
that point.
o Thresholding: After accumulation, thresholding is often
applied to select peaks in the accumulator matrix, which
correspond to lines detected in the image
The Cartesian (x, y) slope-intercept parameterization has a drawback: vertical lines have an infinite slope, which is why the Hough transform uses the polar (ρ, θ) form instead.
7. Averaging the detected line segments to form the right and the left lane lines, and printing the order to go right or left.
8. Getting the order to go right or left
There are two middle points: the first is the midpoint between the two drawn lane lines, and the second is the middle of the frame. Comparing them gives the order for the vehicle to go right or to go left.
Ch 4. Mobile Application & Design
The main aim of this application is to control the car and its features, and to display the state of the car.
This application is built with the Flutter framework, which uses the Dart language; Flutter is backed by Google.
Features of this application:
1) This application is built with the MVVM architecture pattern.
MVVM (Model-View-ViewModel) is a software architectural pattern used primarily in developing user interfaces.
1. Model: holds the application's data and business logic.
2. View: the UI layer that displays the data to the user.
3. ViewModel: mediates between the View and the Model, exposing the state the UI needs.
2) The application consists of three main screens:
1. splash screen.
2. login and register screens.
3. home screen.
1) How is the splash screen built?
It is very simple:
1. choose an image
2. add its relative path to the pubspec.yaml file
3. place it in a certain way on the screen and make a clean design
4. choose a background color
5. make this screen hold for five seconds and then navigate to the next screen
code:
import 'package:adas_app/constants.dart';
import 'package:adas_app/core/functions/add_remove_user_to_shared_preference.dart';
import 'package:adas_app/core/utils/app_images.dart';
import 'package:adas_app/features/auth/presentation/views/login_view.dart';
import 'package:adas_app/features/home/presentation/views/home_view.dart';
import 'package:flutter/material.dart';

class SplashViewBody extends StatefulWidget {
  const SplashViewBody({super.key});

  @override
  State<SplashViewBody> createState() => _SplashViewBodyState();
}

class _SplashViewBodyState extends State<SplashViewBody> {
  @override
  void initState() {
    // hold the splash screen for five seconds, then route to the home
    // screen if a user is already logged in, otherwise to the login screen
    Future.delayed(const Duration(seconds: 5), () async {
      if (await checkCurrUser()) {
        // ignore: use_build_context_synchronously
        Navigator.pushReplacement(
            context, MaterialPageRoute(builder: (context) => const HomeView()));
      } else {
        // ignore: use_build_context_synchronously
        Navigator.pushReplacement(context,
            MaterialPageRoute(builder: (context) => const LoginView()));
      }
    });
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return Center(
      child: Container(
        decoration: const BoxDecoration(
          color: kBackgroundColor,
          image: DecorationImage(image: AssetImage(Assets.imagesAdas2)),
        ),
      ),
    );
  }
}
2) How are the Login and Register screens built?
Flow of these two screens:
1) If the user does not have an account, they press the register button to navigate to the register screen and create an account.
2) After creating a new account, the user is asked to verify the new account.
4) The login screen shows an error if there is a mistake in the email or password and asks the user to enter it again.
5) If the user creates a new account with a weak password, it will tell him so.
6) If the user signs in with the correct email, the next time he uses the app it will take him directly to the home screen, because he is already logged in (unless he signs out).
code of Login screen:
import 'package:adas_app/core/utils/app_style.dart';
import 'package:adas_app/features/auth/presentation/manager/login_cubit/login_cubit.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/dont_have_email_button.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/login_email_form.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/login_password_form.dart';
import 'package:adas_app/features/home/presentation/views/home_view.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:gap/gap.dart';
import 'package:modal_progress_hud_nsn/modal_progress_hud_nsn.dart';
import 'auth_logo.dart';
import 'login_button.dart';
import 'login_with_google.dart';
import 'package:awesome_dialog/awesome_dialog.dart';

class LoginViewBody extends StatefulWidget {
  const LoginViewBody({super.key});

  @override
  State<LoginViewBody> createState() => _LoginViewBodyState();
}

class _LoginViewBodyState extends State<LoginViewBody> {
  final formKey = GlobalKey<FormState>();

  @override
  Widget build(BuildContext context) {
    // the BlocConsumer/progress-HUD wrapper is restored here from the
    // cubit states used below; navigation on success is an assumption
    return BlocConsumer<LoginCubit, LoginState>(
      listener: (context, state) {
        if (state is LoginSuccess) {
          Navigator.pushReplacement(context,
              MaterialPageRoute(builder: (context) => const HomeView()));
        } else if (state is LoginFailure) {
          _showErrorDialog(context, state.errMessage);
        }
      },
      builder: (context, state) {
        return ModalProgressHUD(
          inAsyncCall: state is LoginLoading,
          child: Padding(
            padding: const EdgeInsets.symmetric(horizontal: 30),
            child: Form(
              key: formKey,
              child: CustomScrollView(
                slivers: [
                  SliverFillRemaining(
                    hasScrollBody: false,
                    child: Column(
                      crossAxisAlignment: CrossAxisAlignment.start,
                      children: [
                        const Gap(30),
                        const Align(
                          alignment: Alignment.topRight,
                          child: DontHaveEmailButton(),
                        ),
                        const Center(child: AuthLogo()),
                        const Gap(35),
                        const Text('Login', style: AppStyle.bold50),
                        const Gap(70),
                        const LoginEmailForm(),
                        const Gap(40),
                        const LoginPasswordForm(),
                        const Gap(40),
                        const Spacer(),
                        LoginButton(formKey: formKey),
                        const Gap(40),
                        const LoginWithGoogle(),
                        const Gap(70),
                      ],
                    ),
                  ),
                ],
              ),
            ),
          ),
        );
      },
    );
  }

  void _showErrorDialog(BuildContext context, String errorMessage) {
    AwesomeDialog(
      context: context,
      desc: errorMessage,
      padding: const EdgeInsets.all(20.0),
      dialogBorderRadius: BorderRadius.circular(50),
      descTextStyle: AppStyle.medium28,
      btnOkOnPress: () {},
    ).show();
  }
}
Code of login_cubit:
import 'package:adas_app/core/functions/add_remove_user_to_shared_preference.dart';
import 'package:adas_app/core/utils/app_style.dart';
import 'package:awesome_dialog/awesome_dialog.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:google_sign_in/google_sign_in.dart';

part 'login_state.dart';

class LoginCubit extends Cubit<LoginState> {
  LoginCubit() : super(LoginInitial());

  String _emailAddress = '';
  String _password = '';

  set emailAddress(String emailAddress) => _emailAddress = emailAddress;
  set password(String password) => _password = password;

  Future<void> login(BuildContext context) async {
    emit(LoginLoading());
    try {
      final credential = await FirebaseAuth.instance.signInWithEmailAndPassword(
          email: _emailAddress, password: _password);
      if (credential.user!.emailVerified) {
        emit(LoginSuccess());
        await addUserToSharedPreferences();
      } else {
        // __showVerifiedDialog is defined elsewhere in this cubit (omitted here).
        __showVerifiedDialog(context: context, credential: credential);
      }
    } on FirebaseAuthException catch (e) {
      if (e.code == 'invalid-credential') {
        emit(LoginFailure('Email not found'));
      } else if (e.code == 'invalid-email') {
        emit(LoginFailure('Invalid email, it should look like: email_name@*****.com'));
      } else if (e.code == 'network-request-failed') {
        emit(LoginFailure('Please check your internet connection'));
      } else {
        emit(LoginFailure('Unknown input'));
      }
    } catch (e) {
      emit(LoginFailure(e.toString()));
    }
  }
}
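The part file login_state.dart is not reproduced in the book; based on the states emitted above, it plausibly looks like this sketch:

part of 'login_cubit.dart';

abstract class LoginState {}

class LoginInitial extends LoginState {}

class LoginLoading extends LoginState {}

class LoginSuccess extends LoginState {}

class LoginFailure extends LoginState {
  final String errMessage;
  LoginFailure(this.errMessage);
}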
Code of the Register screen:
import 'package:adas_app/features/auth/presentation/manager/register_cubit/register_cubit.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/auth_logo.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_button.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_confirm_password_form.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_email_form.dart';
import 'package:adas_app/features/auth/presentation/views/widgets/register_password_form.dart';
import 'package:adas_app/features/home/presentation/views/home_view.dart';
import 'package:awesome_dialog/awesome_dialog.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:gap/gap.dart';
import 'package:modal_progress_hud_nsn/modal_progress_hud_nsn.dart';
import '../../../../../core/utils/app_style.dart';

class RegisterViewBody extends StatefulWidget {
  const RegisterViewBody({super.key});

  @override
  State<RegisterViewBody> createState() => _RegisterViewBodyState();
}
// Inside the build method of _RegisterViewBodyState (widget tree excerpt):
child: CustomScrollView(
  slivers: [
    SliverFillRemaining(
      hasScrollBody: false,
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          const Gap(30),
          const Center(child: AuthLogo()),
          const Gap(35),
          const Text('Register', style: AppStyle.bold50),
          const Gap(45),
          const RegisterEmailForm(),
          const Gap(30),
          const RegisterPasswordForm(),
          const Gap(30),
          const RegisterConfirmPasswordForm(),
          const Gap(30),
          const Spacer(),
          RegisterButton(formKey: formKey),
          const Gap(70),
        ],
      ),
    ),
  ],
),
// Excerpt: the register screen shows the same style of AwesomeDialog on errors.
AwesomeDialog(
  context: context,
  desc: errMessage,
  descTextStyle: AppStyle.medium28,
  btnOkOnPress: () {},
).show();

// Excerpt from register_state.dart: the failure state carries the error message.
class RegisterFailure extends RegisterState {
  final String errMessage;
  RegisterFailure(this.errMessage);
}
Code of register_cubit:
import 'package:adas_app/core/functions/add_remove_user_to_shared_preference.dart';
import 'package:adas_app/core/utils/app_style.dart';
import 'package:awesome_dialog/awesome_dialog.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

part 'register_state.dart';

// Excerpt from the register method. A password/confirm-password guard that
// returns early on mismatch precedes this block in the original code.
try {
  final credential = await FirebaseAuth.instance.createUserWithEmailAndPassword(
      email: _emailAddress, password: _password);
  await FirebaseAuth.instance.currentUser!.sendEmailVerification();
  if (credential.user!.emailVerified) {
    emit(RegisterSuccess());
    await addUserToSharedPreferences();
  } else {
    __showVerifiedDialog(context: context, credential: credential);
  }
} on FirebaseAuthException catch (e) {
  // Error handling analogous to login_cubit (weak password, invalid email, ...).
  emit(RegisterFailure(e.message ?? 'Registration failed'));
}
3) Home Screen:
We'll divide this screen into three parts: left, middle, and right.
1) Left part of the home screen:
• There are two arrows, a left arrow and a right arrow, used to control the
direction in which the car moves.
• The four icons above them control four features of the car: FCW (Forward
Collision Warning), Traffic Sign Recognition, Lane Detection, and Blind Spot.
• When FCW is on, the car detects whether anything is in front of it or near
it and brakes automatically.
• When Traffic Sign Recognition is on, the car can recognize road signs such
as the bump sign and the maximum-speed sign.
• When the Blind Spot feature is enabled, the car can detect nearby vehicles
and display a warning in the application.
2) Middle part of the home screen:
• If there is another car behind and close to our car, a warning is displayed
on screen behind the car image for five seconds, together with a warning
sound.
• If the traffic light is on, the car detects the traffic-light state: red,
yellow, or green.
• Max speed, which displays the maximum speed of the road if and only if
Traffic Sign Recognition is on.
Code of the home screen UI:
import 'package:adas_app/features/home/presentation/views/widgets/backgrond_decoration.dart';
import 'package:adas_app/features/home/presentation/views/widgets/backward_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/break_feature.dart';
import 'package:adas_app/features/home/presentation/views/widgets/car_battery.dart';
import 'package:adas_app/features/home/presentation/views/widgets/car_controll_icons.dart';
import 'package:adas_app/features/home/presentation/views/widgets/car_image.dart';
import 'package:adas_app/features/home/presentation/views/widgets/custom_button_for_rpn.dart';
import 'package:adas_app/features/home/presentation/views/widgets/custom_drawer.dart';
import 'package:adas_app/features/home/presentation/views/widgets/custom_menu_icon.dart';
import 'package:adas_app/features/home/presentation/views/widgets/forward_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/left_button.dart';
import 'package:adas_app/features/home/presentation/views/widgets/left_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/max_speed.dart';
import 'package:adas_app/features/home/presentation/views/widgets/right_button.dart';
import 'package:adas_app/features/home/presentation/views/widgets/right_warning.dart';
import 'package:adas_app/features/home/presentation/views/widgets/speed_feature.dart';
import 'package:adas_app/features/home/presentation/views/widgets/speedometer.dart';
import 'package:adas_app/features/home/presentation/views/widgets/forward_bump_warning.dart';
import 'package:flutter/material.dart';
import 'package:gap/gap.dart';

class HomeBody extends StatefulWidget {
  const HomeBody({super.key});

  @override
  State<HomeBody> createState() => _HomeBodyState();
}
// Inside the build method of _HomeBodyState (widget tree excerpt):
child: Row(
  children: [
    // Left part: menu, battery, feature icons, and direction buttons.
    Expanded(
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          CustomMenuIcon(scaffoldKey: scaffoldKey),
          const Gap(20),
          const CarBattery(),
          const Spacer(),
          const CarControllIcons(),
          const Spacer(),
          const Row(
            children: [LeftButton(), Gap(60), RightButton()],
          )
        ],
      ),
    ),
    // Middle part: the car image surrounded by the warning indicators.
    const Expanded(
      child: Column(
        mainAxisSize: MainAxisSize.min,
        mainAxisAlignment: MainAxisAlignment.center,
        children: [
          Row(
            mainAxisSize: MainAxisSize.min,
            mainAxisAlignment: MainAxisAlignment.center,
            children: [
              Expanded(child: LeftWarning()),
              Column(
                mainAxisSize: MainAxisSize.min,
                children: [
                  Stack(
                    children: [
                      ForwardBumpWarning(),
                      ForwardWarning(),
                    ],
                  ),
                  CarImage(),
                  BackwardWarning(),
                ],
              ),
              Expanded(child: RightWarning()),
            ],
          )
        ],
      ),
    ),
    // Right part: speedometer, max speed, gear buttons, brake and speed.
    const Expanded(
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.end,
        children: [
          Gap(30),
          Row(
            mainAxisSize: MainAxisSize.min,
            crossAxisAlignment: CrossAxisAlignment.start,
            children: [
              CustomSpeedoMeter(),
              Gap(15),
              MaxSpeed(),
            ],
          ),
          Spacer(),
          CustomButtonRPN(),
          Gap(70),
          Row(
            mainAxisSize: MainAxisSize.min,
            children: [
              BreakFeature(),
              Gap(20),
              SpeedFeature(),
            ],
          ),
        ],
      ),
    )
  ],
),
Firebase and Firestore
Firebase is Google's backend platform for mobile apps; the app uses it for
authentication and for Cloud Firestore, the NoSQL document database that
carries all state between the car and the app. Key features of Firestore:
1. Document-Oriented: Data is stored in documents (key-value pairs)
which are grouped into collections. This makes the structure more
intuitive and easier to manage than traditional SQL databases.
2. Real-time and Offline Support: Like the Realtime Database,
Firestore provides real-time updates and offline support, ensuring
that your app remains responsive even without a network
connection.
3. Scalability: Designed to scale automatically with the growth of your
application, handling large datasets and high traffic loads without
compromising performance.
4. Powerful Querying: Supports complex queries, including range
queries and array-contains operations, with indexing automatically
managed by Firestore.
5. Security Rules: Fine-grained security rules can be defined to control
access to the data, making it highly secure.
6. Seamless Integration: Integrates well with other Firebase services,
such as Authentication, Analytics, and Cloud Functions, providing a
cohesive development experience.
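To make the snippets below easier to follow, here is a sketch of the Firestore layout the app appears to use: a single collection whose documents each describe one feature. Only the constant identifiers appear in the original code; the literal collection, document, and field names below are assumptions.

// Hypothetical constants file; identifiers match the code, literals assumed.
const String kAdasCollection = 'adas';
const String kTrafficSignRecDoc = 'trafficSignRec';
const String kLaneDedectionDoc = 'laneDetection';
const String kFCWDoc = 'fcw';
const String kBlindSpotDoc = 'blindSpot';
const String kForwardBackwardWarningDoc = 'forwardBackwardWarning';
const String kIsSelectedKey = 'isSelected';
const String kIsBumpKey = 'isBump';
const String kMaxSpeedKey = 'maxSpeed';
const String kRightWarningKey = 'rightWarning';
const String kLeftWarningKey = 'leftWarning';
const String kForwardWarningKey = 'forwardWarning';
const String kBackwardWarningKey = 'backwardWarning';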
Logic code of the features:
1) Traffic Sign Recognition:
class TrafficSignRecCubit extends Cubit<TrafficSignRecState> {
  TrafficSignRecCubit() : super(TrafficSignRecInitial()) {
    _trafficSignRecRealTime();
  }

  CollectionReference adas =
      FirebaseFirestore.instance.collection(kAdasCollection);

  // Listen to the traffic-sign document in real time; while the feature is
  // selected, surface the latest max speed and bump flag to the UI.
  _trafficSignRecRealTime() {
    adas.doc(kTrafficSignRecDoc).snapshots().listen((event) {
      Map<String, dynamic> data = event.data() as Map<String, dynamic>;
      if (data[kIsSelectedKey] == true) {
        emit(TrafficSignRecOn(
          maxSpeed: data[kMaxSpeedKey],
          isBump: data[kIsBumpKey],
        ));
      }
    }).onError((error) {
      debugPrint("trafficSignRecCubit error realtime : $error");
      emit(TrafficSignRecFailure());
    });
  }

  // Reset the feature document when the feature is turned off.
  void setTrafficSignRecToDefault() {
    adas.doc(kTrafficSignRecDoc).update({
      kIsSelectedKey: false,
      kIsBumpKey: false,
      kMaxSpeedKey: 0,
    });
  }
}
2) Lane Detection:
// Excerpt from the lane-detection cubit:
void setLaneDetectionToDefault() {
  adas.doc(kLaneDedectionDoc).update({kIsSelectedKey: false});
}
3) FCW:
// Excerpt from the FCW cubit's toggle method (inside a try block):
if (isCancelled == true) {
  if (isSelected) {
    isSelected = false;
    await adas.doc(kFCWDoc).set({kIsSelectedKey: false});
    emit(FCWOff());
  }
  return;
}
isSelected = !isSelected;
if (isSelected) {
  await adas.doc(kFCWDoc).set({kIsSelectedKey: isSelected});
  emit(FCWOn());
} else {
  await adas.doc(kFCWDoc).set({kIsSelectedKey: isSelected});
  emit(FCWOff());
}
// The try's catch clause reports errors in a snack bar:
// } catch (e) { showSnackBar(context, "autoDriveCubit error : $e"); }

void setFCWToDefault() {
  adas.doc(kFCWDoc).set({kIsSelectedKey: false});
}
4) Blind Spot:
class BlindSpotCubit extends Cubit<BlindSpotState> {
  BlindSpotCubit() : super(BlindSpotInitial()) {
    __blindSpotRealTimeWarning();
  }

  CollectionReference adas =
      FirebaseFirestore.instance.collection(kAdasCollection);

  // Enable/disable the feature flag, merging so other keys are preserved.
  void setBlindSpot(bool isSelected) {
    try {
      adas.doc(kBlindSpotDoc).set(
        {
          kIsSelectedKey: isSelected,
        },
        SetOptions(merge: true),
      );
    } catch (e) {
      emit(BlindSpotFailure(error: e.toString()));
    }
  }

  void turnOffRightWarning() {
    try {
      adas
          .doc(kBlindSpotDoc)
          .set({kRightWarningKey: false}, SetOptions(merge: true));
    } catch (e) {
      emit(BlindSpotFailure(error: e.toString()));
    }
  }

  void turnOffLeftWarning() {
    try {
      adas
          .doc(kBlindSpotDoc)
          .set({kLeftWarningKey: false}, SetOptions(merge: true));
    } catch (e) {
      emit(BlindSpotFailure(error: e.toString()));
    }
  }

  // Listen to the blind-spot document and raise left/right warnings in
  // real time while the feature is enabled.
  void __blindSpotRealTimeWarning() {
    adas.doc(kBlindSpotDoc).snapshots().listen((event) {
      Map<String, dynamic> data = event.data() as Map<String, dynamic>;
      if (data[kIsSelectedKey] == true) {
        if (data[kRightWarningKey] == true) {
          emit(BlindSpotRightWarningOn());
        }
        if (data[kLeftWarningKey] == true) {
          emit(BlindSpotLeftWarningOn());
        }
      }
    }).onError((error) => emit(BlindSpotFailure(error: error.toString())));
  }

  void setBlindSpotToDefault() {
    adas.doc(kBlindSpotDoc).update({
      kIsSelectedKey: false,
      kRightWarningKey: false,
      kLeftWarningKey: false,
    });
  }
}
5) Car Control (speed, brake, left, right, P, D, R):
class CarControll {
  CollectionReference adas =
      FirebaseFirestore.instance.collection(kAdasCollection);

  // Write the requested gear/drive state (e.g. P, D, R) to Firestore.
  Future<void> carState(context, {required String carState}) async {
    try {
      await adas.doc(kCarStateKey).set(
        {
          kCarStateKey: carState,
        },
      );
    } catch (e) {
      showSnackBar(context, 'car state error : $e');
    }
  }

  // The brake method's signature was cut off in the book; a plausible form:
  Future<void> setBreak(context, {required bool isbreak}) async {
    try {
      await adas.doc(kBreakKey).set({
        kBreakKey: isbreak,
      });
    } catch (e) {
      showSnackBar(context, "break error : $e");
    }
  }
}
6) Forward and backward warnings:
// Excerpt from the forward/backward warning cubit:
cancelForwardWarning() {
  try {
    adas.doc(kForwardBackwardWarningDoc).update({kForwardWarningKey: false});
  } catch (e) {
    emit(ForwardBackwardWarningFailure(error: e.toString()));
  }
}

cancelBackwardWarning() {
  try {
    adas.doc(kForwardBackwardWarningDoc).update({kBackwardWarningKey: false});
  } catch (e) {
    emit(ForwardBackwardWarningFailure(error: e.toString()));
  }
}

// Listen to the warning document and raise forward/backward warnings in
// real time.
__forwardAndBackWardRealTimeWarning() {
  adas.doc(kForwardBackwardWarningDoc).snapshots().listen((event) {
    Map<String, dynamic> data = event.data() as Map<String, dynamic>;
    if (data[kForwardWarningKey] == true) {
      emit(ForwardWarningOn());
    }
    if (data[kBackwardWarningKey] == true) {
      emit(BackwardWarningOn());
    }
  }).onError((error) =>
      emit(ForwardBackwardWarningFailure(error: error.toString())));
}
7) Get the current speed of the car:
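The code for this subsection did not survive extraction. Following the pattern of the other cubits, a plausible reconstruction (all names here are assumptions) listens to a speed document and emits the current value:

class SpeedCubit extends Cubit<num> {
  SpeedCubit() : super(0) {
    _speedRealTime();
  }

  final CollectionReference adas =
      FirebaseFirestore.instance.collection(kAdasCollection);

  // Emit every speed value written by the car in real time.
  void _speedRealTime() {
    adas.doc('speed').snapshots().listen((event) { // hypothetical doc name
      final data = event.data() as Map<String, dynamic>;
      emit((data['speed'] ?? 0) as num); // hypothetical field name
    });
  }
}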
Ch5. Testing
1. Introduction
1.1 PURPOSE OF THE DOCUMENTATION
The purpose of this documentation is to provide a comprehensive guide for
understanding, setting up, and operating the test car used in our collision
avoidance application project. This document aims to serve multiple
audiences, including developers, researchers, and students, by detailing the
hardware and software components, assembly instructions, and operational
procedures, and by offering clear, step-by-step guidance and thorough
explanations of the system's functionality.
2. Hardware
2.1 COMPONENTS
2.1.1 ESP32 Microcontroller
2.1.2 Motor Driver
2.1.3 DC Motors
The DC motors are the primary actuators that move the test car.
These motors convert electrical energy into mechanical motion. The
project uses two DC motors, each connected to one of the car's wheels. By
controlling the speed and direction of these motors, the test car can
perform various maneuvers, such as moving forward, reversing, and
turning. The motors are driven by the motor driver based on commands
from the ESP32, ensuring coordinated and controlled movement.
2.1.4 Power Supply
A stable and reliable power supply is crucial for the operation of the
test car. The power supply provides the necessary voltage and current to
the ESP32, motor driver, and DC motors. Typically, a rechargeable battery
pack is used to power the test car, offering portability and ease of use. We
used two batteries (3v each) as the voltage rating of the power supply
should match the requirements of the motors and the microcontroller to
ensure efficient and safe operation.
The ESP32's on-board Wi-Fi capability also facilitates easy updates and
debugging, making the ESP32 a versatile and powerful choice for the project.
3. Software Components
3.1 LIBRARIES USED
3.1.1 Arduino.h
Purpose:
The Arduino.h library is the core library for Arduino development,
providing the essential functions and definitions for programming
microcontrollers in the Arduino ecosystem.
Pin Configuration: Used to set the motor control pins as outputs in the
setUpPinModes function.
3.1.2 WiFi.h
Purpose: The WiFi.h library enables the ESP32 microcontroller to connect
to Wi-Fi networks and act as a Wi-Fi access point or station.
3.1.3 AsyncTCP.h
Purpose: The AsyncTCP.h library provides asynchronous TCP
communication, which is essential for non-blocking network operations. It
is often used in conjunction with the ESPAsyncWebServer library.
Usage in the Project: Foundation for Async Web Server: The AsyncTCP.h
library is required for the ESPAsyncWebServer to function, providing the
underlying TCP capabilities.
3.1.4 ESPAsyncWebServer.h
Purpose: The ESPAsyncWebServer.h library provides an asynchronous web
server for the ESP32, enabling efficient handling of HTTP requests and
WebSocket communication.
3.2 CODE EXPLANATION
3.2.1 Global Objects
AsyncWebServer server(80);
This line creates an instance of the AsyncWebServer class.
(80): This specifies the port number on which the server will listen for
incoming HTTP requests. Port 80 is the default port for HTTP traffic.
By creating this instance, you are setting up an HTTP server that listens on
port 80, ready to handle incoming HTTP requests from clients (e.g., web
browsers).
AsyncWebSocket ws("/ws");
This line creates an instance of the AsyncWebSocket class, which is used to
handle WebSocket connections.
("/ws"): This specifies the URL path for the WebSocket endpoint. Clients
will connect to this WebSocket server using the URL ws://<server_ip>/ws.
setUpPinModes()
This function is called to configure the GPIO pins of the ESP32 that control
the motor drivers. It sets the pin modes to OUTPUT so that they can be used
to send signals to the motors.
Serial.begin(115200)
This initializes serial communication at a baud rate of 115200 bits per
second. It is used for debugging purposes, allowing you to print messages to
the serial monitor.
Setting Up the Wi-Fi Access Point
WiFi.softAP(ssid, password): This function sets up the ESP32 as a Wi-Fi
access point with the SSID (network name) and password specified by the
ssid and password variables.
Serial.print and Serial.println: Print the IP address to the serial monitor for
debugging purposes.
ws.onEvent(onWebSocketEvent);
This line registers onWebSocketEvent as the callback that will be invoked
for every WebSocket event (connect, disconnect, incoming data, and so on).
server.addHandler(&ws);
This line adds the WebSocket handler to the HTTP server.
By adding the WebSocket handler to the HTTP server, you are integrating
WebSocket functionality into the web server. This allows the server to
handle WebSocket connections on the specified path (in this case, "/ws").
server.begin();
This call starts the HTTP server so that it begins accepting client
connections.
This setup ensures that the motor control pins are ready for use and are
initially turned off. In the functions, we use defined names (constants) to
refer to specific pin numbers, which improves code readability and
maintainability.
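Putting the pieces of this section together, setup() plausibly looks like the sketch below. The SSID, password, and the helper declarations are assumptions; only the calls themselves appear in the original text:

#include <Arduino.h>
#include <WiFi.h>
#include <AsyncTCP.h>
#include <ESPAsyncWebServer.h>

void setUpPinModes();  // defined elsewhere in the sketch
void onWebSocketEvent(AsyncWebSocket *server, AsyncWebSocketClient *client,
                      AwsEventType type, void *arg, uint8_t *data, size_t len);

const char *ssid = "ADAS_Car";      // assumed network name
const char *password = "12345678";  // assumed password

AsyncWebServer server(80);          // HTTP server on the default HTTP port
AsyncWebSocket ws("/ws");           // WebSocket endpoint at ws://<ip>/ws

void setup() {
  setUpPinModes();                  // motor pins as OUTPUT, initially LOW
  Serial.begin(115200);             // serial monitor for debugging
  WiFi.softAP(ssid, password);      // ESP32 acts as a Wi-Fi access point
  Serial.print("AP IP Address: ");
  Serial.println(WiFi.softAPIP());
  ws.onEvent(onWebSocketEvent);     // register the WebSocket callback
  server.addHandler(&ws);           // attach the WebSocket to the server
  server.begin();                   // start accepting connections
}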
Parameter: AsyncWebServerRequest *request: A pointer to the request object
that represents the HTTP request.
Action:
3.2.6 WebSocket event
The onWebSocketEvent function handles the various WebSocket events for the
server. Let's go through the function in detail, breaking down each part and
its workflow.
Parameters:
• AsyncWebSocket *server: Pointer to the WebSocket server (the remaining
parameters identify the client, the event type, and any received data).
WS_EVT_CONNECT: Triggered when a client connects to the server.
WS_EVT_DISCONNECT: Triggered when a client disconnects.
WS_EVT_DATA: Triggered when the server receives data from a client.
WS_EVT_PONG and WS_EVT_ERROR: Triggered for pong and error events.
3.2.7 Process car movement
Purpose: The processCarMovement function translates the received command
value into motor-driver pin states, as sketched below.
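A minimal sketch of such a function, assuming an L298N-style motor driver. The pin numbers and command codes below are assumptions, not the project's actual values:

#include <Arduino.h>

const int IN1 = 16, IN2 = 17;   // right motor inputs on the driver (assumed)
const int IN3 = 18, IN4 = 19;   // left motor inputs on the driver (assumed)
enum Command { STOP = 0, UP = 1, DOWN = 2, LEFT = 3, RIGHT = 4 };

void setMotors(int r1, int r2, int l1, int l2) {
  digitalWrite(IN1, r1);
  digitalWrite(IN2, r2);
  digitalWrite(IN3, l1);
  digitalWrite(IN4, l2);
}

void processCarMovement(int command) {
  switch (command) {
    case UP:    setMotors(HIGH, LOW, HIGH, LOW); break;  // both wheels forward
    case DOWN:  setMotors(LOW, HIGH, LOW, HIGH); break;  // both wheels reverse
    case LEFT:  setMotors(HIGH, LOW, LOW, HIGH); break;  // pivot left
    case RIGHT: setMotors(LOW, HIGH, HIGH, LOW); break;  // pivot right
    default:    setMotors(LOW, LOW, LOW, LOW);   break;  // STOP
  }
}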
3.2.8 loop()
Function Purpose: The loop() function is an integral part of the Arduino
framework; it runs continuously after setup() has completed. In this code,
loop() performs a single task: cleaning up WebSocket clients. The
cleanupClients() method scans through all connected WebSocket clients and
performs the necessary housekeeping, such as closing connections that are no
longer active and removing clients that have disconnected.
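The corresponding loop() body is a single call:

void loop() {
  ws.cleanupClients();  // drop stale WebSocket clients to free resources
}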
4.1. System Startup
• The ESP32 creates a Wi-Fi access point using the provided SSID and
password.
• The HTTP server and WebSocket server are initialized and configured to
handle client requests.
4.2. Client Connection
• Connecting to Wi-Fi: The user connects their device to the Wi-Fi network
created by the car.
• Accessing the Control Interface: The user opens a web browser and
navigates to the car's IP address to access the control interface served by
the ESP32.
4.3. User Interaction
• Loading the Web Interface: The HTML page is loaded, initializing the
WebSocket connection to the ESP32.
• User Touches a Control: The user touches a control button on the web
interface, triggering the onTouchStartAndEnd function.
4.4. WebSocket Communication
1. Sending a Command: The onTouchStartAndEnd function sends a WebSocket
message containing the integer value representing the command (e.g., UP,
DOWN, LEFT, RIGHT, STOP).
2. Receiving the Command: The ESP32 receives the WebSocket message and
invokes the onWebSocketEvent function.
4.5. Processing the Command
• Interpreting the Command: The processCarMovement function interprets the
integer value and adjusts the motor driver pins accordingly.
• Controlling the Motors: The appropriate GPIO pins are set to HIGH or LOW
to control the direction and speed of the motors, causing the car to move as
commanded.
4.6. Continuous Operation
• Cleaning Up WebSocket Clients: In the loop() function, the WebSocket
clients are continuously cleaned up to ensure smooth operation.
Ch6. Linux Customization
Introduction
Building a custom operating system for embedded systems like the Raspberry Pi 4 offers
several significant advantages, including optimizing performance, reducing unnecessary
components, and enhancing security. Customizing the OS allows developers to tailor the
system to meet specific requirements, leading to a more efficient and secure deployment.
This documentation will clarify the process of setting up a custom OS for the Raspberry Pi 4,
from setting up the toolchain to configuring the bootloader, kernel, and root filesystem.
2. Toolchain Overview
Components
Compiler: Converts source code into executable binaries.
Libraries: Essential libraries for development.
Debugger: Tools for debugging applications.
Build System: Manages the build process.
Workflow
Host Machine: The development machine where code is written and
cross-compiled.
Target Machine: The embedded board that runs the resulting binaries.
Installation Steps
1. Install Essential Tools:
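On a Debian/Ubuntu host the essentials would typically be installed like this (the exact package list is an assumption):

sudo apt-get update
sudo apt-get install build-essential git wget unzip \
    gcc-aarch64-linux-gnu gdb-multiarch qemu-user \
    flex bison libssl-dev bc device-tree-compiler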
4.1 Finding a Toolchain
You can either use a pre-built toolchain or compile your own with
crosstool-NG. For the latter, follow these steps:
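A typical crosstool-NG sequence; the install paths and the sample name are assumptions that match a 64-bit Raspberry Pi 4 target:

git clone https://github.com/crosstool-ng/crosstool-ng.git
cd crosstool-ng
./bootstrap && ./configure --enable-local && make
./ct-ng list-samples                 # list ready-made target configurations
./ct-ng aarch64-rpi4-linux-gnu       # select the Raspberry Pi 4 sample
./ct-ng menuconfig                   # optional: tweak the configuration
./ct-ng build                        # result lands under ~/x-tools/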
6. Cross-Compiling Applications
Learn to cross-compile applications for the Raspberry Pi 4 using the
configured toolchain: compile programs with the cross GCC, run them on the
host with QEMU user-mode emulation, and debug them with the cross GDB.
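A minimal sketch of all three steps, assuming the toolchain prefix aarch64-rpi4-linux-gnu- from the crosstool-NG build above:

SYSROOT="$(aarch64-rpi4-linux-gnu-gcc -print-sysroot)"
aarch64-rpi4-linux-gnu-gcc -o hello hello.c     # cross-compile for the target
qemu-aarch64 -L "$SYSROOT" ./hello              # run on the host via QEMU
aarch64-rpi4-linux-gnu-gdb ./hello              # debug with the cross GDB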
7. Best Practices for Workflow Organization
Setting Up Sysroot
1. Define the sysroot path.
PATH Management
1. Add the toolchain to PATH.
Both settings are sketched below:
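Assuming crosstool-NG installed the toolchain under ~/x-tools (the default):

export PATH="$HOME/x-tools/aarch64-rpi4-linux-gnu/bin:$PATH"
export SYSROOT="$(aarch64-rpi4-linux-gnu-gcc -print-sysroot)"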
8. Example Workflow
1. Write Code
2. Compile Code
3. Transfer to Target
4. Run on Target: ./hello
One possible end-to-end sequence:
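The target's address and user below are assumptions:

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello from the target!\n"); return 0; }
EOF
aarch64-rpi4-linux-gnu-gcc -o hello hello.c   # 2. compile on the host
scp hello pi@192.168.1.10:~                   # 3. copy to the target
ssh pi@192.168.1.10 ./hello                   # 4. run on the target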
9. Troubleshooting
Common Issues
Compiler Not Found: Ensure the toolchain path is correctly added to $PATH.
Execution Errors: Verify the target architecture compatibility.
Adding New Libraries: Update the sysroot with the new library files and
include paths.
Bootloader
Overview
The bootloader is a crucial piece of software responsible for initializing the hardware and
starting the operating system. It is typically stored in non-volatile memory and runs immediately
after the system is powered on.
Bootloader Stages
1. Power On
When the system is powered on, the bootloader process begins. The power-on sequence
involves several stages that ensure the hardware is properly initialized before the operating
system is loaded.
4. Full Bootloader (U-Boot)
Once the SPL has completed its tasks, control is handed over to the full
bootloader, typically U-Boot. The full bootloader performs more comprehensive
hardware initialization and prepares the system to load the operating system.
U-Boot tasks include loading the kernel and device tree into memory, setting
the kernel command line (boot arguments), and handing control over to the
kernel.
Operation
The bootloader operates through several key stages:
1. Download Source Code: Obtain the bootloader source code.
2. Configure: Customize the bootloader for the specific SoC and hardware.
3. Compile: Build the bootloader binary using tools such as GCC.
4. Deploy: Flash the bootloader binary to the target device.
5. Execute: The bootloader initializes the hardware and loads the operating
system.
Download Source Code
Download the U-Boot source code from the official repository.
Configure U-Boot
Configure U-Boot for your specific board (for example, an ARM-based board).
Compile U-Boot
Compile the U-Boot bootloader.
Deploy U-Boot
Flash the compiled U-Boot binary to the target device. The method of
deployment depends on your hardware; an SD-card example is shown below.
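The commands below sketch the whole sequence. The defconfig name matches the Raspberry Pi 4; the SD card's mount point is an assumption:

git clone https://source.denx.de/u-boot/u-boot.git
cd u-boot
make rpi_4_defconfig CROSS_COMPILE=aarch64-rpi4-linux-gnu-   # configure
make -j"$(nproc)" CROSS_COMPILE=aarch64-rpi4-linux-gnu-      # compile
# Deploy to the SD card's boot partition (mount point assumed):
cp u-boot.bin /media/$USER/boot/
sync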
U-Boot Commands
Common U-Boot commands:
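A quick reference of commands available at the U-Boot prompt:

printenv                                  # show environment variables
setenv bootargs 'console=ttyS0,115200'    # set a variable (value assumed)
saveenv                                   # persist the environment
mmc list                                  # list MMC/SD devices
fatload mmc 0:1 ${kernel_addr_r} uImage   # load a file from the SD card
bootm ${kernel_addr_r}                    # boot a uImage from memory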
3.2 Operations on Device Tree
Device Tree → Binary:
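Compiling (and decompiling) with the device tree compiler; file names are assumptions:

dtc -I dts -O dtb -o bcm2711-rpi-4-b.dtb bcm2711-rpi-4-b.dts   # source to blob
dtc -I dtb -O dts -o out.dts bcm2711-rpi-4-b.dtb               # blob back to source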
4. U-Boot
U-Boot supports multiple architectures and boards.
Operations
Downloading U-Boot: see the clone command shown earlier.
4.1 Using U-Boot
4.1.1 Convert zImage into uImage (HOST)
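mkimage (shipped with U-Boot, or the u-boot-tools package) wraps the kernel image with a U-Boot header. The load and entry addresses below are board-specific assumptions:

mkimage -A arm -O linux -T kernel -C none \
        -a 0x80008000 -e 0x80008000 -n "Linux kernel" \
        -d zImage uImage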
Boot Configuration
The boot configuration defines how the bootloader operates and interacts with the hardware. It
includes:
Boot Modes: Various boot modes determine how the bootloader is loaded (e.g., from NAND, SD
card, network).
Device Tree: Describes the hardware components and their configuration to the operating
system.
Kernel Development Documentation
First, we must ensure the following packages are installed on our system:
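A typical package set on a Debian/Ubuntu host (the exact list is an assumption):

sudo apt-get install git make gcc flex bison libssl-dev bc \
    libncurses-dev libelf-dev u-boot-tools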
1.3 Understanding the Kernel Config Mechanism
The kernel configuration is a critical step in building the Linux kernel. It
determines which features and drivers will be included. This is typically
done using the menuconfig tool, which provides a menu-driven interface for
configuration.
1. Start menuconfig (the invocation is shown after the list of menu sections
below).
2. Navigate the Menus: Use the arrow keys to move, Enter to open a submenu,
and Esc to go back. Each item in the menu corresponds to a kernel feature or
driver that can be enabled or disabled.
3. Select Features:
[*] or [M]: Enabled (built-in or as a module)
[ ]: Disabled
4. Save Configuration: After selecting the desired features, save the
configuration to .config by exiting the menu (usually by selecting <Save>
and confirming).
The main menu sections include:
Processor Type and Features: Choose the specific CPU architecture and processor features.
General Setup: Basic system settings, including kernel compression and initial RAM disk
(initramfs) support.
Device Drivers: Enable or disable drivers for various hardware components like USB, network,
and sound.
File Systems: Select support for different file systems like ext4, XFS, or NTFS.
Networking: Configure network protocols and options, including IPv4, IPv6, and wireless
networking.
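Starting the configuration for a 64-bit Raspberry Pi 4 cross-build; the defconfig comes from the Raspberry Pi kernel tree, and the prefix matches the toolchain above:

make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- bcm2711_defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- menuconfig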
1.4 Identify Kernel Version
To check the kernel version, use make kernelversion. Useful build targets
include:
vmlinux: the uncompressed kernel ELF image
zImage: the compressed kernel image
uImage: a zImage wrapped with a U-Boot header
distclean: remove all generated files, including the configuration
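For example:

make kernelversion        # print the version of this source tree
make ARCH=arm64 CROSS_COMPILE=aarch64-rpi4-linux-gnu- -j"$(nproc)" Image modules dtbs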
2. Compiling the Device Tree
a. Find the device tree for the target.
The device tree for the Raspberry Pi 4:
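In a 64-bit kernel tree the source file is (the path may vary between kernel versions):

arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dts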
1. Format the SD Card and add it to /etc/fstab.
The fstab line specifies the device, mount point, filesystem type, mount
options, dump, and fsck order; an example of both steps is sketched below.
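The device names here are assumptions; double-check them before formatting:

sudo mkfs.ext4 /dev/sdb2        # format the root partition of the SD card
# Example /etc/fstab line: device, mount point, fstype, options, dump, pass
# /dev/mmcblk0p2  /  ext4  defaults,noatime  0  1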
Essential Components
A root filesystem typically includes the following directories:
/bin: Essential command binaries
/etc: Configuration files
/home: User home directories
/lib: Shared libraries
/root: Root user's home directory
/sbin: System binaries
/usr: User binaries and libraries
Creating a Custom Root Filesystem with BusyBox
BusyBox combines tiny versions of many common UNIX utilities into a single
small executable, making it ideal for embedded systems like the Raspberry
Pi 4.
1. Download BusyBox.
2. Configure BusyBox: In the configuration menu, select the desired features
and ensure the installation directory is set to /mnt/rfs.
3. Build and Install BusyBox.
4. Create the supporting directories and device nodes in the new root
filesystem.
5. Copy Libraries: Identify the required libraries for BusyBox and copy them
to the root filesystem.
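The full sequence, assuming BusyBox 1.36.1 and the /mnt/rfs install prefix used above:

wget https://busybox.net/downloads/busybox-1.36.1.tar.bz2   # version assumed
tar xf busybox-1.36.1.tar.bz2 && cd busybox-1.36.1
make defconfig             # start from the default configuration
make menuconfig            # set the installation directory to /mnt/rfs
make -j"$(nproc)"
make CONFIG_PREFIX=/mnt/rfs install
# Copy the shared libraries BusyBox links against into the new rootfs
mkdir -p /mnt/rfs/lib
cp -a "$SYSROOT"/lib/* /mnt/rfs/lib/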
Once the root filesystem is ready, deploy it to the Raspberry Pi 4.
Regular Updates: Keep the kernel, BusyBox, and libraries up to date.
Filesystem Check: Run fsck periodically to detect and repair corruption.
Conclusion
Customizing an operating system for the Raspberry Pi 4 allows for a tailored
and optimized computing environment. This documentation has outlined the
comprehensive process for achieving this customization, including setting up
a cross toolchain, building and deploying the U-Boot bootloader, configuring
and compiling the kernel and its device tree, and creating a BusyBox-based
root filesystem.
By following these steps, the project successfully demonstrates the
capability to create a highly efficient and specialized operating system
that maximizes the potential of the Raspberry Pi 4. This customized OS can
be utilized for various applications, ranging from educational projects to
specific practical implementations, showcasing the versatility and power of
the Raspberry Pi 4 platform.