This document is a compilation of my notes for COE 538: Microprocessor Systems at TMU. All information comes from my professor's lectures as well as the course textbook.
Adam Szava - 2tor.ca
F2022
We begin this course by looking at how computers represent numbers. Due to the on-off nature of electricity, computers use binary to represent numbers. The unit used to represent the on-off state is the bit. Computers are meant to interact with humans, for whom binary is not ideal, so conversion between bases is common. In fact, we often see mixed use of bases within the same program at the microcontroller level.
The following table shows the four main bases we use, with their designated prefix:

| Base | Name | Prefix |
| --- | --- | --- |
| 2 | Binary | `%` |
| 8 | Octal | `@` |
| 10 | Decimal | (none) |
| 16 | Hexadecimal | `$` |
For example:
$$ 15 = \%1111 = @17 = \$F $$
<aside> 💡 Note that it's also very common to use `0x` as a prefix for hexadecimal, as in $\$3F$ = `0x3F`.
</aside>
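As a small illustration of mixed bases within one program, here is a minimal C sketch (C rather than assembly, purely for illustration). Note that C's literal prefixes differ from the assembler prefixes in the table above, and the binary `0b` prefix is a C23 feature that GCC and Clang have long supported as an extension:

```c
#include <stdio.h>

int main(void) {
    int dec = 15;      /* decimal literal                     */
    int bin = 0b1111;  /* binary literal (C23 / extension)    */
    int oct = 017;     /* octal literal (leading 0)           */
    int hex = 0xF;     /* hexadecimal literal                 */

    /* All four variables hold the same value: prints "15 15 15 15" */
    printf("%d %d %d %d\n", dec, bin, oct, hex);
    return 0;
}
```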
The number of bits a computer uses to represent a number is usually a multiple of $8$; a group of $8$ bits is called a byte. We often measure a computer's computational capacity by the number of bits it can operate on in a single operation.
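To make this concrete, here is a minimal C sketch that prints the width, in bits, of a few integer types. The exact widths depend on the platform, but each is a multiple of `CHAR_BIT`, the number of bits per byte ($8$ on essentially all modern platforms):

```c
#include <stdio.h>
#include <limits.h>   /* CHAR_BIT: number of bits per byte */

int main(void) {
    printf("bits per byte: %d\n", CHAR_BIT);

    /* Each type occupies a whole number of bytes, so its width
       in bits is a multiple of CHAR_BIT (typically 8). */
    printf("char:  %zu bits\n", sizeof(char)  * CHAR_BIT);
    printf("short: %zu bits\n", sizeof(short) * CHAR_BIT);
    printf("int:   %zu bits\n", sizeof(int)   * CHAR_BIT);
    printf("long:  %zu bits\n", sizeof(long)  * CHAR_BIT);
    return 0;
}
```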
In this subsection we look at the different elements of hardware inside a computer.
The processor (CPU) is responsible for performing all computational operations and coordinating the resources available to it. It consists of three major components: