Computer Science

Hardware and Computer Architectures (Part 1)


In 1981, Jack Tramiel was the owner of Commodore, a computer and calculator company. Commodore also owned the chip-making company MOS Technology. It was at this time that the company embarked on an ambitious project: offering a high-powered personal computer for about $600.

MOS Technology had created an iconic chip, the MOS 6510, which was the basis for the new computer. Since Commodore owned a chip-making company themselves, they did not need to negotiate with another independent company for deals on the chip—they were able to get them at cost. These chips were rated at about 1MHz clock speed, with a data width of 8 bits and an address width of 16 bits. Compared to 21st-century chips, they are very limited, but for 1981 this was more than adequate. 16-bit addressing let this new computer, the Commodore 64 (C64), address 64KB of RAM—which is where the name came from. At the time, this amount of RAM was unheard of, especially at the price (Commodore 64, n.d.).

The Commodore 64 also had a sound chip that could play multiple voices at once—other computers at the time had far inferior sound quality. Eventually, as production and demand increased, the price of computer parts decreased, and the Commodore 64 was selling for $199. Nothing on the market for personal computers could match its performance and price; software companies created over 10,000 titles for it. It eventually went on to become the best-selling personal computer ever made, selling between an estimated 17 million and 30 million units. It also had a joystick port and a modem port, enabling users to play thousands of games and connect with other computers. (There was no public internet yet in the 1980s.)

The Commodore 64 was built around the BASIC programming language; using the Commodore meant you had to learn some programming. In the 1980s it became the gateway for many young programmers to full-time jobs in the industry. Today many in the computer field have not heard of the Commodore 64 and its record-setting history, yet it is considered by some as the best computer ever built (McFadden, 2019). The C64 used Von Neumann architecture, a computer design that is still used today.

Computer Types and their Architecture

A computer, at its most basic level, consists of a processor and memory storage. The processor is able to perform various calculations and commands using the data stored in memory. It takes that input from memory and creates unique output, the results of the processing.

The bits (0s and 1s) stored in the memory fall into two categories: data and instructions. They are both binary numbers; the context tells the processor whether each group of bits is a command (telling it to do something) or data (information to be processed). This is the concept behind Von Neumann Architecture, a basic design for computer systems that is still in use today.
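As a small sketch of this idea: the very same byte means different things depending on context. On the C64's 6502-family CPU, the byte 0x69 is the number 105 when fetched as data, but the "add with carry, immediate" opcode when fetched as an instruction. The decode table below is a tiny illustrative fragment, not the full instruction set:

```python
# The same byte, interpreted two ways. Context (is the CPU fetching an
# instruction or an operand?) decides which meaning applies.
DECODE = {0x69: "ADC #imm"}  # one-entry fragment of the 6502 opcode table

byte = 0x69
as_data = byte                 # fetched as data: just the number 105
as_instruction = DECODE[byte]  # fetched as an instruction: add with carry

print(as_data)         # 105
print(as_instruction)  # ADC #imm
```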

Besides the processor and memory, there should also be some type of input and output system—the basic architecture doesn’t require a specific definition of this, but it is assumed, since a human being needs to communicate with the device to use it. It is sufficient to say that there is some system for entering data (in binary, of course) and a system for outputting the data.

A CPU-based system works like this: it checks for the memory address of the next instruction and then reads that instruction. If the instruction requires data access, it gets the address of the data and reads it. Then it performs processing on the data and sends it to the output memory address. Repeat this millions or billions of times per second and you have an advanced computer.

Von Neumann broke down the CPU into subcomponents. The Arithmetic and Logic Unit, or ALU, is responsible for just what its name says: it performs basic arithmetic calculations on data, such as adding and subtracting, but also performs logic operations such as AND and OR. The Control Unit also performs the tasks for which it is named: it controls the operation of the ALU and communication with the input and output devices. It interprets processor instructions and carries them out. A CPU has its own internal memory storage built right onto the chip as registers. Registers are tiny, measured in bits—8 or 32 bits each, for example. Newer processors have 64-bit registers, meaning each register on the CPU can store 64 bits (eight bytes). These registers are used for different purposes.

Von Neumann Registers

There are five registers specific to the Von Neumann design. These are as follows:

  1. Program Counter (PC) contains the memory address of the next instruction to read from memory.
  2. Memory Address Register (MAR) contains the address of the current instruction in memory, or the next data transfer address.
  3. Memory Data Register (MDR) contains the contents of the memory address that the MAR is pointing to and contains data to be transferred.
  4. Accumulator (AC) contains data that have been processed or are about to be processed, including arithmetic and logic results.
  5. Current Instruction register (CIR) contains the current binary instruction that is being executed.
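The five registers above can be seen working together in a toy fetch-execute loop. The three-instruction "machine code" and the one-byte opcodes below are invented for this sketch; they are not real 6502 opcodes:

```python
# Toy Von Neumann machine: one memory holds both instructions and data.
# Invented opcodes for this sketch: 1 = LOAD addr, 2 = ADD addr, 3 = HALT.
memory = [1, 6, 2, 7, 3, 0,   # program: LOAD [6]; ADD [7]; HALT
          40, 2]              # data stored at addresses 6 and 7

PC, AC = 0, 0                 # Program Counter, Accumulator
running = True
while running:
    MAR = PC                  # MAR holds the address of the next instruction
    MDR = memory[MAR]         # MDR receives the contents of that address
    CIR = MDR                 # CIR holds the instruction being executed
    PC += 1
    if CIR in (1, 2):         # LOAD and ADD take an operand address
        MAR = memory[PC]      # MAR now points at the data
        MDR = memory[MAR]
        PC += 1
        AC = MDR if CIR == 1 else AC + MDR
    elif CIR == 3:            # HALT
        running = False

print(AC)  # 42
```

Note how the same `memory` list holds both the program and its data—the defining trait of the Von Neumann design.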

Data Buses

A bus is simply a connection that is used for data transfer. A CPU will need to have a way for input and output—therefore it will use buses. The standard Von Neumann design incorporates an address bus, which is used to send memory addresses, a control bus, which carries instructions and special signals from devices (such as their status), and a data bus, which simply sends and receives data.

Since the CPU is not only performing calculations but controlling devices, it needs a way to know what the other devices are doing, and whether they are ready to perform a new task. The control bus is used to send and receive these status updates from devices as well as to send instructions to them.

Memory Addressing

In this architecture, each byte of memory has a unique binary address. You could think of them each as little mailboxes, storing a different binary number and also having a binary address. That means that the number of bits used for addressing determines the number of bytes available. For instance, the Commodore 64 had 64K of addressable memory. This was accomplished by using 16-bit addressing: the total number of bytes that can be addressed is 2^16, which is 65,536 bytes. This was considered 64K. (At the time, a kilobyte could mean either 1,024 (2^10) bytes or 1,000 bytes. In 1998, the kilobyte was standardized as 1,000 bytes, with the new term "kibibyte" reserved for 1,024.)
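The arithmetic can be checked directly: with n address bits you can label 2^n mailboxes, and an 8-bit CPU like the C64's sends a 16-bit address as two separate bytes. (0xD400 is used here simply as a sample C64 address.)

```python
address_bits = 16
addressable_bytes = 2 ** address_bits
print(addressable_bytes)          # 65536
print(addressable_bytes // 1024)  # 64 "binary" kilobytes

# An 8-bit machine splits a 16-bit address into a high byte and a low byte.
addr = 0xD400
high, low = addr >> 8, addr & 0xFF
print(hex(high), hex(low))        # 0xd4 0x0
```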

Each 16-bit address points to a mailbox (memory location) that contains exactly one byte (inside the box). Some registers store memory addresses, pointing to the data inside those locations. This is how the CPU keeps track of the last memory byte it accessed, so that it can read the next byte in sequence when working through memory sequentially.

Memory Usage

In the Von Neumann architecture, memory can be used for both data and instructions. This lends versatility to the design, as the available memory can be split between them any way the programmer wishes; the computer program itself can also be written to utilize the memory any way it wishes as it runs. There is a slight disadvantage with this design, as data and instructions share the same memory bus. If the CPU is reading a large volume of data, it may have to wait many cycles to load the next instruction—a delay often called the Von Neumann bottleneck.

Processors and Memory

To solve the problem of delay in accessing the RAM directly from a CPU, a local cache of memory was added directly to the CPU itself. Since the processor (CPU) is always accessing memory (RAM), personal computer designers place the memory very close to it on the motherboard. The motherboard has tiny lines etched into its surface that act like little wires in order to transmit binary data signals back and forth.

Though the CPU is very close to where the RAM sits on a PC motherboard, when you are dealing with billions of cycles per second, this distance can still cause a slowdown. This is where cache memory comes in; it functions as a small storehouse of memory directly on the chip which improves memory access time. This establishes a “supply chain” of data from the largest and slowest storage medium (the hard disk) through the RAM and, finally, to the cache.

Let’s say you have a PC with a 1TB hard drive, 32GB of RAM, and an Intel i7 CPU. On the CPU there are three levels of cache with 2MB, 256KB, and 32KB of storage; the closer you get to the actual CPU core, the smaller the memory. Counting the hard drive and the RAM, that makes five levels of memory available to the processor.

The hard drive is the slowest, but also the largest. It stores your entire library of applications and data. However, you don’t need to access all of that at once—it depends on what task the user is performing at the moment. When you open up an application, it is copied into RAM. RAM is smaller in memory size, but much faster than a hard drive (though solid-state drives (SSDs) are faster than hard disk drives (HDDs), both are still slower than RAM). RAM is meant to be smaller, since it only stores what you are currently working on instead of everything.

The CPU accesses data from the RAM directly and copies a smaller segment of the current instructions or data to its L3 cache. The i7 has two more stages of cache, L2 and L1, each smaller and closer to the CPU itself. This is done because of the weakness of the Von Neumann design, where memory is accessed through only one bus, whether for data or instructions. Using algorithms, the CPU decides which data should be placed in its own local caches, where it can get to them very quickly. Notice that the L1 cache in our example is only 32KB of memory, whereas the hard disk is 1TB—roughly 30 million times larger. This system sifts the data down to the most significant at any given moment and provides it to the processor in the L1 cache. If needed data is not in L1, the CPU then checks L2, L3, and then RAM. The closer the needed data is, the faster it can be accessed.
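The "check the closest level first" behavior can be sketched as a chain of lookups. The latency figures below are round illustrative numbers, not measurements of any particular chip, and a real cache also moves data between levels—this sketch only models the lookup order:

```python
# Memory hierarchy as (name, latency_ns, contents) levels, fastest first.
hierarchy = [
    ("L1",  1,   {0x10: "hot value"}),
    ("L2",  4,   {}),
    ("L3",  12,  {}),
    ("RAM", 100, {0x20: "cooler value"}),
]

def read(address):
    """Walk the hierarchy from fastest to slowest, summing the cost."""
    cost = 0
    for name, latency, contents in hierarchy:
        cost += latency
        if address in contents:
            return contents[address], name, cost
    raise KeyError("not resident: fetch from disk")

print(read(0x10))  # ('hot value', 'L1', 1)
print(read(0x20))  # ('cooler value', 'RAM', 117)
```

A miss at every cache level makes the RAM access cost the sum of all the failed lookups plus the RAM latency—which is exactly why keeping hot data in L1 pays off.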

Personal Computers and RAM

Currently, Windows 10 is the most popular desktop operating system in the world, hovering at about 75% of the market (, 2020). When a desktop is started up, it first loads (copies) the operating system into RAM. Windows 10 requires at least 2GB. That means even if you have 32GB, as in the last example, only 30GB would actually be available to you for data and applications. Remember that RAM is a holding space for the current programs and data you are working on, and its contents change. When you close an app, it frees up memory for a new app to use. RAM is volatile memory, unlike the hard disk, which is more permanent. If you opened a picture of your cat in photo-editing software, both the software and the data from the cat photo would be stored in RAM while you were working on it. When you close the app (and, hopefully, save your changes), the memory is released.

Input and Output

Data is stored in memory from the hard drive up to the CPU cache. Of course, when a human uses a computer, they will interface with it and create new data for input. In response to this input, the computer will then respond with output. Input begins with the user and ends with an output device.


Input devices:

  Keyboard: A user enters keystrokes, which are converted into electrical signals and then into binary codes.
  Mouse: A user enters input by moving the mouse, creating electrical signals that are converted into binary coordinates.
  External devices: USB devices such as cameras and microphones also input electrical signals, which are then turned into pixels and frames for video, or a digital sound wave for audio.
  External data sources: Through a network card, a computer can receive new data. It can also receive data through a portable USB drive; these data are already in binary form.


Output devices:

  Monitor: A continuous signal (at 60 frames per second, or 60Hz) is sent to the screen with data for each pixel; the on-screen pointer position is continually updated based on mouse input.
  Printer: A document is broken down into primary ink colors and pixels and then printed to paper.
  Speakers: The computer converts sound from digital to analog and plays it through the speakers.
  Network card: A computer can send (and receive) binary information over a network.

This is the input and output at each end of a full computer system. Internally, however, there is also an input and output system for the processor. Data from devices such as the keyboard or a microphone require some processing before being sent to the computer; this is often referred to as IOP, or input-output processing. Once the data are properly arranged into bytes matching the proper word size, they can then be sent through the data bus to the processor.
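That last step can be sketched as gathering individual device bytes into a word sized for the bus. The scan-code values here are invented for illustration:

```python
import struct

# Four one-byte values arriving from an input device (invented scan codes).
scan_bytes = bytes([0x1E, 0x30, 0x2E, 0x20])

# Pack them into one 32-bit little-endian word, ready for a 32-bit data bus.
(word,) = struct.unpack("<I", scan_bytes)
print(hex(word))  # 0x202e301e
```

On a little-endian bus the first byte received becomes the lowest-order byte of the word, which is why the bytes appear reversed in the printed value.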

The CPU uses the control, address, and data buses to communicate with RAM and external devices. In some designs, these three are merged together into a system bus.

Input and output devices on a computer are typically assigned a binary “channel code” for the CPU; when the CPU receives this code, it knows a status message from that device follows. The CPU can also use this code to send information back to the device. This information flows along the control bus. Control codes for a device can include things like ready, busy, error, and request for input or output. Aside from the control bus, devices can also use the data bus to send data and the address bus to point to a specific memory address that may be read from or written to.
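A sketch of that status protocol follows; both the channel codes and the status codes are invented for illustration, not taken from any real bus standard:

```python
# Hypothetical channel codes and status codes carried on the control bus.
PRINTER, KEYBOARD = 0b01, 0b10        # invented device channel codes
READY, BUSY, ERROR = 0x0, 0x1, 0x2    # invented status codes

# The CPU's record of the last status each device reported.
device_status = {PRINTER: BUSY, KEYBOARD: READY}

def can_send(channel):
    """Check a device's last reported status before sending it new work."""
    return device_status.get(channel) == READY

print(can_send(KEYBOARD))  # True
print(can_send(PRINTER))   # False
```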

So far, we have discussed communication between hardware elements such as the CPU, RAM, monitor, mouse, etc. The operating system adds an extra level of complexity to a computer system, but a necessary one. Without an operating system, a user would not be able to communicate with the computer hardware.


