Computer Systems Architecture

 

Computer systems architecture is a complex combination of hardware components, logical structures, and architectural concepts:

 

Internal system components

CPU

The CPU, or central processing unit, is an integrated circuit responsible for reading the series of instructions that make up a program and executing those instructions in the correct order. The CPU is also commonly referred to as a processor or microprocessor. While most commonly associated with desktop PCs and laptops, a CPU can actually be found in a wide range of electronic devices, such as mobile phones, TVs, microwaves and fridges.

A CPU needs three main components to perform its role correctly. These are the arithmetic logic unit or ALU, the control unit or CU, and the registers.

The arithmetic logic unit, or ALU, is often referred to as the heart of the CPU and is responsible for performing mathematical operations such as addition and subtraction. It is also responsible for performing logic operations, such as comparing two values to see whether or not they are the same.

The control unit, or CU, is responsible for reading instructions from the main memory. It then interprets those instructions, using the ALU for any calculations that might be necessary, and communicates them to other parts of the computer through a series of signals that it outputs.

Registers are small amounts of storage quickly accessible within the CPU. After instructions have been loaded from the main memory, the data can be stored in the registers by the CU and worked on by the ALU before the result is output again by the CU.
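To give a feel for how these three parts relate, here is a minimal sketch in Python. It is purely illustrative: the operation names, register names and values are made up, and a real ALU, control unit and registers are of course hardware, not software.

```python
# A made-up ALU: takes an operation chosen by the control unit and two
# register values, and returns the arithmetic or logic result.
def alu(operation, a, b):
    if operation == "ADD":
        return a + b
    if operation == "SUB":
        return a - b
    if operation == "EQUALS":      # logic: are the two values the same?
        return a == b
    raise ValueError(f"unknown operation: {operation}")

# Registers: tiny named storage slots inside the CPU.
registers = {"R1": 8, "R2": 5, "R3": None}

# The control unit's job, in miniature: decide what the ALU should do
# and where the result goes.
registers["R3"] = alu("ADD", registers["R1"], registers["R2"])
print(registers["R3"])                                   # 13
print(alu("EQUALS", registers["R1"], registers["R2"]))   # False
```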

Main Memory

The main memory of a computer, also known as the RAM or Random Access Memory, is a type of volatile storage, meaning the data inside is only held as long as there is power to it; a loss of power means a loss of whatever was stored in the RAM. With random access memory, any location can be read or written directly, rather than the data having to be located and read in order, as is effectively the case with hard drives and CDs. Being able to read and write any location directly speeds things up considerably.
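As a rough illustration of the difference, and not a claim about how any particular device is built, compare jumping straight to a location with having to step past everything stored before it:

```python
# Illustrative only: "random" access jumps straight to a location, while
# "sequential" access has to pass over everything stored before it.
data = list(range(1_000_000))       # stand-in for a large block of storage

# Random access: go directly to position 750,000.
value_random = data[750_000]

# Sequential access: walk from the start until that position is reached.
value_sequential = None
for position, item in enumerate(data):
    if position == 750_000:
        value_sequential = item
        break

print(value_random == value_sequential)   # True, but the second way took far more steps
```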

The very nature of RAM makes it perfect for the role of main memory where it will hold information the CPU needs to access quickly but not permanently.

Input Devices

Input device is a broad term that refers to anything capable of providing input, or instructional data, to the computer. For instance, we can use a keyboard to type information into the computer, so the keyboard is an input device. We can use a mouse to indicate our wishes to the computer through a series of clicks, and the same can be done with a touch screen, so these are both input devices too. There are many different types of input devices capable of inputting different kinds of data, from microphones to drawing tablets.

The instructions, or data, from these input devices can be stored in the RAM, where they can then be accessed by the CPU. The CPU will process these instructions and communicate your wishes to the rest of the computer's components through its series of signals.

Output Devices

Like input device, output device is another general term, except that in this case it describes anything capable of taking signals from the CPU and presenting the resulting data to the user in various forms.

There are also many types of output devices, from printers to computer monitors.

System Bus

The system bus is the critical set of parallel wires, or traces, that run across the motherboard linking all the components and external ports. The system bus carries information back and forth between the different components and out to the external ports, allowing components to communicate not just with each other but also with external input and output devices connected to those ports. Without the system bus, the different parts of the computer would be isolated and unable to communicate with each other.

 

The stored program concept, Von Neumann architecture, and Harvard architecture

The stored program concept is the principal concept behind modern computers: programs can be stored internally in memory, and the machine can fetch those instructions from memory, decode them and execute them. Before the stored program concept, machines were built pre-programmed for a specific task only and were very difficult, in some cases impossible, to re-program. Basic calculators are a prime example of this: they are made with the ability to work as a calculator, but you can't add other programs to them afterward to make them do something else, like run a web browser. Thanks to the stored program concept, modern machines can not only work as a calculator but can also have other programs stored inside them to fulfil many other roles.

The fetch-execute cycle mentioned in [1.2] and the hardware components it uses, such as the CPU, RAM and bus, make up the Von Neumann architecture, a sequential system which in turn is one of the ways the stored program concept can be made a reality. While the Von Neumann architecture has been successful for many years, it is certainly not flawless. One problem inherent in this model is that all data has to pass back and forth along the same data bus. The data moves along this bus a lot more slowly than the CPU can execute instructions, meaning the CPU ends up waiting for data to arrive; this has led to the coining of the term "Von Neumann bottleneck". The cache was invented as a way of reducing this bottleneck by storing the most frequently used instructions and data closer to the CPU.

The Harvard architecture is another model that makes the stored program concept a reality. It aims to deal with the Von Neumann bottleneck by having one bus for instructions and another bus for data. The theory is that with transfers possible on each bus at the same time, the bottleneck is avoided and the whole process completes much faster. The downside to this approach is that it needs two memory systems working alongside each other to be effective, and if those two memory systems are isolated and unable to communicate, the whole system breaks down. This means the Harvard architecture is more complicated to implement, and it is argued that unless high performance is a requirement that makes the extra effort worthwhile, the slower but easier to implement Von Neumann architecture is usually more practical.
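To make the contrast concrete, here is a minimal sketch in Python of how the same tiny program might be laid out under each model. The opcodes, addresses and values are made up purely for illustration.

```python
# Von Neumann: instructions and data share one memory and one bus, so an
# instruction fetch and a data access have to take turns on the same path.
shared_memory = [
    ("LOAD", 3),     # address 0: load the value stored at address 3
    ("ADD", 4),      # address 1: add the value stored at address 4
    ("HALT", None),  # address 2: stop
    10,              # address 3: data
    32,              # address 4: data
]

# Harvard: instructions and data live in separate memories, each with its own
# bus, so a fetch from instruction memory and a read or write of data memory
# can happen at the same time.
instruction_memory = [("LOAD", 0), ("ADD", 1), ("HALT", None)]
data_memory = [10, 32]
```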

 

The ‘fetch-execute’ cycle, Arithmetic Logic Unit, Control Unit, clock, and registers

The fetch-execute cycle, sometimes known as the fetch-decode-execute cycle, refers to the process the CPU carries out in order to retrieve instructions from the RAM, decode them and then execute them.

The process is actually a little more complicated than that, though.

Step 1 – A PC, or Program Counter, within the CPU keeps track of the location in the RAM of the next instruction to be processed. The Control Unit reads the location held in the program counter and then fetches that instruction from the RAM.

Step 2 – The Control Unit then decodes the instruction and places the resulting information into the appropriate registers. The Control Unit then tells the Arithmetic Logic Unit what to do with the data in these registers.

Step 3 – The Arithmetic Logic Unit does as it is told, executing the instruction and placing the result in another register. The Control Unit then takes this result and writes it back to the RAM. The Program Counter then moves on to the next instruction, and the whole process starts all over again.
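Putting those three steps together, here is a minimal sketch of the cycle in Python. The instruction set (LOAD, ADD, STORE, HALT), the addresses and the single accumulator register are assumptions made up for illustration; a real CPU does all of this in hardware, with far more registers and instructions.

```python
# Memory holds both the program and the data (the stored program concept).
memory = {
    0: ("LOAD", 10),    # put the value at address 10 into the accumulator
    1: ("ADD", 11),     # add the value at address 11 to the accumulator
    2: ("STORE", 12),   # write the accumulator back to address 12
    3: ("HALT", None),  # stop
    10: 7,              # data
    11: 35,             # data
}

pc = 0              # program counter: address of the next instruction
accumulator = 0     # a single working register

while True:
    opcode, operand = memory[pc]    # fetch the instruction the PC points at
    pc += 1                         # move the PC on to the next instruction
    if opcode == "LOAD":            # decode, then execute:
        accumulator = memory[operand]       # register <- RAM
    elif opcode == "ADD":
        accumulator += memory[operand]      # ALU addition
    elif opcode == "STORE":
        memory[operand] = accumulator       # RAM <- register
    elif opcode == "HALT":
        break

print(memory[12])   # 42
```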


The whole process seems like a lot to do, and you might expect the CPU to take a considerable amount of time to get through all the tasks in that cycle. In actuality, though, the CPU is capable of performing that cycle incredibly quickly. The rate at which it does so is known as the clock speed, and we measure it by how many cycles per second a processor can perform.

Looking at Intel’s i7 processors on Intel’s website, we can get an idea of just how quickly modern CPUs are capable of performing that cycle.

Take two different Intel processors and their corresponding clock speeds: one has a clock speed of 1.3 GHz, or gigahertz, and the other 2.7 GHz. The number of gigahertz is the number of billion cycles per second, so one of these processors runs at 1.3 billion cycles per second and the other at 2.7 billion cycles per second. There is also the megahertz, millions of cycles per second, used for older processors, and the kilohertz for even older processors.
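As a quick worked example using those two figures, a short calculation shows what a clock speed means for the time a single cycle takes:

```python
# Convert a clock speed in GHz into cycles per second and the length of one cycle.
def cycle_time_ns(clock_ghz):
    cycles_per_second = clock_ghz * 1_000_000_000   # 1 GHz = one billion cycles per second
    return 1_000_000_000 / cycles_per_second        # nanoseconds per cycle

print(cycle_time_ns(1.3))   # ~0.77 nanoseconds per cycle
print(cycle_time_ns(2.7))   # ~0.37 nanoseconds per cycle
```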

 

The importance of cache and virtual memory in a system

The cache is another form of memory, usually a small amount, and is usually part of the CPU. Being part of the CPU, and therefore closer to it than the RAM, the cache is much faster for the CPU to access than the RAM. In some cases there will be extra levels of cache separate from the CPU itself but still close enough to be faster to use than the RAM. This makes the cache ideal for temporarily holding frequently used instructions and data. The idea is that if the CPU is going to need something often, then keeping it as close as possible will speed things up dramatically. The larger the cache, the more instructions and data can be kept close by and quickly accessed before having to make the journey to the slower RAM.
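The idea is easy to mimic in software. Here is a minimal sketch in Python of the principle only, not of any real cache design; the size, the addresses and the eviction rule are arbitrary assumptions.

```python
# A pretend main memory: large but slow to reach.
slow_ram = {address: address * 2 for address in range(1024)}

cache = {}         # the small, fast store sitting next to the CPU
CACHE_SIZE = 8     # only a handful of entries fit

def read(address):
    if address in cache:            # cache hit: served from the fast store
        return cache[address]
    value = slow_ram[address]       # cache miss: go all the way out to RAM
    if len(cache) >= CACHE_SIZE:    # make room by evicting an old entry
        cache.pop(next(iter(cache)))
    cache[address] = value          # keep it close by for next time
    return value

print(read(5))   # miss: fetched from slow_ram
print(read(5))   # hit: answered straight from the cache
```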

Virtual memory is a feature of most operating systems and comes into play when your RAM is full. Your computer only has a certain amount of RAM available, and if you have enough applications open at the same time, all using the RAM, it will quickly fill up. If the RAM filled up and there were no other option, you would have to close one of your open applications to free up some RAM before you could open another.

Luckily there is another option in the form of virtual memory. Virtual memory allows the computer to take some of the information filling up the RAM and write it to the hard drive in what is known as a page file. This happens automatically, and in most situations you won’t notice a problem. However, writing to the hard drive is a lot slower than writing to RAM, so if you keep opening more and more applications to the point where the computer has to lean too heavily on this virtual memory on the slower hard drive, you’ll start to notice increasingly poor performance.
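Here is a minimal sketch in Python of that swapping idea only; the RAM capacity, the page contents and the choice of which page to push out are made-up simplifications, not how any particular operating system manages its page file.

```python
RAM_CAPACITY = 3   # pretend RAM only has room for three pages
ram = {}           # page number -> contents (fast but limited)
page_file = {}     # page number -> contents (the slower hard drive)

def access_page(page, contents=""):
    """Return a page, swapping between RAM and the page file as needed."""
    if page not in ram:
        if page in page_file:             # page fault: bring it back from disk
            contents = page_file.pop(page)
        if len(ram) >= RAM_CAPACITY:      # RAM is full: push a page out to disk
            victim, data = ram.popitem()
            page_file[victim] = data
        ram[page] = contents
    return ram[page]

for p in range(5):                        # touch more pages than RAM can hold
    access_page(p, contents=f"data for page {p}")

print(sorted(ram))         # the pages still held in RAM
print(sorted(page_file))   # the pages that were swapped out to the page file
```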