Hardware and Computer Architectures (Part 2)
Interfaces and Drivers
An operating system performs three main functions for a computer:
- Provides an interface
- Controls the hardware
- Runs applications
An interface is the combination of all the methods designed for a user to interact with a computer: both input and output. You might remember that the CPU controls the hardware after receiving commands from the operating system. A user gives a command, which is relayed through the operating system to the hardware, including the CPU. Then the CPU can direct the hardware to perform the individual tasks at a granular level.
The Computer Interface
The earliest computers used switches and punch cards to communicate with their users. Eventually, video screens were attached—these were the large and heavy CRT monitors. There were no graphics when these first came out, so all computer input and output was text on a screen. This was known as an operating system with a command-line interface. Many mainframes used this type of interface; universities had large UNIX servers for their computer science departments so students could compile their code. As computers became personal, the graphical user interface, or GUI, grew in popularity. The Windows interface is the most popular for personal computers, followed by macOS (which, like Linux, is based on UNIX). They both provide a very similar experience for the user.
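The essence of a command-line interface—text in, text out—can be sketched in a few lines. The command names below (`echo`, `upper`) are hypothetical, invented only for this illustration; they are not from any real shell.

```python
# A minimal sketch of a command-line interface: read a text command,
# dispatch it, and return text output -- no graphics involved.
def run_command(line: str) -> str:
    cmd, *args = line.split()
    if cmd == "echo":
        return " ".join(args)
    if cmd == "upper":
        return " ".join(args).upper()
    return f"unknown command: {cmd}"

# A real shell would loop on user input; here we call the dispatcher directly.
print(run_command("echo hello world"))  # hello world
```

A real command-line operating system wraps a loop around exactly this kind of dispatcher, reading one line at a time from the keyboard.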
Whether you are using a Mac or a Windows machine, the same metaphors are being used. Applications can be run by clicking on “icons” on the desktop. Double clicking, single clicking, and right clicking all have specific meanings that are similar in both operating systems. Windows and macOS also use “windows” for applications, where an app normally takes up a rectangular area on the screen that can be resized or temporarily “minimized” off the screen. As an experienced computer user, you have been trained for these computer interfaces. It is expected that you have a mouse, a monitor, and a keyboard. You know what icons mean, what a status bar is for, and how to move around and use windowed applications on your screen. But this didn’t just come to you naturally; you had to learn it. If another computer came along with a very different interface, it would be difficult to use at first. Operating system programmers consider the user interface when each new upgrade is released and rarely make major changes.
The operating system takes input from the user primarily through the keyboard and mouse, and shows output primarily through the monitor and secondarily through the speakers (for example, those annoying little sounds that let you know you can’t do something).
Consider this little part of the interface: the spinning ball (Mac) or hourglass (Windows). When you click an app, it is copied into RAM (loading). Without the spinning ball or hourglass, you wouldn’t know that the computer is doing anything and most likely would repeatedly click the icon, thinking that it is not working. This seemingly insignificant part of the interface provides important feedback to the user that something is happening.
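That “something is happening” feedback is easy to recreate in a terminal. The sketch below is an illustrative stand-in for the hourglass, not how any real operating system implements its wait cursor: it simply rotates through a few text characters while a simulated load runs.

```python
import itertools
import sys
import time

def spinner(seconds: float) -> None:
    """Show a rotating text 'cursor' while a (simulated) load is in progress.

    Without this feedback the user would see nothing and might assume the
    program is not working -- the same role the hourglass or spinning
    ball plays in a GUI.
    """
    frames = itertools.cycle("|/-\\")
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        sys.stdout.write(next(frames) + "\r")  # \r rewinds to line start
        sys.stdout.flush()
        time.sleep(0.1)

spinner(0.5)  # brief demo: spins for half a second
```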
Drivers: Talking to Devices
We mentioned that one of the jobs of the operating system is to control the hardware. This includes any device connected to the computer. Standard internal devices such as the hard disk and video card are included, as well as things like external webcams and printers. Each device is different and performs a different function, yet they all must talk to the same operating system. The method of communication involves a special piece of software called a driver.
In the early days of Windows, installing drivers for a piece of hardware was a complex task. The user often had to specify the IRQ and the I/O port address—it was important to make sure that no other device was using the same settings. The standard PC design allowed for 16 IRQ lines that devices could use to send an interrupt request to the CPU, basically demanding its attention. Modern Windows and Mac machines configure these automatically, so the user has no need to worry about them (and most users do not know they even exist).
Besides relaying IRQ signals, the driver translates data between the device and the operating system. As you can imagine, a network card, a printer, and a video card would all be using very different contexts for their data and command signals. Since the PC is an open platform that almost any manufacturer can build parts for, each company is responsible for the development and operation of its own drivers for its hardware.
Any device connected to a computer has a device controller, which is responsible for the binary input and output to the device. The device controller can also signal the CPU with an interrupt. A device driver has to be designed specifically to talk with both the operating system that is in use and the hardware device’s controller. Sometimes, bugs are discovered as new driver updates are released, so it is a good idea to check from time to time that a computer system has the latest drivers. The operating system relays commands from the user to the appropriate device via the device driver, which talks to the device controller, which then communicates with the device.
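The command path described above—operating system to driver to controller to device—can be modeled as a chain of objects. This is a toy sketch; all class and method names are invented for illustration, and real drivers work at the level of binary I/O and interrupts rather than strings.

```python
class DeviceController:
    """Handles the raw I/O for one device (here, just strings)."""
    def __init__(self, name: str):
        self.name = name
        self.log: list[str] = []

    def io(self, data: str) -> str:
        self.log.append(data)
        return f"{self.name} handled {data!r}"

class DeviceDriver:
    """Translates an OS-level command into controller-level operations."""
    def __init__(self, controller: DeviceController):
        self.controller = controller

    def handle(self, command: str) -> str:
        # The "translation" step: each device's driver would encode
        # commands differently for its own controller.
        return self.controller.io(command.lower())

class OperatingSystem:
    def __init__(self):
        self.drivers: dict[str, DeviceDriver] = {}

    def register(self, device: str, driver: DeviceDriver) -> None:
        self.drivers[device] = driver

    def command(self, device: str, cmd: str) -> str:
        # The OS relays the user's command via the appropriate driver.
        return self.drivers[device].handle(cmd)

os_ = OperatingSystem()
os_.register("printer", DeviceDriver(DeviceController("printer")))
print(os_.command("printer", "PRINT page 1"))
```

Note that the operating system never touches the controller directly: swapping in a different printer only requires a different driver, which is exactly why each manufacturer ships its own.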
High-Performance Computing

Sometimes the demand for a specific computing task is more than a single desktop or server machine can handle. This is where we enter the world of high-performance computing. On one hand, there is a simple way to perform these kinds of tasks: using a cluster. A cluster is a group of identical servers that share the workload for a certain task. On the other, sometimes the demand is even greater than a cluster can handle—in these cases we need a supercomputer.
You might imagine taking a regular computer and just making it giant-sized to create a supercomputer. However, the technologies involved don’t scale that way. You can think of it like an automobile—you can make bigger and bigger cars with bigger engines, but the larger engines will weigh more, eventually cancelling out any gains in horsepower and speed. Likewise, a single desktop computer with an enormous RAM space would spend so much time managing all of that memory that the extra capacity would become useless. Instead of building a huge computer with one CPU, we use the concept of parallel processing: we take a task, break it down into sub-tasks, and assign a separate CPU to each sub-task. Several CPUs are linked together in parallel to accomplish this.
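The split-and-combine idea can be sketched with a simple sum. A supercomputer would distribute the sub-tasks across physical CPUs; a thread pool is used here only to keep the example self-contained and runnable on any machine.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk: range) -> int:
    """One sub-task: sum a slice of the full range."""
    return sum(chunk)

def parallel_sum(n: int, workers: int = 4) -> int:
    """Break sum(range(n)) into sub-tasks and hand each to a worker."""
    step = n // workers
    chunks = [range(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker computes its partial result; we combine them at the end.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

The pattern—divide the work, compute the pieces independently, combine the results—is the same whether the workers are threads on a laptop or hundreds of thousands of processors in a machine room.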
Imagine a popular website such as Wikipedia or Amazon. There is no way a single computer could handle all of the processing power of all those Wiki articles or all of those Amazon orders. Yet, when a user accesses these sites, they expect to get the same experience every time. One method to achieve this is by making a server cluster. A cluster is a group of identical servers; think of them as “clones” that can respond to web requests. Not only do these clones exist to respond to multiple requests, but they also are spread out geographically all over the world. A user in South America, for instance, could connect to one close to their location. When a server is updated, there is a “master” server that is then copied or cloned to all of the servers in the cluster through a version control software that makes sure that all servers eventually get every update.
A large retail website could make use of multiple server clusters, splitting the load between the front end (the website users see) and the back end (the database with all the images, descriptions, and prices). When the user types in the URL for the site, it connects to one of the clones in the cluster (usually selected by a load balancing system). When the user performs a search for a “large black umbrella” the front end server connects to one of the back end cluster servers and requests the umbrella items list that match the query. A cluster provides the advantage of not only evenly splitting up a task (load balancing) but providing fault tolerance: if one server goes down, the others are still functioning.
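A round-robin scheme is one of the simplest ways a load balancer can pick a clone, and it is easy to sketch. The server names below are made up, and a production load balancer would also weigh server health and geographic proximity rather than cycling blindly.

```python
import itertools

class Cluster:
    """Routes each incoming request to the next identical 'clone' server."""
    def __init__(self, servers: list[str]):
        self.servers = servers
        self._next = itertools.cycle(servers)  # endless round-robin rotation

    def route(self, request: str) -> str:
        server = next(self._next)
        return f"{server} serving {request!r}"

front_end = Cluster(["web-1", "web-2", "web-3"])
print(front_end.route("GET /search?q=large+black+umbrella"))
print(front_end.route("GET /"))
```

Because every clone can answer every request, removing a failed server from the rotation is all the fault tolerance logic needs to do—the remaining servers simply absorb its share of the load.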
When you need more power than just a cluster, then a supercomputer is the answer. These machines are rare and cost hundreds of millions of dollars. They are similar to clusters in at least one way: they use parallel processing, where tasks are split up between processors. One of the newest supercomputers is in Germany, the SuperMUC-NG.
Computing power for desktops and supercomputers can be measured in something called FLOPS, which stands for floating point operations per second. Basically, this is an operation on numbers with a floating (movable) decimal point. Modern desktops are no slouch at this, clocking in at approximately 100 gigaFLOPS; a gigaFLOPS is one billion such operations per second.
The SuperMUC-NG is rated at approximately 26 petaFLOPS. A petaFLOPS is 10¹⁵ floating point operations per second. This means that this supercomputer is 260,000 times more powerful than an average desktop. This is accomplished by putting roughly 300,000 very high-end, water-cooled CPU cores together in the same room and connecting them together to work in parallel. This computer also has 700 TB of RAM (SuperMUC-NG, n.d.).
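The 260,000× figure follows directly from the units, as a quick back-of-the-envelope check shows:

```python
# Comparing the two ratings from the text in plain operations per second.
desktop = 100 * 10**9        # 100 gigaFLOPS = 1e11 operations per second
supermuc_ng = 26 * 10**15    # 26 petaFLOPS  = 2.6e16 operations per second

print(supermuc_ng // desktop)  # 260000
```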
Supercomputers are often used in scientific fields to analyze vast amounts of data. For instance, predicting the weather involves an enormous number of data points; a supercomputer is a good match for these kinds of problems. Scientists can request time on supercomputers to do research, bringing them the massive power of Earth’s most amazing computers.