Computer Science

Software (BIOS, Information Systems, Embedded Systems & Life Cycle)


Microsoft was brought into existence by Bill Gates and Paul Allen in 1975. In the 1980s, IBM requested that they create an operating system for its personal computer. They bought a basic disk operating system, 86-DOS, from a smaller company and rewrote it to work on the IBM PC (Curtis, 2014). This simple operating system was text-based: there were no icons, no desktop, no windows. It showed you a command line where you typed in instructions to the computer. The name DOS comes from Disk Operating System, meaning it would allow the user to access files on floppy and hard disks.

Gates decided to sell the OS to IBM but keep the copyright; his idea paid off when other computer manufacturers also wanted to use his MS-DOS (Microsoft DOS) product. Soon, Microsoft was a major player in the operating systems market. In 1989, they released Microsoft Office, a suite (collection) of applications that was used for office productivity. The names Word, Excel, and PowerPoint soon became recognized as the international standard for digital office files.

Microsoft released its first widely successful version of Windows in 1992: Windows 3.1 (Windows 3.0 was released in 1990 but was not nearly as well-adopted as 3.1). This led to a great shift as companies moved away from command-line operating systems and toward graphical systems. Successful versions of Windows followed: 95, 98, NT, 2000, XP, 7, 8, and 10 ("9" was skipped, reportedly to avoid confusion with 95 and 98). As recently as 2013, Windows operating systems were on over 90 percent of the world's desktop computers (Liu, 2020). Microsoft Office and Windows are still going strong today, holding the majority of market share for both office applications and operating systems.

BIOS and Operating Systems

A computer system is made up of both hardware and software: the physical devices that make up the computer and the instructions to use it. When a computer is activated, there must be a way for it to find and execute software instructions. If the computer always performed the same task, this would be very easy: the hardware could simply be built to always execute the same instructions. One of the advantages of having a computer, however, is that it can perform many tasks. Every time it is turned on, a computer can be used for something different.

When a computer is turned on (or booted up) it knows nothing from its previous usage; it is a blank slate. It needs to know where to start. The BIOS provides a starting line for a computer every time it is turned on.

Once you turn on a computer, it is hard-wired to look for instructions on the chipset. At this starting line, it executes the instructions on the BIOS chip. The acronym BIOS stands for Basic Input Output System. Before the computer can do any input and output, it has to start up. Computers are shipped from the manufacturer with instructions already placed on the BIOS chip. These can be updated, or "flashed," from time to time as new updates are released.

One of the very first things a BIOS chip will instruct the computer to do is detect and test the hardware connected to it. Older systems from the 1980s and 1990s had to be configured ahead of time for the BIOS to find devices; current BIOS systems auto-detect most if not all hardware. The BIOS performs a POST, or power-on self-test, to check that the required hardware is present and functioning properly. Required hardware includes things like RAM, the CPU, and storage. It quickly sends queries to the hardware parts, and if the responses are correct, they pass the test.
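The POST sequence can be sketched as a simple check loop. This is an illustrative Python model, not real firmware; the component names and the query function are invented for the example.

```python
# Toy model of a power-on self-test: each required component is queried,
# and boot only continues if every response is correct.

REQUIRED = ["cpu", "ram", "storage"]

def query(component, hardware):
    """Simulate sending a query to a part and checking its response."""
    return hardware.get(component) == "ok"

def power_on_self_test(hardware):
    """Return a status string plus the list of failing components."""
    failures = [part for part in REQUIRED if not query(part, hardware)]
    return ("POST passed" if not failures else "POST failed", failures)

status, bad = power_on_self_test({"cpu": "ok", "ram": "ok", "storage": "ok"})
print(status)  # -> POST passed
```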

The BIOS then directs the hardware to the location of the operating system. The BIOS is “Basic” and can provide the user with simple testing and configuration functions, but it cannot run applications or display advanced graphics. Only an operating system can do these things. The BIOS will either already have been configured with the location of the OS, or it will start searching in the most likely locations (like hard disks) for one. Once an OS is detected, the rest of the boot process finishes—a copy of the operating system (like Windows or MacOS) is loaded into memory. If no OS is detected, the boot process will stop and return an error. This may require you to go into the BIOS configuration screen, usually accessed by hitting a special key during the boot process.
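The search for an operating system can be modeled the same way: walk the configured boot order, take the first device that holds an OS, and stop with an error if none does. This is a hypothetical Python sketch, not actual BIOS code; the device names are made up.

```python
# Illustrative model of the boot-device search described above.

def find_boot_device(boot_order, devices):
    """Return the first device in the boot order that contains an OS."""
    for name in boot_order:
        if devices.get(name, {}).get("has_os"):
            return name
    # No OS found anywhere: the boot process stops with an error.
    raise RuntimeError("No bootable device found")

devices = {"usb": {"has_os": False}, "hdd": {"has_os": True}}
print(find_boot_device(["usb", "hdd"], devices))  # -> hdd
```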

BIOS configuration screens have not changed much in the last three decades. They provide a basic text interface for the user to identify devices like hard drives, provide other basic hardware configuration settings, and often have password protection.

The CMOS is another chip used in computers. Originally, it contained all of the BIOS configuration information. A drawback of this chip is that it required a small battery connected to the motherboard, and when the battery died, all the settings were lost. Today's computers use flash memory, which does not need continual power to preserve the settings. The CMOS still exists in personal computers today, but its main function now is to preserve the system clock (current time) for the computer. If the CMOS battery dies in your modern computer, you will have to keep resetting the clock every time the PC is started until you replace it.

Input and Output

Once the BIOS identifies the hardware on a computer, it provides a way for software (operating systems and applications) to access that hardware. BIOS chips provide channels that allow software direct access to devices without having to know hardware addresses. This is helpful because hardware addresses sometimes change, as do the actual PC components. The arrangement is seamless: the BIOS knows where the hardware is, and software can always rely upon it to access devices. Since the BIOS chip has instructions on it, it may seem similar to software. However, it is in its own separate class: firmware. Firmware is hard-coded instructions built into a device that usually cannot be changed (though it is updated periodically). This is good because the BIOS needs to be the same every time to start up the computer.

The BIOS handles hardware requests from the operating system, applications, and drivers; these are all types of software. Applications make hardware requests to play sounds and display things on the screen, and these requests must pass through the operating system first. Drivers are used by the operating system to translate information into formats that a device can use. Input and output travel between the BIOS and these layers of software.

Applications use an Application Programming Interface (API) to make requests of the hardware. The API interacts with the operating system. The operating system uses drivers to speak to the BIOS, which then relays the requests to the hardware. Application programmers must learn the proper API procedures to communicate with the operating system they are working with, be it Windows, Android, iOS, MacOS, Linux, or something else.
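A small Python example illustrates the layering: the application calls a portable API (here, Python's built-in open()), and the operating system, not the application, handles the drivers and hardware underneath. The same code runs unchanged on Windows, MacOS, or Linux; the file name is made up for the demo.

```python
import os
import tempfile

# The app asks the API to write a file; it never touches disk hardware
# or hardware addresses directly -- the OS and its drivers do that.
path = os.path.join(tempfile.gettempdir(), "api_demo.txt")

with open(path, "w") as f:   # portable API call
    f.write("hello")

with open(path) as f:        # the same API reads it back on any OS
    assert f.read() == "hello"

os.remove(path)              # clean up through the same API
```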

Operating System Function

The BIOS gets things started and then gives control to the operating system or “OS.” However, the BIOS is in continual use by the OS to access hardware after the full boot up of a computer is completed. The DOS operating system provided by Microsoft allowed users basic access to a computer’s functionality. One of its primary functions was disk access—this is crucial to an operating system. Along with that, an operating system has three main functions:

  1. Providing an interface
  2. Controlling hardware
  3. Running applications

Disk storage is included in hardware, but disks need much more than just a driver; they need a complete file system that manages and stores information. Operating systems implement file systems for this purpose.

File systems

The MS-DOS operating system used a simple file system called FAT, for file allocation table. It was applied by "formatting" a disk. Both floppy and hard disks were formatted with FAT (and sometimes still are today). Formatting a drive prepares it for use; it also erases all the current content on the drive. FAT divides a disk into clusters, and the contents of the clusters are recorded in the file allocation table. For instance, one of the clusters will hold the beginning of the OS, and the BIOS will be pointed there to start the system. The original FAT, now referred to as FAT16, uses 16-bit addressing, so a maximum of 65,536 clusters could be created. Later, as hard drives grew in size, it was replaced by FAT32, which uses 32-bit entries (28 bits of which address clusters) and supports over 260 million clusters.
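The cluster counts quoted above are easy to verify with a little arithmetic. FAT32 stores 32-bit entries, but only 28 of those bits address clusters, which is why the total is about 268 million rather than 4 billion.

```python
# Cluster counts implied by the address widths discussed above.
fat16_clusters = 2 ** 16   # 65,536 clusters
fat32_clusters = 2 ** 28   # 268,435,456 clusters (28 usable bits)

assert fat16_clusters == 65_536
assert fat32_clusters > 260_000_000

# With 32 KB clusters, a FAT16 volume tops out around 2 GB:
max_fat16_bytes = fat16_clusters * 32 * 1024
print(max_fat16_bytes // 2**30)  # -> 2 (gigabytes)
```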

File tables identify files by file name and their starting location on the drive. FAT systems put files in the next available open space. Since earlier files could be deleted, this ended up putting parts of files all over the drive and led to drive fragmentation. File systems and operating systems go together; every operating system has a primary file system.
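Fragmentation is easy to demonstrate with a toy allocator that mimics FAT's "next available open space" rule. The disk here is just a Python list of clusters; the sizes and file names are invented.

```python
# Toy FAT-style allocation: files take the first free clusters available,
# so after a deletion a new file can end up split across the disk.

def allocate(disk, name, n_clusters):
    """Place a file in the first n free clusters, wherever they are."""
    free = [i for i, c in enumerate(disk) if c is None][:n_clusters]
    for i in free:
        disk[i] = name
    return free  # the cluster list the file table would record

disk = [None] * 8
allocate(disk, "A", 3)               # file A fills clusters 0-2
allocate(disk, "B", 2)               # file B fills clusters 3-4
disk = [None if c == "A" else c for c in disk]   # delete file A
clusters_c = allocate(disk, "C", 4)  # C reuses A's space plus cluster 5
print(clusters_c)  # -> [0, 1, 2, 5]: file C is fragmented
```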

Operating systems and their primary file systems:

Windows 10: NTFS (New Technology File System). Features: file encryption, file security, large file sizes, large numbers of files, user quotas, resizable volumes.

MacOS 10.13: APFS (Apple File System). Features: file encryption, file security, large file sizes, large numbers of files, improved access speed, on-demand volume sizing.

Linux (various types): ext4 (fourth extended file system). Features: file encryption, file security, efficient file organization, large file sizes, large numbers of files, fragmentation protection.

Computers with these preferred file systems have their primary hard disk partition or volume (a unit of space) formatted with that system. Windows must be installed on NTFS, for example. An issue can arise where portable external drives cannot be used between operating systems. For instance, Windows cannot natively read a hard drive formatted for a Mac.

Application Software and Information Systems

Software applications are used to perform specific tasks on a computer such as writing a document, listening to music, playing a game, or retouching photographs. In a business environment, many applications are used to manage information. These applications are part of an information system.

In a traditional work environment, there are many levels of management and differing types of information needs. Executives deal with the highest level of decision-making in a company. It is rare that an executive would deal directly with a worker; usually they deal with employees in senior or middle management. Their needs are different from those of a senior manager, who is entrusted to implement high-level decisions made at the executive level. At the middle-manager level, management of workers is key: workers are assigned tasks and evaluated on those tasks. At the worker level, a task management system is essential to track exactly which employees are working on which job, as well as deadlines and progress. Information system software falls into several different sub-categories depending on the needs of the organization.

Parts of Information Systems

Information is the result of processing data; it has been organized to provide insights into the subject of focus. An information system includes the raw data itself. Software tools can analyze the data and present it in different ways for user interpretation, so software is part of the information system, as are the people using it. Of course, the software needs hardware to run on, so hardware is part of the system as well. In such a system, it makes sense to assign tasks appropriately, because a computer can perform many tasks more efficiently than a human, but not all tasks. Things like searching very large data sets are perfect jobs for computers but difficult for humans. Finally, some kind of business organization makes decisions about how to use the hardware, software, people, and data that are available; this organization is also part of the information system.


Management Information Systems (MIS) is a classification of information system that helps facilitate management within an organization. Highlights of MIS software include:

  • managing internal files
  • sorting company data
  • creating action plans
  • tracking inventory
  • managing budgets
  • personnel management

MIS software is used by lower and mid-level management (Kimble, n.d.).


Transaction Processing Systems, or TPS, handle operational transactions within a business. These are mostly used by the workers (bottom level of pyramid). Here are some examples of what these systems do:

  • Manage payroll
  • Process customer orders
  • Track inventory/stock
  • Data validation
  • Funds transfers

Data from these systems is often used in higher-level systems (Kimble, n.d.).


A Decision Support System (DSS) is primarily designed to look to the future and help make informed decisions. A key feature of a good DSS is that the information is presented in an easily understandable way. These systems often have multiple ways to present data, including graphs and charts. DSS systems provide information such as:

  • revenue predictions,
  • hiring needs,
  • inventory analysis, and
  • future sales.

This system is most often used at the middle management level and above.


An Executive Information System (EIS) is similar to a DSS but it is specifically designed to be used by the executive level of the information systems pyramid. EIS data and analysis focuses on the whole company—the “big picture” level. Executives can use this data to determine future decisions for the company. EIS systems can

  • provide overall performance analyses
  • compare productivity between departments
  • display market research data
  • predict future performance

These systems are designed for executives who may not necessarily have technical skills; thus, a good EIS must be intuitive to use.


Online Analytical Processing (OLAP) involves a high-performance data analysis system. It is designed to query data across multiple tables and sources and provide a fast and detailed result. These systems are often used for

  • sales figures
  • product information
  • product comparisons
  • employee information

These systems are good for data mining and creating reports. They are frequently used by middle management.


There are two primary types of software: operating systems and applications, often referred to as "apps." Operating systems allow use of the computer hardware, while apps are designed to perform specific tasks. You might think of an app as a tool; you use the tool for the task at hand and then put it away until next time. Even though users can open many apps at once, humans can only perform so many tasks at a time.

Apps for Every Purpose

There are many different types of applications. Apps are not only used on personal computers; they are also widely used on smartphones. They have many different characteristics and work in different ways.

App sizes

Applications can be small; sometimes they are just a few KB. Apps like a to-do list or notepad don't use much memory (around 200KB). Some applications are medium-sized, like Skype for video calling. These apps are in the MB range, around 50 to 100MB.

Other apps, like Adobe Photoshop, perform many functions and take up much more space, often over 1GB. These apps are divided into several files that contain the program code, and only the parts that are needed are loaded into memory. For instance, if you were editing a photo and wanted to use an artistic filter, Photoshop would load those filters into RAM. Other parts of the program not in use would remain in the files until needed.

Some apps are very large, and many of these are video games. Games include 3D models, textures, pre-made video (cut scenes), and large worlds to explore. They can take up many gigabytes of space: the popular game World of Warcraft takes up more than 70GB of space on the hard drive. Just as with other large apps, it is split into separate files and the ones that are needed are loaded into memory.

Some applications come in "suites," or collections, like Microsoft Office or the Adobe suite. These can take up very large amounts of space since they are several apps installed together.
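The on-demand loading described above can be sketched with a tiny cache: features stay on disk until the user first asks for them, and only then are they pulled into memory. The feature names are invented for illustration.

```python
loaded = {}  # stands in for the parts of the app currently in RAM

def load_feature(name):
    """Load a feature into memory only on first use (lazy loading)."""
    if name not in loaded:
        loaded[name] = f"<code for {name}>"  # pretend read from disk
    return loaded[name]

load_feature("artistic_filter")          # pulled into RAM when first used
assert "artistic_filter" in loaded
assert "3d_renderer" not in loaded       # unused parts stay on disk
```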

App operating systems

Each operating system has a different API to communicate with the hardware. That means that once an app is written for a specific OS, it will only work on that OS until it is rewritten for a different operating system. This is the reason that some applications are available for both Mac and PC, and some are only available for one or the other. Final Cut Pro, the video editing app, is only available for the Mac OS, for instance, while Adobe Premiere Pro is available for both.

The same holds true for smartphones—apps must be made for either Android or iOS, but they can be made for both. The version of Facebook made for Android will not work on an iPhone.

Since Windows and Android have the highest market share for PC and phones respectively, most software companies will make their app for these platforms first. They may decide to also make a version for other operating systems as well.

Free and paid apps

Anyone with basic knowledge of programming and a sufficient amount of time can create an app. However, that does not mean you should trust every app on your valued hardware. Apps found through verified sites, tested to be stable and virus-free, are the best choice.


Free apps

Some apps are made to be given out without payment. Sometimes their developers ask for donations, or they simply want to contribute something to the software community. A good example of this is Audacity, a free audio editor. Audacity is "open source," meaning that the program's source code is also shared, and people are free to modify it as they wish. The Audacity website also has a section where you can donate if you like the program.


Adware

Another type of app is adware. This software is sponsored by advertisements. Usually you can use all the features of the software, but there will be occasional ads or a banner ad at the top or bottom. It might also offer the opportunity to buy the software to get rid of the ads.


Paid apps

Before the internet was widely available, people went to computer stores and bought apps in boxes on CD-ROMs. Once you bought the software, you were free to use it according to the license terms on the box. Usually, this meant you could install and use it personally on one or two devices. This model is still the same, except users can now purchase and download apps directly from the internet. A purchase entitles the user to one specific version of the software; if a new version comes out, the user must pay for it if desired. This model suffered greatly from piracy, as people who would not or could not pay the cost of the software found ways to circumvent the copy protection. This problem eventually led to the subscription model.

Subscription apps

A newer model for apps is the subscription model. It might be considered similar to "renting" an application: you are able to install it on one or more machines and use it as much as you wish for a monthly fee. There are three main benefits to the subscription model. First, it prevents piracy by continually verifying your subscription through an internet connection. Second, it makes software more affordable: instead of paying $500 for a version of the software, a user can pay $30 a month for as long as they need it, which also takes away some of the incentive for piracy. Third, when a new version comes out, subscriptions are designed to upgrade automatically, so a user is not tied to an obsolete version but always has access to the newest software.


Malware

Some software contains destructive or nefarious code. This makes it "malware," short for "malicious software" (the prefix "mal-" comes from the Latin for "bad"). Many websites offering "free" software are actually distributing malware. This includes viruses that can replicate to other computers and "Trojan horses" that masquerade as useful programs but actually steal your data, compromise your security, or even delete data from your device.

Coding for Applications

Creating complex applications is a multi-layered task. Very few apps are made by one person, and those tend to be small and simple. Larger applications are made by teams of programmers working with UI designers under project managers. When an application team has completed its design document, coders begin to write the app. At this point the computer language (or languages) is decided on, as well as the operating system on which the app will run. The process begins with writing and debugging code until there is an early version called an alpha, which is followed by the beta version. The beta version is tested extensively in a process referred to as "beta testing." Finally, when the app has been tested, it is released. Even then, users may find more bugs or other problems, and the development team will be called upon to change the code once again. This process takes months or, in most cases, years.

Embedded Systems

Think about the computers you have owned, including smartphones. All over the world, there are people like you who have had multiple computer devices. Now think about all of the different electronic gizmos and gadgets you’ve owned. Right now, count how many electronic devices you have in your house—things as simple as a digital clock. There are many times more electronic devices than actual computers. We are surrounded by them. Many of these devices are classified as “embedded systems.”

An embedded system is a system used in a special purpose device that usually performs one primary task over and over. Devices like medical equipment, fire alarms, voting machines, standalone GPS, washing machines, traffic lights, electronic toys—even switches and routers have embedded systems. The simple definition is a computer system that is designed for one specific task. It includes a CPU, memory, and an input/output system. Since the invention of microprocessors, these devices can be very small, such as a digital watch. Most of these systems use a microcontroller.


A microcontroller is a computer chip with a processor, RAM, input, and output built in. The input and output travel through specific pins on the chip; through these I/O pins, the chip can receive and send data to USB ports and even Ethernet. Microcontrollers enable the creation of an enormous variety of electronic devices; they are basically an entire computer on a chip.
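The "one primary task over and over" pattern is usually written as an endless control loop around a small state machine. This hedged Python sketch models a traffic-light controller; on a real microcontroller, each state change would drive output pins instead of appending to a list.

```python
# A minimal state machine for a traffic light, the classic embedded task.
SEQUENCE = {"red": "green", "green": "yellow", "yellow": "red"}

def next_state(state):
    return SEQUENCE[state]

state = "red"
cycle = []
for _ in range(6):        # a real device would loop forever
    state = next_state(state)
    cycle.append(state)   # stand-in for switching the physical lights

print(cycle)  # -> ['green', 'yellow', 'red', 'green', 'yellow', 'red']
```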

Embedded System Design

Since an embedded system typically performs the same function over and over again, the software can be placed on a computer chip, making it firmware. This instruction set is created by programmers, but the requirements for an embedded system are different from those of an app for a computer.

As these devices are hard to upgrade (and upgrading is not part of the regular use of the device), they must have an extremely high degree of reliability. No one wants a device that crashes or has errors that cannot be corrected. This means that much more rigorous testing must be done on any software created to be placed on the device.

Embedded devices also have a very limited amount of RAM, so programmers must be as efficient as possible to fit the maximum number of instructions into the smallest amount of memory. This is another difference from programming for a PC, which has a very large amount of RAM in comparison to an embedded system. In addition, PCs are made to run more than one app at a time, so they have even more memory.
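One classic memory-saving trick from embedded programming is packing several boolean status flags into a single byte instead of using one variable each. The flag names below are invented for the example.

```python
# Bit positions for three status flags, all stored in one small integer.
DOOR_OPEN, MOTOR_ON, ERROR = 0, 1, 2

def set_flag(flags, bit):
    return flags | (1 << bit)        # turn one bit on

def has_flag(flags, bit):
    return bool(flags & (1 << bit))  # test one bit

flags = 0                            # a single byte covers 8 such flags
flags = set_flag(flags, MOTOR_ON)
assert has_flag(flags, MOTOR_ON)
assert not has_flag(flags, DOOR_OPEN)
assert flags == 0b10                 # still just one small value in RAM
```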

Examining Embedded Systems

Imagine an ultrasound scanner in a medical office. This device has one purpose: to use ultrasound to create an image of the inside of a patient's body. The input would be the user's selections and various settings entered through an interface, along with data from the scanner itself. These data would have to be converted and displayed as output, the final image of the patient. The device would be designed to connect to the scanner and also a display. It would have to be able to store an image and likely connect to a network.

A voting machine has very specific requirements. It must report data externally without error: counting a vote for an incorrect choice or counting a vote twice would be a catastrophic error for this machine. Crashing in the middle of voting would also be intolerable. The internal memory stores the progress of a voter's choices, and when voting is done, some kind of output transfers the voting data.
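Those correctness rules translate directly into defensive code: reject votes for choices that don't exist and never count the same ballot twice. This is an illustrative Python sketch, not how any real voting machine is implemented; the ballot IDs and choices are invented.

```python
def record_vote(tally, seen_ballots, ballot_id, choice):
    """Count a ballot exactly once, and only for a valid choice."""
    if choice not in tally:
        raise ValueError("invalid choice")    # never count a wrong choice
    if ballot_id in seen_ballots:
        raise ValueError("duplicate ballot")  # never count a vote twice
    seen_ballots.add(ballot_id)
    tally[choice] += 1

tally, seen = {"A": 0, "B": 0}, set()
record_vote(tally, seen, "ballot-1", "A")
record_vote(tally, seen, "ballot-2", "B")
print(tally)  # -> {'A': 1, 'B': 1}
```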

Embedded Systems and IoT

Originally, devices such as a fire alarm system or a washing machine functioned independently. With the proliferation of the IoT, more embedded systems are connecting to the internet. Some devices, such as fire alarms, clearly benefit from an internet connection: besides sending an alarm signal to emergency responders, building managers can see other critical data, such as where in the building the alarm was tripped, and watch real-time updates of what is happening. Other devices, such as your washing machine, might get some benefit from being connected to the internet (remote control, for example), but there may not be enough advantages to justify the cost of additional network components yet. Since devices are often mass-produced and there is market competition, manufacturers are always striving to build the best device for the smallest cost. Additionally, size reduction is becoming a factor: people tend to want smaller and smaller devices that can perform the same functions. With new IoT protocols, even small devices with low-power systems can be connected to a network.

Software Development

In many ways, writing software is similar to writing a book. An author does not just sit down and start writing—many things have to be established first. Authors research the best title, write down their purpose and goals for the book, and then write an outline. Software developers do the same thing. Hours and hours of work must first be completed before the first line of code is even written.

Software Development Life Cycle

There are many methods for creating software; most follow the standard development life cycle (SDLC). This is called a "life cycle" because it takes place over the life of a software application. After software is released, maintenance and updates are required, and these updates must go through a similar cycle to that of the initial development process.

The SDLC concept incorporates technology as well as management to put together planning and teams to meet a specific software need.

Analyze requirements

A development company is usually in one of two situations: it is either being hired to create specific software for a client, or it is creating new software to release into the market. In either case, a common business question is asked: "What problem is this software going to solve?" In other words, what needs are unfulfilled, either in the market or for a specific business?

The software should address those needs efficiently. This is when a company should be polling the users of the software as to what they want and need. Input from users, clients, and experts such as programmers and marketers is taken into consideration. Once these objectives are ready they are put together into a requirements document.


Plan the project

This stage includes examining the feasibility of the project as well as looking at specific team assignments. Risks are examined, and strategies to eliminate those risks are formed. Budgets are drawn up at this stage.


Design the software

Once a budget is approved, the relevant teams can begin working on the design, including creating a design document. This is a critical phase, since coding begins next. The design must be analyzed to make sure it will fit within the budget and that there are no major design flaws. Otherwise, coders could begin working and waste hours developing something that must be redesigned.


Write the code

Coding finally begins. The documents created in earlier phases become the blueprints for the code. At this point, every team leader should clearly explain the specifications, standards, and phases of the development to their teams.


Test the software

Once initial coding is complete, the software goes through extensive testing. This is where the QA (quality assurance) team goes to work. A good QA professional will test every aspect of a software application when looking for errors. During this stage, coders are notified of bugs and assigned to fix them.

Deployment and maintenance

Once internal testing is complete and QA checks have been passed, the software is released to the client or into the market. Even though extensive testing was done, it is likely that other errors and problems will be found after deployment. Part of software maintenance is updating the software to address these issues. If a new version is requested, then this new version of the software will start at the beginning of the life cycle and go through all the phases again.

Implementations of the SDLC

The initial concept of the software life cycle has spawned many specific development models. We will take a look at three specific models: Waterfall, Spiral, and Big Bang.

The waterfall model

This model is a rigid model flowing downward through the different steps of the cycle. The idea is that after each stage is completed, it will not be returned to. It has similar steps to the SDLC concepts addressed earlier.

The Requirements phase produces the requirements document, and the Design phase produces the design document. Implementation is where the programming occurs. Verification is testing, and the last step, Maintenance, includes the release or deployment.

The spiral model

A spiral approach to software development intentionally goes through several cycles of planning, designing, development, and testing, before final release. An advantage of this model is that various teams can get started working earlier in the process and some teams can work simultaneously.

Several prototypes of the product are developed before final release in the spiral model. More than one round of testing and verification happens, as well as multiple rounds of redesigning. This model may take longer, but it is very thorough. The more recent “agile” model is based on this iterative model and has become popular.

The Big Bang model

The Big Bang model is a newer model often used by startups and other small companies. It is best used for smaller projects. The main feature of this model is that coding, design, requirements, and testing all begin simultaneously with very little planning. It involves quite a bit of risk since there is little pre-planning, but it also can lead to the quickest release of software.

Though there is little planning in the Big Bang model, plans can be created on the fly as development continues. Testing is done as soon as compiled code is available. It is not recommended for large projects or essential services, but can be used to brainstorm new types of software quickly.

