Computer Architecture Changes in Recent Years

Published: 17 October 2022

Computers are now integrated into modern people's everyday lives; some people have never known a life without computers all around us. This ubiquity makes it worth understanding how computers actually work. The basic structure of a computer includes a central processing unit (CPU), storage (primary storage, or random access memory, and secondary storage), input/output devices, and a bus interconnection (Jorgensen 9). The CPU is often referred to as the brain of the computer since it executes instructions. It is composed of the Arithmetic Logic Unit, registers, and the Address Generation Unit (Jorgensen 7). The CPU's function is to fetch an instruction, decode it, execute it, and store the result. Without a CPU, a computer would not exist. Recent innovation in computer architecture, and in CPUs specifically, makes processors more efficient every year, but it also shows that a change in technology or design approach is becoming necessary.
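The fetch-decode-execute cycle described above can be sketched as a tiny interpreter. The three-instruction machine below is hypothetical, invented only to illustrate the cycle; it does not model any real CPU:

```python
# A toy illustration of the CPU's fetch-decode-execute-store cycle.
# The instruction set (LOAD, ADD, STORE, HALT) is hypothetical.

memory = {
    0: ("LOAD", 10),   # program: load memory[10] into the accumulator,
    1: ("ADD", 11),    # add memory[11] to it,
    2: ("STORE", 12),  # store the result at memory[12],
    3: ("HALT", None), # then stop.
    10: 5, 11: 7, 12: 0,  # data region
}

pc = 0           # program counter: address of the next instruction
accumulator = 0  # a single register

while True:
    opcode, operand = memory[pc]  # fetch the instruction, then decode it
    pc += 1
    if opcode == "LOAD":          # execute
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":       # store the result back to memory
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[12])  # prints 12 (5 + 7)
```

Real CPUs pipeline and overlap these stages, but the logical cycle is the same loop.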

Computer Architecture from 1970s to mid 2010s

When companies saw the promise of computers, the race to make the most efficient machines began, and it continues today. In the 1970s, many microprocessors (CPUs) were designed as 8-bit devices with 16-bit addresses in a 40-pin dual-in-line package (Furber). Intel's first 16-bit processor was the 8086, followed by the 8088, a variant with an 8-bit external bus. Companies like Intel and Motorola helped drive the emergence of home computers in the 1980s (Furber). For example, IBM began producing PC desktop computers built around the Intel 8088 (Furber). Since 8-bit processors were common, there was a push toward 16-bit processors to make more efficient computers. At this time, microprocessors used CISC, or Complex Instruction Set Computer, designs. David Patterson and David Ditzel argued for the Berkeley RISC I, since "optimizing an architecture for the limited resource on a single chip was quite different from optimizing it for a multi-chip processor such as that on the VAX 11/780" (Furber). Companies moved from CISC to the Reduced Instruction Set Computer (RISC) approach by developing their own microprocessors (Furber). The company Acorn began developing the 32-bit RISC microprocessor now known as ARM (Furber), and many companies, like Nokia, adopted ARM in the 1990s. In the 2000s, the central problem for microprocessor innovation was power: chips were running too hot. By 2011, AMD and Intel were selling over 350 million x86 microprocessors annually (Hennessy). However, simply adding transistors would make a chip too hot to function. The solution was to increase the number of cores in the CPU. Even though multi-core chips are harder to program, they allow for more powerful computers. In the 2000s and 2010s, the multi-core solution also appeared in cellular phones, which need large compute power on a small power budget (Furber). Apple went on to build its own system-on-chip (SoC) designs (Hennessy).
Other forms of computers are the "cloud computers" housed in warehouses by companies like Amazon and Google. These are servers and storage systems built mostly from Intel's high-end microprocessors (Furber). By 2019, 99% of the 32-bit and 64-bit processors shipped were RISC (Hennessy). Through these many innovations, microprocessors from Intel and AMD came to control the market.

Microprocessor Innovations of the Last 5 Years 

The recent market for general-purpose CPUs has been dominated by companies like AMD and Intel. Recent CPU innovations, starting from 2017, include adding cores and multithreading to make architectures more efficient. In 2017, AMD's Zen architecture came out, featuring multithreading and a high-core-count 8-core die. In the same year, AMD released several Ryzen CPUs in varying sizes, which were expected to break the near-monopoly Intel had held over the previous years. IBM's POWER9 was also released that year; it featured multithreading and 12 to 24 cores and later powered some of the fastest supercomputers. In 2017 and 2018, Intel released its Core i9 for both mobile and desktop. In 2019, AMD released its next generation of Ryzen CPUs, which doubled the cache and had a cheaper design. AMD achieved the cheaper design by separating input/output and compute into two dies, which meant a chip could ship with fewer than 16 cores when the workload did not require them. AMD chips later formed the basis of the processors used in gaming consoles like the PlayStation 5 and Xbox Series X/S. In 2020, Apple released its M1 chip, an ARM-based SoC that serves as both CPU and GPU for its desktops, laptops, and slim tablets. The switch away from Intel was driven by Apple wanting more performance per watt (Graves). In 2021, Apple released the M1 Pro and M1 Max, and in 2022 the M1 Ultra, with 114 billion transistors (the original M1 had 16 billion). The Apple processors perform comparably to recent Intel processors while using about one tenth the power. Although microprocessors get better with every generation, there is a limit at which general-purpose CPUs cannot physically become more effective, and a different kind of innovation must occur.

One architectural approach to making computers more efficient is the domain-specific architecture, or DSA. Hennessy and Patterson, the architects who championed RISC, argue that DSAs are the future for CPUs. A DSA is a microprocessor tailored to a specific class of functions, which increases its performance because parallelism can be exploited in ways specific to that function. Another reason DSAs are more efficient is their use of a memory hierarchy specified by their software (Hennessy 57). In addition, they "use less precision when it is adequate" (Hennessy 57): they can use 4-, 8-, or 16-bit integers depending on how much accuracy the computation needs. A final reason DSAs matter is that they can be programmed in domain-specific languages, which further improves parallelism. DSAs are sometimes called accelerators, since they do not serve the same role as a general-purpose CPU (Golden). Common DSAs are GPUs (Graphics Processing Units), neural network processors, and processors for software-defined networks. Examples of DSAs include Google's TPU and VCU and Intel's SG1. Google's TPUs, in production since 2015, improve energy efficiency and performance by a factor of 10. Google's TPU runs its arithmetic workloads 29 times faster than a general-purpose CPU, uses half the power, and loads 80 times faster (Golden). The Argos Video Coding Unit (VCU) processes video to aid YouTube's transcoding (news article). Although Google previously used Intel Xeon CPUs, it is estimated that the VCUs have replaced four to thirty-three million Intel CPUs.
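The "less precision when it is adequate" idea above can be made concrete with a small sketch of 8-bit quantization, the technique neural-network accelerators like TPUs rely on. The symmetric linear scaling scheme below is a common, minimal formulation chosen for illustration, not a description of any particular chip:

```python
# Minimal sketch of reduced-precision arithmetic: mapping 32-bit floats
# onto 8-bit integers, the trade-off DSAs exploit when full precision
# is not needed. Symmetric linear quantization, for illustration only.

def quantize_int8(values):
    """Map a list of floats onto integers in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(q_values, scale):
    """Recover approximate floats from the 8-bit representation."""
    return [q * scale for q in q_values]

weights = [0.9, -0.4, 0.05, -1.27]       # hypothetical model weights
q, scale = quantize_int8(weights)        # q fits in one byte per value
restored = dequantize(q, scale)
# Each restored value is within one quantization step of the original,
# while storage and arithmetic width drop from 32 bits to 8.
```

The accuracy loss is bounded by the step size `scale`, which is why 8-bit integers are "adequate" for many inference workloads.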

Future of Computer Architecture

With innovation and the era of the Internet making information available to so many people, many wonder what the future of computer architecture looks like. Since efficiency growth has been slowing down, quantum computers hold the promise of much faster machines in the future. Although IBM has had a quantum computer since 2016, an IBM vice president said in 2020 that we have realized only a fifth of quantum computing's potential. Quantum computers do not use a binary system where a bit is either 1 or 0; instead, their qubits can exist in a superposition between 1 and 0. This allows for more effective use of memory and faster execution of certain calculations. A quantum computer is nothing like a normal CPU and relies on very different technology based on quantum physics. Google offered evidence of this efficiency by reporting that one of its quantum computers solved a problem with no real-life application in 200 seconds, while it estimated a conventional supercomputer would have taken 10,000 years (CNBC); IBM countered that one of its supercomputers would need only 2.5 days. Nonetheless, the evidence suggests that quantum computers can aid with encryption and security needs, healthcare, the Internet, and more. The technology of quantum computers is fascinating and, once fully implemented, will change the world. It is estimated to take 20 to 30 years for quantum computers to become reliable enough for practical applications. For now, people can keep trying to make transistors more efficient and find techniques for fully optimizing current technology.
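The superposition idea above has simple math behind it: a qubit's state is a pair of complex amplitudes, and the probability of measuring 0 or 1 is the squared magnitude of each amplitude. The sketch below simulates that math classically (real quantum hardware works very differently); the Hadamard gate is the standard operation that puts a definite bit into an equal superposition:

```python
# Classical simulation of a single qubit: the state is a pair of
# amplitudes (a, b) for |0> and |1>; measurement probabilities are
# |a|^2 and |b|^2. This models the math, not the physics.
import math

def hadamard(state):
    """Apply the Hadamard gate, putting |0> or |1> into superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)            # a definite classical 0
superposed = hadamard(zero)  # now "between" 0 and 1

prob_0 = abs(superposed[0]) ** 2
prob_1 = abs(superposed[1]) ** 2
# prob_0 and prob_1 are each 0.5: measuring yields 0 or 1 with
# equal probability, unlike a classical bit.
```

The power of real quantum machines comes from entangling many such qubits, whose joint state a classical simulator can no longer hold, since it grows as 2^n amplitudes.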

Unfortunately, innovation in computer architecture, specifically in microprocessors, has not been improving as quickly as before, which can be explained by the slowing of Moore's Law and the end of Dennard scaling. Moore's Law stated that transistor density would double every two years, which meant chips would become exponentially more capable over time. Moore's Law began to slow down, and by 2018 there was a 15-fold gap between the prediction and actual capability (Golden 53). Another projection that has proven less reliable in recent years is Dennard scaling, which held that "as transistor density increased, power consumption per transistor would drop, so the power per mm^2 of silicon would be near constant" (Hennessy 53). By 2012, computer architects were being urged to find better ways to use parallelism (performing multiple operations at the same time to increase efficiency) (Golden 53). These challenges push current computer architects toward different approaches to CPUs.
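Moore's Law as stated above is just exponential doubling, which can be written as a one-line projection. The starting count and time span in the example are illustrative numbers, not figures from the essay's sources:

```python
# Moore's Law: transistor density doubles every two years, so the
# projected count after `years` is start * 2^(years / 2).

def moore_projection(start_count, years):
    """Transistor count projected `years` ahead under Moore's Law."""
    return start_count * 2 ** (years / 2)

# Illustrative: a chip with 1 billion transistors, projected 8 years out.
projected = moore_projection(1e9, 8)  # 16 billion (four doublings)

# A 15-fold shortfall from the projection, as reported for 2018, means
# actual chips delivered only about projected / 15 transistors.
actual_estimate = projected / 15
```

The exponential form makes clear why even a modest slowdown in the doubling period compounds into a large absolute gap within a decade.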
