MIT: If chips can't get smaller, then coders have to get smarter - TechRepublic

CSAIL study makes the case for moving away from creating smaller hardware to developing smarter software.

Many of the tech devices we take for granted exist thanks to decades of improvements in computing power that have made things like PCs and smartphones not only faster but also smaller. It all started in 1965, when Intel co-founder Gordon Moore predicted that the number of transistors that could fit on a computer chip would grow exponentially, and it did, doubling about every two years.

Moore's Law has endured and recently celebrated its 55th anniversary. But the miniaturization trend can only last so long, and things can only get so small: on the latest silicon chips, 10,000 transistors laid end-to-end would still be no wider than a human hair. As a result, researchers have spent the last decade trying to figure out other ways to improve performance so that technologists can continue to innovate.

Some experts have high hopes for new technologies like quantum computing or carbon nanotubes. But a newly released paper from a team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) argues that the answer is most likely not going to be some future paradigm shift. Instead, the team has identified three key areas to prioritize to continue delivering computing speed-ups—new algorithms, better software, and more specialized hardware.

Senior author Charles Leiserson said the miniaturization that makes a smartphone possible has, in many ways, encouraged coding shortcuts. For decades programmers have been able to prioritize writing code quickly—rather than writing it so that it runs quickly—because smaller, faster computer chips have always been able to pick up the slack.

"This hasn't posed a problem yet, but nowadays being able to make further advances in fields like machine learning, robotics and virtual reality will require huge amounts of computational power," said Leiserson, a professor in MIT's Department of Electrical Engineering and Computer Science (EECS), in the article. "If we want to harness the full potential of these technologies, we must change our approach to computing."

No more Moore

The authors break down their recommendations into the categories of software, algorithms, and hardware architecture.

With software, they argue that programmers' long-standing prioritization of productivity over performance has led to problematic strategies like "reduction": taking code that worked on problem A and using it to solve problem B.

As an example, if someone has to create a system to recognize yes-or-no voice commands, but doesn't want to code a whole new custom program, they could take an existing program that recognizes a wide range of words, and tweak it to respond only to yes-or-no answers, according to the CSAIL paper.
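
To make the idea concrete, here is a minimal Python sketch of what such a reduction looks like next to a tailored solution. It is illustrative only and not drawn from the paper; the vocabulary and function names are hypothetical, and the general-purpose recognizer stands in for a much larger, more expensive existing program.

```python
# Hypothetical illustration of "reduction" vs. a tailored solution (not from the paper).
# recognize_any_word stands in for a large, expensive general-purpose program.

VOCABULARY = ["yes", "no", "stop", "go", "left", "right", "up", "down"]

def recognize_any_word(utterance):
    """Stand-in for a heavyweight recognizer that handles a wide vocabulary."""
    word = utterance.strip().lower()
    return word if word in VOCABULARY else None

def recognize_yes_no_by_reduction(utterance):
    """Reduction: reuse the general program, then discard everything but yes/no."""
    word = recognize_any_word(utterance)  # pays the full general-purpose cost
    return word if word in ("yes", "no") else None

def recognize_yes_no_custom(utterance):
    """Tailored solution: solve only the narrow problem that was actually asked."""
    word = utterance.strip().lower()
    return word if word in ("yes", "no") else None

print(recognize_yes_no_by_reduction("Yes"))  # 'yes'
print(recognize_yes_no_custom("maybe"))      # None
```

In a toy like this the overhead is invisible, but the structure is the point: the reduced version inherits all the work and generality of the program it reuses, whether or not the narrower task needs it.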

While this approach reduces coding time, the inefficiencies compound: if a single reduction is 80% as efficient as a custom solution and 20 layers of reduction are stacked, the code will ultimately be roughly 100 times less efficient than it could be, the paper said.
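
That compounding can be checked directly; the snippet below just plugs the stated numbers (80% per layer, 20 layers) into a short calculation and adds nothing beyond them.

```python
# Each layer keeps 80% of the efficiency of the layer beneath it,
# so 20 stacked layers keep 0.8 ** 20 of a custom solution's efficiency.
per_layer_efficiency = 0.8
layers = 20

remaining = per_layer_efficiency ** layers
print(f"Remaining efficiency: {remaining:.4f}")       # ~0.0115
print(f"Slowdown vs. custom:  {1 / remaining:.0f}x")  # ~87x, i.e. roughly 100x
```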

"These are the kinds of strategies that programmers have to rethink as hardware improvements slow down," said research scientist and co-author Neil Thompson. "We can't keep doing 'business as usual' if we want to continue to get the speed-ups we've grown accustomed to."

Instead, the researchers recommend techniques like parallelizing code. Multicore technology, they said, has enabled complex tasks to be completed thousands of times faster and in a much more energy-efficient way.
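
As a rough illustration of what that looks like in practice, the sketch below spreads an independent, CPU-bound workload across cores using Python's standard library. The workload function and input sizes are made up for the example; the paper does not prescribe any particular parallelization API.

```python
# Minimal sketch: running independent, CPU-bound tasks on multiple cores.
# The workload and inputs are illustrative, not from the paper.
from concurrent.futures import ProcessPoolExecutor

def count_primes_below(n):
    """Deliberately CPU-bound stand-in workload."""
    count = 0
    for candidate in range(2, n):
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    inputs = [50_000, 60_000, 70_000, 80_000]

    # Serial version: one core does all the work, one input after another.
    serial = [count_primes_below(n) for n in inputs]

    # Parallel version: each input runs in its own process, spread across cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(count_primes_below, inputs))

    assert serial == parallel
    print(parallel)
```

The gain here is bounded by the number of cores on one machine rather than the thousands-fold figure quoted above, which refers to large parallel systems; the sketch only shows the shape of the change the researchers are asking programmers to make.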

As for algorithms, the team suggests a three-pronged approach that includes exploring new problem areas, addressing concerns about how algorithms scale, and tailoring them to better take advantage of modern hardware.
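
As an illustration of the scaling point (an assumed toy task, not an example from the paper), the two functions below compute the same answer, but the second replaces a nested scan with a hash set and therefore scales far better as the inputs grow.

```python
# Illustrative only: the same task with two different scaling behaviours.
# Task: find the values shared by two lists.

def common_items_quadratic(a, b):
    """O(len(a) * len(b)): every pair of elements is compared."""
    return {x for x in a for y in b if x == y}

def common_items_linear(a, b):
    """O(len(a) + len(b)): a hash set replaces the inner scan."""
    b_set = set(b)
    return {x for x in a if x in b_set}

a = list(range(0, 6_000, 2))
b = list(range(0, 6_000, 3))
assert common_items_quadratic(a, b) == common_items_linear(a, b)
```

Constant-factor tuning cannot rescue the first version at scale, which is why the researchers treat algorithmic improvements as a source of speed-ups in their own right rather than a software-engineering detail.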

Finally, in terms of hardware architecture, the team advocates streamlining hardware so that problems can be solved with fewer transistors and less silicon. Streamlining includes using simpler processors and creating hardware tailored to specific applications, much as the graphics processing unit (GPU) is tailored to computer graphics, the report said.

"Hardware customized for particular domains can be much more efficient and use far fewer transistors, enabling applications to run tens to hundreds of times faster," said research scientist and co-author Tao Schardl. "More generally, hardware streamlining would further encourage parallel programming, creating additional chip area to be used for more circuitry that can operate in parallel."

While these approaches may be the best path forward, the researchers caution that they won't always be easy. Organizations that adopt such techniques may not see the benefits of their efforts until after they've invested a lot of programmer time, they said. And the speed-ups won't be as consistent as they were under Moore's Law: they may be dramatic at first, then require large amounts of effort for smaller improvements.

Certain companies have already gotten the memo, the team noted.

"For tech giants like Google and Amazon, the huge scale of their data centers means that even small improvements in software performance can result in large financial returns," Thompson said. "But while these firms may be leading the charge, many others will need to take these issues seriously if they want to stay competitive."
