Prepare for Artificial Intelligence to Produce Less Wizardry



Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods.

The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years, together with additional data including local weather, traffic conditions, and competitors’ actions, the company cut its prediction errors by three-quarters.

It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.

“They were like, ‘well, it is not worth it to us to roll it out in a big way,’ unless cloud computing costs come down or the algorithms become more efficient,” says Neil Thompson, a research scientist at MIT, who is assembling a case study on the project. (He declined to name the company involved.)

The story highlights a looming problem for AI and its users, Thompson says. Progress has been both rapid and dazzling in recent years, giving us clever game-playing programs, attentive personal assistants, and cars that navigate busy roads for themselves. But such advances have hinged on throwing ever-more computing resources at the problems.

In a new research paper, Thompson and colleagues argue that it is, or will soon be, impossible to increase computing power at the same rate in order to continue these advances. This could jeopardize further progress in areas including computer vision, translation, and language understanding.

AI’s appetite for computation has risen remarkably over the past decade. In 2012, at the beginning of the deep learning boom, a team at the University of Toronto created a breakthrough image-recognition algorithm using two GPUs (a specialized kind of computer chip) over five days. Fast-forward to 2019, and it took six days and roughly 1,000 special chips (each many times more powerful than the earlier GPUs) for researchers at Google and Carnegie Mellon to develop a more modern image-recognition algorithm. A translation algorithm, developed last year by a team at Google, required the rough equivalent of 12,000 specialized chips running for a week. By some estimates, it would cost up to $3 million to rent this much computer power through the cloud.
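To see how an estimate like that arises, here is a minimal back-of-envelope sketch in Python. The chip count and training time come from the article; the per-chip-hour rental price is an assumed figure used only to illustrate the arithmetic, not a quoted cloud price.

    # Back-of-envelope cloud-rental estimate for the translation model.
    chips = 12_000              # specialized chips cited in the article
    hours = 7 * 24              # roughly one week of training
    price_per_chip_hour = 1.50  # assumed USD rental rate per chip-hour (hypothetical)

    chip_hours = chips * hours
    total_cost = chip_hours * price_per_chip_hour
    print(f"{chip_hours:,} chip-hours -> ${total_cost:,.0f}")
    # 2,016,000 chip-hours -> about $3,024,000, in line with the article's $3 million figure

Even modest changes to the assumed hourly rate swing the total by hundreds of thousands of dollars, which is why published estimates of training costs vary so widely.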

“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.”

Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make deep learning less compute-hungry.

Other researchers have noted the soaring computational demands. The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch.

Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says. Slower automation might sound good from a jobs perspective, he adds, but it will also slow gains in productivity, which are key to raising living standards.

In their study, Thompson and his co-authors looked at more than 1,000 AI research papers outlining new algorithms. Not all of the papers detailed the computational requirements, but enough did to map out the cost of progress. That history suggests that making further advances in the same way will be all but impossible.

