Prepare for Artificial Intelligence to Produce Less Wizardry


Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods.

The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years—as well as additional data including local weather, traffic conditions, and competitors’ actions—the company cut the number of errors by three-quarters.

It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.

“They were like, ‘well, it is not worth it to us to roll it out in a big way,’ unless cloud computing costs come down or the algorithms become more efficient,” says Neil Thompson, a research scientist at MIT who is assembling a case study on the project. (He declined to name the company involved.)

The story highlights a looming problem for AI and its users, Thompson says. Progress has been both rapid and dazzling in recent years, giving us clever game-playing programs, attentive personal assistants, and cars that navigate busy roads for themselves. But such advances have hinged on throwing ever-more computing resources at the problems.

In a new research paper, Thompson and colleagues argue that it is, or will soon be, impossible to increase computing power at the same rate in order to continue these advances. This could jeopardize further progress in areas including computer vision, translation, and language understanding.

AI’s appetite for computation has risen remarkably over the past decade. In 2012, at the beginning of the deep learning boom, a team at the University of Toronto created a breakthrough image-recognition algorithm using two GPUs (a specialized kind of computer chip) over five days. Fast-forward to 2019, and it took six days and roughly 1,000 special chips (each many times more powerful than the earlier GPUs) for researchers at Google and Carnegie Mellon to develop a more modern image-recognition algorithm. A translation algorithm, developed last year by a team at Google, required the rough equivalent of 12,000 specialized chips running for a week. By some estimates, it would cost up to $3 million to rent this much computer power through the cloud.
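For a rough sense of scale, the figures quoted above can be multiplied out directly into chip-days. The sketch below does only that, using the numbers in this article; it deliberately treats all chips as interchangeable, even though the newer ones are each many times more powerful than 2012-era GPUs, so the true growth in raw computation is even steeper than the ratios it prints.

```python
# Back-of-envelope comparison of the training budgets cited above,
# expressed in chip-days (number of chips x days of training).
# All figures are taken from the article; chip-to-chip performance
# differences are ignored, so this understates the real growth.

milestones = {
    "2012 image recognition (Toronto)": 2 * 5,          # 2 GPUs for 5 days
    "2019 image recognition (Google/CMU)": 1_000 * 6,   # ~1,000 chips for 6 days
    "Translation model (Google)": 12_000 * 7,           # ~12,000 chips for a week
}

baseline = milestones["2012 image recognition (Toronto)"]
for name, chip_days in milestones.items():
    print(f"{name}: {chip_days:,} chip-days "
          f"(~{chip_days / baseline:,.0f}x the 2012 budget)")
```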

“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.”

Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make deep learning less compute-hungry.

Other researchers have noted the soaring computational demands. The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch.

Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says. “Slower automation might sound good from a jobs perspective,” but it will also slow gains in productivity, which are key to raising living standards.

In their study, Thompson and his co-authors looked at more than 1,000 AI research papers outlining new algorithms. Not all of the papers detailed the computational requirements, but enough did to map out the cost of progress. The history suggested that making further advances in the same way will be all but impossible.

