Fictron Industrial Supplies Sdn Bhd 200601019263

Specialized AI Chips Hold Both Promise and Peril for Developers

09-Aug-2019

In the compute-intensive field of AI, hardware vendors are reviving the kind of performance gains we enjoyed at the height of Moore’s Law. Those gains come from a new generation of chips specialized for AI workloads such as deep learning. But the fragmented chip marketplace that is emerging will force some hard choices on developers. The current era of chip specialization for AI began when graphics processing units (GPUs), originally developed for gaming, were repurposed for applications like deep learning. The same architecture that lets GPUs render realistic images also lets them crunch data far more efficiently than general-purpose central processing units (CPUs). A major step forward came in 2007, when Nvidia released CUDA, a toolkit that made GPUs programmable in a general-purpose way.
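A concrete, hedged illustration of that efficiency gap (not from the original article; it assumes PyTorch is installed and that a CUDA-capable GPU may or may not be present) is to time the same dense matrix multiply on the CPU and, when available, on the GPU:

```python
import time
import torch

N = 4096
a = torch.randn(N, N)
b = torch.randn(N, N)

# CPU baseline: one large dense matrix multiply.
t0 = time.perf_counter()
torch.matmul(a, b)
cpu_s = time.perf_counter() - t0
print(f"CPU matmul of two {N}x{N} matrices: {cpu_s:.3f} s")

# The same operation on the GPU via CUDA, if one is available.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    torch.matmul(a_gpu, b_gpu)       # warm-up call
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()         # GPU kernels run asynchronously; wait before stopping the timer
    gpu_s = time.perf_counter() - t0
    print(f"GPU matmul of the same matrices: {gpu_s:.3f} s")
```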
 
AI researchers need every advantage they can get to meet the unprecedented computational requirements of deep learning. GPU processing power has advanced rapidly, and chips originally designed to render images have become the workhorses of world-changing AI research and development. The same linear algebra routines needed to make Fortnite run at 120 frames per second now power the neural networks at the heart of cutting-edge computer vision, automated speech recognition, and natural language processing.
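As a minimal illustration of that point (a sketch not taken from the article, assuming NumPy and arbitrary layer sizes), a single fully connected neural-network layer reduces to exactly such a routine, a general matrix multiply:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 512, 256
x = rng.standard_normal((batch, d_in))   # a batch of input activations
W = rng.standard_normal((d_in, d_out))   # learned weight matrix
b = rng.standard_normal(d_out)           # learned bias vector

# One fully connected layer: a general matrix multiply (GEMM), a bias add,
# and a ReLU nonlinearity. The GEMM is the part that GPUs and specialized
# AI chips accelerate.
h = np.maximum(x @ W + b, 0.0)
print(h.shape)   # (32, 256)
```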
 
Now the movement toward chip specialization is turning into an arms race. Gartner projects that specialized chip sales for AI will double to around US billion in 2019 and reach more than billion by 2023. Nvidia’s internal projections put the market for data center GPUs (which are used almost exclusively to power deep learning) at billion in the same time frame. Over the next five years, significant investments in custom silicon will come to fruition from Amazon, ARM, Apple, IBM, Intel, Google, Microsoft, Nvidia, and Qualcomm. There is also a slew of startups in the mix: CrunchBase reports that AI chip companies, including Cerebras, Graphcore, Groq, Mythic AI, SambaNova Systems, and Wave Computing, have jointly raised more than billion.
 
To be clear, specialized AI chips are both important and welcome: they are catalysts for turning cutting-edge AI research into real-world applications. And yet the flood of new AI chips, each faster and more specialized than the last, will also feel like a throwback to the rise of enterprise software. We can expect cut-throat sales deals and software specialization aimed at locking developers into working with a single vendor. Imagine if, 15 years ago, the cloud services AWS, Azure, Box, Dropbox, and GCP had all come to market within 12 to 18 months. Their mission would have been to lock in as many businesses as possible, because once you are on one platform, it is tough to switch to another. That kind of end-user gold rush is about to happen in AI, with tens of billions of dollars, and priceless research, at stake.
 
Chipmakers won’t be short on promises, and the benefits will be real. But AI developers need to understand that new chips requiring new architectures could make their products slower to market, even as raw performance improves. In most cases, AI models will not be portable between different chip makers. Developers are well aware of the vendor lock-in risk posed by adopting higher-level cloud APIs, but until now the underlying compute substrate has been standardized and homogeneous. That is about to change dramatically in the world of AI development.
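One partial hedge developers reach for today is exporting models to an interchange format rather than committing to a vendor-specific graph. The sketch below is illustrative only, not a recipe from the article: it assumes PyTorch and its ONNX exporter, and the toy model and file name are invented.

```python
import torch
import torch.nn as nn

# A made-up toy model standing in for a real network.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# An example input fixes the shapes that the exported graph will expect.
dummy_input = torch.randn(1, 512)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
# model.onnx can then be loaded by ONNX-compatible runtimes targeting different
# hardware back ends, which softens, though it does not remove, the lock-in risk.
```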
 
It is quite possible that more than half of the chip industry’s revenue will soon be driven by AI and deep learning applications. Just as software begets more software, AI begets more AI. We have seen it many times: companies start out focused on one problem and end up solving many. Major automakers, for example, are racing to bring autonomous cars to the road, and their cutting-edge work in deep learning and computer vision is already having a cascading effect, leading to offshoot projects such as Ford’s delivery robots. As specialized AI chips come to market, the current chip giants and major cloud companies will probably strike exclusive deals or acquire the top-performing startups. This trend will fragment the AI market rather than unify it. All that AI developers can do now is understand what is coming and plan how they will weigh the benefits of a faster chip against the costs of building on a new architecture.
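One way to frame that weighing is a simple break-even estimate. The sketch below uses entirely made-up numbers and is only meant to show the form of the calculation, not to suggest real figures.

```python
# Every figure below is an assumed placeholder; the point is the shape of the
# calculation, not the numbers themselves.
port_cost_usd = 150_000        # one-off engineering cost to rebuild on the new architecture
monthly_compute_usd = 40_000   # current monthly spend on training/inference compute
speedup = 2.5                  # effective throughput gain promised by the specialized chip

monthly_savings = monthly_compute_usd * (1 - 1 / speedup)
breakeven_months = port_cost_usd / monthly_savings
print(f"Estimated savings: ${monthly_savings:,.0f}/month; "
      f"break-even after ~{breakeven_months:.1f} months")
```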
 
Head Office

Fictron Industrial Supplies Sdn Bhd 200601019263
No. 7 & 7A, Jalan Tiara, Tiara Square, Taman Perindustrian Sime UEP, 47600 Subang Jaya, Selangor, Malaysia.

Website: http://www.fictron.biz
Website: https://fictron.newpages.com.my/
Website: https://fictron.n.my/
Website: http://fictron.newstore.my/

Other Offices

Fictron Industrial Automation Pte Ltd
140 Paya Lebar Road, #03-01, AZ @ Paya Lebar 409015, Singapore.