Committed to innovation in neural network architecture and to advancing the future development of artificial intelligence.
—— Wang, Distinguished Professor, School of Electronic Science and Engineering, Nanjing University

When people think of artificial intelligence (AI), robots may come to mind first, but at this stage neural networks are the field's most prominent technology. Since the underlying theories were first proposed in the 1940s, neural networks have been through decades of ups and downs. Today, thanks to their strong learning and representational capabilities, deep neural networks have made breakthrough progress in fields such as image processing and natural language processing and have become the most widely applied models in artificial intelligence. In practical applications, however, the enormous parameter counts and computational demands of deep neural networks pose severe challenges to conventional computing hardware in terms of processing speed and energy efficiency. The optimized design and implementation of energy-efficient deep neural network accelerators is therefore key to the rapid deployment of the new generation of artificial intelligence.

Against this backdrop, Professor Wang, an internationally renowned expert in the VLSI design of signal processing systems, has carried out a series of studies on algorithm optimization and hardware acceleration for deep learning systems, making significant contributions to the development of artificial intelligence and integrated circuit design in China.

Pursuing dreams with a steadfast commitment to scientific research

Wang's life and research career have been rich and varied. While attending a technical secondary school, he taught himself the high school and university mathematics curricula through sheer perseverance. As a young man he gave up the "iron rice bowl" of a secure job at an iron mine and, overcoming many difficulties, was admitted through self-study to the Department of Automation at Tsinghua University as the top science student in his county. During his undergraduate years he never stopped pushing forward, finishing his studies ahead of schedule with excellent results and going on to a master's degree. After graduation he worked at a high-tech company in Beijing before going abroad to pursue a doctorate in the Department of Electrical Engineering at the University of Minnesota. During his doctoral studies he worked diligently, published many high-quality papers in the field's top journals, and won the Best Paper Award at the 1999 IEEE Workshop on Signal Processing Systems (SiPS), the flagship conference of the IEEE signal processing systems community.

After receiving his doctorate in 2000, Wang worked successively at National Semiconductor, the School of Electrical Engineering and Computer Science at Oregon State University, and Broadcom, achieving notable results at each. He has participated in the research and development of more than ten commercial chips, and the performance of some of the core modules he designed led the industry. His technical schemes have been adopted in more than ten network communication standards, including IEEE standards. In 2015 he was elevated to IEEE Fellow for his outstanding contributions to forward error correction (FEC) code design and very-large-scale integration (VLSI) implementation.

Although he enjoyed an excellent research environment in the United States, Wang knew clearly that this was not what he wanted. "Science has no borders, but scientists have a motherland." While overseas, Wang always cared about the development of his homeland. "That is home and country, and we should do our best for her." In 2016, when the motherland called overseas talent home through the "international special experts" program, he resolutely returned at the height of his career, determined to contribute his strength to the development of scientific research in China.

In 2016, Wang joined the School of Electronic Science and Engineering at Nanjing University. That same year he led the founding of the Integrated Circuit and Intelligent System (ICAIS) laboratory, which focuses on algorithm design and hardware optimization for digital communication and machine learning. Addressing major national economic needs such as intelligent manufacturing, smart construction sites, and smart communities, the laboratory collaborates with well-known universities at home and abroad and with leading enterprises, actively promoting the development of integrated circuit design in China and striving to overcome technical bottlenecks. Today, Wang's research team is influential in the international integrated circuit design community, and his dream of serving the country through scientific research is being realized step by step.

Pioneering and innovating to achieve breakthroughs in artificial intelligence chips

"The trend of ambition, no far. Poor mountains and distant seas are limitless. " After returning home, Professor Wang quickly set up a team, carefully laid out and started his work in an all-round way. With rich R&D experience in the field of digital signal processing and IC design for more than 20 years, he led the team to focus on "collaborative design and optimization of algorithm and hardware architecture", and made all-round efforts in scientific research directions such as artificial intelligence algorithm and hardware architecture, low power consumption, strong error-correcting channel codec hardware architecture design, and trusted computing acceleration, and achieved remarkable academic results.

In the design of artificial intelligence chips specifically, Wang led the team in developing multi-dimensional, hardware-friendly neural network compression algorithms and a series of efficient hardware acceleration architectures for deep learning inference and training. On the algorithm side, they rethought how a hardware acceleration architecture can mine and handle redundant information, exploiting the orthogonality of redundancy across different dimensions and combining dynamic computation adjustment with static parameter compression, which significantly reduced the computational complexity and parameter counts of deep learning algorithms while preserving inference accuracy. The team also conducted a comprehensive, systematic study of common models such as convolutional neural networks and developed a series of computation and dataflow optimization schemes, including fast-algorithm-based convolution acceleration and inter-layer fusion and reuse of data transfers, which addressed the two bottlenecks of compute capability and transmission bandwidth in hardware design and greatly improved the system's computational efficiency, energy efficiency, and throughput.
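The fast-algorithm-based convolution acceleration mentioned above can be illustrated with the classic Winograd F(2,3) transform. The article does not name the specific fast algorithm the team used, so the Python sketch below is only an assumed, minimal example: it computes two outputs of a 3-tap convolution with four multiplications instead of six, the kind of arithmetic saving a hardware accelerator can turn into higher compute efficiency.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap convolution using 4 multiplies
    (the direct method needs 6). d is a length-4 input tile, g a 3-tap filter."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (in practice precomputed once per layer).
    G = np.array([g0, (g0 + g1 + g2) / 2, (g0 - g1 + g2) / 2, g2])
    # Input-tile transform.
    D = np.array([d0 - d2, d1 + d2, d2 - d1, d1 - d3])
    m = G * D  # the only four multiplications
    # Inverse transform back to the two outputs.
    return np.array([m[0] + m[1] + m[2], m[1] - m[2] - m[3]])

# Sanity check against the direct sliding-window computation.
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0] * g[0] + d[1] * g[1] + d[2] * g[2],
                   d[1] * g[0] + d[2] * g[1] + d[3] * g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```

Tiled two-dimensional variants of the same idea, such as F(2x2, 3x3), cut the multiplication count of 3x3 convolutions by more than half, which is why fast convolution algorithms pair naturally with dedicated accelerator datapaths.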

At the hardware implementation level, addressing the pervasive sparsity of neural networks and the bottleneck that parallel processing alone cannot fully improve energy efficiency, they introduced a locally serial, globally parallel design approach that exploits the redundancy of neural networks without losing accuracy and markedly improves the energy efficiency of AI inference accelerators. Combined with a custom-designed, complete tool chain, this efficient architecture can be applied widely across different scenarios. In training accelerator design, Wang was among the first scholars to explore new data representation formats and reconfigurable training accelerator architectures. He led the team in the first use of the Posit data format for this purpose, designing an efficient deep neural network training method and a low-complexity Posit multiply-accumulate unit, which greatly reduced computation, storage overhead, and bandwidth requirements while matching the model accuracy of full-precision floating-point formats. In addition, Wang led the team in bringing the parallel computing and pipelining techniques common in high-speed circuit design to neural network acceleration architectures, breaking through the system clock bottleneck caused by recursive computation and thus raising the overall throughput of the accelerator.
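To make the Posit data format mentioned above concrete, the sketch below decodes a generic n-bit posit with es exponent bits into a floating-point value. This is an illustrative reference model only: the article does not state which posit configuration the team adopted, and a hardware multiply-accumulate unit would operate on the encoded fields directly rather than converting to Python floats.

```python
def decode_posit(bits: int, nbits: int = 8, es: int = 0) -> float:
    """Decode an nbits-wide posit with es exponent bits into a Python float.

    A posit packs a sign, a variable-length regime, an optional exponent and
    a fraction into one word; value = (-1)^s * 2^(k * 2^es + e) * (1 + f).
    """
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float("nan")                      # NaR ("not a real")
    sign = bits >> (nbits - 1)
    if sign:                                     # negative posits store the two's complement
        bits = (-bits) & mask
    # Regime: run length of identical bits following the sign bit.
    body = (bits << 1) & mask                    # drop the sign bit, keep the width
    first = body >> (nbits - 1)
    run, probe = 0, body
    while run < nbits - 1 and (probe >> (nbits - 1)) == first:
        run += 1
        probe = (probe << 1) & mask
    k = run - 1 if first else -run               # regime scales by (2^(2^es))^k
    # Exponent and fraction occupy whatever bits remain after the regime.
    rest = nbits - (1 + run + 1)                 # sign + regime run + terminator
    tail = bits & ((1 << rest) - 1) if rest > 0 else 0
    e_bits = min(es, max(rest, 0))
    exp = (tail >> (rest - e_bits)) << (es - e_bits) if rest > 0 else 0
    f_bits = max(rest - e_bits, 0)
    frac = (tail & ((1 << f_bits) - 1)) / (1 << f_bits) if f_bits else 0.0
    return (-1.0 if sign else 1.0) * (1.0 + frac) * 2.0 ** (k * (1 << es) + exp)

# A few posit8 (es = 0) spot checks: 0x40 = 1.0, 0x50 = 1.5, 0x20 = 0.5, 0xC0 = -1.0
assert decode_posit(0x40) == 1.0
assert decode_posit(0x50) == 1.5
assert decode_posit(0x20) == 0.5
assert decode_posit(0xC0) == -1.0
```

The variable-length regime field is what gives posits their tapered accuracy: values near 1 keep more fraction bits, while very large or very small values trade fraction bits for dynamic range, one reason the format is attractive for low-precision neural network training.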

To promote industry-university-research collaborative innovation, Wang led the founding of Nanjing Fengxing Technology Co., Ltd. in 2018, dedicated to the research and development of artificial intelligence chips, intelligent system solutions, and related products. The company holds internationally leading low-power integrated circuit design and optimization technology. In 2020 it launched an energy-efficient sparse neural network computing chip architecture for high-performance intelligent computing, which supports common deep learning algorithms and resolves the long-standing trade-off between generality and high performance in AI chips. With an industry-leading energy efficiency ratio, it can serve a variety of cloud-edge-device inference scenarios while easing the extremely high demands that AI computing places on memory bandwidth and storage; it significantly improves chip performance while greatly reducing chip cost, effectively promoting the practical application of artificial intelligence algorithms in many fields.

Heaven rewards the diligent, and hard work bears fruit. Since returning to China to work in 2016, Wang has received honors and awards including Jiangsu Province "Double Innovation Talent", "Double Innovation Team" leading talent, Nanjing "High-level Innovation Talent", and "Top Scientific and Technological Experts Gathering Plan Category A Talent". In 2020 he won the Wu Wenjun Artificial Intelligence Science and Technology Progress Award. From 2018 to 2021, seven of Wang's co-authored papers (on all of which he was the corresponding author) were shortlisted as Best Paper Award finalists at flagship IEEE conferences in integrated-circuit-related fields, and his AI hardware accelerator designs won the Best Paper Award four times in a row. Meanwhile, Wang's team has applied for dozens of invention patents, nine of which have been industrialized, attracting tens of millions of yuan in social capital investment. These achievements continue to inspire Professor Wang to broaden his research directions and forge ahead.