There are indications worldwide that artificial intelligence (AI) is being used to raise the electronics industry, in part or as a whole, to a new level of quality. A striking example is the DARPA project IDEA, which targets the development of super ICs, complex semiconductor modules such as SoCs, and PCBs.
Parts 1 and 2 of this report outline the revolutionary goals of IDEA. Part 3 presents other European and American projects that aim to anchor AI in the design and production of printed circuit boards.
Origin of the term and attempts at definition
Fig. 1: The development of the appropriate AI system and models, adapted to specific requirements, is a complicated mathematical task
In order to better understand the following explanations on the application of artificial intelligence (AI) in the development and production of electronics, it is useful to first consider the meaning of the term AI itself. A look at Wikipedia reveals the whole 'mess'. According to [1], artificial intelligence (AI or A.I.) is fundamentally a branch of computer science that deals with the automation of intelligent behavior and with machine learning. The term is difficult to define, since a precise definition of 'intelligence' itself is lacking. It was coined in 1955 by the US computer scientist John McCarthy as part of a funding application for a research project.
If you take a closer look today, you realize that in practice there are numerous different definitions of AI with different 'claims'. Depending on the point of view, artificial intelligence is defined in industry, research and politics either by the applications to be achieved or by looking at the scientific foundations. Examples [1]:
- AI is the property of an IT system to exhibit 'human-like', intelligent behavior (Bitkom e.V. and German Research Center for Artificial Intelligence).
- AI is a branch of computer science that deals with research into the mechanisms of intelligent human behavior (Spektrum der Wissenschaft, Lexikon der Neurowissenschaften).
- By AI, we mean technologies that complement and strengthen human abilities in seeing, hearing, analyzing, deciding and acting (Microsoft Corp.).
- AI is the ability of a machine to imitate human abilities such as logical thinking, learning, planning and creativity (European Parliament, website).
Strong and weak AI
A distinction is made between weak and strong AI. Strong AI refers to computer systems that could take on difficult tasks on an equal footing with humans. Weak AI, on the other hand, is about mastering specific application problems; human thinking and technical applications are supported in individual areas.
The ability to learn is a key requirement for AI systems and must be an integral component that cannot be added as an afterthought. A second main criterion is the ability of an AI system to deal with uncertainty and probabilistic information. Of particular interest are those applications that, according to general understanding, require some form of 'intelligence' to solve them. Ultimately, weak AI is about simulating intelligent behavior using mathematics and computer science, but it is not about creating awareness or a deeper understanding of intelligence (Fig. 1).
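The second criterion named above, dealing with uncertainty and probabilistic information, can be illustrated with a minimal sketch: a Bayesian belief update, one of the standard mathematical tools behind weak AI. The scenario and all numbers (defect rate, test accuracy) are invented for illustration and do not come from the article.

```python
# Minimal sketch of 'weak AI' handling probabilistic information:
# revising a belief as uncertain evidence arrives (Bayes' rule).
# All figures below are illustrative assumptions.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / marginal

# Hypothesis: a solder joint is defective. Assumed prior defect rate: 2 %.
belief = 0.02
# An optical test flags the joint. Assume it detects 90 % of real defects
# but also false-alarms on 5 % of good joints.
belief = bayes_update(belief, 0.90, 0.05)
print(round(belief, 3))  # the belief rises well above the 2 % prior
```

The point of the sketch is that the system's conclusion remains graded rather than binary: a positive test raises the defect probability substantially without declaring certainty, which is exactly the behavior a rule-based system without probabilistic reasoning cannot provide.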
While the creation of strong AI has so far failed due to unresolved philosophical issues, significant progress has been made on the weak-AI side in recent years. Artificial intelligence of very different levels is already present, in various forms, in almost every piece of software; users often do not recognize the use of artificial intelligence as such.
As already mentioned, it is generally a matter of mastering specific application problems of varying content and complexity. The examples in parts 1 to 3 of this article show that even within weak AI there are clear differences in the capabilities of the AI tools implemented to date. These depend on the performance of the underlying computer base, the accumulated and classified information and knowledge held in the background databases, the accuracy of the underlying models, and the application-specific apps with their data-processing algorithms. Ultimately, everything is also a question of cost: with the exception of vital or special applications, AI generally has to pay for itself.
Better than humans?
The first example, Darpa's IDEA, which is the subject of Parts 1 and 2, demonstrates extremely high expectations regarding the performance of future AI use. The aforementioned Bitkom refers to AI as 'human-like', intelligent behavior. However, it is well known that human behavior, however intelligence-based, can be quite error-prone. The aim of Darpa's IDEA project is precisely to bring this error-proneness as close as possible to 'zero' in the realization of future extremely complex electronic systems by using very powerful AI. In practical terms: highly efficient weapon systems must function absolutely reliably. Microsoft's definition is more to the point: AI can solve problems independently and perform certain tasks as well as or better than a human, but it cannot itself vary the approach to problem solving. However, if IDEA's AI system has gained a great deal of 'experience' through a huge number of completed projects, it may well suggest different and better solutions for the key data specified for a new target product than the 'client' had imagined, because the client's knowledge of the 'system' is very likely limited.
Another decisive reason for the IDEA project is to achieve reasonable costs and development and production times for future highly complex electronic systems. The AI system envisaged in IDEA should ultimately be better than humans and, above all, decidedly faster. It will therefore probably already be at the upper end of weak AI.
Another important thought: the realization of projects such as IDEA has a feedback effect on the electronics industry. The better and faster such projects can be realized, the more powerful computer and ultimately AI systems will be available in a shorter time, which in turn will stimulate the forward development of electronics production and thus the entire economy. The development spiral is therefore likely to turn even faster in the future. Can humans cope with this?
The American dream
Fig. 2: The speed of the new Russian hypersonic missile surpasses everything that has gone before
Darpa (Defense Advanced Research Projects Agency), the technology agency of the US Department of Defense (DoD), has often taken the lead in the further development of technologies, and of the final products based on them, for the country's defence sector and other important areas. In many cases these sometimes more, sometimes less intensive efforts have been successful. In 2017, the agency also initiated corresponding activities for the intensive use of AI in the development of electronic products. According to the experts at Darpa, there were serious reasons for this, explained in detail on its website [2]: The next-generation intelligent electronic systems that will enable the DoD to make widespread use of, for example, artificial intelligence, autonomous vehicles, shared-spectrum communications, electronic warfare and radar must have process efficiencies orders of magnitude greater than what is possible with currently available commercial electronics. Example: the new hypersonic missiles that already exist can only be countered by adequate, intelligent, fast detection and defense systems of a similarly new quality (Fig. 2).
To achieve the performance levels required by these DoD applications, highly complex system-on-chip (SoC) platforms must be developed that utilize the most advanced integrated circuit technologies. Despite advances in electronic design automation (EDA) tools, the complexity of the steps associated with IC design and verification continues to increase rapidly, in part due to the steady progress of Moore's Law. To meet design requirements, commercial electronics manufacturers developing advanced hardware solutions will employ ever-larger design teams where each designer has expertise in a particular facet of the design flow. However, the DoD does not have the resources in terms of researchers and development teams to effectively implement or manage such a strategy in the traditional way. This leads to hardware design cycles that are two to three times longer than in commercial organizations.
To close the gap in design expertise and resources and keep pace with the exponential increase in chip complexity, the DoD initiated the Intelligent Design of Electronic Assets (IDEA) program in 2017. The goal is to attempt to develop a general purpose hardware compiler that translates source code or schematics into physical layouts (GDSII) for SoCs, System-In-Packages (SIPs) and Printed Circuit Boards (PCBs) in less than 24 hours without human intervention in the electronics design loop. The program aims to leverage advances in applied machine learning, optimization algorithms and expert systems to create a compiler that enables users without prior design knowledge to execute the physical design in the most advanced technology nodes. Consequently, the goal of the IDEA program is to provide the DoD with a way to rapidly develop next-generation electronic systems without the need for large design teams, significantly reducing the cost and complexity barriers associated with advanced electronics design.
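One of the ingredients named for IDEA's compiler, optimization algorithms for physical design, can be sketched in miniature. The simulated-annealing placer below arranges a handful of hypothetical components on a grid so that the total length of their connecting nets shrinks; it is a textbook technique chosen for illustration, not DARPA's actual method, and all component and net names are invented.

```python
# Toy sketch of optimization-driven physical design: simulated annealing
# searching for a component placement with short total wirelength.
# Illustrative only; real SoC/PCB placers handle vastly more constraints.

import math
import random

def wirelength(pos, nets):
    """Sum of Manhattan distances over all two-pin nets."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in nets)

def anneal_placement(parts, nets, grid=8, steps=5000, seed=1):
    rng = random.Random(seed)
    # Start from a random, non-overlapping placement on the grid.
    cells = rng.sample([(x, y) for x in range(grid) for y in range(grid)],
                       len(parts))
    pos = dict(zip(parts, cells))
    cost = wirelength(pos, nets)
    temp = float(grid)
    for _ in range(steps):
        a, b = rng.sample(parts, 2)          # propose: swap two components
        pos[a], pos[b] = pos[b], pos[a]
        new_cost = wirelength(pos, nets)
        # Accept improvements always; worsenings with Boltzmann probability.
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / temp):
            pos[a], pos[b] = pos[b], pos[a]  # reject: undo the swap
        else:
            cost = new_cost
        temp *= 0.999                        # cool the annealing schedule
    return pos, cost

parts = ["U1", "U2", "U3", "U4", "C1", "C2"]
nets = [("U1", "U2"), ("U2", "U3"), ("U3", "U4"), ("U1", "C1"), ("U4", "C2")]
placement, cost = anneal_placement(parts, nets)
print(cost)  # substantially shorter than the random starting placement
```

The design choice worth noting is the acceptance rule: occasionally accepting a *worse* swap while the 'temperature' is high lets the search escape local optima, which is why annealing-style methods became a staple of placement and routing tools long before machine learning entered the flow.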
A huge task
The size of the task to be solved, which is considered revolutionary by American experts, is made clear by the following quote from Andrew Kahng from the University of California in San Diego, who heads one of the teams involved: "No one yet knows how to safely complete a new chip design in 24 hours without human intervention. This is a fundamentally new approach that we are developing."
Erica Fuchs, professor at Carnegie Mellon University and public policy expert on emerging technologies, praised the ERI framework project and thus also the IDEA sub-project in a statement in 2018, but believes that the US government's overall approach to supporting electronics innovation is "simply an order of magnitude short" of what is needed to address the challenges facing the US. "Let's hope that the grassroots chip design movement that Darpa is trying to foster will help close the gap." So far, that doesn't seem entirely obvious.
Outlook
In addition to IDEA, the associated POSH (Posh Open Source Hardware) project must also be mentioned. This will be the subject of the second part of this series of articles - alongside the ambitious plans of Darpa's ERI initiative in response to technical and economic trends in the microelectronics industry. Can it revitalize the chip industry in the US?
References
[1] https://de.wikipedia.org/wiki/K%C3%BCnstliche_Intelligenz#Begriffsherkunft_und_Definitionsversuche
[2] www.darpa.mil/program/intelligent-design-of-electronic-assets