Artificial Intelligence

SIC CODES: 7371, 7373, 7379

INDUSTRY SNAPSHOT

In the simplest terms, the artificial intelligence (AI) industry seeks to create machines capable of learning and intelligent thought. It includes the development of computer-based systems that can learn from past behaviors and apply that knowledge to solving future problems. AI draws on a variety of academic fields, including mathematics, computer science, linguistics, engineering, physiology, philosophy, and psychology, and it predates the modern computer age. Although AI did not emerge as a stand-alone field of study until the late 1940s, logicians, philosophers, and mathematicians of the eighteenth and nineteenth centuries laid the foundation upon which the modern field rests.

The field of AI evolved gradually during the last half of the twentieth century, as major research departments were established at prominent U.S. universities, beginning with the Massachusetts Institute of Technology (MIT). The U.S. government has been a dominant player in this market for many years, providing significant funding for military projects, but private enterprise is also a major stakeholder. In 1993 the U.S. Department of Commerce valued the artificial intelligence market at $900 million, and at that time some 70 to 80 percent of Fortune 500 companies were applying AI technology in various ways.

By 2005-2006, AI technology was being used in fields as varied as robotics, information management, computer software, transportation, e-commerce, military defense, medicine, manufacturing, finance, security, and emergency preparedness. According to a Business Communications Company Inc. (BCC) study, the AI industry was projected to grow at an average annual rate of more than 12 percent through 2007, at which point it would be worth $21.0 billion worldwide, a significant increase from $11.9 billion in 2002.
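
The study's figures are internally consistent: compounding the 2002 base at the projected rate for five years lands almost exactly on the 2007 projection. The short Python calculation below is our own arithmetic sanity check, not part of the BCC study.

    # Does $11.9 billion (2002) growing at roughly 12 percent per year
    # reach BCC's projected $21.0 billion by 2007? Solve for the implied
    # compound annual growth rate over the five-year span.
    base, target, years = 11.9, 21.0, 5
    implied_growth = (target / base) ** (1 / years) - 1
    print(f"implied average annual growth: {implied_growth:.1%}")  # about 12.0%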

ORGANIZATION AND STRUCTURE

The AI industry is powered by a blend of small and large companies, government agencies, and academic research centers. Major research organizations within the United States include the Brown University Department of Computer Science, Carnegie Mellon University's School of Computer Science, the University of Massachusetts Experimental Knowledge Systems Laboratory, NASA's Jet Propulsion Laboratory (JPL), the Massachusetts Institute of Technology (MIT), the Stanford Research Institute's Artificial Intelligence Research Center, and the University of Southern California's Information Sciences Institute.

A large number of companies, both small and large, also fuel research efforts and the development of new products and technologies. Software giants such as IBM Corp., Microsoft Corp., Oracle Corp., PeopleSoft Inc., SAS Institute Inc., and Siebel Systems Inc. are heavily involved in the development and enhancement of business intelligence, data mining, and customer relationship management software.

Large corporate enterprises often have their own research arms devoted to advancing technologies like AI. For example, Microsoft operates its Decision Theory and Adaptive Systems Group, AT&T operates AT&T Labs-Research (formerly AT&T Bell Labs), and Xerox Corp. is home to the Palo Alto Research Center (PARC).

Associations

The artificial intelligence industry is supported by the efforts of the American Association for Artificial Intelligence (AAAI), a nonprofit scientific society based in Menlo Park, California. Established in 1979, the AAAI describes itself as "devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions." Along these lines, the AAAI counted students, researchers, companies, and libraries among its more than 6,000 members during 2005-2006.

In addition to its annual National Conference on Artificial Intelligence, the AAAI hosts fall and spring symposia, workshops, and an annual Innovative Applications of Artificial Intelligence conference. It awards scholarships and grants, and publishes the quarterly AI Magazine, the annual Proceedings of the National Conference on Artificial Intelligence, and various books and reports.

BACKGROUND AND DEVELOPMENT

The history of artificial intelligence (AI) predates modern computers. In fact, its roots stretch back to very early instances of human thought. The first formalized deductive reasoning system, known as syllogistic logic, was developed in the fourth century B.C. by Aristotle. In subsequent centuries, advancements in mathematics and technology, including the development of mechanical devices such as clocks and the printing press, contributed to AI. By 1642 the French scientist and philosopher Blaise Pascal had invented a mechanical digital calculating machine.

During the eighteenth century, attempts were made to create mechanical devices that mimicked living things. Among them was a mechanical automaton developed by the French mechanician Jacques de Vaucanson that was capable of playing the flute. Later, Vaucanson created a life-sized mechanical duck constructed of gold-plated copper. In an excerpt from Living Dolls: A Magical History of the Quest for Mechanical Life that appeared in the February 16, 2002 issue of The Guardian, author Gaby Wood described the duck this way: "It could drink, muddle the water with its beak, quack, rise and settle back on its legs and, spectators were amazed to see, it swallowed food with a quick, realistic gulping action in its flexible neck." Wood continued: "Vaucanson gave details of the duck's insides. Not only was the grain, once swallowed, conducted via tubes to the animal's stomach, but Vaucanson also had to install a 'chemical laboratory' to decompose it. It passed from there into the 'bowels, then to the anus, where there is a sphincter which permits it to emerge.'"

Other early developments included a form of binary algebra developed by English mathematician George Boole, which gave birth to the symbolic logic used in later computer technology. About the same time, another English mathematician, Charles Babbage, designed the Analytical Engine, a programmable mechanical calculating machine, and Ada Byron (Lady Lovelace) wrote what is widely regarded as the first program intended for it.
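
Boole's two-valued algebra survives almost unchanged in modern programming languages. The short Python sketch below is purely illustrative and not drawn from Boole's own notation; it prints a truth table for his three basic operations and checks one of the identities (De Morgan's law) that his system formalized.

    # Boole's AND, OR, and NOT operations over the two truth values,
    # expressed in modern Python. Variable names are our own.
    for p in (False, True):
        for q in (False, True):
            print(p, q, p and q, p or q, not p)
            # De Morgan's law: NOT (p AND q) == (NOT p) OR (NOT q)
            assert (not (p and q)) == ((not p) or (not q))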

British mathematician Alan Turing was a computing pioneer whose interests and work contributed to the AI movement. In 1936 he wrote an article describing the Turing Machine, a hypothetical general computer. In time this became the model for general purpose computing devices, prompting the Association for Computing Machinery to bestow an annual award in his honor. During the late 1930s, Turing formalized the notion of the algorithm, a set of instructions used during problem solving, and envisioned how algorithms might be applied to machines. During World War II he worked as a cryptanalyst at Bletchley Park, helping to design the electromechanical Bombe machines that deciphered German Enigma communications for the Allied forces. In 1950 he proposed the now famous Turing Test, arguing that if a human could not distinguish between responses from a machine and a human, the machine could be considered "intelligent."
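
The Turing Machine is simple enough to simulate in a few lines of code. The Python sketch below is an illustrative reconstruction rather than anything from Turing's 1936 paper; the function name, transition-table format, and bit-flipping example machine are all this section's own inventions, chosen only to show the state, symbol, and head-movement structure Turing described.

    # A minimal one-tape Turing machine simulator. "transitions" maps
    # (state, symbol) -> (new_state, symbol_to_write, head_move),
    # where head_move is -1 (left), +1 (right), or 0 (stay).
    def run_turing_machine(transitions, input_tape, state="start", accept="halt"):
        tape = dict(enumerate(input_tape))  # sparse tape; absent cells are blank
        head = 0
        while state != accept:
            symbol = tape.get(head, "_")    # "_" stands for the blank symbol
            state, tape[head], move = transitions[(state, symbol)]
            head += move
        return "".join(tape[i] for i in sorted(tape))

    # Example machine: flip every bit, halting at the first blank cell.
    flip_bits = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_turing_machine(flip_bits, "1011"))  # prints 0100_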

Thanks to the efforts of other early computing pioneers, including John von Neumann, the advent of electronic computing in the early 1940s allowed the modern AI field to begin in earnest. However, the term "artificial intelligence" was not coined until 1956, when a Dartmouth College mathematics professor named John McCarthy hosted a conference that brought together researchers from different fields to discuss machine learning. By this time, the concept was being discussed in such varied disciplines as mathematics, linguistics, physiology, engineering, psychology, and philosophy. Other key AI players, including MIT scientist Marvin Minsky, attended the summer conference at Dartmouth. While researchers were able to meet and share information, the conference failed to produce any breakthrough discoveries.

A number of milestones were reached during the 1950s that set the stage for later developments, including an AI program called Logic Theorist. Created by the research team of Herbert A. Simon and Allen Newell, the program was capable of proving theorems. It served as the basis for another program the two men created called General Problem Solver, which in turn set the stage for the...
