Certain forms of AI are indeed becoming ubiquitous. For example, algorithms (算法) carry out huge volumes of trading on our financial markets, self-driving cars are appearing on city streets, and our smartphones are translating from one language into another. These systems are sometimes faster and more perceptive than we humans are. But so far that is only true for the specific tasks for which the systems have been designed. That is something that some AI developers are now eager to change.
Some of today's AI pioneers want to move on from today's world of "weak" or "narrow" AI to create "strong" or "full" AI, or what is often called artificial general intelligence (AGI). In some respects, today's powerful computing machines already make our brains look weak. AGI could, its advocates say, work for us around the clock and, drawing on all available data, suggest solutions to many problems. DM, a company focused on the development of AGI, has an ambition to "solve intelligence". "If we're successful," their mission statement reads, "we believe this will be one of the most important and widely beneficial scientific advances ever made."
Since the early days of AI, imagination has outpaced what is possible or even probable. In 1965, an imaginative mathematician called Irving Good predicted the eventual creation of an "ultra-intelligent machine … that can far surpass all the intellectual (智力的) activities of any man, however clever." Good went on to suggest that "the first ultra-intelligent machine" could be "the last invention that man need ever make."
Fears about the appearance of bad, powerful, man-made intelligent machines have been reinforced (强化) by many works of fiction, such as Mary Shelley's Frankenstein and the Terminator film series. But if AI does eventually prove to be our downfall, it is unlikely to be at the hands of human-shaped forms like these, with recognisably human motivations such as aggression (敌对行为). Instead, I agree with Oxford University philosopher Nick Bostrom, who believes that the heaviest risks from AGI do not come from a decision to turn against mankind but rather from a dogged pursuit of set objectives at the expense of everything else.
The promise and danger of true AGI are great. But all of today's excited discussion about these possibilities presupposes that we will be able to build these systems. And, having spoken to many of the world's foremost AI researchers, I believe there is good reason to doubt that we will see AGI any time soon, if ever.
(1) What does the underlined word "ubiquitous" in Paragraph 1 probably mean?
A. Enormous in quantity.
B. Changeable daily.
C. Stable in quality.
D. Present everywhere.
(2) What could AGI do for us, according to its supporters?
A. Help to tackle problems.
B. Make brains more active.
C. Benefit ambitious people.
D. Set up powerful databases.
(3) As for Irving Good's opinion on ultra-intelligent machines, the author is ______.
A. supportive
B. disapproving
C. fearful
D. uncertain
(4) What can be inferred about AGI from the passage?
A. It may be only a dream.
B. It will come into being soon.
C. It will be controlled by humans.
D. It may be more dangerous than ever.