					                                                                                         Nguyen 1


Khoa Nguyen

Dr. Lyle / Dr. Pilgrim

CSC 540 - Social, Ethical, and Professional Issues in the Information Age

24 April 2011

                             Artificial Intelligence: Now and Then

       Computers have been around us for a long period of time. “The first electronic computers

were developed in the mid-20th century (1940–1945). Originally, they were the size of a large

room, consuming as much power as several hundred modern personal computers (PCs) ”

("Computer," 2011). Since then, they have become an essential and indispensable part of our

daily lives. As computer science continues to grow, we start thinking about everything we could make machines do, mostly because that is part of what computer science is all about: having ideas about what should be done and making plans for doing it. So our minds start wandering: “How can we make these plain stupid machines think what we think and do what we do?” As a result, today we have a large and important branch of computer science called Artificial Intelligence (AI).

       So what is AI exactly? To fully understand this special term, let us define each word

separately. On one hand, “artificial” describes any object that is created not by nature but by humans; in other words, it refers to anything that is man-made and does not exist in nature. On the other hand, according to dictionary.reference.com, “intelligence” is the “capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meaning, etc.” (“Intelligence,” 2011). Consequently, we can conclude that “Artificial Intelligence” is the ability to learn, reason, understand, and so on, that is created by humans.


       Artificial intelligence (AI) is the intelligence of machines and the branch of computer

       science that aims to create it. AI textbooks define the field as "the study and design of

       intelligent agents" where an intelligent agent is a system that perceives its environment

       and takes actions that maximize its chances of success. John McCarthy, who coined the

       term in 1956, defines it as "the science and engineering of making intelligent machines."

       ("Artificial Intelligence," 2011)

       Let us look at one of mankind’s most significant creations in AI as an illustration: the

IBM computing system, Watson. “Watson is an artificial intelligence computer system capable

of answering questions posed in natural language, developed in IBM's DeepQA project by a

research team led by principal investigator David Ferrucci. Watson was named for IBM's first

president, Thomas J. Watson.” ("Watson (computer)," 2011) And according to the New York

Times, for the last three years, IBM scientists have been developing what they expect will be the

world’s most advanced “question answering” machine, able to understand a question posed in

everyday human elocution – “natural language,” as computer scientists call it – and respond with

a precise, factual answer. (Thompson, 2010) That means they want to build a system that can do more than what typical computers do nowadays: giving out answers (information) when humans input questions (data). This system is required to answer questions by speaking to its users in natural language rather than pointing them to answers online the way search engines such as Google, Yahoo, and Bing do. And they have succeeded! We have seen the demonstration of Watson in class and what this magnificent system is capable of. What I want to discuss here is how Watson works.

       Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a

       roomful of servers working at speeds thousands of times faster than most ordinary


       desktops. Over its three-year life, Watson stored the content of tens of millions of

       documents, which it now accessed to answer questions about almost anything. (Watson is

       not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is

       already in its “brain.”) During the sparring matches, Watson received the questions as

       electronic texts at the same moment they were made visible to the human players; to

       answer a question, Watson spoke in a machine-synthesized voice through a small black

       speaker on the game-show set. When it answered the Burj clue — “What is Dubai?”

       (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of

       the computer in the movie “WarGames” that nearly destroyed the world by trying to start

       a nuclear war. (Thompson, 2010)

From my point of view, intelligence is partly based on knowledge, and knowledge is based on learning and experience. Hence, the first step in AI is finding a way to input knowledge into these machines, making them “learn.” Secondly, computer scientists have to devise algorithms for these computers to utilize this knowledge in order to do what they need to

do. That is why Watson has “the content of tens of millions of documents” inside as

“knowledge” as in the first step in Artificial Intelligence development. This impressive system is

a giant leap in computing for mankind.
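As a toy illustration of these two steps (and nothing like the scale of IBM’s DeepQA), we could sketch them in a few lines of Python: the “knowledge” is a handful of stored documents, and the “algorithm” is a naive word-overlap match. The documents and the scoring rule here are invented purely for illustration:

```python
# Step 1: the machine "learns" by storing documents as its knowledge.
# (Two tiny made-up documents stand in for Watson's tens of millions.)
knowledge = {
    "Burj Khalifa": "The Burj Khalifa is a skyscraper in Dubai.",
    "Thomas J. Watson": "Thomas J. Watson was IBM's first president.",
}

def answer(question):
    """Step 2: a very naive algorithm that utilizes the stored knowledge:
    pick the document sharing the most words with the question."""
    question_words = set(question.lower().split())
    best_topic, best_score = None, 0
    for topic, text in knowledge.items():
        # score a topic by how many of its words appear in the question
        score = len(question_words & set(text.lower().split()))
        if score > best_score:
            best_topic, best_score = topic, score
    return knowledge.get(best_topic, "I do not know.")

print(answer("Who was the first president of IBM?"))
# prints: Thomas J. Watson was IBM's first president.
```

A real question-answering system replaces this word counting with deep natural-language analysis, but the division of labor, knowledge first and then algorithms that exploit it, is the same.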

       Technologists have long regarded this sort of artificial intelligence as a holy grail,

       because it would allow machines to converse more naturally with people, letting us ask

       questions instead of typing keywords. Software firms and university scientists have

       produced question-answering systems for years, but these have mostly been limited to

       simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed


       that even for the latest artificial intelligence, the game was simply too hard: the clues are

       too puzzling and allusive, and the breadth of trivia is too wide. (Thompson, 2010)

One can only imagine how helpful Watson will be to our daily lives in the future. Indeed, IBM itself has stated this on its official website: “The DeepQA technology that powers Watson could

soon power many solutions in our world. In fact, IBM is already working to implement

applications in the fields of healthcare, finance and telecom.” ("Watson in the world," 2011)

       We have gone through one of the most intelligent systems in the history of computing. However, Artificial Intelligence is not always about making computers that can answer questions through text files. A recent study has shown that computers can be made to understand human thoughts as well. At the AutoNOMOS innovation labs of Freie Universität Berlin, computer scientists have developed a system that makes it possible to steer a car with the power of thought.

       The scientists from Freie Universität first used the sensors for measuring brain waves in

       such a way that a person can move a virtual cube in different directions with the power of

       his or her thoughts. The test subject thinks of four situations that are associated with

       driving, for example, "left" or "accelerate." In this way the person trained the computer to

       interpret bioelectrical wave patterns emitted from his or her brain and to link them to a

       command that could later be used to control the car. The computer scientists connected

       the measuring device with the steering, accelerator, and brakes of a computer-controlled

       vehicle, which made it possible for the subject to influence the movement of the car just

       using his or her thoughts. (Freie Universität Berlin, 2011)

According to this article, the driver, equipped with headphone-like EEG sensors (Figure 1), was able to control the car with no difficulty.




                               Figure 1: EEG sensors used by Freie Universitaet Berlin

The only disadvantage this device has at the moment is a slight delay between the envisaged commands and the response of the car. Nevertheless, I am confident that this problem will be fixed easily in the near future. This technology is definitely an impressive success because it offers great hope to people whose disabilities or diseases make them unable to drive normally. On the other hand, we can see an obvious downside of this brilliant idea: it is not always easy to control our thoughts. At the moment, the system is designed to understand four basic commands: “left,” “right,” “accelerate” and “brake.” Let us say the driver is talking to a passenger, and at a certain point in the conversation the driver thinks the passenger is “right.” What if the car accidentally interprets this thought as “turn right”? This would result in a bad situation where the car suddenly turns right without the driver’s consent, yielding a high chance of a real accident. Therefore, the system has to be designed in such a way that it can distinguish driving commands from everyday conversation in the human brain, which is far more difficult than it sounds.
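One conservative safeguard against such accidental commands can be sketched in code. This is purely an illustration of the idea, not the AutoNOMOS design: the four command names mirror those mentioned above, while the probability values and threshold are invented. The car acts only when the brain-wave classifier is nearly certain, and ignores everything else:

```python
# Hypothetical sketch: accept a car command only when the EEG classifier
# is highly confident; a vague, conversation-like signal does nothing.
COMMANDS = {"left", "right", "accelerate", "brake"}
THRESHOLD = 0.95  # demand near-certainty before moving the car

def to_car_command(classifier_output):
    """classifier_output: dict mapping each command to the probability
    the EEG pattern classifier assigns it. Returns a command or None."""
    command, confidence = max(classifier_output.items(), key=lambda kv: kv[1])
    if command in COMMANDS and confidence >= THRESHOLD:
        return command
    return None  # ambiguous thought (e.g. "right" in conversation): ignore

# A clear intention passes; a weak signal is safely ignored.
print(to_car_command({"left": 0.01, "right": 0.97, "accelerate": 0.01, "brake": 0.01}))  # right
print(to_car_command({"left": 0.20, "right": 0.55, "accelerate": 0.15, "brake": 0.10}))  # None
```

The price of such a threshold is, of course, more of the very delay described above, which shows why the trade-off is harder than it first appears.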

        So we have computers that can understand human thoughts. What about human gestures?

That is certainly possible, according to ScienceDaily. They report that robotic nurses may be used by surgeons in the future, and state that this might help to reduce the length of surgeries and the potential for infection.


       Surgeons routinely need to review medical images and records during surgery, but

       stepping away from the operating table and touching a keyboard and mouse can delay the

       surgery and increase the risk of spreading infection-causing bacteria.

       The new approach is a system that uses a camera and specialized algorithms to recognize

       hand gestures as commands to instruct a computer or robot. (Purdue University, 2011)

One of the challenges they found was developing the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions. Operating surgeons do not always intend to give gesture commands to the robot. During surgery, for instance, they might want to discuss what is going on with other surgeons and do a lot of thinking, and while doing so they may make specific gestures that reflect what they are doing. These, however, must not be confused with the gestures that command the robot to perform certain actions. We know that the tiniest mistake in the operating room can result in a regrettable situation. With that in mind, this system needs to be considered very carefully and tested very thoroughly before it can actually benefit the healthcare field.
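One simple way to separate commanding gestures from incidental ones can be sketched as follows. This is my own illustration, not the Purdue system’s actual algorithm, and every gesture name below is invented: the idea is to require a deliberate “attention” gesture immediately before any command, so that ordinary hand movements during discussion are ignored.

```python
# Hypothetical sketch: a command gesture counts only if it immediately
# follows a deliberate "attention" gesture. All names are invented.
ATTENTION = "open_palm_hold"
COMMANDS = {"swipe_left": "previous image", "swipe_right": "next image",
            "pinch": "zoom in", "spread": "zoom out"}

def interpret(gesture_stream):
    """Yield robot commands only for gestures preceded by ATTENTION."""
    armed = False
    for gesture in gesture_stream:
        if gesture == ATTENTION:
            armed = True           # surgeon signals: next gesture is a command
        elif armed and gesture in COMMANDS:
            yield COMMANDS[gesture]
            armed = False
        else:
            armed = False          # ordinary movement while talking: ignore

# The stray "pinch" gestures made during discussion are not executed.
stream = ["pinch", ATTENTION, "swipe_right", "pinch", ATTENTION, "spread"]
print(list(interpret(stream)))  # ['next image', 'zoom out']
```

Such an “arming” convention adds one extra gesture per command, a small cost given how dangerous a misread gesture could be in the operating room.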

       Obviously, Artificial Intelligence has changed the world in many ways that we could not have imagined before. Moreover, the ultimate goal of AI is to construct machines that can achieve human-level intelligence. Many technologists as well as computer scientists have asked: “When will there be machines that are actually as intelligent as human beings?” Well, nobody can guarantee a 100% accurate answer to that question. On the other hand, we can always try to predict what could happen. According to the results of

an informal poll at the Future of Humanity Institute (FHI) Winter Intelligence conference at

University of Oxford on machine intelligence in January 2011, machines will achieve human-


level intelligence in the 2028 to 2150 range. Specifically, the poll’s median estimates give a 10% chance of this happening by 2028, a 50% chance by 2050, and a 90% chance by 2150. Other findings that were mentioned are:

• Industry, academia and the military are the types of organizations most likely to first develop human-level machine intelligence.

• The responses to “How positive or negative are the ultimate consequences of the creation of a human-level (and beyond human-level) machine intelligence likely to be?” were bimodal, with more weight given to extremely good and extremely bad outcomes.

• Of the 32 responses to “How similar will the first human-level machine intelligence be to the human brain?” 8 thought “very biologically inspired machine intelligence” the most likely, 12 thought “brain-inspired AGI” (Artificial General Intelligence) the most likely, and 12 thought “entirely de novo AGI” the most likely.

• Most participants were only mildly confident of an eventual win by IBM’s Watson over human contestants in the “Jeopardy!” contest. (Sandberg & Bostrom, 2011)

       In conclusion, it is fairly impressive how plain stupid computers nowadays can be developed to understand our thoughts as well as our gestures, or even to respond to us in natural language, a way that we can understand them too. Thanks to all of the tirelessly working human minds out there, we can enjoy our lives in much more meaningful ways.


                                           References

Artificial Intelligence. (2011). Wikipedia. Retrieved April 30, 2011, from

       http://en.wikipedia.org/wiki/Artificial_intelligence

Computer. (2011). Wikipedia. Retrieved April 30, 2011, from

       http://en.wikipedia.org/wiki/Computer

Freie Universität Berlin (2011, February 21). Scientists steer car with the power of thought.

       ScienceDaily. Retrieved May 1, 2011, from

       http://www.sciencedaily.com/releases/2011/02/110218083711.htm

Intelligence. (2011). Retrieved April 30, 2011, from

       http://dictionary.reference.com/browse/intelligence

Purdue University (2011, February 3). Future surgeons may use robotic nurse, 'gesture

       recognition'. ScienceDaily. Retrieved May 2, 2011, from

       http://www.sciencedaily.com/releases/2011/02/110203152548.htm

Sandberg, A., & Bostrom, N. (2011). Machine intelligence survey. Informally published

       manuscript, Future of Humanity Institute, University of Oxford, Oxford, England.

       Retrieved from

       http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0015/21516/MI_survey.pdf

Thompson, C. (2010, June 16). What is I.B.M.’s Watson? The New York Times. Retrieved May

       1, 2011, from http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html

Watson (computer). (2011). Wikipedia. Retrieved May 1, 2011, from

       http://en.wikipedia.org/wiki/Watson_(computer)


Watson in the world. (2011, February 21). Retrieved May 1, 2011, from

       http://www-03.ibm.com/innovation/us/watson/

				