MIT AI Lab Advice for Graduate Students (Part 1)

Wangchao Network · Author unknown · 2006-01-10

How to Read the Literature


Reading AI

Many researchers spend more than half their time reading. You can learn a lot more quickly from other people's work than from doing your own. This section talks about reading within AI; section covers reading about other subjects.

The time to start reading is now. Once you start seriously working on your thesis you'll have less time, and your reading will have to be more focused on the topic area. During your first two years, you'll mostly be doing class work and getting up to speed on AI in general. For this it suffices to read textbooks and published journal articles. (Later, you may read mostly drafts; see section .)

The amount of stuff you need to have read to have a solid grounding in the field may seem intimidating, but since AI is still a small field, you can in a couple of years read a substantial fraction of the significant papers that have been published. What's a little tricky is figuring out which ones those are. There are some bibliographies that are useful: for example, the syllabi of the graduate AI courses. The reading lists for the AI qualifying exams at other universities, particularly Stanford, are also useful and give you a less parochial outlook. If you are interested in a specific subfield, go to a senior grad student in that subfield, ask what the ten most important papers are, and see if he'll lend you copies to Xerox. Recently a lot of good edited collections of papers from subfields have appeared, published particularly by Morgan Kaufmann.

The AI lab has three internal publication series, the Working Papers, Memos, and Technical Reports, in increasing order of formality. They are available on racks in the eighth floor play room. Go back through the last couple of years of them and snag copies of any that look remotely interesting. Besides the fact that a lot of them are significant papers, it's politically very important to be current on what people in your lab are doing.

There's a whole bunch of journals about AI, and you could spend all your time reading them. Fortunately, only a few are worth looking at. The principal journal for central-systems stuff is Artificial Intelligence, also referred to as "the Journal of Artificial Intelligence" or "AIJ". Most of the really important papers in AI eventually make it into AIJ, so it's worth scanning through back issues every year or so; but a lot of what it prints is really boring. Computational Intelligence is a new competitor that's worth checking out. Cognitive Science also prints a fair number of significant AI papers. Machine Learning is the main source on what it says. IEEE PAMI is probably the best established vision journal; two or three interesting papers per issue. The International Journal of Computer Vision (IJCV) is new and so far has been interesting. Papers in Robotics Research are mostly on dynamics; sometimes it also has a landmark AIish robotics paper. IEEE Robotics and Automation has occasional good papers.

It's worth going to your computer science library (MIT's is on the first floor of Tech Square) every year or so, flipping through the last year's worth of AI technical reports from other universities, and reading the ones that look interesting.

Reading papers is a skill that takes practice. You can't afford to read in full all the papers that come to you. There are three phases to reading one. The first is to see if there's anything of interest in it at all. AI papers have abstracts, which are supposed to tell you what's in them, but frequently don't; so you have to jump about, reading a bit here or there, to find out what the authors actually did. The table of contents, conclusion section, and introduction are good places to look. If all else fails, you may have to actually flip through the whole thing. Once you've figured out what in general the paper is about and what the claimed contribution is, you can decide whether or not to go on to the second phase, which is to find the part of the paper that has the good stuff. Most fifteen-page papers could profitably be rewritten as one-page papers; you need to look for the page that has the exciting stuff. Often this is hidden somewhere unlikely. What the author finds interesting about his work may not be interesting to you, and vice versa. Finally, you may go back and read the whole paper through if it seems worthwhile.

Read with a question in mind. "How can I use this?" "Does this really do what the author claims?" "What if...?" Understanding what result has been presented is not the same as understanding the paper. Most of the understanding is in figuring out the motivations, the choices the authors made (many of them implicit), whether the assumptions and formalizations are realistic, what directions the work suggests, the problems lying just over the horizon, the patterns of difficulty that keep coming up in the author's research program, the political points the paper may be aimed at, and so forth.

It's a good idea to tie your reading and programming together. If you are interested in an area and read a few papers about it, try implementing toy versions of the programs being described. This gives you a more concrete understanding.

Most AI labs are sadly inbred and insular; people often mostly read and cite work done only at their own school. Other institutions have different ways of thinking about problems, and it is worth reading, taking seriously, and referencing their work, even if you think you know what's wrong with them.

Often someone will hand you a book or paper and exclaim that you should read it because it's (a) the most brilliant thing ever written and/or (b) precisely applicable to your own research. When you read it yourself, chances are you will find it not particularly brilliant and only vaguely applicable. This can be perplexing. "Is there something wrong with me? Am I missing something?" The truth, most often, is that reading the book or paper in question has, more or less by chance, made your friend think something useful about your research topic by catalyzing a line of thought that was already forming in their head.

A whole lot of people at MIT
