The MIT AI Lab's Advice for Graduate Study (Part 1)


How to Read the Literature

From "How To Do Research In the MIT AI Lab"

Reading AI

Many researchers spend more than half their time reading. You can learn a lot more quickly from other people's work than from doing your own. This section talks about reading within AI; a later section covers reading about other subjects.

The time to start reading is now. Once you start seriously working on your thesis you'll have less time, and your reading will have to be more focused on the topic area. During your first two years, you'll mostly be doing class work and getting up to speed on AI in general. For this it suffices to read textbooks and published journal articles. (Once you're further along, you may read mostly drafts; see a later section.)

The amount of stuff you need to have read to have a solid grounding in the field may seem intimidating, but since AI is still a small field, you can in a couple of years read a substantial fraction of the significant papers that have been published. What's a little tricky is figuring out which ones those are. There are some bibliographies that are useful: for example, the syllabi of the graduate AI courses. The reading lists for the AI qualifying exams at other universities, particularly Stanford, are also useful, and give you a less parochial outlook. If you are interested in a specific subfield, go to a senior grad student in that subfield, ask him which are the ten most important papers, and see if he'll lend you copies to Xerox. Recently a lot of good edited collections of papers from particular subfields have been appearing, published especially by Morgan Kaufmann.

The AI Lab has three internal publication series, the Working Papers, Memos, and Technical Reports, in increasing order of formality. They are available on racks in the eighth-floor playroom. Go back through the last couple of years of them and snag copies of any that look remotely interesting. Besides the fact that a lot of them are significant papers, it's politically very important to be current on what people in your lab are doing.

There's a whole bunch of journals about AI, and you could spend all your time reading them. Fortunately, only a few are worth looking at. The principal journal for central-systems stuff is Artificial Intelligence, also referred to as "the Journal of Artificial Intelligence" or "AIJ". Most of the really important papers in AI eventually make it into AIJ, so it's worth scanning through back issues every year or so; but a lot of what it prints is really boring. Computational Intelligence is a new competitor that's worth checking out. Cognitive Science also prints a fair number of significant AI papers. Machine Learning is the main source for exactly what its name says. IEEE PAMI is probably the best-established vision journal, with two or three interesting papers per issue. The International Journal of Computer Vision (IJCV) is new and so far has been interesting. Papers in Robotics Research are mostly on dynamics; sometimes it also has a landmark AI-ish robotics paper. IEEE Robotics and Automation has occasional good papers.

It's worth going to your computer science library (MIT's is on the first floor of Tech Square) every year or so, flipping through the last year's worth of AI technical reports from other universities, and reading the ones that look interesting.

Reading papers is a skill that takes practice. You can't afford to read in full all the papers that come to you. There are three phases to reading one. The first is to see if there's anything of interest in it at all. AI papers have abstracts, which are supposed to tell you what's in them, but frequently don't; so you have to jump about, reading a bit here and there, to find out what the authors actually did. The table of contents, conclusion section, and introduction are good places to look. If all else fails, you may have to actually flip through the whole thing. Once you've figured out what in general the paper is about and what the claimed contribution is, you can decide whether or not to go on to the second phase, which is to find the part of the paper that has the good stuff. Most fifteen-page papers could profitably be rewritten as one-page papers; you need to look for the page that has the exciting stuff. Often this is hidden somewhere unlikely. What the author finds interesting about his work may not be interesting to you, and vice versa. Finally, you may go back and read the whole paper through if it seems worthwhile.

Read with a question in mind: "How can I use this?" "Does this really do what the author claims?" "What if...?" Understanding what result has been presented is not the same as understanding the paper. Most of the understanding is in figuring out the motivations, the choices the authors made (many of them implicit), whether the assumptions and formalizations are realistic, what directions the work suggests, the problems lying just over the horizon, the patterns of difficulty that keep coming up in the author's research program, the political points the paper may be aimed at, and so forth.

It's a good idea to tie your reading and programming together. If you are interested in an area and read a few papers about it, try implementing toy versions of the programs being described. This gives you a more concrete understanding.
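For example, a toy version of the classic perceptron learning rule fits on a single page. The sketch below is only an illustration of what "toy version" means here; the data, names, and parameters are invented for the example rather than taken from any particular paper.

    # A toy perceptron: just enough code to watch the update rule from the
    # paper behave on a small, linearly separable problem.
    import random

    def train_perceptron(examples, epochs=100, lr=0.1):
        """examples: list of (features, label) pairs with label in {-1, +1}."""
        n = len(examples[0][0])
        w = [0.0] * n
        b = 0.0
        for _ in range(epochs):
            random.shuffle(examples)
            for x, y in examples:
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:  # misclassified: nudge the weights toward y
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    if __name__ == "__main__":
        # Invented toy data: the label is +1 exactly when x0 + x1 > 1.
        grid = (0.0, 0.3, 0.7, 1.0)
        data = [((x0, x1), 1 if x0 + x1 > 1 else -1) for x0 in grid for x1 in grid]
        w, b = train_perceptron(list(data))
        mistakes = sum(1 for x, y in data
                       if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0)
        print("weights:", w, "bias:", b, "training mistakes:", mistakes)

Even something this small forces you to pin down details a paper tends to gloss over, such as how to initialize the weights or what counts as a misclassification on the boundary, which is exactly the kind of concrete understanding this advice is after.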

Most AI labs are sadly inbred and insular; people often mostly read and cite work done only at their own school. Other institutions have different ways of thinking about problems, and it is worth reading, taking seriously, and referencing their work, even if you think you know what's wrong with them.

Often someone will hand you a book or paper and exclaim that you should read it because it's (a) the most brilliant thing ever written and/or (b) precisely applicable to your own research. More often than not, when you actually read it, you will find it not particularly brilliant and only vaguely applicable. This can be perplexing. "Is there something wrong with me? Am I missing something?" The truth, most often, is that reading the book or paper in question has, more or less by chance, made your friend think something useful about your research topic by catalyzing a line of thought that was already forming in their head.

A whole lot of people at MIT

 
 
 