2018 Yoshua Bengio
Yoshua Bengio
Born: March 5, 1964, in Paris.

Education: B.Eng. in computer engineering (McGill, 1986); M.Sc. in computer science (McGill, 1988); Ph.D. in computer science (McGill, 1991).

Experience: Massachusetts Institute of Technology:  Post-doctoral Fellow, Brain and Cognitive Sciences Dept. (1991-2). Bell Labs: Post-doctoral Fellow, Learning and vision algorithms (1992-3). University of Montreal: Assistant Professor (1993-1997); Associate Professor (1997-2002); Full Professor (2002-Present).

Honors and Awards (selected): Canada Research Chair, Tier 2 (2000); Canada Research Chair, Tier 1 (2006); Government of Quebec, Prix Marie-Victorin (2017); Officer of the Order of Canada (2017); Fellow of the Royal Society of Canada (2017); Lifetime Achievement Award, Canadian Artificial Intelligence Association (2018); ACM A.M. Turing Award (2018); Killam Prize in Natural Sciences (2019); Neural Networks Pioneer Award, IEEE Computational Intelligence Society (2019); Fellow of the Royal Society (2020).

YOSHUA BENGIO
Canada – 2018
CITATION
For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

Yoshua Bengio was born to two college students in Paris, France. His parents had rejected their traditional Moroccan Jewish upbringings to embrace the 1960s counterculture’s focus on personal freedom and social solidarity. He attributes his comfort in following his “scientific intuition” to this upbringing.[1] In search of a more inclusive society, the family moved to Montreal, in the French-speaking Canadian province of Quebec, when Yoshua was twelve years old.

Bengio spent his childhood as a self-described “typical nerd,” bored by high school and reading alone in the library. Like many in his generation he discovered computers during his teenage years, pooling money earned from newspaper delivery with his brother to purchase Atari 800 and Apple II personal computers. This led him to study computer engineering at McGill. Unlike a typical computer science curriculum, this included significant training in physics and continuous mathematics, providing essential mathematical foundations for his later work in machine learning.

After earning his first degree in 1986, Bengio remained at McGill for a master’s degree in 1988 and a Ph.D. in computer science in 1991. His studies were funded by a graduate scholarship from the Canadian government. He was introduced to the idea of neural networks while reading about massively parallel computation and its application to artificial intelligence. Discovering the work of Geoffrey Hinton, his co-awardee, awakened an interest in the question “what is intelligence?” This chimed with his childhood interest in science fiction, in what he called a “watershed moment” for his career. Bengio found a thesis advisor, Renato De Mori, who studied speech recognition and was beginning to transition from classical AI models to statistical approaches.

As a graduate student he was able to attend conferences and workshops to participate in the tight-knit but growing community interested in neural networks, meeting what he called the “French mafia of neural nets” including co-awardee Yann LeCun. He describes Hinton and LeCun as his most important career mentors, though he did not start working with Hinton until years later. He first did a one-year postdoc at MIT with Michael I. Jordan which helped him advance his understanding of probabilistic modeling and recurrent neural networks. Then, as a postdoctoral fellow at Bell Labs, he worked with LeCun to apply techniques from his Ph.D. thesis to handwriting analysis. This contributed to a groundbreaking AT&T automatic check processing system, based around an algorithm that read the numbers written by hand on paper checks by combining neural networks with probabilistic models of sequences.

Bengio returned to Montreal in 1993 as a faculty member at its other major university, the University of Montreal. He won rapid promotion, becoming a full professor in 2002. Bengio suggests that Canada’s “socialist” commitment to spreading research funding widely and towards curiosity-driven research explains its willingness to support his work on what was then an unorthodox approach to artificial intelligence. This, he believes, laid the groundwork for Canada’s current strength in machine learning.

In 2000 he made a major contribution to natural language processing with the paper “A Neural Probabilistic Language Model.” Training networks to distinguish meaningful sentences from nonsense was difficult because there are so many different ways to express a single idea, with most combinations of words being meaningless. This causes what the paper calls the “curse of dimensionality,” demanding infeasibly large training sets and producing unworkably complex models. The paper introduced high-dimensional word embeddings as a representation of word meaning, letting networks recognize the similarity between new phrases and those included in their training sets, even when the specific words used are different. The approach has led to a major shift in machine translation and natural language understanding systems over the last decade.
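The core idea of word embeddings can be illustrated with a toy example. The three-dimensional vectors below are invented purely for illustration; trained models learn embeddings of hundreds of dimensions from data:

```python
import math

# Hand-picked toy embeddings (illustrative only, not from the paper).
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.9, 0.4],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "cat" lies closer to "dog" than to "car" in the embedding space.
print(cosine(embeddings["cat"], embeddings["dog"]) >
      cosine(embeddings["cat"], embeddings["car"]))  # True
```

Because “cat” and “dog” point in similar directions, a network trained on sentences about cats can assign reasonable probability to similar sentences about dogs, which is how embeddings sidestep the curse of dimensionality over raw word combinations.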

Bengio’s group further improved the performance of machine translation systems by combining neural word embeddings with attention mechanisms. “Attention” is another term borrowed from human cognition: it lets a network narrow its focus, at each stage of the translation, to only the relevant context, such as the word a pronoun or article refers to.
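A minimal sketch of the mechanism, using simple dot-product scoring (the original neural translation models used a slightly more elaborate learned scoring function, but the focus-by-weighting idea is the same):

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Score each position against the query, softmax the scores into
    attention weights, and return the weighted sum of the values."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

# Three encoder positions; the query matches the second key most strongly,
# so the output is dominated by the second value vector.
keys = [[1.0, 0.0], [0.0, 4.0], [0.5, 0.5]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out = attention([0.0, 1.0], keys, values)
```

At each decoding step the query changes, so the weights shift and a different part of the source sentence dominates the context vector.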

Together with Ian Goodfellow, one of his Ph.D. students, Bengio developed the concept of “generative adversarial networks.” Whereas most networks were designed to recognize patterns, a generative network learns to generate objects that are difficult to distinguish from those in the training set. The technique is “adversarial” because a network learning to generate plausible fakes can be trained against another network learning to identify fakes, allowing for a dynamic learning process inspired by game theory. The process is often used to facilitate unsupervised learning. It has been widely used to generate images, for example to automatically generate highly realistic photographs of non-existent people or objects for use in video games.
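The adversarial setup can be sketched as a Monte-Carlo estimate of the two-player objective V = E[log D(x)] + E[log(1 − D(G(z)))], which the discriminator tries to raise and the generator tries to lower. The one-parameter generator and discriminator below are invented for illustration, not taken from the original work:

```python
import math
import random

random.seed(0)

def discriminator(x, theta_d):
    """Logistic score: probability the sample x is real."""
    return 1.0 / (1.0 + math.exp(-theta_d * (x - 0.5)))

def generator(z, theta_g):
    """Maps noise z to a sample by a single learned offset."""
    return z + theta_g

def gan_value(theta_d, theta_g, n=1000):
    """Monte-Carlo estimate of V = E[log D(x_real)] + E[log(1 - D(G(z)))]."""
    real = [1.0 + random.gauss(0, 0.1) for _ in range(n)]   # real data near 1.0
    noise = [random.gauss(0, 0.1) for _ in range(n)]
    term_real = sum(math.log(discriminator(x, theta_d)) for x in real) / n
    term_fake = sum(math.log(1.0 - discriminator(generator(z, theta_g), theta_d))
                    for z in noise) / n
    return term_real + term_fake

# A generator whose fakes land near the real data (theta_g = 1.0) fools the
# discriminator, driving V down compared with an obvious fake (theta_g = 0.0).
v_bad_gen = gan_value(theta_d=4.0, theta_g=0.0)
v_good_gen = gan_value(theta_d=4.0, theta_g=1.0)
```

In an actual GAN both players are neural networks updated by gradient steps in alternation; this sketch only shows the quantity they are competing over.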

Bengio has been central to the institutional development of machine learning in Canada. In 2004, a program in Neural Computation and Adaptive Perception was funded within the Canadian Institute for Advanced Research (CIFAR). Hinton was its founding director, but Bengio was involved from the beginning as a Fellow of the institute. So was LeCun, with whom Bengio has been co-directing the program (now renamed Learning in Machines and Brains) since 2014. The name reflects its interdisciplinary cognitive science agenda, with a two-way passage of ideas between neuroscience and machine learning.

Thanks in part to Bengio, the Montreal area has become a global hub for work on what Bengio and his co-awardees call “deep learning.” He helped to found Mila, the Montreal Institute for Learning Algorithms (now the Quebec Artificial Intelligence Institute), to bring together researchers from four local institutions. Bengio is its scientific director, overseeing a federally funded center of excellence that co-locates faculty and students from participating institutions on a single campus. It boasts a broad range of partnerships with famous global companies and an increasing number of local machine learning startup firms. As of 2020, Google, Facebook, Microsoft and Samsung had all established satellite labs in Montreal. Bengio himself has co-founded several startup firms, most notably Element AI in 2016 which develops industrial applications for deep learning technology.

Author: Thomas Haigh

[1] Personal details and quotes are from Bengio’s Heidelberg Laureate interview - https://www.youtube.com/watch?v=PHhFI8JexLg.



