Learning

This post was archived from a Google Plus post.


Requiring machines to perform or learn with the same resources as humans is especially arbitrary and stupid. We do not ask humans to learn the task without the generalized world experience they can apply to the game, or to forgo the mentorship and textbooks that AlphaGo did not use.

> Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.

More: https://arxiv.org/abs/1705.08807


via +Roman Yampolskiy

// AI experts estimate that computers will beat human StarCraft players in around 5 years, will talk with convincing speech in 10, will write best-selling novels in 30, and will achieve general parity with human performance in 45 years.

Many of these tasks have already seen humans significantly defeated (as in poker and Go). The article clarifies that Go needs to be won with human-scale training. From Table S5:

> Defeat the best Go players, training only on as many games as the best Go players have played. For reference, DeepMind’s AlphaGo has probably played a hundred million games of self-play, while Lee Sedol has probably played 50,000 games in his life.

// The reasoning here is that AlphaGo is better at Go because it has more experience than its human counterparts, giving it a profound advantage over any human player. 
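Taking the quoted figures at face value, the size of that experience gap is easy to put a number on. A back-of-envelope sketch (the 100 million and 50,000 figures are the rough estimates quoted above, not exact counts):

```python
# Rough experience ratio between AlphaGo and Lee Sedol,
# using the ballpark figures quoted from Table S5.
alphago_games = 100_000_000   # estimated self-play games
lee_sedol_games = 50_000      # estimated lifetime games

ratio = alphago_games // lee_sedol_games
print(f"AlphaGo has roughly {ratio:,}x the game experience")  # roughly 2,000x
```

Even if either estimate is off by an order of magnitude, the machine's experience advantage remains in the hundreds to tens of thousands.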

To me, the interesting thing here is how we’re crafting the bounds of what constitutes “human level performance” as a kind of defensive reaction against the encroaching machine.