AI weekly (50/2019)

My selection of news on AI/ML and Data Science

+++ With Hanabi-playing bot, Facebook AI reaches a milestone in cooperative AI +++ Baidu is Training an AI to Move Like a Human +++ Google launches E2 family of virtual machines for smaller workloads +++ AI Index 2019 assesses global AI research, investment, and impact +++ Turing Award winner Yoshua Bengio wants AI systems that can reason, plan, and imagine +++ Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant +++ Why scientists need to be better at data visualization +++

Breakthrough — Or So They Say

With Hanabi-playing bot, Facebook AI reaches a milestone in cooperative AI. Facebook AI announced today that it has reached a milestone in the creation of a bot that can play Hanabi, a cooperative card game involving imperfect information, with near-perfect results. This milestone matters because games such as Hanabi represent real-world situations where an AI must participate in complex tasks with humans, discern intent from human actions, and make decisions based on imperfect information. The bot not only improves on previous AI systems but also exceeds the capabilities of elite human players, as judged by the experienced players who evaluated it. Facebook’s AI team achieved the feat by applying search techniques in conjunction with deep reinforcement learning. Facebook AI researcher Noam Brown says: “The benefits that we get from search are stronger than the benefits that have been gained through all of the deep reinforcement learning algorithms that have been used in the past.” The paper is on arXiv, the code on GitHub.
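
To make the idea concrete, here is a minimal, self-contained sketch of rollout-based policy improvement: at every step, each legal action is scored by Monte Carlo rollouts that follow a fixed “blueprint” policy afterwards, and the best-scoring action is played. This is not Facebook’s Hanabi agent, which pairs its search with a deep-RL blueprint in a multi-player, imperfect-information game; the toy single-agent problem and all names below are illustrative assumptions.

```python
# Toy demonstration: improving a weak "blueprint" policy with rollout search.
import random

GOAL, START, MAX_STEPS = 10, 0, 40
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    """Toy dynamics: walk on a line; reward 1.0 only when the goal is reached."""
    nxt = max(0, state + action)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def blueprint(state):
    """Stand-in for a learned policy: random, slightly biased to the right."""
    return random.choices(ACTIONS, weights=[0.4, 0.6])[0]

def rollout_value(state, first_action, horizon=MAX_STEPS, n=64):
    """Average return of taking first_action, then following the blueprint."""
    total = 0.0
    for _ in range(n):
        s, ret, done = step(state, first_action)
        for _ in range(horizon):
            if done:
                break
            s, r, done = step(s, blueprint(s))
            ret += r
        total += ret
    return total / n

def search_policy(state):
    """Play the action whose rollout value under the blueprint is highest."""
    return max(ACTIONS, key=lambda a: rollout_value(state, a))

# Compare the blueprint alone with blueprint + search on one episode each.
for policy, name in ((blueprint, "blueprint"), (search_policy, "blueprint+search")):
    s, n_steps, done = START, 0, False
    while not done and n_steps < MAX_STEPS:
        s, _, done = step(s, policy(s))
        n_steps += 1
    print(f"{name}: goal reached={done} after {n_steps} steps")
```

The point of the comparison is simply that a modest amount of search on top of a fixed policy already improves play, which is the intuition behind Brown’s quote above.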

Baidu is Training an AI to Move Like a Human. Achieving human-level, versatile locomotion is the holy grail for designing humanoid robots. It remains a top research challenge, as human movement is an innate ability achieved through a complex and highly coordinated interaction among hundreds of muscles, bones, tendons, and joints. A team from Baidu has now won the NeurIPS 2019: Learn to Move challenge, which asked participants to develop a controller for a physiologically plausible 3D human model that moves in OpenSim, a physics-based simulation environment, using the osim-rl reinforcement learning environment (see GitHub). The winning video is on YouTube. Most of the algorithms the team used in the challenge come from PARL, its home-grown reinforcement learning framework, which was open-sourced this summer.
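
For readers who want to poke at the simulation themselves, the loop below shows the usual Gym-style interaction. The import path, the class name L2M2019Env, and the visualize flag are assumptions based on the challenge description, so check the osim-rl repository for the exact API; random muscle activations stand in for a trained controller such as one built with PARL.

```python
# Minimal interaction loop for the Learn to Move environment (Gym-style sketch).
import numpy as np
from osim.env import L2M2019Env  # assumed import path and class name

env = L2M2019Env(visualize=False)  # 'visualize' flag is an assumption
observation = env.reset()

total_reward, done = 0.0, False
while not done:
    # Random activations in [0, 1] for every simulated muscle;
    # a real submission would query a trained policy here instead.
    action = np.random.uniform(0.0, 1.0, size=env.action_space.shape)
    observation, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode return: {total_reward:.2f}")
```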

Tools and Frameworks

Google launches E2 family of virtual machines for smaller workloads. Google launched a new tier of general-purpose virtual machines with “dynamic resource management” capabilities that help ensure a lower total cost of ownership. The new VMs, the E2 family, are targeted at workloads that don’t need large instance types, access to graphics processing units, or local solid-state storage. The E2 VMs offer significant savings because they run on a custom CPU scheduler that dynamically maps virtual CPUs and memory to physical CPUs and memory. That makes the VMs more efficient, and customers are billed only for the compute power they actually use. The E2 VMs are supposed to provide performance comparable to Google’s N1 family of VMs, but with average total-cost-of-ownership savings of 31%. They should be a good fit for a broad range of workloads, including web servers, business-critical applications, small-to-medium-sized databases, and development environments. Google said the new E2 VMs are available in beta in eight regions, including Belgium and the Netherlands, with additional regions planned for later.

Business News and Applications

AI Index 2019 assesses global AI research, investment, and impact. Leaders in the AI community came together to release the 2019 AI Index report, an annual attempt to examine the biggest trends shaping the AI industry, breakthrough research, and AI’s impact on society. It also examines trends such as AI hiring practices, private investment, AI research contributions by nation, researchers leaving academia for industry, and how large a role AI plays in specific industries. The report also notes strides in reducing the time it takes to train AI systems and the associated computing costs, two of the biggest hindrances to AI adoption. Three short facts: (1) AI is the most popular area for computer science PhD specialization; in 2018, 21% of graduates specialized in machine learning or AI. (2) From 1998 to 2018, peer-reviewed AI research grew by 300%, while the total number of AI papers on arXiv increased 20-fold between 2010 and 2019. (3) In 2019, global private AI investment exceeded $70 billion, with $37 billion in startup investment, $34 billion in M&A, and $5 billion in IPOs.

Publications

Turing Award winner Yoshua Bengio wants AI systems that can reason, plan, and imagine. An interesting interview on IEEE Spectrum with Yoshua Bengio, a professor at the University of Montreal, on how he sees the future of AI and Deep Learning. Some key statements: (1) “I don’t think we’re anywhere close today to the level of intelligence of a two-year-old child. But maybe we have algorithms that are equivalent to lower animals, for perception.” (2) Attention mechanisms may be the crucial building block for replicating human reasoning and imagination. (3) Causality, generalization, transfer learning, and adaptability will be very important for the progress of machine learning. (4) AI systems need to acquire a model of how the world works, understanding, for example, intuitive physics.

Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant (arXiv:1912.03234v1). Two researchers from Amazon in Seattle describe methods for improving the joke skill of voice-controlled virtual assistants (VVAs) through personalization. VVAs depend to a large degree on the emotional and personalized experience they deliver, with humor being a key component of an engaging interaction. The authors present several methods, based both on traditional NLP techniques and on deep learning models. Online A/B tests show that the NLP-based model consistently outperforms heuristic methods. Offline results suggest further improvements from the DL approaches, but at the cost of added complexity and latency.
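
As a rough illustration of the general idea, and explicitly not the system described in the paper, the sketch below trains a simple classifier on invented user and joke features and then ranks candidate jokes per user by predicted like-probability.

```python
# Illustrative only: a tiny feature-based ranker for joke personalization.
# All feature names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training rows: [user_pun_affinity, joke_is_pun, joke_length_norm]
X = rng.random((500, 3))
# Toy ground truth: pun-loving users like pun jokes, others prefer non-puns.
y = ((X[:, 0] > 0.5) == (X[:, 1] > 0.5)).astype(int)

model = LogisticRegression().fit(X, y)

# Rank three candidate jokes for one user with high pun affinity.
user_pun_affinity = 0.9
candidates = np.array([
    [user_pun_affinity, 1.0, 0.3],   # short pun
    [user_pun_affinity, 0.0, 0.7],   # longer observational joke
    [user_pun_affinity, 1.0, 0.9],   # long pun
])
scores = model.predict_proba(candidates)[:, 1]
print("best candidate:", int(scores.argmax()), "scores:", scores.round(2))
```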

Why scientists need to be better at data visualization. Knowable Magazine, from Annual Reviews, has published a nice essay that covers a number of well-known issues in chart design, e.g. chart choice, the (in)effectiveness of certain chart types, and the challenges of choosing colors to represent data. Quote: “The scientific literature is riddled with bad charts and graphs, leading to misunderstanding and worse. Avoiding design missteps can improve understanding of research.”
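
One of the essay’s themes, color choice, is easy to demonstrate in a few lines of matplotlib; the data below are made up, and the comparison of the rainbow “jet” colormap with the perceptually uniform “viridis” is meant only as a quick illustration.

```python
# Side-by-side heatmaps of the same synthetic data with two colormaps.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
z = np.sin(np.outer(x, x) / 10.0)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.imshow(z, cmap="jet")          # rainbow map: uneven perceptual steps
ax1.set_title("jet: uneven perceptual steps")
ax2.imshow(z, cmap="viridis")      # perceptually uniform, colorblind-friendlier
ax2.set_title("viridis: perceptually uniform")
fig.tight_layout()
plt.savefig("colormap_comparison.png")
```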
