AI weekly (03/2020)

My selection of news on AI/ML and Data Science

+++ Google’s AI language model Reformer can process the entirety of novels +++ World Economic Forum launches toolkit to help corporate boards build AI-first companies +++ Apple has quietly acquired low-power AI startup Xnor.ai for about $200M +++ Up to two billion times acceleration of scientific simulations with deep neural architecture search +++ Researchers propose system that taps AI to see hidden objects around corners +++

Breakthrough — Or So They Say

Google’s AI language model Reformer can process the entirety of novels. One of the prime challenges for a language model is understanding the context of the surrounding content. To address this, Google has introduced a new model called Reformer, which can handle context windows of up to 1 million words using just 16GB of memory. The company built it to overcome the limitations of its earlier Transformer model. The power of the Transformer comes from so-called attention, the process by which it considers all possible pairs of words within the context window to understand the connections between them. For a text of 100K words, this would require assessing 100K x 100K word pairs, or 10 billion pairs at each step, which is impractical. Reformer combines two crucial techniques to solve the attention and memory-allocation problems that limit the Transformer’s application to long context windows: it uses locality-sensitive hashing (LSH) to reduce the complexity of attending over long sequences, and reversible residual layers to use the available memory more efficiently. The code and examples are available on GitHub.
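
To make the scaling argument concrete, here is a toy numpy sketch of the LSH idea (not Google’s implementation): hash the shared query/key vectors with a few random projections and let positions attend only within their bucket, which cuts the number of pairs that need to be scored.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, n_buckets = 1024, 64, 16

# Toy query/key vectors (Reformer shares queries and keys).
qk = rng.normal(size=(seq_len, d_model))

# Random-projection LSH: the sign pattern of a few random projections
# assigns each position to one of n_buckets buckets.
n_planes = int(np.log2(n_buckets))
planes = rng.normal(size=(d_model, n_planes))
buckets = ((qk @ planes) > 0).astype(int) @ (2 ** np.arange(n_planes))

# Full attention scores every pair of positions: seq_len ** 2 comparisons.
full_pairs = seq_len ** 2

# LSH attention only scores pairs that share a bucket.
lsh_pairs = sum(int((buckets == b).sum()) ** 2 for b in np.unique(buckets))

print(f"full attention pairs: {full_pairs}")
print(f"LSH-bucketed pairs:   {lsh_pairs}  (~{full_pairs / lsh_pairs:.0f}x fewer)")
```

Similar vectors tend to land in the same bucket, so most of the attention weight that full attention would have assigned is preserved while the pair count drops sharply.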

Tools and Frameworks

Business News and Applications

World Economic Forum launches toolkit to help corporate boards build AI-first companies. This resource for boards of directors consists of: an introduction; 12 modules intended to align with traditional board committees, working groups and oversight concerns; and a glossary of artificial intelligence (AI) terms. Topics covered range from competitive and brand strategy, through cybersecurity and technology strategy, to operational implementation and change/leadership aspects. The toolkit was co-created by the WEF, Accenture, IBM and others.

Apple has quietly acquired low-power AI startup Xnor.ai for about $200M. According to several sources cited by GeekWire and featured on SiliconANGLE, Apple has acquired Xnor.ai, a Seattle startup specializing in low-power, edge-based artificial intelligence tools, which began its life as a spinout from the Allen Institute for Artificial Intelligence. The Allen Institute is a nonprofit research center based in Seattle that operates an in-house startup incubator; it was launched in 2014 with a $125 million grant from the late Microsoft Corp. co-founder Paul Allen. Xnor.ai has developed a software platform that reduces the amount of power it takes to run machine learning models. It works by converting the mathematical calculations an AI algorithm uses to process data into binary operations, the simplest type of operation a processor can perform. Xnor.ai’s binary-based representation of a given mathematical problem thus takes fewer steps to execute than if it were described in standard code, which has the end result of improving power efficiency. The company notched several notable advances in 2019, including the development of a standalone AI chip capable of running for years on solar power or a coin-sized battery, and the debut of an AI-enabled device that can autonomously monitor grocery shelves. There are many ways Apple could put Xnor.ai’s technology and know-how to use. One obvious application is improving the power efficiency of the system-on-chips inside the iPhone and iPad, which include machine learning accelerators to boost application performance.
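
The trick behind this kind of binary network (popularized by the XNOR-Net line of work that Xnor.ai grew out of) can be sketched in a few lines: weights and activations are reduced to +1/-1 plus a scaling factor, so a dot product collapses to an XNOR-and-popcount. The snippet below is an illustrative toy, not Xnor.ai’s product code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Full-precision weight and activation vectors.
w = rng.normal(size=256)
x = rng.normal(size=256)

# Binarize to {-1, +1} and keep a scaling factor (mean absolute value),
# as in the XNOR-Net formulation.
def binarize(v):
    return np.sign(v), np.abs(v).mean()

wb, alpha = binarize(w)
xb, beta = binarize(x)

# On hardware, a {-1, +1} dot product reduces to XNOR + popcount on bit
# vectors: number of matching signs minus number of mismatching signs.
w_bits, x_bits = wb > 0, xb > 0
matches = np.count_nonzero(w_bits == x_bits)
binary_dot = matches - (len(w) - matches)

approx = alpha * beta * binary_dot  # scaled binary approximation
exact = float(w @ x)                # full-precision reference
print(f"exact dot: {exact:.2f}  binary approx: {approx:.2f}")
```

Replacing multiply-accumulate hardware with bitwise logic is what makes this approach attractive for battery- and solar-powered edge devices.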

Publications

Up to two billion times acceleration of scientific simulations with deep neural architecture search (arXiv:2001.08055). Quote: “Computer simulations are invaluable tools for scientific discovery. However, accurate simulations are often slow to execute, which limits their applicability to extensive parameter exploration, large-scale data analysis, and uncertainty quantification. A promising route to accelerate simulations by building fast emulators with machine learning requires large training datasets, which can be prohibitively expensive to obtain with slow simulations. Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training data. The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters. Our approach also inherently provides emulator uncertainty estimation, adding further confidence in their use.”
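
As a rough illustration of the emulator idea (not the paper’s actual super-architecture or search algorithm), the sketch below stands in a cheap analytic function for the slow simulator and runs a tiny architecture search over a handful of MLP layouts using only a small training set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-in for a slow scientific simulation (a cheap analytic function here).
def expensive_simulation(params):
    return np.sin(3 * params[:, 0]) * np.exp(-params[:, 1] ** 2)

# Only a small training set is affordable when each simulation run is slow.
X = rng.uniform(-1, 1, size=(200, 2))
y = expensive_simulation(X)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Toy "architecture search": evaluate a few candidate layer layouts and keep
# the one that generalizes best on held-out simulation runs.
candidates = [(32,), (64,), (32, 32), (64, 32), (64, 64, 32)]
best_score, best_arch = -np.inf, None
for arch in candidates:
    model = MLPRegressor(hidden_layer_sizes=arch, max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    score = model.score(X_val, y_val)  # R^2 on validation data
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture: {best_arch}, validation R^2 = {best_score:.3f}")
```

Once trained, the selected emulator answers a query in microseconds, which is where the headline speedups over running the original simulation come from.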

Researchers propose system that taps AI to see hidden objects around corners. Can sensors see around corners and behind obstacles in real time? As it turns out, yes. A study by researchers at Stanford, Rice University, Princeton, and Southern Methodist University, published in the journal Optica, proposes a system capable of producing around-the-bend images at high resolution and speed. Quote from the abstract: “We use spectral estimation theory to derive a noise model for NLoS (non-line-of-sight) correlography. We use this model to develop a speckle correlation-based technique for recovering occluded objects from indirect reflections. Then, using only synthetic data sampled from the proposed noise model, and without knowledge of the experimental scenes nor their geometry, we train a deep convolutional neural network to solve the noisy phase retrieval problem associated with correlography.”
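
The phase retrieval problem the network is trained to solve can be made concrete with a toy numpy sketch, independent of the authors’ experimental setup: speckle correlations give roughly the hidden object’s autocorrelation, which fixes the magnitude of its Fourier transform but not the phase, and the object must be recovered from that incomplete, noisy information.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy hidden "object": a small sparse binary image.
obj = np.zeros((32, 32))
obj[8:12, 10:20] = 1.0
obj[20:24, 5:9] = 1.0

# Wiener-Khinchin: the object's autocorrelation (what NLoS correlography can
# estimate from speckle) is the inverse FFT of |F(object)|^2.
F = np.fft.fft2(obj)
autocorr = np.fft.ifft2(np.abs(F) ** 2).real

# Add measurement noise, as real indirect reflections would.
autocorr_noisy = autocorr + rng.normal(scale=0.05 * autocorr.max(),
                                       size=autocorr.shape)

# From the noisy autocorrelation we can estimate the Fourier magnitude...
mag_est = np.sqrt(np.clip(np.fft.fft2(autocorr_noisy).real, 0, None))

# ...but the Fourier phase is lost; recovering `obj` from `mag_est` alone is
# the noisy phase retrieval problem the paper trains a CNN to solve.
print("mean Fourier-magnitude error:", np.abs(mag_est - np.abs(F)).mean())
```

Training the reconstruction network purely on synthetic samples from the derived noise model is what lets the system generalize to real scenes it has never observed.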

Tutorials

See also