Transform 2019: Hear from the movers and shakers in AI

Artificial intelligence is transforming business and offering significant strategic and practical opportunities, from natural language processing and smart speech to IoT and edge computing.

While the tech has become democratized, allowing companies of any size to reap the benefits, some companies and innovators are leading the charge — and they’ll be at this year’s Transform event in San Francisco on July 10 and 11. Join us to get in the room with them, and look over their shoulders at how it’s done. They’ll offer inspiring and practical takeaways — ones crucial to your business.

Here’s a look at a few of those thought leaders:

Andrew Moore, head of Google Cloud AI and former dean of Carnegie Mellon University

In an increasingly competitive cloud market, Google is positioning itself as the go-to for businesses from startups to enterprises, with dozens of new AI-powered products and services that are easy to access even for non-data scientists. Consider Google Cloud Platform, which offers AI creators a new, shared, end-to-end environment where teams can test, train, and deploy models, from the germ of an AI strategy all the way to launch. Google Cloud is also making a bid to differentiate itself from competitors by offering small businesses and startups that depend on a cloud provider's technology the option to run their models on premises or on GCP.

Plus there are new classes of AutoML, a collection of premade retail and Contact Center AI services, and AI Platform, a collaborative model-building tool. Developers with little coding experience can use AutoML, while AI Platform is aimed at data scientists — part of Google's attempt to deliver AI tools across the spectrum of experience and bring useful AI to all industry verticals. Other conversations at Transform from independent and brand executives will help put all this in context.
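
For readers curious what that looks like in practice, here is a rough sketch of querying a deployed AutoML text model from Python. It assumes the google-cloud-automl client library (2.x) and an already-trained, deployed model; the project and model IDs are placeholders, and exact class names vary by client version, so treat it as an illustration rather than an official example.

```python
# Hypothetical sketch: query a deployed AutoML text model from Python.
# Assumes the google-cloud-automl client library (2.x) and valid credentials;
# the project and model IDs below are placeholders.
from google.cloud import automl

project_id = "my-project"       # placeholder
model_id = "TCN1234567890"      # placeholder AutoML text model ID

prediction_client = automl.PredictionServiceClient()
model_name = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

payload = automl.ExamplePayload(
    text_snippet=automl.TextSnippet(content="I love this product", mime_type="text/plain")
)

response = prediction_client.predict(request={"name": model_name, "payload": payload})
for result in response.payload:
    print(result.display_name, result.classification.score)
```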

Keynote speaker: Jerome Pesenti, Facebook’s head of AI

Artificial intelligence is central to Facebook’s business, and is incorporated into everything from its News Feed to its ultra-targeted advertising placements. Under Jerome Pesenti, head of AI at Facebook, the company is turning even more attention to long-term research projects.

Facebook developed PyTorch, one of the most popular AI frameworks and a competitor to the Google-led TensorFlow. Facebook AI has also produced new innovations for game developers, recently announcing both a system capable of extracting controllable characters from real-world videos, which could revolutionize game design, and an AI that can learn to navigate a fantasy text-based game, a notable advance in natural language processing.
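
For readers who haven't touched it, here is a minimal sketch of what defining and training a model in PyTorch looks like; the toy data and tiny architecture are purely illustrative.

```python
# Minimal, illustrative PyTorch training loop on random data.
import torch
import torch.nn as nn

# Toy data: 256 examples, 10 features, binary labels (illustrative only).
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()      # autograd computes gradients
    optimizer.step()     # Adam updates the weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```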

Another leap in NLP: a model that can convert one singer's voice into another's with just 5 to 30 minutes of singing data, thanks in part to an innovative training scheme and data augmentation technique.

For advertisers, the company is looking to harness augmented reality to increase the interactivity, engagement, and richness of customer experiences with Spark AR Studio. The AR creation tool is now available on both Windows and Mac, and is designed to make it easier to build powerful AR apps that leverage personalization and engagement.

Keynote speaker: Swami Sivasubramanian, Amazon AI vice president

If you’re looking to train machine learning models at massive scale while keeping costs down, Amazon’s AWS offers all kinds of AI products for developers and business executives. Amazon hopes you’ll tap its SageMaker service, which uses streaming algorithms to keep the required computing power in check while delivering comparable performance. The more data that gets fed through SageMaker’s streaming algorithms, the more training the system does, but the computational cost of doing so remains constant over time rather than ballooning as the dataset grows.

That means Amazon has built a system that can handle extremely large datasets at global scale with the same accuracy as more traditional methods of training AI systems. That matters both for Amazon’s own AI projects and for its customers’ needs.
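
The underlying idea of a streaming algorithm, where each new example costs the same to process no matter how much data has already been seen, can be illustrated with a toy online learner. This is a conceptual sketch, not SageMaker's actual implementation.

```python
# Conceptual sketch of a streaming (online) learner: each incoming example
# triggers a constant-cost update, so total work grows linearly with the
# stream and memory stays fixed. Not Amazon's implementation.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(10)     # model weights: fixed-size state, regardless of stream length
lr = 0.01

def update(w, x, y, lr):
    """One SGD step for linear regression: O(d) time, independent of data seen so far."""
    pred = w @ x
    grad = (pred - y) * x
    return w - lr * grad

# Simulated endless stream of (x, y) pairs.
true_w = rng.normal(size=10)
for step in range(100_000):
    x = rng.normal(size=10)
    y = true_w @ x + 0.1 * rng.normal()
    w = update(w, x, y, lr)

print("recovered weights close to truth:", np.allclose(w, true_w, atol=0.1))
```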

Companies need to invest in NLP technologies to keep up with the revolution happening in search and engagement, and Amazon AI is keeping pace with leaders like Google in that space. Scientists in Amazon’s Alexa division recently used cross-lingual transfer learning, a technique in which an AI system trained in one language is retrained in another, to adapt an English-language model to German, and in a new paper they expanded the scope of that work to transfer an English-language model to Japanese.
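
In broad strokes, cross-lingual transfer reuses most of a model trained on a high-resource language and fine-tunes it on the target language. The sketch below shows that general pattern with a generic PyTorch intent classifier; it is an illustration of the idea, not Amazon's system.

```python
# Illustrative cross-lingual transfer pattern in PyTorch: keep the encoder and
# classifier trained on the source language, swap in new embeddings for the
# target-language vocabulary, and fine-tune. Not Amazon's actual model.
import torch.nn as nn

class IntentClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden=128, n_intents=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_intents)

    def forward(self, token_ids):
        _, h = self.encoder(self.embed(token_ids))
        return self.head(h[-1])

# Pretend this model was already trained on English data.
en_model = IntentClassifier(vocab_size=30_000)

# Transfer: new embeddings for the Japanese vocabulary, reuse encoder and head.
ja_model = IntentClassifier(vocab_size=8_000)
ja_model.encoder.load_state_dict(en_model.encoder.state_dict())
ja_model.head.load_state_dict(en_model.head.state_dict())

# Optionally freeze the transferred encoder during early fine-tuning.
for p in ja_model.encoder.parameters():
    p.requires_grad = False
```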

Hilary Mason, GM, Machine Learning at Cloudera and founder of Fast Forward Labs

Hilary Mason, one of the highest-profile women in data science and a GM at Cloudera, said earlier this year that the biggest trend in AI is the ethical implications of AI systems. Companies need wider awareness of the necessity of putting some kind of ethical framework in place, and both technical and business leaders need to accept accountability for creating products without bias.

Also, in the same way you’d expect business managers to be minimally competent using spreadsheets to do simple modeling, you’ll need to expect them to be minimally competent in recognizing AI opportunities in their own products.

Mason also thinks more and more businesses will need to build structures to manage multiple AI systems. A single system can be managed with hand-deployed custom scripts, and cron jobs can manage a few dozen. But when you’re managing tens or hundreds of systems in an enterprise with security, governance, and risk requirements, you need professional, robust tooling, and a shift from pockets of competency, or even brilliance, to a systematic way of pursuing machine learning and AI opportunities. (Here too, an array of companies, including LinkedIn, Uber, Airbnb, and Lyft, will talk at Transform about how this is done.)

Greg Brockman and Ilya Sutskever, OpenAI cofounders

Gaming has long been a benchmark in AI research, and OpenAI has been leading the way in creating AI that can play some of the most complicated games better than humans. Built on deep reinforcement learning, the technology arguably represents early steps toward a general artificial intelligence, and one that can be applied outside of games.

Indeed, the cofounders will be discussing the latest AI behind NLP and text generation, something many businesses are working on for their customer-engagement messaging apps. It all stems from the excitement OpenAI has generated with its work in gaming, though: Its bot was thrown into one of the biggest rings yet. Between April 18 and April 21, the company conducted a massive-scale experiment to test how it fared against the best Dota 2 players.

OpenAI Five had a victory rate of 99.4%, and no one was able to find the kinds of easy-to-execute exploits that human-programmed game bots suffer from.

A bot that can navigate complex strategy games is a milestone because it begins to capture aspects of the real world. It’s a step toward an AI that can handle complexity and uncertainty, offering a clearer path toward developing autonomous systems that outperform humans at the most economically valuable work.
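
To give a feel for the shape of deep reinforcement learning, here is a minimal policy-gradient (REINFORCE) sketch on the classic CartPole task, assuming the pre-0.26 Gym API. OpenAI Five works at vastly larger scale with far more sophisticated training, so this is only a toy illustration of the paradigm.

```python
# Toy REINFORCE agent on CartPole (classic Gym API, pre-0.26).
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(200):
    obs = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done, _ = env.step(action.item())
        rewards.append(reward)
    # Reward-to-go returns, then a single policy-gradient step.
    returns = torch.tensor([sum(rewards[t:]) for t in range(len(rewards))],
                           dtype=torch.float32)
    loss = -(torch.stack(log_probs) * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```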

Kevin Scott, Microsoft CTO

The modern machine learning industry is built not just on advances in compute power but also on open source projects. It’s that open foundation that will enable leaps forward in machine intelligence, and tech giant Microsoft is leading the charge with its recent Azure Machine Learning and Azure Cognitive Services announcements.

Microsoft is working in a ton of areas relevant to the enterprise, including AI on the edge for robotics and manufacturing companies. It has also made FPGA chips generally available for machine learning model training and inferencing. Moreover, the Open Neural Network Exchange (ONNX) plays to Microsoft’s strengths, because it allows Microsoft customers to use other, non-Microsoft technologies, heralding a new era of openness. ONNX now supports Nvidia’s TensorRT and Intel’s nGraph for high-speed inference on Nvidia and Intel hardware. This comes after Microsoft joined the MLflow Project and open-sourced the high-performance inference engine ONNX Runtime.

The interoperability that ONNX brings to the mix of different frameworks, runtimes, compilers, and other tools enables a larger machine learning ecosystem. FPGA chips have been used for years to run all of Azure’s data encryption and compression acceleration tasks. You can now build custom models using TensorFlow, PyTorch, Keras, or whatever framework you prefer, and then hardware-accelerate them on any GPU or FPGA.
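
To make that interoperability concrete, here is a small sketch of exporting a PyTorch model to ONNX and running it with ONNX Runtime; the model is a stand-in, and the execution provider you choose depends on the hardware available.

```python
# Sketch: export a toy PyTorch model to ONNX and run it with ONNX Runtime.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
dummy = torch.randn(1, 10)

# Export to the framework-neutral ONNX format.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# Run inference with ONNX Runtime; swap in a GPU- or TensorRT-backed
# execution provider for hardware acceleration where available.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.rand(1, 10).astype(np.float32)})
print(outputs[0].shape)  # (1, 2)
```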

Microsoft is also now known as one of the largest employers of open source project contributors according to the 2018 Octoverse Report released last fall by GitHub, which Microsoft acquired last year.

These are just a handful of the speakers coming to Transform, our flagship event for business executives on how to achieve results with AI. Register now to network with the AI leaders who are implementing real-world, practical, and successful AI strategies.
