Last week I had the pleasure of attending the AWS re:Invent conference in Las Vegas. More than 32,000 attendees and presenters from around the globe were there for the world's largest cloud computing conference. Over five days I walked nearly 68 miles between airport terminals and conference halls, taking in more than 20 talks, keynotes, and events.
This year's event had some emergent themes: machine learning (ML), the Internet of Things (IoT), and serverless and microservices-based architectures. While I won't go into detail about all 30-plus services and features announced during and after the event, I will talk about the ones I think are most important to the work we do here at ISE.
Machine Learning - Infrastructure
AWS CEO Andy Jassy (shown above) began his keynote with rather tame-seeming announcements of new EC2 instance types. There were some gems among them, though, specifically Amazon EC2 Elastic GPUs and the EC2 F1 instances.
Elastic GPUs extend the capabilities of other instance classes by dynamically adding and removing GPU capacity. Previously, achieving hardware acceleration on EC2 meant employing the costly P2-class instances, which, while very performant, don't offer the kind of development flexibility and scalability that Elastic GPUs can.
The all-new F1 instance class is built specifically for FPGAs (field-programmable gate arrays), and it arrives alongside a new AFI (Amazon FPGA Image) format and a section of the marketplace for buying and selling FPGA images. Existing FPGA tools from common vendors are also compatible. These features have implications for many tasks, such as large-scale transcoding and encryption applications; both would be notably useful in building custom ML applications that operate in real time on large datasets.
Managed Machine Learning Services
Jassy's keynote also introduced three new managed ML services! Amazon Lex offers the ability to build Amazon Alexa-like conversational interfaces into your applications. Amazon Polly is a context-aware text-to-speech engine that can decode units, abbreviations, and other text shorthand in a natural-sounding way. Finally, Amazon Rekognition is a managed deep-learning image-analysis service that can perform facial and object recognition. These new services will make it easy to add advanced ML features to your applications without needing to hire expert ML developers!
Internet of Things
There were a lot of companies present at re:Invent telling their stories of IoT device successes, using AWS services for synchronization, processing, and analytics. However, Jassy's keynote also touched on a remarkable new service: Amazon Greengrass. Greengrass is a runtime SDK for IoT devices that extends the serverless compute, state management, and local device messaging features of AWS IoT to the devices themselves. You can write backend processing as AWS Lambda functions, and Greengrass-enabled devices can run those functions locally, without contacting the cloud, even when that means talking to other devices in the vicinity. Devices can function better with intermittent connectivity, reduce overhead by pre-processing data before securely transmitting it to other AWS storage and processing services, and IoT applications can be more agile, since on-device Lambda code can be updated seamlessly. (Image source: AWS Greengrass)
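To make the idea of local, on-device Lambda processing concrete, here is a minimal sketch of the kind of handler Greengrass could run on a device. The event shape, field names, and thresholds are hypothetical, invented purely for illustration; they are not the actual AWS IoT message format.

```python
# Hypothetical example: a Lambda-style handler that a Greengrass-enabled
# device could run locally, pre-processing a batch of raw sensor readings
# before anything is sent to the cloud. The "readings" field and the
# anomaly threshold are invented for this sketch.

def handler(event, context):
    """Summarize local sensor readings and flag outliers for upload."""
    readings = event.get("readings", [])
    if not readings:
        return {"status": "empty", "average": None, "anomalies": []}

    average = sum(readings) / len(readings)
    # Only readings far from the mean are worth transmitting upstream;
    # this kind of on-device filtering is what reduces cloud overhead.
    anomalies = [r for r in readings if abs(r - average) > 10.0]
    return {"status": "ok", "average": average, "anomalies": anomalies}
```

Because the same handler signature works in the cloud, the device-side logic can be updated by deploying a new function version rather than reflashing firmware.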
Serverless and Microservices
While Greengrass is a remarkable fusion of device and serverless technology, serverless architectures stole the show this year. AWS CTO Werner Vogels had some surprises in store for those of us excited about serverless architectures. In the past, the higher-level abstractions of a serverless application were difficult to manage piecewise in AWS. Before the conference, Amazon quietly released the open Serverless Application Model (SAM) specification as an extension to AWS CloudFormation to ease defining, deploying, and monitoring serverless applications. In his Wednesday keynote, Vogels added AWS Step Functions, a visual state-machine builder for orchestrating Lambda functions into serverless applications. In addition, Lambda functions can now run at CloudFront edge locations worldwide with a service called Lambda@Edge, meaning faster responses for the time-critical portions of your applications. Finally, Amazon added the ability to run C# code in Lambda, easing the transition of existing applications to serverless architectures by allowing reuse of existing business logic.
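To show how compact a SAM definition is compared with raw CloudFormation, here is a minimal sketch of a template describing a single API-triggered function. The bucket name and handler path are placeholders I've made up for illustration.

```yaml
# Minimal SAM template sketch: one Lambda function behind an API endpoint.
# The SAM transform expands this into full CloudFormation resources.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python2.7
      CodeUri: s3://my-example-bucket/function.zip   # placeholder bucket
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

The `Transform` line is what marks this as a SAM template; everything under `AWS::Serverless::Function` is shorthand that CloudFormation expands into the function, its IAM role, and the API Gateway wiring.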
With architecture and deployment well covered, we turn to management. Vogels had a few surprises here as well. AWS X-Ray is a new service, currently in preview, for monitoring and debugging serverless and microservice architectures. It lets you trace requests as they pass through the underlying application services, record those traces, and analyze a service map to view the actual behavior of application-level requests. Blox is a new open-source container-scheduling project, with the backing of Netflix. AWS OpsWorks for Chef Automate handles task automation with a fully managed Chef server for configuration management across your deployments. The AWS Personal Health Dashboard gives you a personalized way to monitor service health, while also integrating with push notifications and automation.
These were just the highlights of AWS re:Invent 2016. There were many more announcements, including new database and analytics resources, security features, and integration with hybrid-cloud environments. According to Andy Jassy, we can look forward to a continued stream of substantial new features, with more than 1,000 released in the past year alone. I'll be covering more of those developments in the coming weeks. If there's something you think I missed, please comment below!