Amazon took several more big steps into the gen AI and LLM battlefield with a series of announcements at last week’s AWS Summit New York. It should be no surprise that Amazon will continue to drive AI innovations, given that AWS is the market leader in cloud computing services, with a 31% market share and a $100B annual run rate.
AWS has one simple, obvious strategy. Enterprises host a significant portion of their mission-critical data on AWS, so Amazon is leveraging this data gravity and bringing AI capabilities to their data. This makes AWS an easy platform for large enterprises and unicorns to test and validate gen AI capabilities.
One genuinely strategic offering for enterprises is the openness AWS built into Amazon Bedrock, a fully managed service for developing AI applications. According to AWS, 63% of enterprises are deploying AI models from multiple model providers. The strength of Amazon Bedrock is that it offers a choice of high-performing foundation models, including models from Anthropic, Meta, Mistral AI, and Stability AI. Knowledge Bases for Amazon Bedrock, essentially AWS’s implementation of retrieval-augmented generation (RAG), is AWS’s approach to helping enterprises build contextual gen AI solutions.
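To make the RAG pattern concrete: Knowledge Bases for Amazon Bedrock exposes it through a RetrieveAndGenerate API that pairs a user question with a knowledge base and a generation model. Below is a minimal sketch of that request payload; the knowledge base ID and model ARN are placeholders, and in practice the dict would be passed to boto3’s `bedrock-agent-runtime` client.

```python
# Sketch of a RetrieveAndGenerate request for Knowledge Bases for Amazon
# Bedrock. In practice this dict is passed to boto3's "bedrock-agent-runtime"
# client via retrieve_and_generate(**request); the IDs below are placeholders.

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Pair a user question with a knowledge base and a generation model."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,  # placeholder knowledge base ID
                "modelArn": model_arn,     # placeholder foundation model ARN
            },
        },
    }

request = build_rag_request(
    "What is our PTO carryover policy?",
    kb_id="EXAMPLEKB01",
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
)
```

The service retrieves relevant chunks from the knowledge base and grounds the model’s answer in them, which is the contextual gen AI pattern AWS is selling here.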
5 Key Gen AI Announcements
Less obvious is how AWS is expanding the scope of their target Builders, the people developing applications, machine learning models, data pipelines, and cloud infrastructure. They’re making development easier for developers and data scientists with conversational AI capabilities and application builders. But they are also building no- and low-code app and automation capabilities for business users and technologists with less coding experience or time to code.
Second, they’re offering a semi-open strategy in which:
- Developers can choose from a selection of foundational AI models.
- Conversational assistants connect to enterprise applications and data sources hosted outside of AWS.
- Safeguarding technology can be used as a standalone, general-purpose AI compliance service.
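The model-choice point becomes concrete in Bedrock’s provider-agnostic Converse API: the request shape stays the same across providers, so switching models is largely a matter of changing the model ID. A minimal sketch, assuming example model IDs and boto3’s `bedrock-runtime` `converse` call as the eventual transport:

```python
# Sketch of Bedrock's provider-agnostic Converse request: the message format
# stays the same and only the model ID changes when switching providers.
# In practice: boto3.client("bedrock-runtime").converse(modelId=..., messages=...)

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build one Converse-style request for a given Bedrock model ID."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
    }

# Benchmarking then becomes a loop over candidate model IDs (examples shown):
candidates = [
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "meta.llama3-70b-instruct-v1:0",
    "mistral.mistral-large-2402-v1:0",
]
requests = [
    build_converse_request(m, "Summarize last quarter's support tickets.")
    for m in candidates
]
```

Because only `modelId` varies, a team can run the same prompt suite across providers and compare results, which is the switch-and-benchmark workflow described next.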
At the AWS Summit in New York, Dr. Matt Wood, VP for AI Products, showed during his keynote how AWS customers can switch between models and benchmark results. Five AI offerings and related announcements illustrate AWS’s gen AI strategy.
- Amazon Q is their answer to Copilot for Microsoft 365 and GitHub Copilot. It includes Amazon Q Developer, a virtual assistant for software developers to generate code, modernize legacy code, build autonomous agents, and improve applications’ security and reliability. A second use case is Amazon Q Business, which has connectors to common enterprise applications such as Microsoft OneDrive, Google Drive, Slack, Salesforce, Jira, and ServiceNow and enables business users to prompt for information and automate tasks. AWS reported that BT Group accepted 37% of Amazon Q’s code suggestions, an acceptance rate that outperforms rival code copilots.
- At the summit, Amazon announced the general availability of Q Apps, an Amazon Q Business capability that lets business users describe an app in natural language; Q Apps then builds it and connects it to data based on the users’ roles and permissions. Q Apps brings a no-code interface to business users, supporting them in developing AI-enabled applications.
- Amazon also announced AWS App Studio in preview, a generative AI-powered service that uses natural language to build enterprise-grade applications. AWS App Studio extends development capabilities to a new set of builders, including IT project managers, data engineers, and enterprise architects.
- At the conference, they announced one interesting Amazon Q capability: its integration into Amazon SageMaker Studio. This integration aims to make it easier for data scientists to learn SageMaker features, get sample code, and receive troubleshooting assistance using a conversational assistant. Instead of asking a gen AI to analyze and interpret data, they showed a use case where a data scientist could ask Q to generate analytics code to complete the analysis.
- Guardrails for Amazon Bedrock scans prompts and provides safeguards customized to organizational policies. It provides topic, word, harmful-content, and PII filters. It also helps address hallucinations by analyzing whether an AI’s result is found in the source material and relevant to the user’s question. With the Guardrails independent API, developers can use the safeguarding capabilities to assess any text without invoking a foundation model. AWS reports an 85% reduction in harmful content with Guardrails compared to unprotected models on Amazon Bedrock.
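The independent API AWS is referring to is ApplyGuardrail, which evaluates text against a guardrail’s filters without calling a model at all. Below is a minimal sketch of that request; the guardrail ID and version are placeholders, and the dict would in practice be passed to boto3’s `bedrock-runtime` `apply_guardrail` call:

```python
# Sketch of an ApplyGuardrail request, which checks text against a guardrail's
# topic, word, harmful-content, and PII filters without invoking a foundation
# model. IDs are placeholders; in practice the dict is passed to
# boto3.client("bedrock-runtime").apply_guardrail(**request).

def build_guardrail_request(text: str, guardrail_id: str, version: str,
                            source: str = "INPUT") -> dict:
    """source is "INPUT" for user prompts or "OUTPUT" for model responses."""
    return {
        "guardrailIdentifier": guardrail_id,  # placeholder guardrail ID
        "guardrailVersion": version,
        "source": source,
        "content": [{"text": {"text": text}}],
    }

request = build_guardrail_request(
    "Draft an email that includes the customer's social security number.",
    guardrail_id="gr-EXAMPLE123",
    version="1",
)
```

Because the same guardrail can screen both inputs and outputs of any text pipeline, this is what lets teams treat Guardrails as a standalone compliance service rather than a Bedrock-only feature.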
When I spoke with Sriram Devanathan, GM of Amazon Q Apps and AWS App Studio at AWS, he described App Studio as “the fastest way to build a new enterprise-grade application.” He contrasted it with low code, which he described as “predominantly visual,” whereas AWS App Studio lets a product manager start with just a natural language description of the requirements and iteratively define the features.
Analysis: Key Tenets of AWS’s AI Strategy
AWS has a significant foothold in enterprise data and wants CIOs and business executives to feel comfortable and confident in selecting AWS AI platforms and services. Their strategy includes these three tenets for enterprise leaders:
- Connect AWS-hosted data with SaaS and other data sources to develop enterprise knowledge bases.
- Test, benchmark, and select best-of-breed gen AI models and develop AI applications with guardrails.
- Expand who can build AI apps, from developers to business people, subject matter experts, and non-coder technologists.
For organizations seeking long-term competitive advantages from their IP, especially those already using AWS for databases and data lakes, AWS’s gen AI strategy is a compelling option worth investigating.
Contact me, Isaac Sacolick, if you have questions about your gen AI strategy.




















