Contextual AI and Google Cloud bring GenAI to the enterprise

(L) Amanpreet Singh | Founder, CTO & (R) Douwe Kiela | Founder, CEO (Photo: Business Wire) | Contextual AI and Google Cloud

Contextual AI, the “AI that minds your business” firm, has entered into a strategic partnership with Google Cloud, naming it the company’s preferred cloud provider for building, running and scaling its growing business and for training its large language models (LLMs) for the enterprise.

In June, Contextual AI announced $20m in seed funding to work on its next generation of language models, with the aim of providing fully customizable, trustworthy, privacy-aware AI that lets companies focus on the work that matters. The company selected Google Cloud for its leadership and open approach to generative AI, as well as the comprehensiveness of its compute infrastructure, which is purpose-built for AI/ML.

The company will build and train its LLMs through Google Cloud’s extensive portfolio of GPU VMs, particularly A3 VMs and A2 VMs, which are based on the NVIDIA H100 and A100 Tensor Core GPUs, respectively. Contextual AI will also utilize Google Cloud’s custom AI accelerators, Tensor Processing Units (TPUs), to build its next generation of LLMs.

Built on Google Cloud, Contextual Language Models (CLMs) will craft responses that are tailored to an enterprise’s data and institutional knowledge, resulting in higher accuracy, better compliance, fewer hallucinations and the ability to trace answers back to source documents.

Additionally, Contextual AI’s LLMs take data privacy into consideration while providing customization and efficiency. Co-founder Douwe Kiela helped pioneer the retrieval augmented generation (RAG) technique that underpins Contextual AI’s text-generating AI technology. RAG lets enterprise customers build custom LLMs on top of their own data: at generation time, the model retrieves relevant passages from designated sources and uses them as context for its response, so answers stay grounded in enterprise knowledge while the underlying data remains secure.
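The retrieve-then-generate pattern behind RAG can be illustrated with a minimal sketch. This is not Contextual AI’s implementation: it uses a toy bag-of-words similarity in place of learned embeddings, and the document names and prompt wording are hypothetical. The core idea is the same, though: rank a company’s own documents against a query, then assemble the top hits into the context the language model answers from.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; production RAG systems use
    # learned dense embeddings from a neural encoder instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return ranked[:k]

def build_prompt(query, retrieved):
    # Ground the generation step in the retrieved passages, keeping
    # source labels so answers can be traced back to documents.
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieved)
    return (
        "Answer using only the context below and cite sources.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical enterprise documents kept in-house; only the retrieved
# excerpts (not the whole corpus) reach the model's prompt.
docs = [
    {"source": "hr-policy.pdf", "text": "Employees accrue 20 vacation days per year."},
    {"source": "it-guide.pdf", "text": "VPN access requires two-factor authentication."},
]

top = retrieve("How many vacation days do employees get?", docs, k=1)
print(build_prompt("How many vacation days do employees get?", top))
```

In a full pipeline, the assembled prompt would be sent to the LLM, which generates an answer constrained to, and attributable to, the retrieved sources.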

Mark Lohmeyer, VP and GM, compute and ML infrastructure, Google Cloud, said: “At Google Cloud, we believe that enabling the next generation of generative AI services requires a purpose-built, AI-optimized infrastructure stack, spanning hardware, software and services.

“We’re proud to offer customers unparalleled flexibility and performance and excited to support Contextual AI’s world-class team of AI innovators as they build next generation LLMs for the enterprise on Google Cloud.”

Douwe Kiela, CEO, Contextual AI, said: “Building a large language model to solve some of the most challenging enterprise use cases requires advanced performance and global infrastructure.

“As an AI-first company, Google has unparalleled experience operating AI-optimized infrastructure at high performance and at global scale which they are able to pass along to us as a Cloud customer.”