image created by DALL-E

Tuning an LLM in Vertex AI

Mark Ryan

--

No-code base model tuning in Vertex AI Studio

With all the excitement about RAG (Retrieval-Augmented Generation) over the last few months, we haven’t heard as much about other methods of getting LLMs to give higher-fidelity responses for a particular use case. Vertex AI makes it easy to tune text and chat models for applications like classification, summarization, and domain-specific queries. In this article, I’ll walk through an example of tuning a text model in Vertex AI in the Google Cloud console and then exercising the tuned model. Note that completing the exercise in this article will cost you between $30 and $50 CAD.

The Basics

The supervised tuning process in Vertex AI requires two things:

  • A base model. Tuning is currently supported on a variety of text-bison models.
  • A tuning dataset in the JSONL format (a small sketch of this format follows the list below).
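
To make the dataset format concrete, here is a minimal sketch (in Python) of how a few tuning examples could be written out as JSONL. The input_text and output_text field names follow the documented record format for supervised tuning of text models; the file name and the sentiment-classification use case are purely illustrative.

    import json

    # Made-up examples; a real tuning dataset should reflect your own use case.
    examples = [
        {"input_text": "Classify the sentiment of this review: The battery died after two days.",
         "output_text": "negative"},
        {"input_text": "Classify the sentiment of this review: Setup took five minutes and it works great.",
         "output_text": "positive"},
    ]

    # Each example becomes one line of the JSONL tuning file.
    with open("tuning_data.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")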

The tuning process learns additional parameters that are applied when you exercise (perform inference on) the tuned model. Combining the base model with these additional parameters produces better inference results for the use case represented by the tuning data.
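
This article demonstrates the flow in the console, but as a rough sketch of the same steps in code, here is what tuning and then exercising a tuned model might look like with the Vertex AI Python SDK (vertexai.language_models). The project ID, bucket path, train step count, and regions are placeholders, not recommendations.

    import vertexai
    from vertexai.language_models import TextGenerationModel

    # Placeholder project and region; replace with your own values.
    vertexai.init(project="my-project", location="us-central1")

    # Start from a supported text-bison base model.
    base_model = TextGenerationModel.from_pretrained("text-bison@002")

    # Launch a supervised tuning job on a JSONL dataset in Cloud Storage.
    base_model.tune_model(
        training_data="gs://my-bucket/tuning_data.jsonl",
        train_steps=100,
        tuning_job_location="europe-west4",
        tuned_model_location="us-central1",
    )

    # After tuning completes, fetch the tuned model and run inference on it.
    tuned_model_name = base_model.list_tuned_model_names()[0]
    tuned_model = TextGenerationModel.get_tuned_model(tuned_model_name)
    response = tuned_model.predict("Classify the sentiment of this review: It arrived broken.")
    print(response.text)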

Selecting a tuning dataset

You can define your own tuning dataset in JSONL and upload it to Google Cloud Storage for the…
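
As a small sketch of that upload step (the bucket and object names here are placeholders for a bucket you already own), the google-cloud-storage client library can copy the JSONL file into Cloud Storage:

    from google.cloud import storage

    # Placeholder bucket and object names; use a bucket in your own project.
    client = storage.Client()
    bucket = client.bucket("my-bucket")
    blob = bucket.blob("tuning/tuning_data.jsonl")

    # Upload the local JSONL file so the tuning job can read it from Cloud Storage.
    blob.upload_from_filename("tuning_data.jsonl")
    print(f"Uploaded to gs://{bucket.name}/{blob.name}")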

--

Mark Ryan

Technical writing manager at Google. Opinions expressed are my own.