The research could make it possible to teach old AI models new tricks without expensive fine-tuning or retraining sessions.
Artificial intelligence (AI) researchers at Google Research and Google DeepMind have developed a method by which a large language model (LLM) can be augmented with other language models.
This addresses one of the biggest outstanding problems with LLMs by allowing developers to imbue existing models with new abilities without having to start from scratch or pay for costly retraining or fine-tuning.
According to the Google Research team, augmenting an LLM with another language model both improves performance on existing tasks and enables new tasks that neither model could achieve on its own.
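One way to compose two frozen models along these lines is to learn a small set of cross-attention parameters that let the base model's hidden states attend over the augmenting model's hidden states, leaving both models' original weights untouched. The NumPy sketch below illustrates that idea only; the dimensions, weight matrices, and random activations are all illustrative stand-ins, not details from the research itself.

```python
import numpy as np

rng = np.random.default_rng(0)

seq, d_anchor, d_aug = 4, 8, 6  # illustrative sequence length and hidden sizes

# Stand-ins for frozen hidden states from the base ("anchor") model
# and the augmenting model at some layer.
h_anchor = rng.normal(size=(seq, d_anchor))
h_aug = rng.normal(size=(seq, d_aug))

# The only new, trainable parameters in this sketch: projections that
# map both models' states into a shared attention space.
W_q = rng.normal(size=(d_anchor, d_anchor))
W_k = rng.normal(size=(d_aug, d_anchor))
W_v = rng.normal(size=(d_aug, d_anchor))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: anchor queries attend over the augmenting model's states.
q = h_anchor @ W_q
k = h_aug @ W_k
v = h_aug @ W_v
attn = softmax(q @ k.T / np.sqrt(d_anchor))

# Residual add: the anchor's representation is enriched, not overwritten,
# so the frozen base model keeps its original behavior as a fallback.
fused = h_anchor + attn @ v

print(fused.shape)  # (4, 8)
```

Because only the small projection matrices would be trained, composing models this way avoids touching either model's original parameters, which is the practical appeal the article describes.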