
Language models with classic machine learning techniques

Finding the perfect blend - Insight.factset.com


As adoption of Large Language Models (LLMs) accelerates across sectors and industries, it's crucial to avoid succumbing to tunnel vision and remember the decades of innovation and proven capabilities of existing artificial intelligence (AI) techniques.

Given that AI technologies as a whole have the potential to significantly enhance how businesses operate, the key to future success will likely lie in fusing the strengths of an ensemble of methodologies and choosing the right approach for the problem at hand.

Understanding LLMs

To merge the capabilities of old and new techniques, it's essential to first understand the strengths and weaknesses of LLMs.

A recent subset of AI, LLMs are powerhouses that process and produce human-like text by analyzing immense amounts of written content. At their core, these models are large-scale neural networks—a computer model that attempts to mimic the way the human brain processes and learns information. They are trained with diverse examples from books, articles, websites, and other text sources to learn the statistical patterns of human language. That said, an LLM's primary function is to use all that information to “simply” predict the next most probable word given a specific context—the text provided to it as part of the question and the model’s current response.
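The next-word mechanism described above can be illustrated with a deliberately tiny sketch. This is not how an LLM works internally (real models are neural networks trained on billions of documents); it is a toy bigram model over a hypothetical three-sentence corpus, shown only to make the idea of "predict the most probable next word, one word at a time" concrete.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the books, articles, and websites an LLM trains on.
corpus = (
    "the model predicts the next word . "
    "the model learns patterns from text . "
    "the next word depends on context ."
).split()

# Count which word most often follows each word (a bigram model; real LLMs
# condition on far longer contexts via large neural networks).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word` in the toy corpus."""
    return follows[word].most_common(1)[0][0]

def generate(start, length=5):
    """Greedily emit one word at a time, as described above:
    no planning ahead, no revising earlier output."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Note how `generate` commits to each word before choosing the next one; nothing in the loop can go back and repair an earlier choice, which mirrors the limitation discussed below.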

Despite processing text one word at a time, LLMs such as ChatGPT, Claude, Llama, Bard, and others can generate seemingly human responses that are coherent and contextually relevant—for example, they answer questions, summarize texts, predict words, and compose sentences and paragraphs. However, LLMs lack awareness of any objective they are trying to meet and cannot correct errors they produce. Once an LLM generates a word, it moves on to the next, unable to plan ahead or revise what came before.

Several strategies are being developed to overcome these weaknesses. One method is to carefully prompt models to provide step-by-step explanations of their solutions, thereby breaking tasks down into simpler, more manageable ones.
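The step-by-step prompting strategy mentioned above can be sketched as follows. The prompt wrapper is an illustrative assumption, not a fixed recipe, and `call_llm` is a hypothetical stand-in for whichever client library you actually use.

```python
# A minimal sketch of step-by-step ("chain-of-thought" style) prompting:
# rather than asking for the final answer directly, the prompt instructs
# the model to decompose the task into explicit intermediate steps.

def build_step_by_step_prompt(question: str) -> str:
    """Wrap a question so the model is asked to reason in explicit steps."""
    return (
        f"Question: {question}\n"
        "Instructions: Break the problem into smaller steps. "
        "Explain each step on its own line before giving the final result, "
        "ending with a line that starts with 'Answer:'."
    )

prompt = build_step_by_step_prompt(
    "A store sells pens at $2 each. How much do 7 pens cost?"
)
print(prompt)
# In practice you would send `prompt` to your model of choice, e.g.:
# response = call_llm(prompt)  # hypothetical client call
```

Because the model still generates one word at a time, the intermediate steps it writes out become part of its own context, which is what lets this technique partially compensate for the lack of planning described above.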
