The emergence of Large Language Models (LLMs)

Historically, conversational AI (e.g. Jarvis from Iron Man) has been thought of as the most difficult kind to create, since natural language (an umbrella term for all human languages, including sign and speech) is context-bound and full of nuance. Iron Man is just one of the many AI-inspired movies of the last 100 years.

Humans are the only ones that can truly understand and process natural language. So, naturally, scientists modelled our brains and simulated them in a computer program: a neural network capable of taking natural language inputs and processing them into outputs. Some examples of LLMs from big tech companies are GPT-4 (OpenAI), BERT (Google), and LLaMA (Meta AI).

What are LLMs?

LLMs are machine learning models capable of Natural Language Processing (NLP): they are trained on huge amounts of text data (usually from the internet and books) using deep learning algorithms.

As a result, they are able to produce human-like responses to natural language inputs, achieving some sort of intelligence. Current LLMs have widespread applications in areas such as content creation, customer service, translation, sentiment analysis and many more. The common theme across these applications is the automation of work, boosting productivity for anyone swamped in a sea of data.

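For a concrete flavour of one of those applications, here is a minimal sentiment analysis sketch. It assumes the open-source Hugging Face transformers library (our choice for illustration; it is not mentioned in the report), which wraps a pre-trained language model behind a single function call.

```python
# A minimal sketch of LLM-powered sentiment analysis.
# Assumes: pip install transformers torch
from transformers import pipeline

# Loads a small pre-trained sentiment model (downloaded on first run).
classifier = pipeline("sentiment-analysis")

reviews = [
    "The delivery was quick and the support team was lovely.",
    "I waited two weeks and the parcel arrived damaged.",
]

for review in reviews:
    result = classifier(review)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```

A few lines like these can triage thousands of customer reviews, which is exactly the kind of productivity boost described above.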

However, they are not quite ready to operate without human oversight or additional inputs. Take ChatGPT, for example, the LLM-powered tool that everyone has tried or at least heard about. OpenAI itself gave the disclaimer that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Ironically, humans often do the same, yet we do not come with such a disclaimer.

In a nutshell, LLMs do not actually understand our natural language inputs. Researchers have instead found a cleverly close but still imperfect way for the computer to “understand”: turning the inputs into numbers (vectors, to be exact). This allows the computer to cluster similar words in the same region, and sometimes even find unexpected links between words.

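To make the “words as vectors” idea a little more concrete, here is a minimal sketch using the open-source sentence-transformers library and its all-MiniLM-L6-v2 embedding model (our assumption for illustration; the article does not name any particular tool). Each word becomes an array of numbers, and words with related meanings end up with vectors pointing in similar directions, which cosine similarity measures.

```python
# A minimal sketch of turning words into vectors and comparing them.
# Assumes: pip install sentence-transformers
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small pre-trained embedding model
words = ["king", "queen", "banana"]
vectors = model.encode(words)  # one vector (an array of numbers) per word

def cosine(a, b):
    # Close to 1.0 = vectors point the same way; near 0 = unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("king vs queen :", round(cosine(vectors[0], vectors[1]), 2))  # relatively high
print("king vs banana:", round(cosine(vectors[0], vectors[2]), 2))  # noticeably lower
```

Clustering words by how close their vectors sit is what lets the computer group similar words into a region, and occasionally surface unexpected links between them.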
In case you found this explanation wanting, fret not! This is just the layman-friendly description; we will be covering the inner workings of LLMs in detail in the next article, so stay tuned!

Download the report and join our debate

These are just some of the insights covered in our new report, “The future, by ChatGPT”. Download your copy here. Momentum Academy will also be doing a Zoom briefing on this report on Thursday, 6 April, 3PM – 4PM SGT. You can register for the briefing here.

If you would like this sharing and other insights for your leadership team, you are welcome to contact Momentum Academy ([email protected]).


Thanks for reading The Low Down (TLD), the blog by the team at Momentum Works. Got a different perspective or have a burning opinion to share? Let us know at [email protected].