ChatGPT, the generative AI tool launched by OpenAI, accumulated more than 100 million users within two months of its launch – faster than previous consumer apps such as TikTok and Instagram.

ChatGPT was only launched on 30 November 2022 – but people have already built hundreds, if not thousands, of applications on top of it…

Behind the seemingly sudden leap of ChatGPT were years of development in computing power, data availability, natural language processing algorithms and model architecture. The confluence of these factors made it possible to structure, train and fine-tune LLMs with hundreds of billions of parameters.

To understand what generative AI is, we first need to understand AI

Interestingly enough, AI has been evolving – and inspiring filmmakers – for almost 100 years.

Today, AI is embedded in almost every part of our lives – including social media, search, digital voice assistants, smart home devices and even email.

“Traditional” AI can only classify and analyse the data it is given to detect patterns and draw conclusions. It does not generate new or original content.

When I ask Siri “How can Siri change the future of mankind?”, this is Siri's answer:

What makes generative AI (used by ChatGPT) different? 

On the other hand, this is the answer from ChatGPT:

Generative AI generates new and original content.

It uses deep learning models to create new and original content from the large data sets they have been trained on. The content generated includes, but is not limited to, text, images, soundtracks and videos.
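To make the idea concrete, here is a toy illustration of the same principle (not a deep learning model – the corpus and bigram approach below are deliberately simplistic stand-ins): a generative model learns a distribution over what comes next given what came before, then samples from that distribution to produce text that did not exist in the training data verbatim.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus; real LLMs are trained on billions of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count next-word transitions: a crude stand-in for what deep models
# learn with billions of parameters.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Sample new text word by word from the learned distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # no known continuation
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

The output is "generated" rather than retrieved: each word is sampled, so different seeds produce different sentences, all statistically consistent with the training data.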

Generative AI entered the mainstream in 2022, starting with the public launches of image generators (such as DALL-E 2, Stable Diffusion and Midjourney), followed by the chatbot ChatGPT – a big morale boost for the AI industry.

But how did generative AI come about? It was the culmination of a decade of development (and setbacks). 

In 2014, Ian Goodfellow introduced the Generative Adversarial Network (GAN), a deep learning-based model. Over the following decade, tech giants – Google, Microsoft and Meta – invested increasing money and effort into building and training models.

In 2017, Google published the Transformer paper (“Attention Is All You Need”), bringing scalability to natural language processing – the base that allows others to create Large Language Models (LLMs) consisting of billions of parameters.
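The Transformer's core operation is scaled dot-product attention, which lets every token weigh every other token when building its representation – and, crucially, parallelises well across long sequences. A minimal pure-Python sketch of that single operation (real models use optimised tensor libraries, learned projections and many attention heads):

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Q, K, V are lists of vectors (one per token). Each output row is a
    weighted average of the value vectors, weighted by query-key similarity.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

For example, a query closely aligned with the first key pulls the output almost entirely toward the first value vector – the model "attends" to the most relevant token.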

These LLMs only emerged in the last few years, with OpenAI launching GPT-1 in 2018, followed by Google and Meta, which launched their own models, BERT and RoBERTa respectively, in the following years. In 2019, Microsoft invested US$1 billion in OpenAI after the launch of GPT-2, which was followed by GPT-3 and, recently, GPT-4. With the launch of GPT-4, is Google losing the race to OpenAI?

The emergence of LLMs, and the capabilities demonstrated by products built on them, are fundamentally changing the landscape of AI – and perhaps much more.

Find out more about generative AI in our new report – “The future, by ChatGPT” – where we also discuss key concepts behind models like GPT, their potential impact on various industries, and the implications for organisations and individuals.

Momentum Academy will also be hosting an online sharing session about the report on Thursday, 6 April, 3PM – 4PM SGT. You can register for the session here.


Thanks for reading The Low Down (TLD), the blog by the team at Momentum Works. Got a different perspective or have a burning opinion to share? Let us know at [email protected].