Hype, hope or revolution: What is ChatGPT and do you need to care?


The hype is most definitely real. OpenAI's conversational chatbot ChatGPT has in recent weeks provided hope. But is it a true technological revolution? Put simply, the answer is both yes and no, but as with most things, deciphering the current and future value of ChatGPT is far more complex than that. The deeper answer requires a true understanding of the underlying technology.

In terms of innovation, the emergence of ChatGPT is a significant development for AI, but it is much less clear whether it is a significant scientific breakthrough. Generally, the main achievement of ChatGPT has been the popularization of the progress and future promise of AI technologies. And although a significant development, it is only one of many steps forward that we will encounter in the coming years.

As the hype curve peaks and media outlets fill with ChatGPT tricks, it's not surprising that we at Silo AI have spent the past months advising on topics related to generative AI and its significance. Some have been interested in the actual technology behind ChatGPT, while others have sought to understand the magnitude of its effect on various business landscapes.

AI is a broad field, advancing fast and attracting vast innovation throughout industries and society. To that end, we won't try to give an all-encompassing review of what ChatGPT and its underlying elements might mean for a specific business or industry. Nevertheless, here are some general principles to guide you through the excitement, fears, and opportunities regarding ChatGPT.

1. ChatGPT is not a big AI advance in itself, but it has popularized some of the AI advances of the past decade

First of all, it's good to note that generative AI methods like ChatGPT are not creative in themselves; rather, they are good at creating novel content by combining individual elements of the vast data provided to them. And ChatGPT is not the most advanced AI model ever developed, but it is the most user-friendly.

Notably, this user-friendliness is not so much the result of AI model improvements as the result of significant manual labor, design, and engineering efforts. In a purely scientific sense, many state-of-the-art AI models are quite a bit more capable and versatile than ChatGPT.

This contrast also highlights the importance of AI-specific engineering work in deploying AI for real-world use.

2. Capabilities of generic large language models are starting to create value, but most value from such models will be created for specialized use cases using specialized data

Large language models have scaled quickly in their capabilities during past years by ramping up their parameter counts and training datasets.

Going forward, new approaches are emerging as the way to improve the usefulness of these models. In the end, we will not be using the same AI models to evaluate legal contracts, to assess children's speech development, or to guide the work of a factory technician. Just as the underlying AI models will differ, so will the entire digital products built on them, be it a virtual legal assistant, a speech analyzer or a factory maintenance knowledge base.

The reasoning behind this is much the same as for why we don’t own general purpose household robots, but instead fill our homes with swathes of special purpose appliances.

3. This technology will produce significant changes to how we work and live, but the science still requires vast engineering work to make it useful

Turning technology innovations and ideas into clear communication for ourselves or others is a surprisingly large time sink. As throughout history, humans are increasingly becoming supervisors for machines that do the hard work for us. While AI as a science is already reaching formidable capabilities, there is a very long path in front of us to make it work reliably beyond trivial use cases.

In addition to much needed scientific breakthroughs, this will first and foremost require both design and engineering efforts. Fortunately, it’s likely that the ChatGPT-triggered hype in large language models will lead to significant efforts in engineering to scale value creation with generative technology.

4. ChatGPT is far from the most impactful progress happening in AI. And we are still in the very early days of the AI wave

In the 1950s, computers were already quite capable and accordingly caused broad public tumult. But they remained large and genuinely difficult to use. Contrast that with the present day, in which practically everything we do touches a computer somewhere.

AI today is like the computer of the 1950s. AI is already the key innovation in many market-leading products and services, and this change is only getting started. At the same time, looking a few years ahead, it’s highly unlikely that ChatGPT in itself will be the key AI innovation in these market-leading products and services.

For further details and more technical coverage of ChatGPT and large language models, read the more comprehensive FAQ in the appendix below.



There is still a lot more that can be said about ChatGPT and large language models (LLMs). To that end, in addition to the high-level theses, we have compiled a more detailed FAQ with the intent to clarify some aspects around ChatGPT and what you need to know as a business leader.

In the FAQ you will find answers to:

  • What is ChatGPT technically?
  • What is an LLM with reinforcement learning in practice?
  • Is ChatGPT an algorithmic revolution?
  • What can ChatGPT do?
  • Can ChatGPT do math?
  • Is ChatGPT just hype?
  • Are these models just glorified Excel formulas?
  • Where is ChatGPT having the biggest influence right now?
  • Who is going to take advantage of these models in the long run?
  • About AI alignment, what’s stopping the model from proposing something horrible?
  • How long is this pace with LLMs going to last?
  • Should you build your own similar LLM?
  • Should you build your own specialized LLM?
  • What does the legal and regulatory situation look like?
  • What should we humans still do?
  • Will the outcome of ChatGPT be some general AI?
  • What is going to be done with all this AI?
  • What else is going on in AI, other than with natural language?

Throughout this FAQ, we will be using the abbreviation LLM for large language models to refer to models that are large in parameter count, are costly to train, and are trained with publicly available text datasets. In other words, it means ChatGPT and all its ancestors and close relatives.

What is ChatGPT technically?

ChatGPT combines an LLM with reinforcement learning.

To avoid some of the all too common failure cases of LLMs producing questionable answers, the reinforcement learning component has been trained to severely penalize the model for talking about controversial or plainly inappropriate topics.

This is a good example of how reinforcement learning can be used to reliably take into account a myriad of messy special situations that would be practically impossible to handle with hand-written rules.

As another interesting highlight, and as its name might suggest, ChatGPT's design focused not on producing correct answers but on producing a fluent conversation experience. It may therefore sometimes prefer plausible, easy-to-digest answers over truthful ones.

What is an LLM with reinforcement learning in practice?

LLMs are combinations of massive memory stores and a relatively sophisticated machine ability to understand and produce human language. It's noteworthy that this does not entail any ability for independent logical reasoning. LLMs typically rely on transformers, a neural network architecture that learns by tracking relationships in sequential data, such as words in a sentence, and thereby detects the subtle ways words influence and depend on each other.

Transformers and so-called self-supervised training have become the key combination for learning models from large datasets. Before these innovations, training language models required creating large labeled datasets, which was costly and time-consuming; modern LLMs eliminate that need, making the petabytes of text data on the web usable for training.
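As a minimal, purely illustrative sketch (nothing like a real transformer), self-supervision means the training labels come for free from the raw text itself: every word is the prediction target for the words that precede it.

```python
def next_word_pairs(text):
    """Turn raw text into (context, target) training pairs with no labeling."""
    words = text.split()
    return [(tuple(words[:i]), words[i]) for i in range(1, len(words))]

# Every next word acts as a free label for its preceding context.
for context, target in next_word_pairs("the cat sat on the mat"):
    print(context, "->", target)
```

This is why unlabeled web text suddenly became a usable training resource: the dataset and its labels are the same thing.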

Reinforcement learning models are trained, by their developers or sometimes by their users, to behave in a desired way by rewarding and penalizing their various actions. In practice, the model parameters are gradually modified to prefer rewarded actions. The purpose of reinforcement learning in ChatGPT is to interpret human questions and decide how best to answer them.
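A toy sketch of that idea (nothing like OpenAI's actual training setup): keep a preference score per candidate action, sample actions in proportion to the scores, and nudge the scores toward rewarded actions and away from penalized ones.

```python
import math
import random

random.seed(0)

# Toy "model": one preference score per candidate answer.
answers = ["helpful reply", "inappropriate reply"]
prefs = {a: 0.0 for a in answers}
# Toy "human feedback": reward one behavior, penalize the other.
reward = {"helpful reply": 1.0, "inappropriate reply": -1.0}

def sample(prefs):
    """Softmax sampling: higher preference means more likely to be chosen."""
    z = sum(math.exp(v) for v in prefs.values())
    r, acc = random.random(), 0.0
    for a, v in prefs.items():
        acc += math.exp(v) / z
        if r <= acc:
            return a
    return a

for _ in range(200):              # training loop
    a = sample(prefs)
    prefs[a] += 0.1 * reward[a]   # shift parameters toward rewarded actions

# After training, the rewarded answer dominates the sampling distribution.
print(prefs)
```

The real systems do this at vastly larger scale, but the core mechanic is the same: feedback gradually reshapes which outputs the model prefers.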

Is ChatGPT an algorithmic revolution?

Overall, the LLM algorithms haven't really changed significantly in recent years.

It was already evident some years back that LLM performance can scale quite well to much larger training datasets. What has changed since then is the accumulation of these large datasets and huge amounts of engineering work to make the models more approachable and reduce their sharp edges.

What can ChatGPT do?

Consider ChatGPT a hyperactive summer intern. It is very fast, producing vast amounts of output in seconds. However, it also has hugely inflated self-confidence, often producing absolute nonsense that may nevertheless look convincing at first glance. Accepting any of its output without competent human scrutiny is therefore asking for trouble.

Additionally, the model is incapable of true originality, that is, it can only combine elements of what it has seen previously. But even such output can be quite insightful when really complex combinations are made from incredibly large training datasets.

Can ChatGPT do math?

ChatGPT has no direct understanding of how numbers work; it just sees them as a bunch of letters put next to each other. But when it has seen the sequence “1+1=2” in its training data a couple of million times, it's not too hard for it to guess the right answer.

So again, it gives right answers until suddenly, it doesn’t. However, it’s useful to notice how much AI models can achieve, even in problems where they should be clueless.
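A toy illustration of this point (a frequency counter, not how ChatGPT is actually built): a model that merely memorizes which answer string followed which question string gets frequently seen sums right without any concept of numbers, and has nothing to fall back on for unseen ones.

```python
from collections import Counter, defaultdict

# Hypothetical "training data": frequent correct sums plus a little noise.
training_data = ["1+1=2"] * 1000 + ["2+2=4"] * 1000 + ["1+1=3"] * 7

# Count which answer string followed each question string.
counts = defaultdict(Counter)
for example in training_data:
    question, answer = example.split("=")
    counts[question][answer] += 1

def predict(question):
    """Return the most frequently seen continuation, or a shrug."""
    if question in counts:
        return counts[question].most_common(1)[0][0]
    return "?"  # never seen, and there is no arithmetic to fall back on

print(predict("1+1"))    # memorized, despite a little noise
print(predict("17+25"))  # unseen, so the "model" is clueless
```

Real LLMs generalize far better than this caricature, but the failure mode is the same in kind: fluent pattern recall, not calculation.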

Is ChatGPT just hype?

Yes and no.

We are definitely at the peak of the hype curve at the moment with the general public’s expectations being strongly overblown. At the same time, we shouldn’t underestimate the progress that has been made.

Just a few years ago, only a handful of experts would have expected us to reach this level of performance during this decade. The recent LLMs are already useful as such for some casual use cases, and they are also enabling further advances towards more serious applications.

Are these models just glorified Excel formulas?

No. LLMs and their current capabilities would have been impossible without the research achievements of the past 10 years. At the same time, the related AI research is many decades away from stabilizing.

Expect the rapid progress to continue in the future.

Where is ChatGPT having the biggest influence right now?

The current gold rush is strongly focused on the copywriting / SEO / marketing engagement industry. Some estimates suggest that up to 80% of popular ChatGPT tweets pitch ChatGPT-based automation for these tasks.

Who is going to take advantage of these models in the long run?

Quite a few professional groups.

Translators have already worked as supervisors and editors for machine translation for a while, even with legal texts. Early experiences in automating creation of software program code are showing some gains in productivity. Education will be strongly impacted in several ways.

About AI alignment, what’s stopping the model from proposing something horrible?

Nothing but endless manual labor that tries to figure out what could go wrong and tell the model not to do it.

This is another example of why AI is a scale game, with successful deployments requiring either a race to the biggest budgets or the use of proprietary data specific to the use case. In the broader scheme of things, most public debate around AI alignment has focused on philosophical brain teasers.

At its core however, AI alignment is primarily a mathematical challenge, with surprisingly difficult problems on how to encode human preferences for AI models to comply with.

How long is this pace with LLMs going to last?

Less than one might expect.

The capabilities of LLMs are a combination of training data size, quality, and diversity, and the models' ability to efficiently turn training data into sensible behavior, with the latter slowly improving through research. Regarding training data, research already estimates that we are getting close to running out of high-quality text data to train with.

However, lowering the bar on data quality (which research will eventually find a way to work with) may expand the available data by say 100x, thus extending the current LLM progress trajectory by another decade or so. At the same time, LLM research is focused on making the models use data more efficiently.

Should you build your own similar LLM?

This is a scale game, and we stand by our old prediction that only the cloud majors (Amazon, Microsoft, Google in the West) will succeed in building a business around generic language models.

At the same time, the technology itself and most data assets used in it are readily available. ChatGPT is therefore not a unique invention, and one should expect all cloud majors to implement similar language capabilities in their offerings.

Should you build your own specialized LLM?

In recent years, there has been significant research into adapting large generic models like LLMs for special use cases. In many cases, this has dramatically lowered the cost of building large specialized models, making them feasible for more targeted business cases.
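A deliberately tiny, hypothetical sketch of the adaptation idea: keep the expensive pretrained model frozen and train only a small add-on on specialized data. Here the "pretrained model" is a single frozen weight and the add-on is one trainable parameter; real adapter-style methods apply the same principle to models with billions of parameters.

```python
BASE_WEIGHT = 1.0   # pretrained and frozen: the generic model predicts y = 1.0 * x

def specialized_target(x):
    return 3.0 * x  # the specialized domain actually behaves like y = 3x

adapter = 0.0       # the only trainable parameter
lr = 0.01
data = [(x, specialized_target(x)) for x in [1.0, 2.0, 3.0]]

for _ in range(500):                        # cheap fine-tuning loop
    for x, y in data:
        pred = (BASE_WEIGHT + adapter) * x  # frozen base plus trainable adapter
        grad = 2 * (pred - y) * x           # d/d(adapter) of squared error
        adapter -= lr * grad                # only the adapter is updated

print(round(BASE_WEIGHT + adapter, 2))      # -> 3.0
```

The effective model converges to the specialized behavior while the base weight never changes, which is exactly why adaptation is so much cheaper than training from scratch.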

What does the legal and regulatory situation look like?

As a highlight, the upcoming EU AI Act requires that users be notified when they are interacting with an AI system, and some websites are already implementing this guideline.

Elsewhere, there are already wide-ranging debates about people’s data being used in training these models without their permission, with many websites either forbidding the use of their data, letting users decide for themselves, or plainly selling all user data to LLM builders.

Expect to hear news of massive class action lawsuits.

What should we humans still do?

In previous waves of machines replacing human labor through sheer jumps in productivity, humans have taken on the role of supervision and decision making. Similarly, as LLMs truly mature during this decade, humans will shift to the role of editor rather than author.

Will the outcome of ChatGPT be some general AI?

No, not even if it could, which it won’t.

Consider the early “robots are coming” hype in the 1950s when people expected our homes to be soon full of household robots, taking care of all our chores. We never got those robots, but what we did get during that time were washing machines. And now our homes are full of cheap appliances from toasters to smart locks, each specialized in one task.

The same goes for AI models. It would be prohibitively expensive and complicated to produce a single model that works well across highly varied use cases. Instead, we will end up with a wide range of models, each superior to human performance in its specific use case.

What is going to be done with all this AI?

Don’t hold your breath.

We expect rapid discovery, progress, and deployment of AI use cases to continue for up to 50 years, assuming no serious adverse surprises in improving the cost of computing. 

Thus a valid comparison to AI's current state wouldn't be far from the first two-seat commercial airplane or the first building-sized commercial computers. And we are not alone in this view.

The US Department of Defense's strategy report expects AI to change the world at least as much as, and most probably more than, airplanes, computers, biotech, or nuclear weapons.

That is going to take a while.

What else is going on in AI, other than with natural language?

Quite a lot. Picking just some highlights here:

  • Computer vision is already a highly productive and reliable technology. In most cases the remaining challenges center on building solid engineering around the algorithms and on efficiently scaling development and operations without blowing up the budget. Another rising opportunity in computer vision is developing multiple models without starting from scratch every time, while also enforcing consistency between the models' outputs.
  • Reinforcement learning has been quietly pulling off a ninja act for many years, gradually making significant progress in small steps. We generally expect reinforcement learning to be the eventual source of the greatest disruption from AI, but that will still take time and improvements, especially on the engineering side. Interestingly, some very recent (weeks-old) scientific publications hint at larger improvements again, and we naturally incorporate these advances in our work where they accelerate progress towards our clients' objectives.
  • Some other really broad areas that are now getting media attention and that we could delve into are generative data (it's so much more than DALL-E or Midjourney) and how AI is disrupting traditional engineering in physics, chemistry, and biology (these use cases easily touch more than half the world's GDP). We'll be happy to discuss these in detail and what they could mean for you and your business.

As said and as usual, in every specific use case and application one needs to pick the right balance between things like AI speed-to-market, reliability, performance, cost to develop and operate, and plain accuracy.

For further details on the topic or how to create industry-leading AI-driven products, we’d be happy to continue the dialogue.



Want to discuss how Silo AI could help your organization?

Get in touch with our AI experts.
Peter Sarlin, PhD
CEO & Co-Founder
Silo AI
+358 40 572 7670

Peter Sarlin is the CEO and co-founder of Silo AI, one of Europe’s largest private AI labs, and a Professor of Practice in applied ML and AI at Aalto University. He has spent his career at the intersection of academia and industry, deploying state-of-the-art AI into products of large corporations and startups. Peter has a PhD in applied machine learning and a pedigree as research professor/associate from top universities like Imperial College London, London School of Economics, Stockholm University, IWH Halle, University of Technology Sydney and the University of Cape Town, and has previously worked for the ECB and IMF, among others.

Niko Vuokko, PhD
CTO, CBO Smart Things
Silo AI

Dr. Niko Vuokko, Chief Technology Officer, specializes in fast-growth data-driven B2B. His expertise spans product, strategy, technology, and business development, with a key passion for aligning sales and product. He runs Silo AI's Smart Things business unit and heads Silo AI's offering. Niko holds an olympiad medal in mathematics and a PhD in data science, and has co-founded, advised, and sat on the board of several digital startups.

