When Google introduced Bard in March 2023, it generated a lot of excitement. It was anticipated to break OpenAI's ChatGPT monopoly and introduce substantial competition.
However, Bard fell short of being the AI titan people were hoping for, and ChatGPT, powered by GPT-4, remains the dominant generative AI chatbot. Now, Google's Gemini is here, but the question remains: is this long-awaited AI model better than ChatGPT?
What is Google's Gemini AI Model?
Gemini is Google's most advanced generative AI model, capable of operating across various data formats such as text, audio, image, and video. It is Google's attempt to create a unified AI model by combining capabilities from its most advanced AI technologies.
Gemini is available in three variants:
Gemini Ultra: The largest and most capable variant designed for handling highly complex tasks.
Gemini Pro: The best model for scaling and delivering high performance across a wide range of tasks, though less capable than Ultra.
Gemini Nano: The most efficient model, designed for on-device deployment, allowing developers to integrate powerful AI into mobile apps or embedded systems.
According to Google's blog, The Keyword, Gemini Ultra outperforms the state of the art on several benchmarks and beats GPT-4 on key ones, claiming an unprecedented 90.0% score on the rigorous MMLU (massive multitask language understanding) benchmark.
[Figure: Google's benchmark data comparing Gemini to OpenAI's GPT-4 model]

Gemini Ultra can also understand, explain, and generate high-quality code in popular programming languages. However, these are benchmarks, and benchmarks don't always tell the full story. So, the crucial question is: how well does Gemini perform in real-world tasks?
How to Use Google Gemini AI
Among the three variants, Gemini Pro is currently available for use. To use Gemini Pro with Bard, go to bard.google.com and sign in with your Google account.
Google states that Gemini Ultra will be rolled out in January 2024, so for now, we'll have to settle for testing Gemini Pro against ChatGPT.
How Gemini Compares to GPT-3.5 and GPT-4
When a new AI model launches, it is customary to test it against OpenAI's GPT models, which are widely considered the state of the art. So, using Bard and ChatGPT, Gemini's abilities were tested in math, creative writing, code generation, and image processing.
Starting with a simple math question, both chatbots were asked to solve: -1 x -1 x -1.
Bard, running Gemini Pro, struggled and needed multiple attempts to get the right answer. ChatGPT, running on GPT-3.5, got it right on the first try.
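For reference, the expected answer is easy to verify by hand or with a one-liner: two negatives multiply to a positive, and the third negation flips the sign back. A minimal check in Python:

```python
# (-1) * (-1) = 1, then 1 * (-1) = -1 -- an odd number of
# negative factors yields a negative product.
result = -1 * -1 * -1
print(result)  # prints -1
```

This is exactly the kind of trivially checkable question where a chatbot's stumble stands out.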
To test Gemini's image interpretation abilities, it was tasked with interpreting popular memes, but it declined, stating it can't interpret images with people in them. In contrast, ChatGPT, running GPT-4V, successfully interpreted such images.
A creative task involved asking Gemini Pro to write a poem about Tesla. The result showed only marginal improvement over its performance in the earlier tests.

At this point, the comparison was switched from GPT-4 to GPT-3.5. When ChatGPT, running GPT-3.5, was asked to write a similar poem, Gemini Pro's attempt came out ahead, although personal preferences may vary.
Is Gemini Better than ChatGPT?
Before Google launched Bard, there were expectations that it would be the competition ChatGPT needed, but it fell short. Now that Gemini is here, Gemini Pro doesn't look like the model that will surpass ChatGPT either.
Google claims that Gemini Ultra will be much better, and we hope it lives up to or exceeds those claims. Until the best version of Google's generative AI can actually be tested, GPT-4 remains the undisputed AI model champion.