“Generative AI” is the new kid on the block, able to generate text, images, or other media in response to prompts, and everyone’s talking about it.
Leading the 2023 AI race are the “Big Five”: Alphabet (Google), Amazon, Apple, Meta (Facebook), and Microsoft.
With news of Amazon finally joining the race, many wonder why these tech giants are so interested in generative AI. In this article, we’ll dive deeper into why this new technology is so sought after and how it will affect you.
Why are the Big Five so invested in generative AI?
ChatGPT’s rapid success is only the tip of the AI iceberg, with much more to be rolled out over the coming years. The Big Five are all in on generative AI, but perhaps not for the same reasons. Let’s take a closer look…
Microsoft first invested in OpenAI in 2019, and it’s safe to say the gamble paid off. The recently released GPT-4 has stunned the world, even going as far as passing the bar exam.
ChatGPT has already been deployed on Microsoft’s search engine, Bing. Microsoft is also offering access to GPT-3.5, the LLM behind ChatGPT, and encouraging developers to create their own AI models.
With ChatGPT now available on the Azure cloud platform, it’s easy to see why Amazon feels threatened by Microsoft and OpenAI. As the global cloud computing market leader, Amazon seems adamant about retaining its title.
In April, Amazon announced it would release two Large Language Models (LLMs) via its cloud platform, Amazon Web Services. These models let customers build their own AI applications.
Unlike its competitors, Google’s parent company, Alphabet, seems to be pursuing generative AI for a different reason. Fearing that sophisticated AI chatbots could undermine its search engine, Google leadership has issued a “code red.”
To rival ChatGPT, Google has been testing its new AI chatbot, “Apprentice Bard.” Alphabet confirms that while it’s developing exciting new AI models, Google’s reputation is too valuable to risk on a premature launch.
Mark Zuckerberg, CEO of Meta, has been quick to dismiss any rumors that Meta has lost interest in the metaverse in favor of generative AI. Still, investors are wary, as Meta appears to be investing heavily in AI servers rather than the metaverse.
Among other plans, Zuckerberg mentioned the development of AI features on WhatsApp and Facebook. The social media giant intends to create “AI agents” to help users with everything from simple personal goals to handling customer service for businesses.
The computing giant Apple has been more or less quiet in the generative AI race, and it hasn’t said whether it’s building its own LLM or adopting an existing one.
Its existing AI assistant, Siri, has long struggled to understand different accents and pronunciations. Luckily for users, Siri is expected to be the first Apple product to feature cutting-edge generative AI.
3 cybersecurity risks accelerated by AI
While generative AI strives to simplify your day-to-day life, it isn’t all sunshine and rainbows. A few AI-based cybersecurity risks are already causing trouble.
#1. Deepfake videos
You’ve undoubtedly seen one by now: deepfakes are videos digitally altered by AI. The most common form replaces a celebrity’s face, making them appear to say things they never did.
What began as harmless memes has quickly evolved into a serious threat. Just look at this viral deepfake from BuzzFeed, mimicking former US president Barack Obama.
Would you have spotted this as a fake?
#2. Chatbot phishing
An unexpected side effect of ChatGPT’s success is how effectively it can be used to scam people. The chatbot is so good at sounding human that victims are convinced they’re messaging a real, compassionate person.
Scammers have often given themselves away through clumsy language and a lack of empathy. With convincing bots like ChatGPT, even a low-skilled cybercriminal can pull off a successful scam.
#3. Voice cloning
With only a few recorded sentences, generative AI can now reproduce your voice. If that sounds terrifying, that’s because it is. In fact, scammers have already taken advantage of this, stealing millions through fake ransom threats.
In March, grandparents Ruth and Greg received a distressing call. Their “grandson” claimed he was in prison with no money or phone. They scrambled to gather money before a bank manager recognized the scam and informed the couple.
3 protective measures to safeguard against AI threats
Whether you’re facing scams or dodging malware, a Virtual Private Network (VPN) helps keep your browsing secure. Hold out for VPN Cyber Monday for a bargain on cybersecurity essentials.
#1. Safe words and questions
Particularly against voice-cloning scams, agreeing on safe words or questions in advance is key. Ask a question that only your loved one would be able to answer. Generative AI is smart, but it’s not psychic.
#2. Email authentication
Many scammers still rely on classic email phishing. Email authentication standards like SPF, DKIM, and DMARC let receiving servers verify that a message really comes from the domain it claims. Most email providers check these automatically, but it pays to treat any message that fails these checks with suspicion.
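If you’re curious what those checks look like under the hood, here’s a minimal sketch in Python that reads the Authentication-Results header your mail server attaches to incoming messages. The sample message and addresses are hypothetical; in practice you’d feed in a raw message fetched from your inbox.

```python
import email
from email import policy

# Hypothetical raw message for illustration. The Authentication-Results
# header is added by the *receiving* mail server, not the sender.
RAW_MESSAGE = b"""\
From: support@example.com
To: you@example.net
Subject: Account notice
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=example.com;
 dkim=fail header.d=example.com;
 dmarc=fail header.from=example.com

Please verify your account immediately.
"""

def auth_checks(raw_bytes):
    """Parse SPF/DKIM/DMARC results out of the Authentication-Results
    header. Returns e.g. {'spf': 'pass', 'dkim': 'fail', ...}."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    header = msg.get("Authentication-Results", "")
    results = {}
    for clause in header.split(";"):
        clause = clause.strip()
        for check in ("spf", "dkim", "dmarc"):
            if clause.startswith(check + "="):
                # "dkim=fail header.d=example.com" -> "fail"
                results[check] = clause.split("=", 1)[1].split()[0]
    return results

checks = auth_checks(RAW_MESSAGE)
failed = [name for name, verdict in checks.items() if verdict != "pass"]
print(checks)   # {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}
print(failed)   # ['dkim', 'dmarc']
```

A message failing DKIM or DMARC isn’t proof of a scam, but combined with an urgent, money-related request, it’s a strong signal to verify through another channel.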
#3. AI detection tools
As generative AI advances, so do AI detectors. When ChatGPT soared in popularity, the market was flooded with new AI detection software. Detectors like Originality.ai perform well on most texts.
Unfortunately, no AI detection tool has been proven to work 100% of the time, so you must do further checks and trust your spidey senses.
The generative AI race is in full swing, with each of the Big Five vying to lead the pack.
Yet new AI-powered products bring new dangers, so stay prepared and secure to avoid becoming the victim of an AI scam.