Updated: May 15
This is a continuation of a previous article outlining the invention and deployment of ChatGPT, which can be found here. Start with that article before reading this one.
Unless you’ve been offline for the past couple of months, you’ve probably seen that OpenAI’s ChatGPT has been updated to its 4th version. If you thought the 3rd version was useful, you haven’t seen anything yet. Here’s a brief overview:
Increased input size
The old model could handle 3,000 words of input from the user. The new model can now process 25,000 words!
Increased parameter count
The old model used 175 billion parameters in its training. The new model is reported to use trillions of parameters.
Image input
In addition to text, GPT-4 can use images as input. In the example from the demo video, the user drew a simple layout of a website on a napkin. From a scan of that napkin, GPT-4 was able to create a fully functional version of that website. Granted, at the time of writing, this feature has not yet been made available to the public.
GPT-4 is already being used in the wild
With these kinds of new abilities, it's no wonder that multiple organizations are taking advantage of the advanced tools now at their disposal.
Some SOC (Security Operations Center) teams are using ChatGPT's functionality to automate the triage of incoming alerts
Duolingo is using it to power up its learning platform
The government of Iceland is using it to preserve the Icelandic language
DoNotPay claims ChatGPT can help you fight your case in court
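To make the SOC use case above concrete, here is a minimal sketch of how an alert might be turned into a triage prompt for a chat model. The alert fields and the `build_triage_messages` helper are hypothetical illustrations, not any vendor's actual implementation, and the API call itself is left commented out since it requires an OpenAI API key:

```python
# Sketch: preparing a SIEM alert for chat-model triage (hypothetical example).

def build_triage_messages(alert: dict) -> list[dict]:
    """Convert a raw alert into a chat prompt asking for severity and a next step."""
    system = (
        "You are a SOC analyst assistant. Classify the alert's severity "
        "(low/medium/high) and suggest one next step."
    )
    user = (
        f"Source: {alert['source']}\n"
        f"Rule: {alert['rule']}\n"
        f"Details: {alert['details']}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Hypothetical alert, for illustration only.
alert = {
    "source": "192.0.2.7",
    "rule": "Multiple failed SSH logins",
    "details": "42 failed attempts in 5 minutes from a single host",
}
messages = build_triage_messages(alert)

# With an API key configured, the request would look roughly like:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4", messages=messages)
# print(resp.choices[0].message.content)
```

In practice the model's reply would feed back into the SOC workflow, for example by attaching the suggested severity to the ticket for an analyst to confirm.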
With all these organizations adopting AI into their platforms and business plans, we are seeing "blue ocean strategy" unfold before our eyes. AI has almost become the newest “space race.” However, this rapid evolution and adoption of AI into organizations without a map or guide can be treacherous, and it hasn’t gone unnoticed.
The Biggest Names in Technology have Stepped In
In an open letter, some of the biggest names in tech and AI have called for an "immediate pause" on the training of AI systems for at least six months. Signatories include modern-AI pioneer Yoshua Bengio, Skype cofounder Jaan Tallinn, and even Twitter CEO Elon Musk. While the call isn’t for a permanent halt to development, it asks everyone to consider that language models like GPT-4 can already compete with humans at several tasks, which could lead to the automation of jobs and the spread of misinformation. The letter also argues that AI systems could eventually replace humans and remake civilization.
Our Take - Where do we go from here?
As it stands right now, while innovation and enthusiasm for adopting new technology are commendable, AI is operating and evolving at an alarming rate with very little urgency to ensure it follows already-established ethical guidelines. That would be like installing a new application on your endpoint device because it’s been proclaimed the “next big thing” without testing it for proper functionality and alignment with organizational goals. While there isn’t anything inherently wrong with quick adoption of the newest trending technology, it can run against what would be considered best practices.
There must be a balance between innovation and a structured approach to implementation. Without a proper ethical framework or best practices in place, our information security infrastructure could turn into a free-for-all, influenced by AI in the hands of nefarious threat actors. Thankfully, a handful of AI-specific frameworks are already available from multiple organizations, including the NIST AI Risk Management Framework, ISO 42001, IEEE, and the Responsible AI Institute, among a growing list of others found here.
There Have to Be Some Guardrails in Place
Regardless of whether technology like ChatGPT feels to you like a major technological breakthrough on par with the invention of the smartphone or like the beginning of an AI takeover (think Skynet from The Terminator), machine learning as a whole shows no sign of slowing down. If this technology continues to evolve at its current rate or faster, regulations and frameworks should be seriously considered, and quickly, to keep all relevant checks and balances in place.
Responsible and ethical AI frameworks currently in development will help ensure AI remains “human-first.” At ARORA Solutions, we pride ourselves on taking a human-centric approach as an audit and technology company. We are already supporting organizations as they prepare for responsible and ethical AI adoption within their technology ecosystems.