ChatGPT and generative AI have sparked diverse opinions. Engineers and entrepreneurs view the technology as a new frontier for innovation, while social scientists and journalists have raised concerns, with one prominent author describing it as an “information warfare machine.”
I believe this technology has great potential. As with any new innovation, it is difficult to fully anticipate the consequences, and we can expect some challenges and setbacks along the way. Overall, however, I believe the outcome will be positive.
What Is ChatGPT?
This technology, along with others like it, is often described as a “language machine.” It uses statistics, reinforcement learning, and supervised learning to model words, phrases, and sentences. It does not possess actual “intelligence” in the sense of understanding what a word means, but it can effectively answer questions, compose articles, summarize information, and more.
ChatGPT and similar engines are “trained” through programming and reinforcement to replicate writing styles, avoid certain types of conversations, and learn from users’ questions. The more advanced models can improve their answers as more questions are asked and retain what they learn for future use.
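To make the “language machine” idea concrete, here is a deliberately toy sketch of the statistical core: a bigram model that simply counts which word tends to follow which, then predicts the most frequent follower. (Real systems like GPT use neural networks over tokens, not word counts, so this is an illustration of the principle only; the corpus and function names are my own.)

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows another -- the crude
    statistical idea behind a 'language machine'."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative training corpus.
corpus = (
    "the model predicts the next word "
    "the model learns from text "
    "the model predicts the next token"
)
model = train_bigram_model(corpus)
print(predict_next(model, "model"))  # -> "predicts"
```

The difference between this toy and ChatGPT is scale and architecture, not the basic goal: both are predicting likely continuations of text learned from data.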
The concept of chatbots is not new; Siri, Alexa, Olivia, and others have existed for over a decade. However, the capabilities of GPT-3.5, the latest version, are impressive. I have asked it questions such as “What are the best practices for recruiting?” and “How do you build a corporate training program?” and it provided relatively good responses. While the answers were basic and not entirely accurate, they will improve with more training.
It has a wide range of abilities. It can answer historical questions such as “Who was the president of the US in 1956?”, it can write code (Microsoft CEO Satya Nadella believes that 80% of code will eventually be generated automatically), and it can compose news articles, summarize information, and more.
I recently spoke with a vendor who is utilizing a variation of GPT-3 to generate automatic quizzes from courses and serve as a “virtual Teaching Assistant.” This highlights the possible applications of this technology.
How Can ChatGPT and Similar Technologies Be Used?
Before discussing the market potential, I want to explain why I think this technology will be so significant. These systems are “trained and educated” by the database of information they analyze. The GPT-3 system has been trained on the internet and on a set of highly validated data, so it can answer a wide range of questions. However, this also means the system is somewhat “naive,” since the internet is a mix of marketing, self-promotion, news, and opinion. It can be difficult to determine what is accurate; searching the web for health information on a specific condition, for example, can yield misleading results.
Google’s DeepMind is rumored to have an alternative to GPT-3, known as Sparrow, which was developed with “ethical guidelines” from the beginning. According to my sources, these include rules such as “do not provide financial advice,” “do not discuss race or discriminate,” and “do not give medical advice.” It is unclear whether GPT-3 has the same level of ethical safeguards, but it is likely that OpenAI, the company behind GPT-3, and Microsoft, one of its major partners, are working on them.
My point is that while “conversation and language” are crucial, some highly educated individuals can still be unpleasant. In the same way, chatbots like ChatGPT need high-quality, extensive content to develop robust intelligence. It is acceptable for a chatbot to work “reasonably well” if you are only using it to overcome writer’s block. But if you want it to work dependably, it must be fed accurate, in-depth, and extensive domain data.
An analogy is Elon Musk’s highly publicized autonomous driving software. Personally, I don’t want to drive, or even share the road with, cars that are 99% safe; even 99.9% is not good enough. Similarly, if the underlying data is flawed and the algorithms are not constantly evaluating reliability, this technology could become a “disinformation machine.” One of the most experienced AI engineers I know warned me that ChatGPT is likely to be biased because of the data it typically consumes.
For example, it would not be hard for the Russians to use GPT-3 to build a chatbot about “United States Government Policy” and point it at every conspiracy-theory website. If they made the chatbot appear American, many people might use it. The source of information is therefore crucial.
AI engineers are aware of this, and many believe that “more data is better.” OpenAI’s CEO, Sam Altman, thinks these systems will “learn” past inaccurate data as long as the data set is large enough. While I understand this perspective, I tend to disagree. I think the most valuable business applications of OpenAI’s technology will come from directing these systems at refined, smaller, trustworthy, and in-depth databases.
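The idea of grounding a chatbot in a small, trusted corpus can be sketched very simply: before answering, retrieve the best-matching document from a curated knowledge base and answer from that, rather than from the open internet. This is a minimal word-overlap retriever; production systems would use embeddings and a real vector store, and the sample “HR knowledge base” below is entirely hypothetical.

```python
from collections import Counter

def tokenize(text):
    """Lowercase and strip basic punctuation from each word."""
    return [w.strip(".,?!").lower() for w in text.split()]

def overlap_score(query, doc):
    """Count words shared between a query and one document."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def retrieve(query, docs):
    """Return the document from the trusted corpus that best matches the query."""
    return max(docs, key=lambda doc: overlap_score(query, doc))

# Hypothetical curated HR knowledge base (illustrative content only).
corpus = [
    "Structured interviews improve recruiting outcomes.",
    "Onboarding programs should pair new hires with mentors.",
    "Compliance training must be refreshed annually.",
]
print(retrieve("What are best practices for recruiting?", corpus))
# -> "Structured interviews improve recruiting outcomes."
```

The design point is that the answer space is bounded by the curated corpus: the system can only surface vetted content, which is exactly the "smaller, trustworthy database" argument made above.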
From the demonstrations I have witnessed over the years, the most impressive solutions have been those focused on a specific domain. For example, Olivia, an AI chatbot developed by Paradox, can effectively screen, interview, and hire McDonald’s employees. Another vendor has created a bank-compliance chatbot that functions as a “Chief Compliance Officer,” and it works effectively.
As I mentioned in the podcast, imagine if we developed an AI with access to all of our HR research and professional development. It would be a “virtual Josh Bersin,” and could potentially be even more intelligent than I am. (We are currently developing a prototype.)
Last week, I saw a demonstration of a system that used existing course materials in software engineering and data science to automatically generate quizzes, a virtual teaching assistant, course outlines, and even learning objectives. This type of work usually requires significant cognitive effort from instructional designers and subject matter experts. If we point the AI at our own content, we can make that expertise available to a wider audience while experts and designers train it in the background.
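One of the simplest forms of automatic quiz generation is the cloze (fill-in-the-blank) question: blank out a known domain term in a course sentence and ask the learner to supply it. The sketch below shows the idea with a hypothetical keyword list; real systems like the one demonstrated would use a language model to pick the terms and phrase the questions.

```python
import re

def make_cloze_quiz(sentences, keywords):
    """Turn course sentences into fill-in-the-blank questions by
    blanking out the first matching domain keyword in each sentence."""
    quiz = []
    for sentence in sentences:
        for kw in keywords:
            if re.search(rf"\b{re.escape(kw)}\b", sentence, re.IGNORECASE):
                question = re.sub(rf"\b{re.escape(kw)}\b", "_____",
                                  sentence, flags=re.IGNORECASE)
                quiz.append((question, kw))
                break
    return quiz

# Hypothetical course sentences and instructor-supplied keywords.
course = [
    "Gradient descent minimizes a loss function step by step.",
    "A decision tree splits data on the most informative feature.",
]
keywords = ["gradient descent", "decision tree"]
for question, answer in make_cloze_quiz(course, keywords):
    print(question, "->", answer)
```

Even this crude version illustrates why the technique scales: once the content exists, question generation is mechanical, which is exactly the instructional-design effort the demonstrated system automates.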
Imagine the numerous business applications: recruiting, onboarding, sales training, manufacturing training, compliance training, leadership development, and even personal and professional coaching. If the AI is focused on a reliable domain of content, which most companies possess, it can efficiently solve the problem of delivering expertise at a large scale.
Where Will This Market Go?
As with any new technology, early adopters may encounter challenges. Although ChatGPT appears groundbreaking, innovators will continue to improve, expand, and refine it quickly. Many venture capital firms are likely investing heavily in startups in this field, so there will be a lot of competition in the future.
My intuition is that companies like OpenAI and Microsoft will likely face competition from many other players such as Google, Oracle, Salesforce, ServiceNow, Workday, etc. Therefore, most major vendors will increase their expertise in AI and machine learning. If Microsoft integrates OpenAI APIs into Azure, thousands of innovators will develop domain-specific offerings, new products, and creative solutions on that platform. However, it’s still too early to predict the outcome, and it’s likely that industry-specific and domain-specific solutions will be more successful.
The number of potential areas of application is vast: leadership development, fitness coaching, psychological counseling, technical training, customer service, and many more. This is why, despite the early stage of this market, I still believe the potential is “huge.” (Recently, I attempted to get assistance with PayPal through their chatbot and became so frustrated that I decided to close my account.)
I see this technology as similar to the early days of “mobile computing.” Initially, it was viewed as an add-on to our corporate systems. But it grew, expanded, and matured, and now most digital systems are designed mobile-first. Entire technology stacks are built around mobile, and we use it to analyze consumer behavior, markets, and customers. The same thing will happen with this technology. Imagine having access to all the questions customers ask about your products. The potential is truly vast.
As I mention in the podcast, many jobs will be transformed. I recently analyzed the jobs that will be directly impacted by ChatGPT (editors, reporters, analysts, customer service agents, QA engineers, etc.) and found that of the 10.3 million open jobs, about 8% (roughly 800,000) will be immediately affected. These jobs will not disappear, but they will be improved and modified by these systems over time. (And new jobs, such as “chatbot trainer,” will be created.)
There is much more to explore on this topic, so I invite you to become a member of the Josh Bersin Academy or Corporate Member to delve deeper. If you have your own experience or are developing something interesting, we would love to hear about it.
Let’s move forward with optimism and consider this as one of the most promising advancements in our future, while also taking steps to ensure it doesn’t spiral out of control.