Mon. Dec 23rd, 2024
5 Things CEOs Need To Know About ChatGPT And Generative AI

Observations from the Fintech Snark Tank

If you’ve attended any industry conferences this year, you know that ChatGPT and Generative AI, and artificial intelligence in general, dominated the agenda.

However, much of the content is preachy and empty: "AI will be disruptive," for example, or "AI is a game changer."

CEOs (and other senior executives, for that matter) need a more concrete view of the impact these new technologies will have and how to move forward with them. I hope to provide that here.

Here are five things CEOs should know about ChatGPT and generative AI.

1) Cost reduction is not the purpose of generative AI

In the early stages of implementing Generative AI tools and technologies, the focus should be on improving productivity, especially speeding up processes.

Estimates of the productivity gains vary by role and position type, ranging from 20% to as much as 80%. There are some examples of companies completely (or almost completely) replacing employees with generative AI, but they are few and far between, and the results aren't all that great.

The business impact of Generative AI is not to replace talent, but to accelerate human productivity and creativity. Charles Morris, Microsoft’s Chief Data Scientist for Financial Services, said: “Think of Gen AI not as an automation tool, but as a co-pilot. The human does it, and the co-pilot helps the human do it faster.”

From running marketing campaigns to developing websites to writing code to creating new data models, the benefit of these generative AI use cases is not just lower costs, but shorter time to market.

2) The risks of large language models need to be assessed

ChatGPT may be the best-known large language model (LLM) application today (others, such as Meta's LLaMA, are gaining ground), but almost every major technology vendor is developing an LLM or has recently released one.

By the end of the decade, expect to rely on anywhere from 10 to 100 LLMs, depending on your industry and company size. There are two things you can bet on: 1) technology vendors will claim to have generative AI built into their products when in fact they don't, and 2) technology vendors won't tell you about the weaknesses and limitations of their LLMs (if they even know them).

As a result, companies must evaluate the strengths, weaknesses, and risks of each model themselves. Chris Nichols, Director of Capital Markets at South State Bank, said:

"There are specific criteria that companies should apply to each model. The risk group should track these models and assess their accuracy, potential for bias, security, transparency, data privacy, audit approach and frequency, and ethical considerations such as intellectual property infringement and the creation of deepfakes."
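The criteria Nichols lists lend themselves to a simple scorecard. Below is a minimal sketch of what one might look like; the class name, the 1-to-5 scale, the unweighted average, and the approval threshold are all illustrative assumptions, not a standard from the article.

```python
from dataclasses import dataclass, fields

@dataclass
class LLMRiskAssessment:
    """Hypothetical scorecard for the assessment criteria Nichols lists.
    Each dimension is scored from 1 (high risk) to 5 (low risk)."""
    model_name: str
    accuracy: int = 1
    bias: int = 1
    security: int = 1
    transparency: int = 1
    data_privacy: int = 1
    audit_approach: int = 1
    ethics: int = 1  # e.g., IP infringement, deepfake creation

    def overall_score(self) -> float:
        # Unweighted average across all risk dimensions
        scores = [getattr(self, f.name) for f in fields(self)
                  if f.name != "model_name"]
        return sum(scores) / len(scores)

    def passes_threshold(self, minimum: float = 3.0) -> bool:
        # A model must clear the minimum average score to be approved
        return self.overall_score() >= minimum
```

In practice a risk group would likely weight the dimensions differently (data privacy may matter far more than audit frequency in a bank), but even an unweighted version forces every vendor model through the same checklist.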

3) ChatGPT is to 2023 what Lotus 1-2-3 was to 1983

Remember the Lotus 1-2-3 spreadsheet? Although it wasn't the first PC-based spreadsheet to hit the market, it helped spark a boom in personal computer adoption when it was introduced in early 1983, and it was considered the "killer app" for PCs.

Lotus 1-2-3 also created a boom in employee productivity, letting people track, calculate, and manage numerical data like never before. Few in today's workforce remember how we (oops, I meant "they") had to rely on HP calculators to run the numbers and write them down.

Despite the significant productivity gains, there were problems: 1) users hardcoded calculation errors, causing major problems for some companies; 2) the assumptions entered into spreadsheets were poorly documented (often not at all) and lacked transparency; and 3) there was little consistency or standardization in how spreadsheets were designed and used.

The same problems companies grappled with 40 years ago with Lotus 1-2-3 exist today with ChatGPT and other generative AI tools. ChatGPT output is often inaccurate and lacks documentation (a "paper trail"). And the tools are used inconsistently across employees within the same company, or even within the same department.

Back then, Lotus 1-2-3 spawned a number of plug-ins that extended the spreadsheet's functionality. Similarly, ChatGPT already has hundreds of plugins. In fact, much of the power to generate audio, video, programming code, and other non-textual output comes from these plugins rather than from ChatGPT itself.

4) Data quality will make or break your generative AI efforts

Consultants have been advising you for years to get your internal data house in order, and once you start using generative AI tools, you'll find out just how well you've done. The adage "garbage in, garbage out" could have been coined for generative AI.

For open-source LLMs that draw on public internet data, pay close attention to data quality. The internet is a treasure trove of data, but it's a gold mine sitting in the middle of a data landfill. When you reach in to grab data, you don't know whether you've pulled out a gold nugget or a handful of garbage.

For decades, companies have been grappling with giving employees access to the data they need to make decisions and get their jobs done. One of the challenges is having the tools to access the data and ensuring that employees are trained and up to date on those tools.

Generative AI tools can abstract away some of the difficulty of using data access and reporting applications. This is a huge advantage (and one of the reasons these new tools can improve human performance).

However, what remains is the quality of the data.

Paradoxically, though, we need to stop talking about "data" in general. Instead, evaluate the quality, availability, and accessibility of specific types of data: customer data, customer interaction data, transaction data, financial performance data, and operational performance data.

Each of these types of data feeds into generative AI tools.
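Evaluating each data type separately can be made concrete with per-dataset quality metrics. The sketch below is a hypothetical example, not a standard method: it scores a dataset on completeness (required fields populated) and freshness (recently updated), two of the simplest quality dimensions. The function name, field names, and thresholds are all assumptions for illustration.

```python
from datetime import date

def assess_data_quality(records, required_fields, as_of, max_age_days=365):
    """Hypothetical per-dataset quality check: completeness is the share of
    records with every required field populated; freshness is the share
    updated within max_age_days of the as_of date."""
    if not records:
        return {"completeness": 0.0, "freshness": 0.0}
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    fresh = sum(
        1 for r in records
        if (as_of - r["last_updated"]).days <= max_age_days
    )
    n = len(records)
    return {"completeness": complete / n, "freshness": fresh / n}

# Example: scoring a made-up customer dataset
customers = [
    {"name": "Acme Corp", "email": "ops@acme.example",
     "last_updated": date(2024, 6, 1)},
    {"name": "Beta LLC", "email": "",
     "last_updated": date(2022, 1, 1)},
]
scores = assess_data_quality(customers, ["name", "email"],
                             as_of=date(2024, 12, 23))
# scores == {"completeness": 0.5, "freshness": 0.5}
```

Running the same check against customer, transaction, and operational datasets separately is what turns "our data is a mess" into a prioritized cleanup list.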

5) Generative AI requires new behaviors

You cannot prohibit the use of generative AI tools. What you can and should do is establish guidelines for their use. For example, ask employees to: 1) document the prompts used to generate results; 2) verify (and show that they have verified) the generative AI's output; and 3) adhere to internal documentation guidelines such as keyword use, clear headings, alt tags on graphics, short text blocks, and formatting requirements.

It’s a tall order, but according to South State Bank’s Nichols, “poorly structured documents cause the majority of inaccuracies in generative AI.”
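The first two guidelines, documenting prompts and verifying output, could be operationalized as a shared audit log. Here is a minimal sketch under assumed conventions (the record fields, the JSON-lines file format, and the `append_entry` helper are all hypothetical, not an established tool):

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class PromptLogEntry:
    """Hypothetical audit record: documents the prompt behind each
    generated result and whether a human has verified the output."""
    author: str
    tool: str
    prompt: str
    output_summary: str
    verified: bool = False  # set True once a human has checked the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_entry(log_path, entry: PromptLogEntry) -> None:
    # Append one JSON line per generated artifact to a shared log file
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(entry)) + "\n")
```

A log like this gives generative AI output the "paper trail" that, as noted above, Lotus-era spreadsheets never had.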

Management’s focus will also change over the rest of the decade.

Companies have spent the past decade on “digital transformation” initiatives, with a focus on digitizing high-volume transaction processes such as account opening and customer support.

The focus is now shifting ("expanding" is a better word) to improving the productivity of knowledge workers across the organization (IT, legal, marketing, etc.).

In the short term, you would be crazy to trust a generative AI tool to run your company without human intervention or oversight. There is too much bad data producing too many "hallucinations."

In the long term, generative AI will be “disruptive” and a “game changer.” CEOs need to be proactive and take big steps to ensure that these disruptions and changes are positive for their organizations.

Follow me on Twitter or LinkedIn. Check out my website.