Aslam Rawoof, a partner in Benesch’s Corporate Practice Group and a member of the firm’s AI Commission, was quoted in American Banker on the growing use of AI by employees and the need for companies to implement AI policies that explicitly define approved and unapproved uses.
The article notes that ChatGPT now has more than 400 million weekly active users, with many self-reporting that they rely on AI to handle roughly 30% of their workload.
Aslam said, “People get excited, and maybe they start typing client information into ChatGPT. But when you do that, ChatGPT takes all inputs it receives from anywhere in the world and trains itself. Even the people who designed it don’t know how it processes information. So the client information that you give here in New York could pop out in Tokyo tomorrow in response to some questions.”
He also outlined practical steps banks can take to begin developing AI governance frameworks, including conducting anonymous employee surveys to understand how AI is already being used and gathering input from stakeholders across departments.
"But don't exhaust all possibilities and try to draft a very elaborate AI policy," he said. "Just do something relatively simple to start, and then make sure that you review it on a periodic ongoing basis, and then add more detail to it as you learn more, because we're still very much in the early stages of AI. Taking a year and having five AI subcommittees deliberate to write the perfect policy is silly because it'll probably be obsolete the moment you produce it."
Read the full article here.