 

The Justice Insiders - AI-Washing: Everything Old Is New Again

 
Podcast

     

Episode 24: AI-Washing: Everything Old Is New Again

Host Gregg N. Sofer welcomes Husch Blackwell attorney Rebecca Furdek back to the show to discuss recent government inquiries and enforcement actions concerning products and services related to artificial intelligence (AI). Gregg and Rebecca explore a few recent high-profile government investigations into so-called AI-washing and discuss the implications for businesses that are integrating AI into their workflows and product/service offerings. The discussion also covers how, in lieu of comprehensive federal legislation, agencies are using their existing powers to regulate AI. Finally, Gregg and Rebecca talk about some of the practical steps compliance departments can take to manage risks while seizing opportunities presented by AI.

Gregg N. Sofer | Full Biography

Gregg counsels businesses and individuals in connection with a range of criminal, civil and regulatory matters, including government investigations, internal investigations, litigation, export control, sanctions, and regulatory compliance. Prior to entering private practice, Gregg served as the United States Attorney for the Western District of Texas—one of the largest and busiest United States Attorney’s Offices in the country—where he supervised more than 300 employees handling a diverse caseload, including matters involving complex white-collar crime, government contract fraud, national security, cyber-crimes, public corruption, money laundering, export violations, trade secrets, tax, large-scale drug and human trafficking, immigration, child exploitation and violent crime.

Rebecca Furdek | Full Biography

A senior associate in Husch Blackwell’s Milwaukee office, Rebecca is a member of the firm’s White Collar, Internal Investigations & Compliance team and regularly helps clients navigate today’s regulatory and government enforcement landscape. Before joining Husch, Rebecca served as Counsel to the Solicitor at the U.S. Department of Labor (DOL), where she gained firsthand insight into federal agency rulemaking and administrative enforcement. Prior to her government service, Rebecca worked as an associate in the Washington, D.C. office of a global law firm, focusing on litigation and government enforcement, and began her legal career as a judicial law clerk at the U.S. District Court for the Northern District of Texas. During law school, she served as a law clerk with the U.S. Senate Judiciary Committee.

Additional Resources

Richard Vanderford, “SEC Head Warns Against ‘AI Washing,’ the High-Tech Version of ‘Greenwashing,’” Wall Street Journal, December 5, 2023

Michael Martinich-Sauter and Rebecca Furdek, “When the AI Does It, Does That Mean It Is Not Illegal?” Ethisphere, Winter 2024

Securities and Exchange Commission, “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence,” March 18, 2024

Securities and Exchange Commission, “SEC Charges Founder of AI Hiring Startup Joonko with Fraud,” June 11, 2024

U.S. Department of Justice, “Founder And Former CEO Of Artificial Intelligence Company Charged With Securities Fraud,” June 11, 2024

© 2024 Husch Blackwell LLP. All rights reserved. This information is intended only to provide general information in summary form on legal and business topics of the day. The contents hereof do not constitute legal advice and should not be relied on as such. Specific legal advice should be sought in particular matters.

Read the Transcript

This transcript has been auto-generated.

00;00;01;23 - 00;00;28;09

Gregg Sofer

Ever wonder what is going on behind the scenes as the government investigates criminal cases? Are you interested in the strategies the government employs when bringing prosecutions? I'm your host, Gregg Sofer, and along with my colleagues in Husch Blackwell's White Collar, Internal Investigations and Compliance Team, we will bring to bear over 200 years of experience inside the government to provide you and your business thought-provoking and topical legal analysis

00;00;28;09 - 00;00;54;25

Gregg Sofer

as we discuss some of the country's most interesting criminal cases and issues related to compliance and internal investigations. Welcome to Episode 24 of The Justice Insiders. Our episode today is powered by artificial intelligence, and that seems like a pretty innocuous introduction. But more and more, statements like that are facing scrutiny from regulators concerned about something called AI washing.

00;00;55;08 - 00;01;18;22

Gregg Sofer

That is, the use of overstated capabilities or exaggerated claims in connection with AI-related services and products. Our show today is powered by AI, if you count "powered" as meaning, simply, that it's the subject of our discussion. And to help make sense of how AI is being approached by regulators, I'm very pleased to invite Rebecca Furdek back to the podcast.

00;01;18;22 - 00;01;36;04

Gregg Sofer

I'm lucky enough on a regular basis to work with Rebecca, and she's an attorney in our Milwaukee office and a very valued member of the White Collar, Internal Investigations and Compliance Team here at Husch Blackwell. And we've included her full and impressive bio in the show notes. Rebecca, welcome back to the show.

00;01;36;08 - 00;01;37;26

Rebecca Furdek

Thanks for having me back on the program.

00;01;38;11 - 00;02;04;25

Gregg Sofer

So we're going to try to find our way through some of the complexities that have been introduced in the legal realm by AI. Obviously, in a program like ours, we're not going to be able to get too deep into this. But clearly the government has turned its eyes on AI of late, and we've seen the SEC and the FTC and the government sort of writ large, including Congress, start looking at ways to regulate.

00;02;05;14 - 00;02;14;18

Gregg Sofer

And obviously there are concerns here. But I thought maybe we'd start by just talking about some of the most recent actions by the government. Can you run us through just a few of them?

00;02;14;26 - 00;02;41;28

Rebecca Furdek

Absolutely. There is so much we could say about this, but I will focus for now, at least, on the SEC and one issue that they're really looking at called AI washing. Back in March, the SEC reached settlements with two investment advisory firms, a total of $400,000 between the two of them. And in those cases, the SEC brought charges against the firms for allegedly making false and misleading statements about their purported use of AI.

00;02;42;14 - 00;03;12;02

Rebecca Furdek

So as to the first company: for a period of several years, from 2019 to 2023, the firm made false and misleading statements in SEC filings, in a press release, and on its website regarding its purported use of AI that supposedly incorporated client data into its investment product. For example, it claimed to investors that it used AI to predict which companies and trends were about to make it big, so investors would then invest in them.

00;03;12;18 - 00;03;37;24

Rebecca Furdek

The order from the SEC stated that the company did not, in fact, even have the AI capabilities it was claiming. And ultimately that company was also charged with violating the Marketing Rule, which stems from a 1940s law and was somewhat recently updated. That rule prohibits a registered investment adviser from disseminating advertisements that include an untrue statement of material fact.

00;03;37;27 - 00;03;59;07

Rebecca Furdek

So that's the first one. The second one is fairly similar. There, a company also made false and misleading claims. According to the order, it made those claims both on its website and on social media about its use of AI. And there it was claiming to be the first regulated AI financial advisor and stated that its platform provided expert AI-driven forecasts.

00;03;59;24 - 00;04;27;20

Rebecca Furdek

Similarly, the SEC also charged that company with violating the Marketing Rule, because it was falsely claiming that it offered certain services relating to tax-loss harvesting, as well as with a variety of other securities law violations. And moving forward a little bit, just last month, the SEC charged another company that was claiming to use AI within its products to help employers find diverse candidates to fulfill their DEI hiring goals.

00;04;28;09 - 00;04;57;02

Rebecca Furdek

And there, that company was alleged to have induced investments totaling well over $20 million in just 2021 and 2022. The founder and former CEO of that company allegedly made false claims regarding key aspects of its business, like how many customers it had and who those customers were. The founder allegedly made false and misleading statements regarding its revenue and even falsified a bank statement that was provided to an investor.

00;04;57;15 - 00;05;12;13

Rebecca Furdek

And so there, that individual was charged with one count of securities fraud and one count of wire fraud, each of which, you know, carries a maximum sentence of 20 years. So these are just three recent examples on one issue, from one agency, involving AI.

00;05;12;23 - 00;05;38;08

Gregg Sofer

So let's unpack this a little bit. The SEC, which has been properly characterized as extremely aggressive under this administration, seems to me to be wading into another shiny-new-toy area, sort of like cybersecurity or greenwashing were at the beginning. But they seem to like, as do many government agencies, to look at something new, focus on something new. It gets headlines.

00;05;38;08 - 00;06;03;09

Gregg Sofer

It's something where Congress or other forces inside and outside the government are pushing for increased regulation. So they chase after the latest and greatest, most interesting sort of issues or most interesting areas. But really, what you've just described sounds to me like garden-variety fraud. I mean, whether it was AI they were talking about or how many people they have doing the analysis.

00;06;03;28 - 00;06;08;01

Gregg Sofer

Isn't it just the same as any other fraud case?

00;06;08;10 - 00;06;35;20

Rebecca Furdek

In many ways it is. All three of those cases I talked about are what we're calling AI washing, and that seems to be the new catchphrase for it. But again, it's just overstating and misrepresenting a product or service, which is, you know, a very old concept that's been around for a long time. You know, what these cases do have in common is how companies talk about it and how we talk about AI, how we use the language, how we disclose it.

00;06;36;01 - 00;06;49;04

Rebecca Furdek

That's some of what is new. But ultimately, yes, the SEC and other agencies are finding ways to use what we might call the existing authorities that they already have to regulate and initiate enforcement actions.

00;06;49;12 - 00;07;14;14

Gregg Sofer

And this is sort of in lieu of, at least partially, what we'll call AI-centric regulations or AI-centric legislation. The SEC and the FTC and, again, DOJ and everybody else already have the mechanisms to prosecute and investigate and conduct enforcement operations against people who are just flat-out lying about their capabilities, whether it's AI or anything else.

00;07;14;14 - 00;07;14;22

Gregg Sofer

Right.

00;07;15;01 - 00;07;50;04

Rebecca Furdek

Exactly. Exactly. You know, in the case charged last month, the SEC reiterated that same point. They alleged that the defendant, quote, engaged in an old-school fraud using new-school buzzwords. So that's exactly what was happening. And the FTC put it similarly recently, saying there's no exemption from the laws on the books. In other words, AI is just a tool in the toolbox for would-be fraudsters, or also for well-intended individuals or companies that may use it and be subject to a strict-liability-type offense.

00;07;50;09 - 00;07;55;26

Rebecca Furdek

But it's just one more tool by which the government may regulate or enforce.

00;07;56;09 - 00;08;31;11

Gregg Sofer

Again and again, I look at it as similar to the cybersecurity realm or any of the other new shiny toys that the SEC and others are going after. But it seems to me there may be some aspects of AI that really do set it apart from these other new technological advances or new ways in which fraudsters are able to actually commit fraud. AI has some unusual characteristics that, from a legal standpoint, probably present some very, very interesting challenges.

00;08;31;11 - 00;09;01;19

Gregg Sofer

And let me just set one of them up. I use AI in my business, whether it's an investment company or a company that makes widgets. The AI, as I understand it, is learning while it's on board, while it's on the job, and because it's learning, it's changing, perhaps, the way that it analyzes and produces reports or other mechanisms that the AI is connected to.

00;09;02;01 - 00;09;25;07

Gregg Sofer

And so, number one, on this idea of AI washing: if your AI does something different than what you originally intended, have you committed AI washing? The examples that you have just given us seem like pretty clear examples where someone said the AI can do this, or said it does do this, and then it doesn't, or it never could.

00;09;25;29 - 00;09;47;10

Gregg Sofer

But what happens if I say that my AI, in the way that it interacts with my business, does these five different things, and then the AI decides it's going to do a sixth, or it decides that the fifth is a bad idea, so it's going to reject that? To the extent that it's a malleable learning process, the I in AI is intelligence.

00;09;47;10 - 00;09;54;15

Gregg Sofer

And intelligence implies a learning capability. Doesn't that make regulators' jobs even more difficult?

00;09;54;25 - 00;10;17;24

Rebecca Furdek

I would think so. I think that most people would say that the government's not exactly known for updating its laws and regulations very quickly, and that reality is the exact opposite of the reality we're facing with AI. So as AI is developing in real time, we're frankly learning about it in real time as it develops. And a user of AI, of course, is almost secondhand.

00;10;17;24 - 00;10;46;13

Rebecca Furdek

You know, they're downstream, learning what a developer is learning about their product while rolling it out. So it presents many, many challenges, particularly with some of these intent-based laws where, you know, if you have a material omission or a material misrepresentation, something may not be a material misrepresentation one day, but three weeks down the road it could be, depending on how quickly the product is developing and changing and advancing.

00;10;46;21 - 00;11;14;27

Gregg Sofer

And that's just the skills, if you will, or the capabilities of the AI itself. But as I was suggesting, if the AI produces results and those results are published, let's say, to investors, and it produces the wrong result, whose fault is that? Is that my company's fault, because I employed the AI and I didn't realize it was changing strategies, for instance?

00;11;14;27 - 00;11;35;22

Gregg Sofer

Or is that the fault of the developer of the AI? Do I need to have someone on board who's somehow checking all the time to see whether or not the AI has learned, through its intelligence, that it was wrong at the beginning or that it wants to change focus? Maybe that's based on market forces. Maybe that's because the AI came up with a better way to skin the cat.

00;11;36;13 - 00;11;46;22

Gregg Sofer

And it seems to me this is going to present a lot of difficulties and complexities, not just what the AI is capable of, but the results of the AI’s analysis. Do you agree with that or not?

00;11;47;02 - 00;12;10;07

Rebecca Furdek

Absolutely. You know, there's intent-based and there's outcome-based. And I'm thinking about some of these FTC actions and some of these laws. You know, for instance, if your business is regulated by the Fair Housing Act, that federal law, it might not matter that your AI ended up doing something, or leading to a result, that unintentionally violated the law, because there's strict liability.

00;12;11;14 - 00;12;34;15

Rebecca Furdek

You know, one example: late last year, a company was prohibited from using some facial recognition technology for surveillance purposes in order to settle an FTC charge that it was failing to implement reasonable procedures. There, the FTC found that the company, in deploying an AI-driven product that used biometric information, failed to take reasonable measures to prevent harms.

00;12;35;05 - 00;12;59;25

Rebecca Furdek

What ended up happening is that thousands of customers were falsely accused of shoplifting or other wrongful conduct, and the FTC noted that the company failed to consider the risks of misidentifying people, including the heightened risk to certain consumers because of their race or gender. So that's a pretty stark example, but it can happen on a much smaller scale for a smaller business using an AI-driven product.

00;13;00;10 - 00;13;20;15

Gregg Sofer

Yeah. And this goes back to your point about results versus intent. And I know you wrote an article about this, and we'll link to that article as well in the show notes. But I want to turn back a little bit and dig down on that concept. So, I mean, look, AI can do some amazing things. I've read, and I don't know that this is true, that they use AI now on cruise ships.

00;13;20;15 - 00;13;40;13

Gregg Sofer

It analyzes all the surveillance, and it can tell from the way someone is walking, where someone is walking, what time it is, and a number of other factors that this person is at risk of jumping overboard, and they will dispatch a human being immediately to that part of the ship to make sure that the person doesn't try to commit suicide, whatever it may be.

00;13;40;13 - 00;14;03;11

Gregg Sofer

An amazing concept, something that human beings probably can never do. I've also read, similarly, that radiologists are not necessarily as capable as AI at reading imaging for human beings in the health care space. And it could save thousands of lives every year, because the AI will pick up on something in that image that a human being can't.

00;14;03;11 - 00;14;25;15

Gregg Sofer

But obviously it's subject to great abuses, either purposeful or accidental, in the law, as you point out. And I think we should dig down a little bit on this Fair Housing question. The law sometimes doesn't really care how you got to the result; it's concerned about the outcome. So, I know again you wrote about this a little bit, but what thoughts do you have about that?

00;14;25;25 - 00;14;50;10

Rebecca Furdek

Well, absolutely. You know, companies using this need to think about what they're committing to as well as what they're providing when they provide information to the government and when they provide information to their customers. You know, using too little AI could present certain risks, regulatory risks if you will, and frankly, overstating it puts you in danger of ending up like those three cases we just discussed.

00;14;50;24 - 00;15;10;22

Rebecca Furdek

There are a lot of moving parts, and, you know, the FTC is just one agency of many. You know, there are also state attorneys general looking at this on the state level. There are a few different state laws that were just passed, including one in Colorado in May that was fairly comprehensive. So there's a lot going on.

00;15;10;22 - 00;15;19;13

Rebecca Furdek

That was one main point of that article. But also just think about what you disclose, what you document, and how you thoughtfully implement AI.

00;15;19;24 - 00;15;42;28

Gregg Sofer

And again, as we discussed a little bit, I mean, what I would say to our clients and to our listeners is that any time the government starts to wade into a new area, and any time the public and Congress all get excited about something like this, you're going to see a lot of initiatives inside the government, whether it's the regulators or other government agencies, including DOJ.

00;15;42;28 - 00;16;07;18

Gregg Sofer

You're going to see a task force created on AI; you're going to see a task force or initiative to regulate AI in a number of different agencies. It's sort of the way the government grows; it's sort of the way it is. I can tell you, after spending 30 years in the federal and state government, that it is a shiny toy and it's a place that attracts more and more interest.

00;16;07;18 - 00;16;22;13

Gregg Sofer

And so to the extent that you are in this space, whether you're producing or using it, I think you can expect lots more inquiries, many more questions. And I would expect the government to turn its eye on this area quite a bit.

00;16;22;22 - 00;16;48;29

Rebecca Furdek

Yeah, well, you know, last fall the White House issued a pretty large executive order that established standards for AI, quote, safety and security, and of course that can mean a lot of different things. But among other things, that executive order demands that companies share their safety test results with the federal government when they develop a model that poses a risk to national security, national economic security, or national public health and safety.

00;16;49;03 - 00;17;22;24

Rebecca Furdek

So that's pretty broad right there, but it also directs federal agencies to establish standards and best practices for detecting AI-generated content. And it encourages Congress to pass bipartisan data privacy legislation accounting for AI-related risks. You know, to date, there have been a lot of bills, of course, that have come out, but there's been no comprehensive federal legislation devoted strictly to AI. So that gets back to the theme of, you know, we're just using what laws are on the books to deal with the shiny new object. The amount of activity in the states, though, is really interesting.

00;17;23;03 - 00;17;48;28

Rebecca Furdek

Just last year, there was an analysis from BSA that said legislators in 31 states introduced 191 AI-related bills, and that's a 440% increase from the year prior. Only 14 of those became law, though. So, very much to the theme of government being a lot slower than AI, we're thinking about a lot of these issues and debating them, but we're not necessarily passing a lot of specific laws.

00;17;49;05 - 00;17;53;20

Gregg Sofer

Maybe if we had AI as our legislators, things would move more quickly. I don't know.

00;17;53;21 - 00;17;55;20

Rebecca Furdek

Yeah, and that could be good or bad, you know.

00;17;55;22 - 00;18;17;09

Gregg Sofer

It could. Well, I think we talked a little bit about that before we started recording. I mean, the fact of the matter is, I assume, again, not being an expert in this area, that bias one way or another, and I'm not talking about political bias or racial or other kinds of bias, I'm talking about any kind of bias, can be introduced into the AI itself.

00;18;17;09 - 00;18;38;05

Gregg Sofer

And you can sort of push it one way or another. And so again, this goes back to the question, and I imagine that a lot of the legislation and regulation that we're likely to see is going to start focusing not on the users of AI, not the company or the customers that employ it, but on the people who actually built it.

00;18;38;05 - 00;19;08;02

Gregg Sofer

Because my guess is that most companies, even if they're sophisticated, will have some limited understanding of what it is they've actually employed. They'll see the results, and they'll maybe see a lot of efficiencies. But over time, I'm not sure that they'll understand who created this AI and what exactly its guideposts are, meaning was it truly made with guideposts to have it do A, and has it decided to do B on its own?

00;19;08;02 - 00;19;13;22

Gregg Sofer

Or if you can't understand what you've employed, that in and of itself is dangerous, right?

00;19;14;03 - 00;19;35;21

Rebecca Furdek

For sure. And I think an interesting test case of that will be that new Colorado law that was passed. It requires reasonable care by both the developers and the users, and it outlines a few different ways for each of them to meet that standard, with a rebuttable presumption. So it'll be interesting to see if that's even enforceable as a practical matter, or, you know, what kinds of cases come from that.

00;19;36;03 - 00;20;01;00

Rebecca Furdek

As far as developers probably being more liable, just thinking back a little bit to some of the issues we touched upon in my article, which was a little more focused on the FTC: when you think about the FTC prohibiting unfair or deceptive acts, I think that would likely fall more on developers. You know, under the FTC Act, an act is unfair if it causes or is likely to cause substantial injury to consumers that's not reasonably avoidable.

00;20;01;05 - 00;20;08;25

Rebecca Furdek

That sort of high-level inquiry, I think, would be more appropriate to a developer than to somebody who just buys a product that's advertised.

00;20;09;03 - 00;20;32;27

Gregg Sofer

Yeah, but again, the difficulty of this, I think, is that the strength of the technology is its dynamic capability. And so the idea that you build something whose entire focus is to be able to learn, and then expect it to behave in a certain way, almost seems contradictory to the idea that it's learning and adapting as it goes on.

00;20;33;17 - 00;20;48;13

Gregg Sofer

So it's a really interesting concept, and I think we're going to see a lot of case law about this and a lot of action by the government. You know, again, it's a scary concept to some people, especially people who are going to lose their jobs. And a lot of people predict that a lot of people will lose their jobs.

00;20;49;05 - 00;21;12;19

Gregg Sofer

But it's also got some capabilities that, you know, really could change humankind's comfort level and safety for many, many eons to come. Rebecca, the practical advice for people, or at least some of the main points, I know you touched upon this again in your recent article. Can you just sort of run down some of those bullet points?

00;21;13;03 - 00;21;37;08

Rebecca Furdek

Sure. Absolutely. The first is just read what's out there, read what the government is saying and how they're saying it. You know, when agencies initiate an enforcement action, they're often considering the larger implications when they do that, because they know that it sends a signal to what agencies call the regulated community on how they'll engage in future enforcement in that same industry or issue area.

00;21;37;15 - 00;22;04;27

Rebecca Furdek

You know, look at the press releases, look at what the facts are saying, what's emphasized, how the issues are characterized. And while we've focused most of our discussion today on federal enforcement, also, you know, be reminded that state attorneys general are also enforcing a lot of AI-related issues. So that's one. Another: review your own marketing materials, which takes us back to the three SEC matters we discussed at the beginning.

00;22;05;05 - 00;22;26;01

Rebecca Furdek

You know, everything on this front is moving very quickly, and companies, of course, want to show that they're at the cutting edge of using AI to optimize their products or services. So make sure your company is advertising appropriately and not crossing the line into being potentially misleading in some way, because you don't want to be accused of AI washing.

00;22;26;01 - 00;22;56;08

Rebecca Furdek

And of course, in jurisdictions like Colorado, like I mentioned, ensure that you're providing all the requisite information that you should to consumers. And then finally, I would say, you know, consider documenting how and why you start using AI and start using a particular product in particular ways. You know, in remarks just in March of this year, the SEC chair said that public companies should make sure they have a reasonable basis for the claims they make and the particular risks they face, and that investors should be told that basis.

00;22;56;19 - 00;23;20;06

Rebecca Furdek

You know, so in that Fair Housing Act example we discussed, a company could violate that act by using AI in a manner that causes a disparate impact. And to the extent you document how and why you onboarded a particular system, to show that you had a reasonable basis, that sort of thing might be helpful to you to the extent your company ever faces an investigation.

00;23;20;19 - 00;23;26;28

Rebecca Furdek

So again, those are just three examples of ways that companies might start thinking about these issues.

00;23;27;18 - 00;23;53;02

Gregg Sofer

Well, thank you very much, Rebecca. Really insightful; really appreciate it. You know, one of the areas where AI is particularly applicable is the law. So if you and I are replaced by AI before the next episode of The Justice Insiders, you may be the last guest we have. Hopefully not. I think there's still some space for human beings in the law, but we'd be remiss if we didn't say that in our own practice,

00;23;53;02 - 00;24;00;26

Gregg Sofer

you know, AI is something that is increasingly being used and focused upon. So we'll see whether we survive it or not.

00;24;02;00 - 00;24;06;17

Rebecca Furdek

Well, it's been nice being here. If this is the last time, you know, we humans are here.

00;24;06;20 - 00;24;07;24

Gregg Sofer

Thank you so much for coming.

00;24;08;01 - 00;24;08;20

Rebecca Furdek

Thank you.

00;24;09;03 - 00;24;31;19

Gregg Sofer

Thanks for joining us on The Justice Insiders. We hope you enjoyed this episode. Please go to Apple Podcasts or wherever you listen to podcasts to subscribe, rate and review the Justice Insiders. I'm your host, Gregg Sofer, and until next time, be well.

Professionals:

Gregg N. Sofer

Partner

Rebecca Furdek

Senior Associate