How a DU Expert Is Tackling AI and Corporate Power

Author(s)

Nika Anschuetz

Writer

Writer"

Nika.Anschuetz@du.edu

Writer"

303-871-2711

Sturm College of Law professor Michael Siebecker believes “a duty of discourse” may be the key to accountability.

In 2014, a Hong Kong-based venture capital firm called Deep Knowledge Ventures appointed VITAL to its board. VITAL wasn’t a person; it was an algorithm designed to predict investments. While it’s no longer in use and may have been merely a publicity stunt, the move foreshadowed the growing role of artificial intelligence in shaping our world.

A decade later, AI has evolved from a niche tool to a transformative force, influencing industries, economies and societies. From ChatGPT queries that simplify daily tasks to machine learning models that optimize supply chain logistics, AI is a part of our daily lives, with 97% of large corporations leveraging it and reaping significant rewards. 

Yet corporate decision making is often shrouded in secrecy, left to a select few advisors, each with their own priorities. As AI becomes embedded in decision making, how can we ensure corporate executives are serving the best interests of all stakeholders, rather than an elite few?

Michael Siebecker, a professor in the Sturm College of Law, addresses this in his latest paper, “Democracy, Discourse and the Artificially Intelligent Corporation.”

Beyond the boardroom and into the community

Siebecker, who holds degrees in law and political science from Yale University and Columbia University, respectively, points to a basic problem in corporate law: the fiduciary framework, the legal structure that requires directors and officers to act in the best interests of the corporation and its shareholders, isn’t robust enough.

“It gives corporate managers unbelievable leeway in running the corporation in a way that really seems to detach their decision making from consideration, or at least robust consideration, of real shareholder interests,” Siebecker says. 

It’s a problem he’s been researching for 15 years, and he believes he’s found the answer: making a slight tweak to the fiduciary framework through the lens of discourse theory. 

Discourse theory is based on the political philosophy of Jürgen Habermas and holds that, in simple terms, language is a tool not only to convey information but also to construct meaning and “truth” and to exert power and influence. 

“What I try to do is really shape the fiduciary duties around robust discourse—because, in the end, fiduciary duties really are predicated upon the philosophy of trust, and trust itself has to be built upon robust discourse.”

In the corporate setting, this would mean adopting rules and incentives that encourage independent expression of ideas, fair participation of corporate constituents in decision making, consideration of diverse viewpoints and more.

These rules and incentives would go beyond the current fiduciary duties of care, loyalty and good faith, which are enforced, more or less, on the honor system and remain open to differing legal interpretations.

“All I’m suggesting is that there’s this duty of discourse. That’s it,” he says. “Just continually engage.” 

This type of engagement doesn’t solely involve those in the boardroom. This reimagined framework values the interests of customers, community members and more.

AI, the changing role of corporations and democracy 

AI’s earliest applications helped improve business operations, making things more automated, data-driven and cost-effective. As AI capabilities have widened, though, so has corporate reliance on the tool, says Siebecker—from running human resources to interacting with consumers, and more.

“As AI entities start to occupy managerial and potentially ownership roles, the institutional identity of the corporation shifts significantly. And with that changed identity, the role corporations play in society needs reconsideration,” Siebecker says. 

Consider, for example, how AI could play a role in how corporations operate in society. 

Siebecker says he’s most concerned about corporations exercising their right to free speech, namely political speech, as they continue to gain influence in politics. In the landmark Supreme Court case Citizens United v. Federal Election Commission, the Court held that the First Amendment’s free speech clause allows corporations not only to express their political views but also to spend unrestricted amounts of money to influence elections.

Siebecker says the combination of a lax fiduciary framework, the power of AI, and the tendency of corporations to manipulate investor and consumer behavior and influence opinions is problematic. Eventually, he says, “Corporations can dominate politics and undermine our democratic structure.”

AI already plays a dominant role in politics. In the 2016 presidential election, Cambridge Analytica used AI to create fake profiles on social media to sway voters. In 2020, the Trump and Biden campaigns used AI to create individually targeted messaging strategies. In 2024, hundreds of deepfake videos (fabricated content that mimics a real person, often a politician or other public figure) spread across social media. According to computer security company McAfee, 63% of people said they saw a political deepfake in the 60 days leading up to the presidential election, and nearly half of them said it had some influence on whom they voted for.

Simply put, Siebecker says, AI cannot be left unchecked. 

Protecting our future

Siebecker believes there’s still hope. While it’s crucial that we act sooner rather than later, he says requiring corporate officers to meaningfully engage with all stakeholders will make a big difference.

“I’m just suggesting a little add-on to an existing framework that will have enormous consequences. It will save democracy,” Siebecker says. “It would prevent these horrific outcomes that people fear regarding corporate power and the quick evolving dissemination and development of AI technologies.”

AI will likely play a crucial role in shaping our future, but so can we. 
