In today’s rapidly evolving workplace, artificial intelligence (AI) has become an integral tool across industries, transforming how businesses operate and make decisions. For business schools, equipping students with the skills to effectively harness AI is no longer optional but essential. Beyond technical proficiency, students must be prepared to navigate the ethical challenges posed by AI, such as data privacy, algorithmic bias, and the societal impact of automation.
“We need to prepare students for a work environment where the use of AI is widespread,” said Jennifer Merton, associate chair and senior lecturer in Isenberg’s Management Department. Merton is leading the school’s effort to embed AI education into the curriculum—a new undergraduate course on ethics and AI will launch in spring 2025, and a similar graduate course is planned for 2026. The benefits and challenges of using AI as a business tool will also be integrated throughout the existing undergraduate and graduate curricula at Isenberg.
“I think it is especially important for our students to think about the ethical issues that the use of AI raises,” said Merton.
“Business ethics are crucial in establishing trust and positive relationships with customers, and it’s likely that our students will encounter situations where they might need to question how AI is being used.”
Integrating AI into business operations can help leaders improve decision making for any business function by instantly analyzing vast quantities of data. The results include efficiencies, automations, customizations, and innovations that might otherwise have remained hidden.
By embedding AI education into the curriculum, business schools can empower future leaders to responsibly leverage AI, ensuring innovation is aligned with ethical principles and equitable outcomes, said Merton.
NEW COURSES TO BE LAUNCHED
The new undergraduate course on business ethics in AI is being developed by Merton and lecturer Brian Shea, as part of a curriculum review that arose out of a Management Department retreat spearheaded by chair Mzamo Mangaliso. The course will include topics on AI’s evolution, core algorithms and techniques in machine learning, ethical considerations and societal impacts, and challenges and future directions in AI research. Students will also explore real-world applications of AI in healthcare, finance, and autonomous systems.
A similar class is being developed by Merton and Shea for graduate students, with input from UMass Amherst’s Public Interest Technology group.
The ethical issues raised by using AI in business decision making include data privacy and security, bias in AI algorithms, AI’s impact on employment, and its autonomous use in decision making.
“We’ve adopted a use-case approach to what we’re going to teach our students, so that they’re exposed to real ethical situations,” said Merton.
One case cited by Merton involves insurance companies using AI to approve or deny claims.
“The ethical issue here is bias,” said Merton.
Humans—with their conscious and unconscious biases—create the algorithms that run AI, Merton pointed out, and the data sets that AI platforms learn from also reflect human biases. As a result, AI may evaluate claims through the lens of historical bias rather than on their merits.
Another significant ethical challenge posed by this use of AI involves transparency. Consumers often do not know that AI is being used to evaluate their claims. Moreover, AI is a black box whose development cannot be fully controlled or understood, according to Merton, which makes it difficult to audit AI systems to ensure they are not designed or trained to deny claims.
Marketing teams use AI to generate content, which raises copyright issues, Merton explained.
“If you ask an AI tool to write you something, is it pulling from a copyrighted source?” she said. “And, you cannot copyright something written by AI, because copyright can only be owned by people.”
At the same time, Merton noted that AI is transformative across many industries. She cited a study at Beth Israel Deaconess Medical Center where researchers found that ChatGPT was able to make accurate diagnoses in challenging medical cases, demonstrating that AI might have an important role in the future of diagnosis and patient care while challenging experts to think about how to integrate AI with human judgment.
“All these use cases are situations that our students might encounter in their careers,” said Merton. “When a new technology is so revolutionary, it’s easy to not challenge or clarify decisions. We need to teach our students to ask questions from a framework of ethical business practice.
“And as business leaders,” Merton added, “our students will need to establish policies and processes that ensure that AI is developed and used based on values such as nondiscrimination, privacy, and individual rights.”
Dave Orsman is a marketing specialist in the UMass Amherst Isenberg School of Management’s Marketing and Communications office. Submit story ideas to dorsman@isenberg.umass.edu.