
A Q&A with Ken Frantz, Managing Director, Advisory, BPM

Ken Frantz, Managing Director of Advisory Services at BPM, has extensive experience as an IT leader, specializing in guiding clients through technology transformations while mitigating cyber risks throughout their organizations. Ken has assisted clients in crafting comprehensive enterprise security initiatives, integrating cutting-edge technologies to safeguard critical systems and data, and formulating strategic approaches to navigate the ethical considerations surrounding artificial intelligence (AI). We recently had the opportunity to sit down with Ken to discuss prevalent misconceptions about AI and its ethical implications. Here’s a glimpse into our conversation.

Q: What ethical considerations does AI introduce into your industry, and how are you tackling these concerns?

Frantz: For an industry like ours, which creates content based on audits, tax assessments and other analytical and creative work, I’m concerned about the accuracy of our reporting and the potential to appropriate the intellectual property of others without proper permission. We’ve already seen stories about generative AI being used for legal research and producing “hallucinations” of fictional legal cases that ended up in court filings, as well as ongoing ambiguity about intellectual property rights for work created from the sources used to train AI systems. At BPM, we’re tackling these concerns by teaching our people the risks of using AI and by ensuring that human quality assurance reviews any AI-assisted output before we present it publicly. The ease of using generative AI is alluring, which means we need to be vigilant to prevent misuse.

Q: What other ethical considerations do you think AI introduces into various industries, and how do you believe these concerns should be addressed?

Frantz: Apart from concerns about accuracy and intellectual property rights, AI also raises questions about privacy and bias. In industries like healthcare, for instance, there’s a delicate balance between leveraging AI for improved patient outcomes and safeguarding sensitive medical data. Additionally, biases present in training data can perpetuate discrimination, especially in decision-making processes like hiring or loan approvals. I believe addressing these issues requires a multi-faceted approach: diverse representation on AI development teams, transparent algorithms and robust data privacy regulations.

Q: What are the most common misconceptions about AI within your industry, and how do you address them?

Frantz: One of the most common misconceptions is that AI will result in widespread layoffs and unemployment. Undoubtedly, jobs will change, but I like to remind people how innovations have created job growth in the past.

Another misconception is that AI systems learn on their own. In fact, the systems we see are the result of extensive work by data scientists and other developers who curate data and guide the systems through their training.

I think there is also a perception that AI output is always accurate. By teaching the limitations of AI and the errors it can produce, we can help everyone who uses these systems stay vigilant and ensure that anything they present based on AI output is complete and accurate.

Q: Could you elaborate on how your team navigates the challenges posed by adversarial AI and compliance requirements from regulatory bodies like the SEC?

Frantz: Adversarial AI presents a significant threat, particularly in industries reliant on data integrity, such as finance. Our team employs a combination of cutting-edge technology and strategic partnerships to stay ahead of evolving threats. This includes deploying advanced anomaly detection systems and leveraging machine learning algorithms to detect and mitigate potential attacks. Compliance with regulatory bodies like the SEC also demands rigorous adherence to industry standards and continuous monitoring of systems for any deviations. At BPM, we work closely with clients to ensure their systems are not only secure but also compliant with relevant regulations, providing them with peace of mind in an increasingly complex landscape.
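For readers curious what the anomaly detection Ken mentions can look like in practice, here is a minimal, hypothetical sketch that flags unusual financial transactions for human review. It uses scikit-learn’s IsolationForest; the data, features and contamination rate are invented for illustration and do not represent BPM’s actual tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: amount (USD) and hour of day.
normal = np.column_stack([
    rng.normal(100, 20, 500),   # routine payment amounts
    rng.integers(8, 18, 500),   # business hours
])
suspicious = np.array([[5000, 3], [4200, 2]])  # large transfers at odd hours
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector and score every record.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous, 1 = normal

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for human review.")
```

In a real deployment, anything the model flags would go to a human analyst rather than being acted on automatically, which mirrors the human-review step Ken emphasizes throughout this conversation.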

Q: In your opinion, what steps can organizations take to foster greater understanding and acceptance of AI among their employees and stakeholders?

Frantz: I believe education is key. Organizations must invest in comprehensive training programs to demystify AI and its implications. This involves technical training for developers and data scientists, along with awareness programs for non-technical staff. By fostering a culture of continuous learning and transparency, organizations can empower their employees to embrace AI as a tool for innovation rather than a threat to job security. Additionally, open communication with stakeholders about the ethical considerations and limitations of AI builds trust and credibility, paving the way for responsible AI adoption.

BPM: Navigating AI misconceptions and ethical considerations

At BPM, Ken and his team are at the forefront of leveraging advanced technology to address the evolving challenges of the digital age. With a deep understanding of both the technological landscape and regulatory requirements, BPM delivers tailored solutions that empower businesses to thrive in a rapidly changing environment. Whether mitigating the risks of adversarial AI or helping ensure compliance with stringent regulations, BPM offers unparalleled support to navigate the complexities of today’s business world. To learn more about how Ken and his team can help your organization stay ahead of the curve, contact us.

