Cal Al-Dhubaib, Pandata: On developing ethical AI solutions

Businesses that fail to deploy AI ethically will face severe penalties as regulation catches up with the pace of innovation.

In the EU, the proposed AI Act features GDPR-style enforcement but with even heftier fines of up to €30 million or six percent of annual turnover. Other jurisdictions are introducing their own variations, including China and a growing number of US states.

Pandata specialises in human-centred, explainable, and trustworthy AI. The Cleveland-based outfit prides itself on delivering AI solutions that give enterprises a competitive edge in an ethical and lawful manner.

AI News caught up with Cal Al-Dhubaib, CEO of Pandata, to learn more about ethical AI solutions.

AI News: Can you give us a quick overview of what Pandata does?

Cal Al-Dhubaib: Pandata helps organisations design and develop AI and ML solutions. We focus on heavily regulated industries like healthcare, energy, and finance, and emphasise the implementation of trustworthy AI.

We’ve built deep expertise working with sensitive data and higher-risk applications, and we pride ourselves on simplifying complex problems. Our clients include globally recognised brands like Cleveland Clinic, Progressive Insurance, Parker Hannifin, and Hyland Software.

AN: What are some of the biggest ethical challenges around AI?

CA: A lot has changed in the last five years, especially our ability to rapidly train and deploy complex machine-learning models on unstructured data like text and images.

This increase in complexity has resulted in two challenges:

  1. Ground truth is more difficult to define. Summarising an article into a paragraph with AI, for example, may have multiple ‘correct’ answers.
  2. Models have become more complex and harder to interrogate.

The greatest ethical challenge we face in AI is that our models can break in ways we can’t even imagine. Recent years have produced a laundry list of models that have caused physical harm or exhibited racial and gender bias.

AN: And how important is “explainable AI”?

CA: As models have increased in complexity, we’ve seen the field of explainable AI rise alongside them. Sometimes this means using simpler models to explain the more complex models that are better at performing the task.
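
To make that surrogate-model idea concrete, here is a minimal sketch using scikit-learn (the dataset, models, and hyperparameters are illustrative assumptions on our part, not a description of Pandata's work): a shallow decision tree is fit to the predictions of a harder-to-interrogate model, yielding human-readable rules that approximate its behaviour.

    # Minimal surrogate-explanation sketch; dataset and models are
    # illustrative assumptions, not any specific production setup.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)

    # A model that performs well but is hard to interrogate directly.
    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

    # The surrogate: a shallow tree trained on the black box's
    # *predictions* rather than the ground-truth labels, so it
    # approximates the model's behaviour, not the task itself.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the surrogate agrees with the black box.
    print(f"Fidelity: {surrogate.score(X, black_box.predict(X)):.2%}")

    # The tree itself is the explanation: a short set of if/then rules.
    print(export_text(surrogate, feature_names=list(X.columns)))

A useful property of this approach is that low fidelity is itself a finding: if no simple surrogate can track the model, its behaviour resists simple explanation.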

Explainable AI is critical in two situations:

  1. When an audit trail is necessary to support decisions made.
  2. When expert human decision-makers need to take action based on the output of an AI system.

AN: In your view, are there any areas where companies should not implement AI?

CA: AI used to be the exclusive domain of data scientists. As the technology has become mainstream, it is only natural that we’re starting to work with a broader sphere of stakeholders, including user experience designers, product experts, and business leaders. However, fewer than 25 percent of professionals consider themselves data literate (HBR 2021).

We often see this translate into a mismatch of expectations for what AI can reasonably accomplish. I share these three golden rules:

  1. If you can explain something procedurally, or provide a straightforward set of rules to accomplish a task, it may not be worth investing in AI.
  2. If a task is not performed consistently by equally trained experts, then there is little hope that an AI can learn to recognise consistent patterns.
  3. Proceed with caution when dealing with AI systems that directly impact the quality of human life – financially, physically, mentally, or otherwise.

AN: Do you think AI regulations need to be stricter or more relaxed?

CA: In some cases, regulation is long overdue; it has hardly kept up with the pace of innovation.

As of 2022, the FDA has reclassified more than 500 AI-powered software applications as medical devices. The EU AI Act, anticipated to roll out in 2024-25, will be the first to set specific guidelines for AI applications that affect human life.

Just as GDPR created a wave of change in data privacy practices and the infrastructure to support them, the EU AI Act will require organisations to be more disciplined in their approach to model deployment and management.

Organisations that start to mature their practices today will be well prepared to ride that wave and thrive in its wake.

AN: What advice would you provide to business leaders who are interested in adopting or scaling their AI practices?

CA: Use change management principles: understand, plan, implement, and communicate to prepare the organisation for AI-powered disruption.

Improve your AI literacy. AI is not intended to replace humans but rather to automate repetitive tasks, enabling humans to focus on more impactful work.

AI has to be boring to be practical. Its real power is to resolve the redundancies and inefficiencies we experience in our daily work. Deciding how to use the building blocks of AI to get there is where the vision of a prepared leader goes a long way.

If any of these topics sound interesting, Cal has shared a recap of his session at this year’s AI & Big Data Expo North America here.

(Photo by Nathan Dumlao on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


