William Blair Commentary: Can AI Be Ethical?

As artificial intelligence (AI) continues to evolve, how do we ethically approach a technology with such wide-ranging implications? In this episode of The Active Share, Hugo talks with Olivia Gambelin, founder and CEO of Ethical Intelligence, about AI ethics, responsible AI, and how our current systems (legal, social, economic, and political) adopt, and adapt to, new technologies.

Comments are edited excerpts from our podcast, which you can listen to in full here.

How do you define AI ethics and responsible AI?

Olivia Gambelin: I define AI ethics and responsible AI as two different things. AI ethics is the practice of implementing human values into our technology, specifically in AI. It's a design-based approach that looks at technology and determines if it needs value protection or value alignment.

Responsible AI is an umbrella term that includes different topics such as AI ethics, regulation, governance, and safety. It's the practice of developing AI in a responsible manner and is more focused on the operations and development of AI.

Do these definitions inform your individual working framework for AI?

Olivia: The split between AI ethics and responsible AI is commonly accepted, but I think my framework comes into play on the responsible AI side; it focuses on how to strategically implement responsible AI and what kind of gaps exist within an organization.

When you advise companies, what are the most frequently asked questions?

Olivia: Companies in high-risk industries (financial, health, or media) are focused on risk. The questions I usually get are: "Are we compliant with the law? What kind of regulation do we need to be watching for? What kind of risks are associated with our specific use cases?" These companies have a risk-based mindset and are focused on protecting their company and making sure they are not intentionally doing harm.

I also work with companies in more creative fields looking to take an innovation-based approach toward AI ethics and responsible AI. These companies ask: "How do we make AI a competitive edge? How do we turn something like privacy, fairness, or transparency into a competitive edge where we stand out from the competition?"

Are companies beginning to set standards as AI capabilities evolve? Or are they interpreting legislation?

Olivia: This is a huge debate occurring within the European Union (EU). Major players are concerned that the proposed regulation, the EU AI Act, is too strict, and that policy and regulation will guide AI best practice rather than companies having the space to shape it. Should companies influence the pace of innovation? Or is it the responsibility of legal bodies?

Ethics is a grey space; it requires finding balance and setting context, and there aren't black-and-white answers. However, black-and-white answers help pave the way to laws and regulations, which then become the baseline. While meeting that baseline is what we must be doing, it doesn't mean it's all we should be doing.

Who should have a seat at the table when it comes to determining the best approach to AI?

Olivia: We need both the public and private sectors. We need to have the public interest in mind, but people in the private sector know the technology best. The balance between public and private is incredibly important. Public brings in social good, while private brings in expertise.

I would also love to see more ethicists at the table. Another challenge we're facing right now when it comes to AI is how we measure success. The tools we're using aren't necessarily in tune with what we as a culture, as a society, as global citizens want them to be. And if there isn't someone at the table, like an ethicist, focused on the long-term impact and using that as a success marker, we will start to see an imbalance.

A lot of these questions exist in the grey, and they're difficult to deal with. We need people with different mindsets around the table.

Does AI accelerate the need for the legal system to change?

Olivia: In my opinion, yes. We're at a point in time where we need to adapt and grow. When the EU AI Act was first drafted, there was no mention of generative AI. But then ChatGPT was released, which stalled the development of the EU AI Act because legislators had to figure out how to account for a new kind of AI model.

While generative AI did exist before ChatGPT, it wasn't a widely used type of architecture. The fact that AI development is outpacing AI regulation is a strong indicator that we need to rethink how some legal systems work. We either need to make regulations adaptable so they can grow alongside the development of AI, or shorten feedback loops so regulation can keep pace with technological development.

Technology moving faster than governments is a longstanding problem. You call this a democratic deficit. Can you explain?

Olivia: One of the challenges companies face is a lack of feedback loops in their development processes, such as talking to users and experts in the field.

For example, a healthcare start-up may develop software for nurse practitioners, but the practice of talking to nurse practitioners to understand their needs is missing. Just because a company designs the software doesn't mean it knows the best solutions for a certain profession.

Companies must start talking to field experts and putting feedback loops and democratic input in place. That input is what should shape what a software platform or AI system looks like.

Does there need to be global coordination around AI?

Olivia: I think one of the unique challenges of AI is that these systems can reach a global scale. But the way that we interact with technology is heavily influenced by different cultures.

Although we need global communication to work on the main risks of AI, when it comes to specific risks or specific applications, there still needs to be cultural sensitivity. This makes global collaboration difficult. Take China and the United States. Each country has a very different approach to AI. How do we account for that if we're supposed to have global cooperation?

Are there countries ahead of the curve on global coordination?

Olivia: We're just now seeing countries catching up in terms of understanding the need for a more active role in AI development.

So far, AI has been driven by the private sector. Now, we're seeing executive orders coming out of the United States, and other powerhouses are starting to play a bigger role. But there are differences in approach. For example, the United Kingdom has an innovation-based approach, while the EU has a risk-based approach.

Let's move on to the idea of accountability. Are we capable of holding AI accountable?

Olivia: That's still a big question. Speaking as an ethicist, I'd say there will always be blame to assign, and it will always need to be assigned to a person, not a system.

But at the end of the day, we must look to our legal systems. We can't prosecute an AI system; we must prosecute a person or a company. Even though it may feel like we can hide behind these systems, we can't. There will always be legal ramifications.

Should users of AI-generated decisions or owners of AI systems be held accountable?

Olivia: It can be incredibly difficult to pinpoint if harm is occurring. As a user, you can say, "I think something feels off," or "I don't know if I should be experiencing this technology in a different way."

Research in the space of responsible AI and AI ethics allows us to preemptively and accurately catch a breakdown in the system. We're moving away from a time when you could skirt accountability by saying, "We didn't know."

In hindsight, could social media have been managed better from an ethical perspective?

Olivia: There could have been tighter feedback loops in terms of ethics, where we may have been able to catch any negative consequences and change the core structure of how we approach social media. Now we've been using social media for so long that it would be difficult to go back and make changes.

For example, when Facebook first launched the Like button, it didn't necessarily have the right controls set up to understand the effects and then feed that back into product and feature design.

The Like button is now ingrained in how we use social media. Instagram even launched a feature that hides the number of likes a post gets to combat negative side effects, but it has resulted in a drop in engagement. We know the Like button causes these adverse effects, but we can't leave it behind.

If there had been an ethics feedback loop in place earlier on, we would've been able to adapt. We also must have the humility to say, "We broke something we weren't supposed to, but we're going to try and fix it."

Is that too great a responsibility to expect from a group of entrepreneurs?

Olivia: Life is a balance of both good and bad, and we're never going to get past that. This is where I will differ from a lot of ethicists: I understand I will not be able to reach every company or individual with a different mindset.

In my work, I'm finding a growing sentiment for something different, for having work be more value-driven, and for technology to serve some type of greater purpose beyond just what the marketing team is putting out.

We're always going to have bad actors. But I believe there is a shift happening, especially in Silicon Valley, toward "Why don't we change the world for the better instead of just changing it for change's sake?"

Overall, are you optimistic?

Olivia: I have been lovingly nicknamed "the optimistic ethicist" because of the work that I've done and the change that I've seen. I'm working with people who want to achieve success. And you can hold ethics and success in the same hand; they don't need to be opposing. I've seen that in practice, and I've seen the results.

When you have a value-driven approach to business, it can result in stronger technology, products, companies, and people behind the scenes. I'm optimistic because I've seen the change that is already happening. And the more success stories we have, the more momentum is going to build.

As investors, we think about risk in many ways. Does responsible AI help reduce technological risk?

Olivia: Recently, the Massachusetts Institute of Technology (MIT) and Boston Consulting Group (BCG) released a report that put a beautiful number on the work I do. It found that companies that engage in responsible AI practices reduce their AI failure rate by 28%, which is huge, given that AI failure rates usually run between 86% and 93%.

Responsible AI is good business practice and helps de-risk development processes. Combined with an ethics layer, companies have the potential to establish themselves as leaders in their industries. And it becomes riskier to not practice responsible AI than it is to invest in these practices.

This article first appeared on GuruFocus.