Blog

Reflections on the AI Seoul Summit

30 May 2024 | Eleanor Lightbody

Last week, Seoul hosted a two-day summit to discuss global collaboration on AI safety, innovation and inclusion. So, what’s the latest when it comes to the world’s approach to AI? We sat down with Luminance’s CEO, Eleanor Lightbody, to hear her perspective on what fast-growing scale-ups like Luminance want to see, from stricter regulations on AI washing to a more agile, pro-innovation approach to policymaking. Here’s what she had to say…

What progress has been made since the UK summit six months ago?

Eleanor: There’s no doubt AI is still an extremely hot topic, and looking back to even six months ago when the UK summit took place, so much has changed. Conversations back then were very focused on what AI actually is, with lots of scaremongering and uncertainty surrounding it. I’m pleased to see the perception has since evolved, as people realise the tangible benefits and day-to-day realities of AI. I’ve also noticed a shift in the type of coverage we’re seeing, with a transition from theory to practice that now needs to be reflected in policymakers’ approach to regulating these technologies.

Why do you think this summit took place in South Korea?

Eleanor: We’ve had one in Europe and now one in Asia, and I think this variety goes to show that this is a global topic which requires global attention and collaboration. I recently visited one of our customers, LG Chem, in South Korea and saw for myself the incredible tech focus and literacy over there. The country has recently invested billions of dollars in its tech industry, and just last month announced a partnership with the UK, so it was a very fitting candidate to host this summit.

What would you like to see in terms of regulation?

Eleanor: Ultimately, it’s a balancing act of ensuring privacy and safety whilst promoting growth and innovation. But I think now we’ve got to the point where we need to see more than just intent – I’m talking concrete guidelines and tangible frameworks with timelines and clear actions that companies like my own can take in response.

One thing I’d like to see is more done to prevent AI washing. We’re seeing more and more providers slapping an AI label onto their products as a marketing tagline, overstating their use of it to create a halo effect. Tangible regulation preventing this kind of misinformation would help to educate end-users on what they’re buying. Another aspect I feel passionately about is taking a verticalized approach to legislation; there are so many different types of AI and ways it’s being used that a one-size-fits-all approach just isn’t possible or effective. For example, regulating autonomous vehicles is a completely different ballgame from AI in the legal industry, and the respective guidelines need to reflect this distinction.

What do you think of the commitments made at this summit?

Eleanor: The commitments from the 16 AI companies and 27 countries to address AI safety are a promising start, but I want to see a more diverse range of voices involved in the discussion. Given the complexity of the topic, we need people who truly understand the technology to be involved in its regulation. Whilst the Big Tech names offer a valuable perspective in this sense, they also all have their own agendas. Regulation shaped mainly by the largest players will be easiest for them to absorb, while smaller companies may struggle to adjust quickly, putting them at a disadvantage from the outset. That’s why incorporating AI start-ups and scale-ups as well is vital, in order to hear from all sides, and then weigh up the advice against who it’s coming from.