BoardPro Podcasts
A series of podcasts designed to demystify the world of business governance, bringing practical advice and tips to help organizations improve their operational effectiveness.
Webinar: Making the case - Persuading your board to tackle AI
Most directors know AI matters. The challenge? Convincing your Chair and peers to prioritise it now, not “someday.” That’s what this session delivers.
Every director knows AI is coming—but few know how to push it onto the agenda. Boards dismiss it as premature, irrelevant, or a management issue. This webinar equips you with a persuasion toolkit: five compelling arguments, tested rebuttals to objections, and a practical next step. In half an hour, you’ll leave ready to frame AI as a governance necessity, not a tech distraction—ensuring your board takes action before competitors, regulators, or crises force it.
This isn’t about turning directors into AI experts—it’s about equipping them to persuade. With five tested arguments and objection handling, participants gain a clear pathway to move hesitant boards into meaningful AI discussions.
So, hi everybody, welcome to our webinar today, titled Making the Case: Persuading Your Board to Tackle AI. Today we have Helen Van Otton and Alexei O'Brien with us as your hosts. My name is Sean McDonald, and I shall be your moderator in the background for the next 40-odd minutes. Firstly, though, thank you for attending today. We always appreciate the effort you make to be here for our live webinar events. During the session, if you have any questions, please try and use the Q&A button on the toolbar rather than the chat; the Q&A just enables us to keep track of everything as the questions come through thick and fast. And finally, if you stay through till the end, which of course we hope you will do, and as is customary for our webinars, we have a special treat for you. If you answer our really short one-minute survey at the end of the webinar, you'll go in the draw to win one of our beautiful gift hampers worth over $400. Now, for those not too familiar with BoardPro, we are a board software provider, sometimes called a board portal, and we serve just over 35,000 users across about 32 to 33 different countries across the globe. We enable organizations to prepare for and run their board meetings and committee meetings more efficiently and effectively, in less time, and to deliver more impact and value for the organization, all with the great software from BoardPro. And as much as we are a board software provider, part of our wider mission here at BoardPro is to make the fundamentals of governance free and easy to implement for all organizations, but especially those with resource constraints. One of the many ways we do this is by providing free access to hundreds of governance templates, guides, and resources, which you'll find, funnily enough, in the resources section of our website.
These webinars that we host every Thursday are also a great way of accessing key governance knowledge without the time commitment and costs associated with in-person events. So sit back and relax; everything will be provided to you. The slide content and the video will be sent to you tomorrow. So just sit back, relax, and ask as many questions as you would like. Let me have Helen and Alexei introduce themselves now, starting with you first, Helen.
SPEAKER_02: Hi everyone, I'm Helen Van Otton. I'm an experienced board director and chair, and I also run my own training and consulting business. Our main focus is empowering boards and execs to lead with confidence in the AI era, offering tailored training and actionable insights that help you take the complex technology that is AI and turn it into a strategic advantage for you and your company. Alexei.
SPEAKER_01: Thanks, Helen, and thanks so much, Sean, for having me as well. I'm Alexei O'Brien and I'm the director of LeadershipAcademy.ai. I work with boards, executive teams, and businesses across Australia and New Zealand, and really look to build that AI fluency, moving businesses from curiosity about AI to competent, governed use. My background is commercial, so I've spent the bulk of my executive career in retail and financial executive roles at Lululemon and Rip Curl. So I come at this from the perspective of someone who has sat in the rooms where these decisions are getting made, not just from a tech background. More recently, I've worked with businesses like Pillow Talk, NTANE, Harris Farm Markets, and St Kilda on building those AI-enabled cultures and really getting real work done with AI. I'm also a current board director and a graduate of the AICD, and I'm looking forward to today's chat with you, Helen.
SPEAKER_00: Fantastic. All right, let's progress on to our first slide of the afternoon. Over to you, Helen.
SPEAKER_02: I think that is possibly just the headline; that's what the topic is. If we can jump on to the next one. Yep, perfect, thank you. So, let's start off with what's actually changed. I was listening to a podcast this morning that was talking about how, in everything you listen to right now, AI is completely embedded; it's everywhere in the world. And I thought, well, actually, that's increasingly true. What has actually changed is that AI has arrived, and even if it hasn't necessarily arrived in your organization, it is absolutely embedded, whether that's in the software your teams are using, in your supplier systems, or, if you have got it within your organization, in the products that you're selling. Your HR platform is probably using it to screen candidates, your accounting software is probably using it to flag anomalies, your customer services team might be using it to draft their responses. And as a board, you may or may not actually have visibility of that at all. Then there's the next layer of this, which is agentic AI, which some of you may or may not be familiar with, and this is a shift that a lot of boards really haven't registered yet. Agentic AI is actually alive in your systems. It isn't just answering questions; it's moving away from a chat conversation like you'd have in ChatGPT or Copilot, and is actually planning, executing, and taking actions autonomously, whether that's reading your email, booking your meetings, sending emails, or running processes behind the scenes, often with minimal human intervention. If you're not into agentic AI, it really does sound like something out of a science fiction movie, but it is actually being deployed. I know Alexei and I are both using it within our own businesses already, and a lot of the organizations that we're working with are already starting to use it.
For those of you in Australia, or even New Zealand, who potentially went to the AICD summit last week, there was a whole load of conversation about AI right through the summit. I'm going to read a quote from Clara Shih, who's a board director at HubSpot, which I thought was perfect for this week's webinar. Her comment was: "We've never seen this pace of change, not through the internet, not through mobile, not through social. Two things make this wave categorically different. AI passes the Turing test," and for those of you who don't know what the Turing test is, that is where a computer can pass as being human, and it passed it, I think, about April last year, "it embodies human values and decision making in the way it performs, and AI self-improves. A database that fails needs a human to fix it. An AI model can detect its own failure and continuously improve. Those two properties together place this in an entirely different category." So that's quite confronting, right? The speed with which the change is happening, how quickly it's being embedded within our organizations, and the fact that it's self-improving. So even if your organization hasn't got a formal AI program, AI is already starting to shape the way your risk profile is evolving, your competitive position, and your strategic options. And boards right now need that visibility, they need that governance, and they need a credible plan. That's what Alexei and I are going to walk you through today: a narrative that you can use at your next board meeting, and some of the arguments that are going to cut through. We both sit on boards ourselves; we know that there are skeptical directors on them, and there are people who are going, oh, this is just going to pass by. Alexei will go through the arguments that you're going to confront.
So by the end of today, we'll have given you arguments to cut through that, a roadmap, and a little bit of a checklist for your oversight. It's going to be really practical, and starting Monday, or even tomorrow, you should be able to use it. So, if you wouldn't mind popping on to the next slide, please, Sean. We thought we'd start off with some numbers; as directors, we like numbers, don't we? And actually, some of these are quite uncomfortable and confronting. That first one: 88% of enterprises are now using AI in at least one function. That's a McKinsey stat from November last year, and it was only 78% the year before, so it's already gone up 10 percentage points over the course of a year. And as I said earlier, even if you don't think your organization is actually using AI, your suppliers, your partners, and your competitors are. That exposure is really systemic. It's sitting across the whole system; it's not just within your organization, it's everything that you are working with. Then, jumping across to that yellow number on the right-hand side: two-thirds of boards admit that they have very limited or no knowledge or experience of AI, and that's from a Deloitte study last year. And actually yesterday, a company called Protiviti, which I haven't come across before, published a boardroom study showing that only 26% of boards are discussing AI at every meeting. So if you're here going, I need to persuade my board to talk about AI, and I'm really behind the eight ball, it's fantastic that you're here and trying to get your board to talk about it. But as of this report, literally out yesterday, only about one in four boards are discussing this at every meeting.
At the same time, you've got over 40% of CEOs saying that their company isn't going to survive the next decade without AI transformation, and one in three boards don't have it on their agenda at all. So, you know, you're in the right place. Another little number that didn't make it onto the slide, but which I thought was quite interesting, is that less than 10% of board directors globally have any sort of technology background. So we're going into this pace of change, this modern technology, this new era, and a lot of board directors have no background in technology or AI, which is leaving us quite vulnerable. And then there's that little line on the bottom there, for any of your directors who are not aware of this: as of the end of last year, a number of the D&O insurers are actually starting to introduce exclusions around AI. Not the broad cyber exclusions that we've seen starting to evolve before, but really specific AI exclusions. Berkley, one of the big US specialty insurers, has introduced what they're calling absolute AI exclusions across their whole D&O book. For a board director that can be quite confronting, because we rely on that D&O cover, right, whether we're being sued or having to work through anything. Having those exclusions coming into the insurance space means that if your board can't demonstrate active oversight of AI, and you can't show that you've got the right controls and documented approaches, you might actually find yourself in a coverage dispute with your insurance company. So that's your highlight picture, the state of the nation of where we are now, and why it's so important that AI is moving from an operational topic that your technology team or operations team might be talking about to something that needs board-level governance.
And I guess the insurance companies are probably telling us that a little faster than the regulators are, because certainly in Australia and New Zealand the regulators have been a little slower in this space. But I'm now going to hand over to Alexei to start talking through the "so what".
SPEAKER_01: Just a couple of other stats to add, Helen, on this pace of change. I'm in it every day, and I certainly feel the pace of change; it seems to have accelerated this year. In the PwC survey out earlier this year, 42% of CEOs cited transforming fast enough to keep pace with AI as their number one concern. And on shadow AI breaches, that's certainly part of what keeps me up at night. Through last year, IBM found that one in five organizations have already had a breach that was caused by shadow AI. So that's something for us to be aware of. The exposure is not theoretical, and it's now starting to be quantified as well.
SPEAKER_02: And just jumping in there, Alexei, for those who aren't aware what shadow AI is: it's kind of the new shadow IT. It's where people are using AI in the background without you having any sort of policy around it, and therefore they're using it and potentially leaking company information. In that IBM report last year, I think it was an extra US$670,000 in breach costs if shadow AI was involved in the breach. So there are some really big numbers associated with that as well.
SPEAKER_01: Absolutely.
SPEAKER_00: Helen, a couple of people are asking, what is D&O?
SPEAKER_02: Directors and officers cover. It's the insurance that boards and executives have in place, so that if something goes wrong, it's the insurance policy that protects them; it's their personal insurance as a director and officer of the company.
SPEAKER_00: Thank you. And we have a question in from Gordon, which I think we're going to cover a little later, but I'll ask it anyway. For boards that lack deep technical expertise, the conversation around AI can often feel really overwhelming. What strategies do you recommend for presenting these complex issues in a way that focuses on governance, oversight, and strategy, rather than getting bogged down in the technical detail?
SPEAKER_01: I think we will cover that as we go through today, so let's pause that until the end and make sure we've covered it off for you. And if we haven't covered it deeply enough, we'll come back to it.
SPEAKER_00: Great.
SPEAKER_01: Yeah. Sean, could you move on to the next slide? So before we get into strategy, policy, or the governance architecture, I think it's worth pausing to acknowledge what many of us on boards are actually wrestling with right now, and the conversations that are actually being had. Helen and I have both sat in these conversations, and similar statements come up again and again: from "AI is really premature for us" to "we don't really use it", to "we lack the expertise", or certainly what we're hearing right now, "it's moving way too fast", and even "management can deal with it". And they're not reckless positions. They often reflect genuine caution, a desire to avoid hype cycles, and a very reasonable concern about committing to a technology that keeps shifting. But they can also mask a deeper governance problem. In many organizations, AI is already present; Helen touched on this a little earlier. Not necessarily through a formal program, but through software features, vendor platforms, copilots, outsourced services, and also through that shadow AI, the individual experimentation that might be happening across the business. That exposure therefore often arrives before our conversation does. So a board can sincerely believe it's not engaging with AI while, at the same moment, someone in the finance team might be using AI to draft some reports, or a supplier has it embedded into a product you're actually paying for as a service, or a product team is testing a copilot without formal sign-off. So the exposure is absolutely real; it's the visibility that's missing. Research tells us about 48% of employees globally admit to using AI in ways that contravene company policy, which is that shadow AI use, and 57% of them are hiding it from their employers. So when we say "we don't use AI", the data actually says something else.
And here's why that visibility gap isn't theoretical. Again, at the AICD summit this month, Clara Shih described a real example from a US services company that had deployed an agentic AI underwriting system. Researchers tested the same loan application a couple of times, every financial parameter identical, but just changed the applicant's name from John to Malcolm. And the agentic system made a different underwriting recommendation. The AI model had been trained on internet data and had inherited its biases. The organization didn't know it had a problem until someone looked. That's the visibility gap in practice. So the real issue isn't whether AI exists in the organization; it's whether there's enough visibility, enough clarity of accountability, and enough confidence in our oversight arrangements. And that's really where we as boards need to engage. We're not expected to be technical experts; it can be really overwhelming. But AI now touches the things that boards are specifically responsible for: risk, controls, assurance, strategic capability, customer outcomes, operational resilience, and reputation. So the shift we're looking to make here is to move the conversation from "is this an IT topic, and what is our technical knowledge around it?" to "what are our oversight obligations here?" All right, the next slide, please. Thanks, Sean. So when these objections are raised, some of the most useful responses are actually not to debate them, certainly not at a surface level, but to reframe them and ask the next good governance question. The first one is "AI is premature", and it's getting more mature by the day, that's for sure. A more useful reframe is that AI is already present in our tools, our vendors, and our workflows. The risk isn't theoretical adoption at some point in the future.
The risk is actually unmanaged or weakly governed use that's already occurring. So the board question becomes: where is AI being used today, including through third parties and our vendors, and what controls and reporting are in place? Then there's "we don't use AI", which we've just discussed, and which is rarely fully true. AI will appear in platform features, procurement decisions, and software updates, whether or not we've labelled it as official AI adoption. So the better question to ask is: what is our inventory of AI-enabled tools, vendor claims, and use cases, and who owns that picture? Microsoft, as an example, confirmed just in February, a couple of weeks ago, that there was a bug in Copilot reading and summarizing emails that were actually confidential, bypassing the DLP policies that organizations had specifically set up to prevent exactly that. And the European Parliament blocked built-in AI features on staff devices. So this is the kind of thing we need to be aware of; these are not hypothetical risks anymore. Then there's "well, we lack the expertise", which I think goes back to the question you asked just before, Sean. Technically, that may be true, but at a board level we don't need technical mastery. We just need enough governance fluency to ask sound questions, understand the nature of the risk, and assess whether management's response is proportionate. So the question becomes: who's actually accountable for AI risk, and what evidence will the board receive regularly? At the AICD summit this month, the NAB chair Phil Chronican put this exactly right: boards need technology literacy, they don't need to be coders. People who understand how technology interacts with business and society at a strategic level is where we need to be coming from as directors.
I'm certainly not a technical expert, but I do understand business, and the risks as well as the opportunities that AI presents for us. It's the same way that we expect directors to have financial literacy without needing to be accountants. Then there is "AI is moving too fast". I absolutely concur; we're in this every day, and certainly over the last six weeks it seems to be accelerating faster than ever. However, that's not a reason for us to stand back and wait. It's actually a reason we've got to be clearer on our risk appetite, our escalation triggers, and our decision rights. So the board has got to be asking questions like: what would cause us to pause, escalate, adapt, or seek further assistance? How do we make sure that we're getting trained and that we understand what's happening? How do we keep our finger on the pulse of these changes? And finally, "management can deal with it". Well, absolutely they execute, but oversight, especially with AI, can't be delegated away. Under the Corporations Act and the New Zealand Companies Act, that duty of care sits with directors personally. It can't be delegated to technology, or to management. So our boards have got to set the expectation for assurance, reporting, and accountability, and the right question becomes: what cadence of reporting, what metrics, and what assurance mechanisms will tell us whether the organization is genuinely in control? The discipline here is simple, but it's really important. We've got to move the conversation from opinion to oversight, from reaction to inquiry, and from general concern to accountable governance. And the quality of the board's AI oversight will often depend less on having all the answers than on asking the right questions early enough.
SPEAKER_01: Back to you, Helen.
SPEAKER_02: Beautiful, thanks, Alexei. I think what Alexei's just run through is so helpful, because those are all the objections that she and I have faced, and I'm sure all of you have faced, when you try to raise AI as a conversation that needs to happen at your boardroom table. And rather than arguing, or even constructively pushing back, that reframe is a really helpful way of saying: I'm just going to build on your question and ask a slightly different one, which really heightens the importance of getting this in front of the board. And if you can get your chair on board, that's generally always a really big win. So you're wanting to get your board alignment, great, but what do you actually do? What we wanted to do is give you a 30, 60, 90 day plan; boards always like one of those, and any board can adopt it, no matter where you're sitting on that maturity curve. So the first 30 days: arrange an expert briefing. I'm literally doing one with a board next week that is 30 minutes; like Alexei, I fit it to size, whether that's 30, 60, or 90 minutes, or a half day, whatever it is. It's not a vendor pitch. It's an independent AI governance specialist who can come in and talk to your board about the risks, the opportunities, and what your oversight responsibilities are. That suddenly means you go from, as a board, "we don't have the expertise, we don't know what we're talking about, we're not IT specialists", to actually having a baseline. And then get AI governance onto your regular agenda, even if it's just on a quarterly basis, so that it becomes a standing governance item. Then you're not one of those boards we talked about at the beginning who aren't talking about it at all, or aren't talking about it regularly. It becomes a standard item.
And you are talking about the risks, the emerging regulations, and the strategic positioning. The second 30 days is about getting management to help you draft a governance framework. This needs to cover what your AI policy is; an AI policy is literally the 101, so get one in place. But also your ethics guidelines. Alexei and I can wax lyrical about ethical AI, explainable AI, transparency, and all of that, and we're very happy to, but that's not what today's agenda is about. What are your ethics guidelines? What are your risk protocols? What have you got in place to monitor, particularly around that shadow AI space, but potentially also what your ROI is once you start to put AI in and how it's working for you? It doesn't need to be perfect; it just needs to be something you can test and improve, and you can also demonstrate back to your insurers that the board has commissioned and reviewed this governance framework. I think we publish it on the last page of the presentation, but both the AICD and the IoD have published directors' guides to AI governance, which are both really useful references. To be honest, I think the AICD one is a bit more up to date than the New Zealand one, so I personally would probably start there. The other thing you can try is putting in a use case. Have one test-and-learn, in a very controlled environment, of something in the AI space, with really clear guardrails within that policy and your board reporting built in. That gives you the opportunity to see how you can innovate as an organization and oversee it. And again, how to deploy AI is another whole separate conversation.
But someone who is an enthusiast around AI has probably already got 15 ideas of things you could potentially deploy within the organization. So that final phase is the controlled pilot. Once you've done those three things, suddenly you'll find that, A, it's not as big and scary as you thought it was, you've got a really clear plan of action, and, as a board, you feel like you're having the right conversations about the right things. This is definitely not about your board becoming AI experts. It's asking your board to do what boards really should be doing well: setting the expectations with your management team, commissioning the right work, making sure you've got the evidence of the controls, and then holding management accountable. Alexei, anything to add?
SPEAKER_01: Yeah, I was just going to say, I think one of the important things you can do is get in there and actually use the tools, because you will learn the capabilities and have moments where you think, wow, if we had access to this, or if people were putting information in here, where would we be exposed? With that familiarity, you will start to connect the dots about where some of those risks will be, and you'll be better informed to ask those better questions. I think also, when you get to that pilot in phase three, certainly at a board level, the tools you choose matter. There's a difference between a co-working space, where anyone with a key card can wander in, and a secure building with sign-in, clearance levels, and an access log for every room. General-purpose AI is like that co-working space, whereas purpose-built governance AI is a secure building. So the architectural distinction also matters enormously when we're looking at content at a board level. Yeah.
SPEAKER_02: And can I just build on that comment, Alexei? The AICD published a paper at the very end of last year saying, basically, that the gap between directors and companies is widening and widening, because people in the companies are using AI while directors are trying to govern something they have no understanding of. So even if you just get a free version, and I never advocate this when I'm training, but even if it's just for your own personal purposes and you want to write menus or storybooks for your kids or whatever it is, just start using the tools and understand the power and capability. I think you'll very quickly go, right, I actually need a paid version, and I now need to start using this enthusiastically, because what you can do with AI is mind-blowing. And once you realize that as a director, you realize the opportunity for your organization, but then you immediately come back to: we need to do this properly. We need to make sure we're governing it really, really well, because seeing that power going wrong is also quite a terrifying thing as a director.
SPEAKER_01: Yeah, absolutely. So, the next slide.
SPEAKER_00: There are just a couple of questions that have come in, if you don't mind.
SPEAKER_01: Yeah.
SPEAKER_00: Christine's asked, how do you actually determine what shadow AI might be occurring in your workplace? That's a tough one.
SPEAKER_01: It is a tough one, yeah. Helen, do you want to take this one?
SPEAKER_02: Sure. Part of it is actually having the policy in place, and then having really clear conversations between managers and your people, saying, make sure you are operating within the policy. I literally had a board discussion on this, on another board I sit on, on Tuesday: how are we tracking shadow AI? And it's very hard to. You can obviously lock your systems down, but there's nothing to stop someone picking up their phone and taking a screenshot of something, because it's easier to get AI to analyze it than to do it themselves. So it's very hard to completely lock it down. But if you've got the policy, you've trained the policy, and you're having regular conversations with your people, that's a really good start, rather than just admiring the problem, which is, I think, unfortunately, what a lot of boards are doing at the moment.
SPEAKER_01I think there's also, at the moment, the reality that the AI cybersecurity vendors are struggling to keep up and create tools that will actually support organizations and protect them. I know a number of organizations that I work with have really locked down: when people are on their Wi-Fi, there's a firewall around it, and they actually can't access tools that haven't been approved for the organization. And I think it also comes down to training, not just at a board level, not just management, but training the entire organization on the risks: if they upload information into a different tool, if they expose the organization, or if we connect our ERP system to some of these tools and connectors, or even use Claude for co-work and some of these new agentic tools, the risks magnify. So it's really being able to educate your teams on those risks while you've got that policy in place, and empowering the IT team. What I've seen in organizations at the moment is that the business is the accelerator and the IT team are being seen as the brake a little bit. But their job is really to make sure that we're supporting the organization's safe use of AI as well.
SPEAKER_00One more question before we carry on. "I know of a board that's divided on how to respond to AI. One side wants to get a data transformation expert onto the board; the other side says you can't get one expert to cover all aspects in depth, so get different external expertise for different aspects, or get a data transformation expert at the finance, risk, and audit committee level." Is there any right answer?
SPEAKER_01I think we've just got to keep coming back to: who are the people that have enough knowledge to understand the application happening inside the business and how it can be used, so they can ask the right questions and make sure we're surfacing the risk cases and applying them to our risk appetite. It's so new that there aren't going to be experts in every single field, but it's those general-purpose business application people, like Helen and myself, who have dived deep into it. We're not technical experts; we really understand the application of it from a business perspective. So you might need a specialist AI technologist on your board, but having someone come in on a regular basis, because it's changing so fast, makes sure you're keeping up with what's happening, what's changing, and how that might expose us to risk. Make sure you're keeping educated, just as you would with our cyber or our financial policies. So it's just keeping educated. Helen, do you have anything else to add to that?
SPEAKER_02The one comment I was going to make is that, as part of that Deloitte study last year, apparently 40% of boards are now looking at trying to find someone to go onto their board who has some knowledge of AI, which will be a challenge because it's such a new field. ChatGPT is just over three years old. Of course, there will be people who were in the early stages of AI, but most of them are in Silicon Valley, so there's not going to be a huge number of such directors around. And I think it goes back to that comment Alexei made earlier, right? We don't all have to be accountants to have a really good understanding of the financials of the organization. But if you haven't got an accountant on your board, it's probably a good idea to make sure you've got an independent accountant you're touching base with regularly, who can come in and talk to you about the latest IFRS standards or whatever it is. And that's literally what Alexei and I do: support boards both in their initial training and in what's going on in the AI world, what you need to be thinking about, whether it's a deep dive on risk or governance frameworks or whatever it is. So there are people around who can support you as you go through that AI journey. I think that's your slide, Alexei.
SPEAKER_01Yep, absolutely. So for us as directors, readiness doesn't begin with having all of the answers; it begins with asking better questions. So, a few questions here to run through. The first is around visibility: do we have a comprehensive inventory of how AI is being used, certainly from an approved perspective across the organization, as well as where we may not be seeing it? Unseen use is absolutely where the governance weaknesses sit: vendor tools, embedded software, pilot activity that hasn't been formally sanctioned. It means really being in communication with our teams about how they're doing things and how they're using it. If the board can't see it, we can't govern it. The second question is around strategic discipline: has management developed a clear AI strategy with a defined risk appetite, performance metrics, and accountability structures? Not enthusiasm, not experimentation, but a real, coherent view of where AI creates value, where it creates risk, and how those trade-offs are being governed. What I'm seeing in practice right now, because of the opportunity, is a lot of experimentation and a lot of chasing-the-shiny-ball syndrome. And I think the opportunity, certainly at a management and a board level, is to ask: what is our business strategy? AI strategy should be serving our business strategy; AI on its own is not a strategy. So we've got to keep reminding and challenging management that we've got to have an AI strategy, but it's got to sit inside our risk appetite, and we've got to be asking how it is actually serving our overarching business strategy. The third question is around formal governance. Do we have a board-approved AI governance policy that addresses data, which is a huge thing, including clean data and privacy issues, as well as ethics and responsible AI use, and a framework that helps us establish principles, thresholds, responsibilities, and escalation pathways? The fourth question, which we've touched on already, is capability. Are we investing in AI literacy for directors as well as the broader workforce? It's great that they understand and can experiment, but we've also got to have them understand the risks against our overall risk appetite as a business. Can we demonstrate, especially for our insurers going forward, that due diligence? Governance weakness, certainly in this day and age, is not just about bad intent. It's about people making decisions without enough understanding of the tools, the data implications, the privacy implications, or the risk boundaries. And that can include the board itself. The fifth question is about competitiveness: how does our AI maturity compare with our peers? That goes back to the stat right at the beginning, where Helen said 42% of CEOs don't think they'll exist in 10 years' time. So we've really got to get into: what does that 90-day pilot look like for a high-value opportunity aligned with our strategy? And perhaps the single most useful strategic question a board can ask is: if a well-resourced competitor came at our business with a data center full of AI capability, what would they do to us? Boards that only ask "what can we automate?" are not having the complete conversation. The real value comes from reimagining our business models and our business strategy entirely. Taken together, these five questions form a useful readiness check. If we can answer them clearly, it's likely we're moving towards active oversight.
If we can't, it doesn't mean we're failing, but it does point to a governance gap that needs some attention. In most cases, the most important shift is not pretending certainty, but being explicit about what the board still needs to see, strengthen, and test. Helen, any other additions there?
SPEAKER_02No, we'll just jump onto the last slide, please, Sean. So, we've decided to be quite confronting on our last slide with what we've put on there. To start off with, though, I just wanted to say to everyone who's dialed in for the call: the fact that you're here, that you registered and showed up, and more importantly that you stayed, tells me that you already know this matters, right? You don't need convincing; you're not the problem. You're probably the person in your leadership team or on your board who's actually trying to get the rest of the team across the line. And we know that's really hard, because it's not just about selling an idea; it's about getting people to feel really uncomfortable about something they don't understand. And to that question earlier, people don't necessarily understand this, so you're asking them to act on a risk they can't really see, and sometimes there's quite a lot of skepticism in those rooms. So let me give you something to take back to your boards, which is what inaction costs. While your board is saying, "oh, we're just going to think about it a little bit more," your staff are already using AI with no policy, no guardrails, no visibility. Your vendors have probably already embedded it in the services you're paying for. Your D&O insurer is probably quietly rewriting what they will and won't cover. And somewhere in the market, one of your competitors, or a completely new entrant, is rebuilding your whole value proposition with probably a fraction of your headcount. I'm just going to quote from the AICD summit super quickly. The ASIC chair, Joe Longo, commented that every board must have an explicit conversation about AI, set risk appetite, and establish policies, rather than hoping the issue resolves itself.
And those of you who know of Anthropic, which is Claude's parent company: their CEO's observation a couple of weeks ago was that humanity is being handed the most unimaginable power, and our social and institutional systems might not yet be mature enough to wield it responsibly. So you look at that business model disruption, you look at that change, you look at this amazing power we're being given: that risk isn't just coming, it's already here. It's not a complexity that needs the perfect solution. We need to make a decision, and that decision is that your board is going to engage, you'll get the visibility, set the expectations, and govern it with the same seriousness you would apply to financial, risk, or legal compliance. And over the next 90 days, looking at that chart we had earlier, you could actually move your board really, really far forward. So here's your choice, and this is directed at everyone who's here today. You can go back to your board with good intentions and put this on as another agenda item that gets deferred, or you can go back with something that really lands: a board education session. It's not a vendor pitch, it's not a webinar recording or a policy document. It's a facilitated conversation designed for directors who need to get fluent and get moving. And that's exactly what Alexei and I do, and there are other people out there who do it as well: board AI skills and governance training, delivered locally and designed for your room. We're both experienced directors, we both speak board language, we've both done senior exec roles in big organizations, and we've both worked multiple times with chairs who were quite skeptical and have left with a framework and something to move forward with. That's going to be the quickest way to move your board from "we'll get to it" to "we're governing it."
So in the next couple of days, all our details will be on the LinkedIn page in a minute, reach out to one of us, and 12 months from now your board is going to be ahead of this, not scrambling to catch up. And if you don't, that's fine, but you'll probably still be having the same conversation 12 months from now, when the risk will be quite a lot higher, and you'll have lost another 12 months of credibility on this defining issue of the decade. So thank you. Sean, back to you.
SPEAKER_00Thanks, Helen. Just to finish off, we have a fantastic promotion on this month. It's our March promotion, which we hold once every year: 50% off any of the BoardPro subscription plans. So if you're looking for board management or board portal software for your organization, I highly recommend taking a look at BoardPro on our website, boardpro.com. We also host weekly webinars every Thursday, with a range of great topics coming up over the next four to five weeks, so hit our webinar page to learn about the upcoming topics. Tomorrow, tomorrow being Friday, you'll receive an email from me, which will include the recording of today's session, the transcript, and the presentation slides with all of the resources that Helen and Alexei talked about. And just as you leave the webinar, everybody, don't forget to complete our really short one-minute survey and go in the draw for our beautiful gift hamper; I'll announce the winner tomorrow as well. So thank you again for attending, everybody. I hope you enjoyed the session; I know I did, I always learn something with Helen and Alexei. I look forward to seeing you at our next webinar. Have a great day.