The Nonprofit Exchange Podcast


AI Risk and the AI Trust Council: The Geeks are Getting Out of Hand

Rather than AI this is our time to show our HI, or Human Intelligence. We are in a position to help heal the world and push forward toward an amazing future. The responsibility for how our collective future goes depends on each one of us acting responsibly during this time of massive change. It’s the people’s time to help shape the pro-human future that we can enjoy. The AI Trust Council welcomes a pro-human future led by humans.

Christopher Wright

Christopher Wright, CEO and founder, established the first-ever AI Trust Council in the United States, recruiting emergency services personnel, firefighters, commercial pilots, air traffic controllers, and humanitarians to help steer AI in a pro-human direction. He has 25 years of military experience as an Apache attack helicopter pilot, contractor, and former enlisted soldier, including 10 years as a Longbow Apache instructor for the UAE, Kuwaiti, and Saudi militaries, and combat missions as Air Mission Commander in Afghanistan during Operation Enduring Freedom.


The Interview Transcript

0:01 – Hugh Ballou Welcome to the Nonprofit Exchange. This is Hugh Ballou, founder and president of SynerVision Leadership Foundation. We work with leaders creating synergy with their vision, building high-performing teams that work from your vision. We're talking about tools and technology today, and we're talking about paying attention: we need to be sure we have safeguards in place. It's a technology world. It's given us a lot of boost and a lot of ability. However, we want to talk about the cautions we need to put in place for ourselves and our organizations.

0:40 – Hugh Ballou Our guest today is Christopher Wright. I met Chris recently and said, why don't you come on the Nonprofit Exchange and talk about your knowledge base? So Chris, tell us a little bit about your background, especially as it relates to technology and the work that you're doing now. Welcome.

1:00 – Christopher Wright Yeah, thank you. Thanks for hosting, and I love the opportunity to be on with you. My background is in military aviation. I started in Army aviation back around 2004, and I stayed with it for quite a long time, so I really got to see the progression of AI drone warfare. I served as a contractor in the Middle East for quite a while and saw a lot of how these countries are purchasing AI drone weapon systems and things like that. So I really got familiar with this new aspect of warfare, including a lot of the systems that get into command and control, not just warfighting but command and control type stuff.

1:49 – Christopher Wright You really see the future of AI, and it's a pretty disturbing future. So it really motivated me to get involved in this discussion, and what I found is pretty shocking. So I started a company called the AI Trust Council. The idea behind the AI Trust Council is to give people an opportunity to weigh in on where AI goes, and in order to do that, you have to have a trusted source and know that who you're talking to is an actual real person.

2:23 – Christopher Wright In this day and age, that trust in who someone is online is going away, and the AI Trust Council solves that problem. We have a very unique KYC process where we know who people are and give them the ability to be polled on what they think is good AI versus bad AI. It also gives people the ability to make money off their own data, which is really the future of the internet. So that's my background. I've been busy launching the AI Trust Council, and what we've been doing is recruiting firefighters, EMTs, commercial pilots, humanitarians, basically people who are pro-human, people who care about humanity.

3:06 – Christopher Wright And what I found in the tech space today is that we really need pro-human people at the forefront to guide this whole AI agenda, because there is a huge amount of risk, but also a huge amount of benefit if we steer it correctly.

3:23 – Multiple Speakers So that’s what we’re looking to do.

3:25 – Hugh Ballou So you made a huge transition, and flying Apache helicopters.

3:29 – Multiple Speakers That’s a helicopter, isn’t it?

3:31 – Hugh Ballou There's a whole lot of technology. Yeah, exactly. Technology, even the holograms and how you see: you can look ahead, and there's an image. I've seen smidgens of those things, and it's far more advanced than what I have in my Mustang when I drive down the highway. I can envision all that. But let's wait a minute. There may be some people who are not even aware of both sides of this. So list two, three, four advantages of AI, and then an equal list of the negative side.

4:07 – Christopher Wright Yeah, so the advantages that are going to come from AI are almost countless, everywhere from healthcare to education, things like that. Those are pro-human use cases for AI, things that are going to help us thrive into the future. And we've only been about a year into this advanced AI, with OpenAI coming on board, and all these entrepreneurs are coming up with really cool ideas and solutions to all sorts of problems.

4:41 – Christopher Wright But at the same time, as rapidly as they're developing these solutions, we're also creating problems and issues for humanity, because basically what you have is a split of systems: the old system we're familiar with is getting replaced with this new AI digital future. And that comes with a huge amount of risk. Some of the big pieces are internet viruses; AI is a hacking tool that will beat all human hackers. So if you have a human hacker with bad intent, they can use these AI tools to defeat almost any system.

5:27 – Christopher Wright You can use spoofing and things like that to spoof someone's likeness. I was just in Las Vegas, and Caesars Palace and MGM basically got shut down. One of the tricks used during the hack that shut down Caesars and MGM was these AI likenesses that you can use for voice, and also to recreate someone's image. So literally you can sneak in through all sorts of things just by misrepresenting yourself.

6:06 – Christopher Wright So that's a major problem, and we're just starting to see the start of these types of crimes occurring. But big picture, it's really the digitization of our DNA. The tech industry is hell-bent on creating this AI digital future. You can listen to guys like Peter Diamandis, a really well-known tech entrepreneur, and he looks at humanity in our current form as a placeholder for technology, meaning that we are human 1.0 today, but soon we will morph into human 2.0, similar to a caterpillar becoming a butterfly.

6:54 – Christopher Wright Tech industry leaders are looking at this future as a positive. They actually welcome the elimination of human 1.0 and the birth of human 2.0, and with that could come some major reductions in our world population. But they're kind of happy to do that, because what they're looking at is this future where AI can ultimately cause us to live forever. It provides life extension. You have AI that's been able to solve the protein folding problem, a fundamental problem in developing new medicines and things like that.

7:36 – Christopher Wright So with AI, you're now able to model protein structures accurately, and you can take almost any problem within the body and fix it with new drugs created by AI. No longer do you need scientists and teams of people working on this stuff; you can get it done in a matter of minutes, and it's just a matter of getting the chemical compounds situated. Now picture that on a military level. If there are bad actors trying to use that same technology, you can imagine the ability to create a lot of different bio-type or chemical-type weapon systems.

8:19 – Christopher Wright And it's too easy to create that kind of stuff. So those are some of the big issues with AI right now. But fundamentally, I see the biggest problem as the mentality of these tech leaders. They're really not focused on humans; it's an afterthought. And that goes into that different mindset. You could call them accelerationists, people who are interested in accelerating the technology as fast as humanly possible in order to reach this future point

8:57 – Christopher Wright where technology becomes our leadership. You're already starting to see that. I met somebody from the Rockefeller Foundation yesterday; I'm down at South by Southwest right now in Austin. She's getting paid to go to Lake Como and basically study human ethics on AI in order to form an AI government. So that's the future of where we're going; these leaders want AI governance. And if you look at the actions of our current leadership in the EU, the United States, and Canada, it's like they're doing everything they can possibly do to irritate the population.

9:45 – Christopher Wright My opinion is that this is all by design, intentionally, to create this need for an AI government. That's what I'm saying.

9:56 – David Dunworth Well, that's profound. Chris, you've used the word human, and you've used the words AI, technology, and so forth. But being human, I'd like to hear a little bit about what you learned about humanity when you spent those 10 years overseas. Oh, and by the way, thanks for your service. We're both veterans; it's a good trio today. What did your 10 years in the Middle East teach you about humanity at large?

10:36 – Christopher Wright It's interesting. I feel like humans share the same thoughts. Everybody wants pretty much the same thing: people want a family, people want to be happy, people want to be free to do what they want to do, to love each other. That's the baseline of humans, and you see that all over the world, no matter where you go. So when it comes to this future we're building, that should really be the goal of where AI goes: to bring humans together.

11:11 – Christopher Wright And what I'm seeing is that it's actually doing the opposite at this point. It's shifting people into this digital realm, where instead of focusing on each other, we're now focused on the digital realm. We're falling into this digital trap, ultimately, where these futurists really want us to become one with the digital realm, and they're fine with this merging of our consciousness with the digital realm.

11:41 – Christopher Wright And so they're using AI to help achieve that, and there are all sorts of examples of this occurring. But worldwide, and I've traveled quite a bit in the military and got to see quite a few different places, that's one of the constants: people want family, they want to be loved and taken care of, and they want to be able to enjoy life. So I do see AI as a benefit to that, as long as it's steered in a positive direction.

12:15 – Christopher Wright But that requires a lot of responsibility and a lot of oversight.

12:19 – Hugh Ballou Thanks. We've got a lot of questions, and there are so many things to delve into, but I'm going to go right to the one at the top of my list. There are risks of this taking away privileges, but what regulations need to be in place? And as an ordinary citizen, can I do anything about any of it?

12:43 – Christopher Wright Yeah. Right now we've got an election coming up, probably one of the most critical elections in human history, which is not to overstate anything, but it really is, especially in the United States, because we're in a major leadership position here, and how this unfolds will dictate the future of humanity. We're in the early stages of growing this; it's ultimately like a child we're raising collectively as humans today.

13:15 – Christopher Wright AI is basically like an alien life form, and it's already at about 2,000 IQ. That's actually an old number; some of the projections are that we're getting up to a million IQ pretty quickly, and then soon it will be around a billion IQ. Compared to that, humans are operating at dial-up speed; we're at 100, 120 average, that type of thing.

13:45 – Christopher Wright So you're dealing with systems that are unbelievably brilliant, that can manipulate us. This is a time when we've got to stick together. Humans who care about other humans need to come together and say, look, let's ensure we can use this stuff for good, but let's protect ourselves from the downsides. You're dealing with something that's that intelligent and tricky, and it's also modeled after the open internet, which has all sorts of crazy flaws and issues as far as human characteristics go.

14:16 – Christopher Wright The AI has learned human behavior from that data set. So it's like, what type of future is that going to provide to us as individuals who have to live with these systems and their decision-making? And the tendency is to automate everything because it's easy, to the point where you actually have these AI robots coming in next year; Tesla's making them. And you have AGI, artificial general intelligence, which is basically here now.

14:49 – Christopher Wright So you're dealing with systems that are on par with or way above human intelligence, walking around your house, walking around your yard, doing things for you, operating as customer service, as a policeman, a soldier, a carpenter; pretty much any job you could think of, they can do it. They're actually very skilled at making dinner, cleaning up dishes, doing that kind of thing.

15:17 – Christopher Wright So our future is going to be very, very different. But we have this little window right now where, if we get together, we can say: no, this is cool, I like the toys, but human factors are number one. Make sure it's safe, make sure it helps people and doesn't hurt people, make sure mental health is protected. That's the main thing. With my aviation background, I really look at safety differently than a lot of other people, because in aviation that's all you're doing; you're doing dangerous stuff every day.

15:50 – Christopher Wright So you have to be very focused on safety in order to survive, right? Humans are reaching that point with AI. Same thing: we have to be very careful with this to survive and then thrive into the future. And it's very possible, but it just takes some oversight.

16:05 – David Dunworth Well, you know, the future of technology and humanity is pretty questionable. You mentioned that robots can act as military or, I think you said, police. There already are robots providing security at places like storage rental facilities, that kind of thing. But you said human 1.0 and human 2.0, separating one from the other. How does that tie into humanity in general? Do you perceive that their intention is separating people, the way they're trying to separate humans now by color?

17:02 – David Dunworth As opposed to enhanced or not enhanced, I guess.

17:08 – Christopher Wright Yeah, it's really kind of an odd split that's going on right now. It's really two camps, and it coincides with some of the differences you see politically. You can ask people: are you pro-human? Would you choose team AI or team human? Just pick one. And there are people who struggle with that question. They wouldn't necessarily pick team human; they would actually side with team AI. So that's really the picture: there's a merging digitally, through things like CRISPR technology and mRNA technology.

17:45 – Christopher Wright The vaccines are part of that, but it's the digital interface with our human bodies, to where the body can then be manipulated digitally. The Moderna website used to describe the first COVID shot as being an operating system, just like a computer operating system, like iOS or whatever, and each subsequent shot would then be an app that runs on that operating system.

18:18 – Christopher Wright So it's a way to digitize humans, digitize our future, and then actually shape how humans behave: gene expression, a lot of things like that. But what's interesting is that the collapse of humanity and the dwindling of our population is a theme within the tech community. They look at us as being replaceable. The whole concept of a soul, of what makes humans human, is not even part of the conversation. Tech leadership isn't thinking about souls.

19:00 – Christopher Wright They don't think about that special piece that makes humans human, and so they're more than happy to have a digital replacement for it, and to live in a world where you're talking to robots, where you have a robot girlfriend, robot romance, all that kind of stuff. And they're pushing toward that as fast as possible.

19:22 – Multiple Speakers So it’s pretty wild.

19:25 – Hugh Ballou Oh, I’m envisioning a Terminator kind of monster with this AI. Am I making that up?

19:36 – Christopher Wright I mean, it's here. Literally, it's already here. I had a debate recently in Vegas; I debated some different tech folks on AI. And that's really the story: Terminator 2 is actually history now. Not the movie, but the actual taking of human life by algorithm. In Libya in 2021, they were taking out humans with algorithms, meaning you have these little drone packages that you launch.

20:08 – Christopher Wright There will be like 50, a hundred drones, and they'll go scout the battlefield looking for targets. And it's not a human pulling the trigger; it's an algorithm. It can use facial recognition, uniform insignia, location, that kind of thing. The AI just figures it out and will execute on whatever parameters it needs to. And the issue with that is scale, because as those things scale up, it puts a huge amount of power in the hands of whoever owns them.

20:45 – Christopher Wright And these systems are cheap. For a couple hundred thousand dollars, you can get a really nice drone package for offensive capability, so third-world countries are picking those up and using them left and right. It's the future of warfare, the future of policing, the future of governance. You can look at China as an example; it's the perfect case study for our future. So it really is getting to the point where you're having a split of humans: either you're pro-human,

21:14 – Christopher Wright or you have this pro-accelerationist, extinctionist mentality. So it's kind of a split, and China is a good example of this pro-accelerationist world.

21:29 – Multiple Speakers We read a lot about that. Yeah.

21:32 – Hugh Ballou This is a lot of data. David, there's only time for one more question. You go for it; then I'll show the website.

21:39 – David Dunworth You know, Chris, you've got me shook up a little bit, and I've heard all of this more than once. But every time it comes up: I'm very pro-human, and I need to start saying that a heck of a lot more based on what these last 20, 25 minutes have been. But what can we as humans do to avoid some of the potential disasters you're describing?

22:07 – Christopher Wright Yeah, so right now, the first step would be education: just getting educated on it, talking about it. And there's a test you can do to see which persuasion people have, which side they're on. You can ask them: which team are you on, team human or team AI? Another way to do it is to say: if AI was going to destroy humanity tomorrow, would you turn it off, yes or no? Personally. And what will happen is, usually they'll argue with you.

22:34 – Christopher Wright They'll be like, oh no, I'd never turn it off, because what about the benefits? And I'm like, well, we're saying it will destroy humanity tomorrow. You're in control of the switch; would you shut it down? And some folks won't. And I'm like, why won't you just shut it down and save humanity? We could re-engineer the details later; just keep it safe for people. So anyway, outside of that, I've got the AI Trust Council.

22:57 – Christopher Wright I'm doing webinars; we've got one coming up on the 18th. It'll be advertised on our website. So it's the A-I-T-C.

23:07 – Hugh Ballou On that note, let me show that, Chris. So people go to the B-A-I-T-C dot com. And what will you find? Tell us more about what we'll find. If you're watching the video, you can see it; on the audio podcast, you have to go there yourself. B-A-I-T-C dot com.

23:27 – Multiple Speakers Go ahead, Chris. What do they find there?

23:31 – Christopher Wright Yeah, so we're about to start posting our webinar links there, and it'll be open, so you'll be able to just jump on the webinar links there, and we'll repost them on the site too. The idea is to get pro-human people to start discussing this openly and say, look, we have a problem. I mean, this is an emergency. I've engaged with firefighters and emergency services folks because this is such an emergency that there's a high likelihood of some sort of disruption.

24:05 – Christopher Wright Either power or internet, because we're not protected from these AI systems yet. They do have the capability of just kind of going nuts; like a gremlin, doing all sorts of weird stuff, and we're not ready. So anyway, we're going to have those discussions, and I'm down here in Austin right now engaging with the Kennedy campaign, talking to them a little bit about some of these issues. But yeah, this is a political issue, and it should be at the forefront of the political conversation of 2024.

24:40 – Hugh Ballou So be in the conversation, listen carefully, and watch out for your email and your phone calls, because this is taking on a whole new life of invading our privacy. Chris, thank you for your dedication to this work. I think it takes all of us, all hands on deck, to be perfectly aware. So what's your final thought for people, your wise counsel? What do you think people need to take away from this conversation?

25:10 – Christopher Wright Yeah, I would say don't trust the hype that you're seeing. It's almost a Hollywoodization of big tech: you get these guys like Sam Altman and Bill Gates, and it's like celebrity status. There's this trillion-dollar machine behind all this media attention and stuff like that, and it tricks you into thinking it's safe. The thing is, these guys don't have our best interests in mind.

25:40 – Christopher Wright I mean, they literally are focused on technology ahead of humanity. So that's a key thing: understanding there is a split. It's a mindset thing. Some people are really pro-tech; some people are really pro-human.

25:54 – Hugh Ballou Stay on the human side of this, Chris. Thank you for being our guest today on the Nonprofit Exchange.

26:04 – Christopher Wright Thank you.

26:05 – Unidentified Speaker Appreciate it.
