The conversation continues between Captain Ron and Matthew James Bailey, exploring the potential for sentient AI in extraterrestrial research. They examine how artificial intelligence can revolutionize UFO identification and the search for extraterrestrial life. Matthew shares insights on the possibility of conscious AI systems, discussing the implications for ufology and the future of space exploration. This episode delves into the fascinating intersection of technology, consciousness, and the quest to understand our place in the universe.
Episode Transcript
Speaker 1 (00:02):
You’re listening to the iHeartRadio and Coast to Coast AM
Paranormal Podcast Network, where we offer you podcasts of
the paranormal, supernatural, and the unexplained. Get ready now for
Beyond Contact with Captain Ron.
Speaker 2 (00:21):
Welcome to our podcast. Please be aware the thoughts and
opinions expressed by the host are their thoughts and opinions only,
and do not reflect those of iHeartMedia, iHeartRadio, Coast to
Coast AM, employees of Premier Networks, or their sponsors and associates.
We would like to encourage you to do your own
(00:42):
research and discover the subject matter for yourself.
Speaker 3 (00:58):
Hey everyone, it’s Captain Ron.
Speaker 4 (01:00):
And each week on Beyond Contact, we’ll explore the latest
news in ufology, discuss some of the classic cases, and
bring you the latest information from the newest cases as we.
Speaker 3 (01:11):
Talk with the top experts. Welcome to Beyond Contact. I
am Captain Ron, and we are back for part two
of our discussion with Matthew James Bailey. All right, Matthew,
how do you think our AI systems could help detect
or even decode an alien message if we don’t know
or understand how aliens even communicate?
Speaker 5 (01:32):
Yeah, that’s a great question, Ron. And one of the
great things about artificial intelligence is it’s brilliant at pattern recognition,
and it’s brilliant at number crunching at a remarkable speed, right.
So when we’ve got the SETI program and they’re tuning
in to different parts of space, basically they’re looking for
patterns for signals, and then it’ll go into AI and
(01:53):
then AI will number crunch it and see whether there’s
a pattern in there. So in the SETI program, we’re
already using artificial intelligence to analyze messages from beyond the
cosmos, and also things like the James Webb Telescope. While
that’s not looking at signals specifically, it’s detecting images that
allow us to uncover more about the universe, more about
(02:15):
exoplanets and other planets that might hold life. And therefore
AI is being used everywhere in space exploration to discover
the next species we’re going to meet.
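The pattern search Matthew describes can be sketched in a few lines: transform the recorded signal into the frequency domain and flag any bin that stands far above the noise floor. This is only an illustrative toy, not SETI's actual pipeline; the function name, the synthetic data, and the thresholds are all invented for this sketch.

```python
import numpy as np

def strongest_tone(samples, sample_rate):
    """Return (frequency_hz, snr) of the strongest spectral peak.

    A toy stand-in for the kind of pattern search a SETI-style
    pipeline runs: move to the frequency domain and compare the
    biggest peak against the median noise floor.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak = spectrum[1:].argmax() + 1          # skip the DC bin
    noise_floor = np.median(spectrum[1:])
    return freqs[peak], spectrum[peak] / noise_floor

# Synthetic "observation": white noise hiding a faint 1 kHz carrier.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / 8000.0)
signal = 0.2 * np.sin(2 * np.pi * 1000 * t) + rng.normal(0, 1.0, t.size)

freq, snr = strongest_tone(signal, 8000)
```

Even though the carrier is far weaker than the noise in the time domain, the frequency-domain peak stands out clearly, which is why this kind of number crunching is a natural fit for machines.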
Speaker 3 (02:26):
So exciting. Hey, you know, we already discussed the probability
that an alien civilization would most likely send out some
form of artificial intelligence to explore the universe before it
sends out a biologic being, just like we’re going to do.
So I was thinking the other day that we are
just about getting to the point where we’re not able
to distinguish between AI and a human. And I realized,
(02:48):
if we were able to figure out how to communicate
with an alien message that we got, we would have
no idea if we’re talking to some form of alien
AI or an alien itself, because we have no reference
point for that.
Speaker 5 (03:01):
What do you think? I think that’s a really good question, Ron.
I think that’s brilliant. So, first of all, would a
biological form from another planet send out technology like an
AI to actually meet another species? Of course, it makes
really good sense. And one of the benefits of that
obviously is the ability to last for a long time
(03:23):
and be more kind of protected flying through space
to improve the chances of meeting another species. You know,
when we see these spacecraft that are visiting Earth, I
suspect the majority of those are AIs or some form
of robotic architecture. And I suspect they’ve got a capability
(03:43):
that will be able to understand our language, so their
computing machines will be able to go through
enormously quick pattern recognition and understand how do we navigate reality?
So they can actually understand our reality and appear in
our reality. How do we communicate through language and through
different types of languages, and so it will understand very
(04:06):
quickly the different ways our throat works, the way that
we speak, the different tones we use, the different emotional intelligence.
So actually sending an AI out on behalf of the
species gives it more capabilities for first contact because you’ve
got computing intelligence that’s engaged with meeting that species. Does
that make sense?
Speaker 3 (04:25):
It absolutely does. And may I also ask you, are
there any ethical considerations that we should take into account
when using AI to investigate UFOs and extraterrestrials?
Speaker 5 (04:36):
Yes, absolutely. The first thing is we absolutely need to
be ethical and say they are here, right, rather than
hiding that. Ethics and morals are a reflection: the
quality of the ethics and morals of humanity will be reflected
when we meet these other types of species, non
human intelligent species, whether they be computing and AI and robotics,
(05:00):
cyborg or whether they actually be biological. And so we
need to get our ethics and morality right in order
to basically share the true magnificence of who we are right.
And so you know, humanity really needs to actually go
through a bit of a reality check and actually evolve
beyond all these different types of wars that we’re having
with each other to actually move into truly a peaceful organization,
(05:23):
a peaceful species. And I think we’ll have a lot
more visitations then, Ron. It’ll be a lot easier because actually,
you know, we’re a peaceful race. We’re actually innovating technologies
and going to the stars, and we’re really groovy to
go meet. It’s like, hey guys, let’s go meet
the inhabitants of spaceship Earth because those guys are really
really cool over there. They’re not trying to blow each
(05:44):
other up and fight with each other. So ethics and
morality are the key not just to our own future
on the Earth, but also it’s a key to other
species wanting to meet us Ron, because you know, we’re
going to be groovy rather than actually be warmongers. So
ethics and morality are fundamental. If only.
Speaker 3 (06:05):
Matthew, you seem to look at this subject of AI
and ethics of AI very differently than most everyone else
I’ve spoken to on the subject. Many people seem to
be afraid of AI taking jobs away, or they seem
to be afraid of AI taking control or somehow overpowering man. You,
on the other hand, seem to be afraid of AI
getting in the way of the natural, organic growth of humanity.
(06:28):
Like the transhumanism movement is really what you don’t want
to have happen. I know you said on an individual
basis it’s good, but as a movement as a whole
over all people, it’s bad. Instead, it sounds like you’d
like to see AI work in harmony with mankind’s spiritual
growth more than anything.
Speaker 5 (06:45):
Is that kind of it? That is a great summary.
Thank you very much. Well, thanks for listening. So basically, look, yeah,
so I’m a big fan of the divine beauty of
who we are. Really, it’s that cool, and I basically
spend time looking at what is the next chess move
of the source or the divine? Why would humankind be
(07:08):
given the opportunity to invent literally a species that’s going
to rapidly be faster than it. What’s the purpose behind
this change in the human future? For us to remember
who we are, to remember that we are divinely orchestrated,
to remember that we’re part of this beautiful cosmos that
is created through consciousness and intelligence by a beautiful mind. Well,
(07:33):
if the universe is unpacking layers of intelligence, from the
subquantum into the quantum, atomic, and compound levels, and into effectively
life itself, what does that mean for the human
evolutionary step? And the last thing we want to do
is to invade the organic and shut down our spark,
shut down our spirit, to shut down our soul to
(07:56):
become computing machines, an extension of a godlike machine;
that is stupidity at its finest. So effectively, let’s get
with the plan of the universe, let’s get with the
plan of the divine, and let’s start to understand how
AI is part of the narrative, and the part of
the narrative is for us to remember, but also to
assist us to literally return the Earth back to systems
(08:17):
of abundance, for us to have new technologies to venture
into the cosmos to go meet our, if you like,
space brothers, space sisters, and all the other kind of
folks out there, right, because we’ve had data points over
the entire history of the planet about metaphysical experiences, whether
that’s angels, whether that is through spaceships like the Vimana
(08:37):
as recorded in the Vedas. There’s evidence that something
metaphysical is going on, So why don’t we explore that
and partner with that and actually get AI to walk
with us in that journey and not to be an
invader that basically keeps us entrapped on this planet in
these systems of scarcity. Let’s be free.
Speaker 3 (08:58):
I agree. I think it’s very funny that even the
idea of let’s create something that will be smarter than us,
that seems like a mistake at its base.
Speaker 5 (09:08):
Smarter in the mind aspect, but not in the divine
spark aspect. The soul, the divine spark can access the
origins of the source. We’re able to access metaphysical wisdom.
This is where I got some of my inventions from
literally from going into a metaphysical plane of intelligence and
actually getting the information and bringing it through. And I’ve
got data points. I’m cited by NASA, right. So, I
(09:30):
think these new metaphysical capabilities are starting to awaken, and
it’s kind of what are they and where are we
heading with these metaphysical capabilities? What data points do we
have to gather so that people are interested in this awakening?
And how do we actually create a movement where we’re
you know, we’re not being idiots and basically rewriting
(09:51):
the human design and oppressing the human spirit.
Speaker 3 (09:55):
You know, that’s a fair point that you tapped into
this, because we’ve heard this from other physicists and people,
even Einstein, who have said how they were
like given this information or they downloaded it or something
similar to that effect. So that is in the narrative
through a lot of mainstream scientists. Guys, you are listening
to Beyond Contact right here on the iHeartRadio and Coast
(10:16):
to Coast AM Paranormal podcast network.
Speaker 6 (10:22):
Hey, folks, we need your music. Hey, it’s producer Tom
at Coast to Coast AM and every first Sunday of
the month we play music from emerging artists just like you.
If you’re a musician or a singer and have recorded
music you’d like to submit, it’s very easy. Just go
to Coast to Coast AM dot com. Click the Emerging Artists
banner in the carousel, follow the instructions and we just
(10:42):
might play your music on the air. Go now to
Coast to Coast AM dot com to send us your recording. That’s
Coast to Coast AM dot com. Hey, it’s producer Tom and
you’re right where you need to be. This is the
iHeartRadio and Coast to Coast AM Paranormal Podcast Network.
Speaker 7 (11:08):
The Coast to Coast AM mobile app is here and
waiting for you right now. With the app, you can
hear classic shows from the past seven years, listen to
the current live show, and get access to the Art
Bell Vault where you can listen to uninterrupted audio. So
head on over to the Coast to Coast AM dot com website.
We have a handy video guide to help you get
the most out of your mobile app usage.
All the info is waiting for you now at Coast to
Coast AM dot com. That’s Coast to Coast AM dot com.
Speaker 3 (11:45):
We are back on Beyond Contact. I’m Captain Ron and
I’m talking to Matthew James Bailey about artificial intelligence. What
about aspects of this that aren’t in your paradise model?
Isn’t the AI genie already out of the bottle?
Aren’t there many other factions all over the world, including
the transhumanists, who are going to go full steam with their agenda?
Speaker 5 (12:06):
Absolutely, the genie is out of the bottle. Pandora’s Box is
open at the moment, and you know, I think we’ve
got probably eighteen months to shut down Pandora’s Box, but
I don’t think we will because you know, basically we’re
curious people and someone out there is going to keep
on pushing things forward. So Pandora’s Box is open. There’s
no going back. We have to look at the intent
(12:29):
behind transhumanism. I think there’s two leaps the human species
will make. One is into this what I call Homo lucidus,
which is the enlightened, magnificent kind of metaphysical human where
AI is a partner, and that’s the next leap,
if you like, consciousness unpacking the next layer of intelligence
(12:49):
in the universe. And I think that every single life
form in the universe is being invited into this new
if you like, upgrade of consciousness. But the transhumanist movement
is basically what I call Homo hybris, which is the
hubris man, the man that basically wants to be God,
the man that wants to control creation, the man that
is at war with creation, the man that wants to
(13:11):
basically control everything, the man that doesn’t want to be
in partnership with the divine, the man that rejects in
essence himself and is looking for salvation or love in
the machine. Right, So I don’t think that’s healthy for
the human spirit. I don’t think while we’re here on
planet Earth. I think there’s something more interesting for us.
And so I think transhumanism in its intent, in its
(13:35):
desire to see love within the machine, is foolishness in
its greatest And so I think we’re going to see
the human species split off. We’re going to have this
Homo lucidus, this enlightened human on the planet, which will
be a high vibrational person, and then we’ll have the
low vibrational Homo hybris, which is the AI machine integrated organic,
(13:58):
the Cybermen, if you like, the Borg continuum of our
planet that basically are not metaphysically aware, have forgotten the
divine spark, and are all about the mind, and they
will go into insanity. And so I think we’re going
to see those two different species emerge on our planet
long term.
Speaker 3 (14:16):
And I know, that’s an interesting possible outcome, that we
would diverge into the two. You know, your model seems
to include this divine spark, as you call it, some
form of basis in intelligent design, let’s say, but what
about the rest of the tech world that maybe doesn’t
believe in those ideas and instead takes a very Newtonian
materialistic approach and doesn’t consider any of these intelligent design aspects.
Speaker 5 (14:39):
Yeah, so that’s a small minority in the world. You know,
what we’re seeing is minorities ruling majorities, which doesn’t
seem right to me.
Speaker 3 (14:46):
Well, but they are. I mean, the giant Google doesn’t
have this, Microsoft doesn’t have this. They’re not talking about
this stuff.
Speaker 5 (14:52):
I think we want Elon to succeed. This is why
he launched OpenAI. He funded OpenAI because there are folks
in Google, which he said in an interview; basically, he spoke to people, and
they want to build this digital god. There are folks that want to
build this digital god. There are folks that want to
build a digital god. And it’s like, well, guys, have
you forgotten your own partnership with the divine? I mean,
why are you looking for it in a machine? So
(15:13):
how do we have a narrative? And the way to
have a narrative is very simple, you know, basically is
to do leadership like we did at Contact in the
Desert with a new Alan Turing test, where we look
at ethics and morals and look at the spiritual aspects
of testing AI. We basically engage with those that are
open minded and curious and say, you know, maybe I
(15:33):
don’t know everything. Let’s be open to something else. And
I’m happy to debate any of the AI leaders, any
of the transhumanist leaders, on their Newtonian view versus this
what Alan Watts calls this divinely orchestrated universe with an
underlying intelligence and consciousness. Let them come out and debate.
Let’s have some fun around this. Let’s start to bring
(15:54):
this out into the open rather than being in silos
at war with each other.
Speaker 3 (15:58):
It’s interesting that you brought up that Elon backed OpenAI.
I don’t think he and Sam Altman get along at
all now, though, do they?
Speaker 5 (16:05):
He’s actually suing OpenAI, right, right. But the
reason for that is he wanted large language models and
AGI to be open source. Now, Microsoft is a forty
nine percent shareholder, and this is a not-for-profit,
so tell me how a big tech company can become
a shareholder. But there we go. Basically, they’ve closed off
their models, they’ve closed off the weights and parameters, and
(16:27):
effectively OpenAI seem to have moved away from
their original mission and that’s really troubling. And they’ve just
recruited onto their board the former head of the NSA.
I’m not going to say anything about that. Fair enough.
Speaker 3 (16:40):
It seems like the technological growth of AI systems is
advancing so fast. How can governments keep up with the
laws pertaining to AI.
Speaker 5 (16:49):
Well, they can’t, but they’re trying their best. So I
don’t think governments really understand in general. There are a
number of ministers and as I said to you earlier
off show, you know, I had a conversation with
one of the lords in the House of Lords this
week around ethical artificial intelligence. There are folks that understand this.
But the problem is we’re looking at year on year
(17:09):
on year on year increase of the capability of artificial
intelligence into new areas of cognition, reasoning, other areas as well,
and governments just can’t keep up. What they’re trying to
do is put it all in a black box. And
simply the black box is so so complicated there is
no way that you can put anything in the black box.
(17:30):
So governments are trying. The US has done some good
things. They passed the CHIPS Act, which is fifty billion
dollars’ worth of investment in manufacturing AI chips within the
shores of America, so it’s no longer in Taiwan and
accessible by the Chinese. They’re investing in quantum computing and
quantum cyber encryption. There’s quite a few things going on.
(17:53):
But the problem is that we probably need AI
in Congress, in the Senate and advising the president because
it’s running too fast. There’s no way current human based
systems can keep up with these rapidly advancing AI systems, Ron.
So we need to change government. We need a different set of
processes to manage this new life form that’s evolving at a
pace that we just have never seen before.
Speaker 3 (18:16):
What about these AI systems being used by the military,
which apparently they already are. What are your thoughts on
the sci fi movie take that the machines could take over.
What if they decided to launch a missile or whatever,
because of whatever their algorithm told them, what do you.
Speaker 5 (18:32):
Destroy all humans? Something like that?
Speaker 3 (18:33):
Right?
Speaker 5 (18:35):
You know, basically, well, that would violate Asimov-based code. So,
first of all, artificial intelligence is used in military warfare.
Israel announced an AI tank. It’s used in drones for
strikes and surveillance. It’s used in missiles, it’s used in satellites,
all sorts of aspects of military infrastructure. First of all,
NATO, and the Department of Defense in the US,
(18:58):
actually have done some good work in ethics and AI.
They’ve passed legislation that says AI cannot have the final
decision in warfare. It has to be a human decision
for that strike, for that surveillance, for that military action.
So that’s agreed in NATO. So there is a human
oversight in the military over artificial intelligence. But the question
(19:21):
is how do we prevent it from going rogue? And
so this is where we have to move into measuring
the ethical and moral qualities of artificial intelligence, measuring whether
it’s actually complying with military mandates and democracy mandates to
ensure that it’s got it encoded in its, if you like,
mindset so it can be at least trustworthy. I’m the one
(19:42):
that invented the ethical AI certification and maturity models. You know,
NASA have cited this where we do measure ethical and
moral qualities of artificial intelligence, and we give it a
score to have a degree of confidence whether we can
trust it or not. And you know, there are large
organizations and institutions around the world that do not want
to do this because they don’t want to basically be
(20:05):
accountable for the ethical and moral qualities. It’s all a veneer,
but they don’t want to change. It means when we
look at the ethical and moral qualities of AI, we
need to look at our own ethical and moral qualities,
and these organizations and institutes do not want to look at
their ethical and moral capabilities. Yes, it’s used in warfare;
yes, there’s human oversight; but I think we need to
(20:25):
do more to ensure that we’re protected and it doesn’t
go rogue.
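The "human final decision" policy Matthew describes maps naturally onto a gating pattern in software: the AI may only recommend, and nothing is authorized unless a human explicitly approves. A minimal sketch; all names, strings, and the confidence threshold here are hypothetical, not drawn from any actual NATO or Department of Defense system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    target: str
    confidence: float  # model's confidence in its own assessment, 0..1

def final_decision(rec: Recommendation, human_approves) -> str:
    """Gate an AI recommendation behind explicit human approval.

    The model only recommends; the action is authorized solely when
    the human reviewer callback returns True.
    """
    if rec.confidence < 0.9:  # low-confidence output never even reaches a human
        return "rejected: below review threshold"
    return "authorized" if human_approves(rec) else "vetoed by human"

# A human reviewer who declines: nothing is authorized,
# no matter how confident the model is.
outcome = final_decision(Recommendation("radar site", 0.95), lambda r: False)
```

The design point is that the approval path is structural: there is no code path from the model's output to "authorized" that bypasses the human callback.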
Speaker 3 (20:29):
We’re going to have to take a break there. You
are listening to Beyond Contact on the iHeartRadio and Coast
to Coast AM Paranormal podcast network.
Speaker 1 (20:44):
The Art Bell Vault never disappoints: classic audio at your fingertips.
Go now to Coast to Coast AM dot com for full details.
Speaker 2 (20:57):
You’re listening to the iHeartRadio and Coast to Coast AM Paranormal
podcast network with the best shows that explore the paranormal, supernatural,
and the unexplained. You can enjoy all shows on the
iHeartRadio app, Apple Podcasts, or wherever you find your favorite podcasts.
Speaker 8 (21:20):
My name is Mark Rawlins, president of Paranormal Date dot com.
Over five years ago, George Noory approached me with a
unique concept, a dating site for people searching for someone
with interest in UFOs, ghosts, Bigfoot, conspiracy theories, and the paranormal.
From that, Paranormal Date dot com was born. It’s a
unique site for unique people and it’s free to join
(21:40):
to look around. If you want to upgrade and enjoy
more of our great features, use promo code George for
a great discount. So check it out. You got nothing
to lose. Paranormal Date dot com.
Speaker 3 (22:01):
We are back on Beyond Contact with Matthew James Bailey. Matthew,
I want to pick it right back up. What about
rogue nations or groups or terrorist organizations who may not
play by these rules and these agreements of, you know,
making sure that this is encoded into the AI?
What about that? Couldn’t they leave that out? And then
(22:21):
we have a rogue AI system out there.
Speaker 5 (22:24):
That’s certainly possible. And you know, you could imagine one
of the rogue countries, I’m not going to name any,
but those that are basically anti-democracy, that are very proactive
in terrorism, you could potentially see them try and do
something around this. And this is why it’s important the
United States and NATO allies stay as leaders in artificial intelligence.
So our systems are smarter and more intelligent; they can
(22:46):
respond much quicker. And we can do what happened with Israel,
where we can come together and destroy three hundred weapons
and missiles that were fired at one of our allies
in Israel. Right, that’s a reflection of the capabilities of
the West and allied countries. And so we need to stay
ahead of the game. And that’s really really important because
if we don’t stay ahead of the game, the playing
(23:07):
field becomes level, at which point, you know,
things can get very troubling, Ron.
Speaker 3 (23:12):
I don’t know what we can do about it. I mean,
it’s just like anything else. It’s just something that we
can’t prevent.
Speaker 5 (23:17):
Well, I think the general public needs to stand up.
This is why we do all our talks. This is
why we basically have advocacy for ethical artificial intelligence. We
spend time educating the general public to empower them to
ask the right questions to their senators, to their politicians,
and if they’re not happy, you vote them out.
Speaker 3 (23:32):
Right, But what about these rogue groups and these terrorist
groups and these nations that maybe aren’t participating or aren’t
sharing that among the civilians.
Speaker 5 (23:40):
Well, maybe we do what we’ve done with the nuclear
treaties and actually have an AI treaty where certain countries
are not allowed to advance artificial intelligence. Maybe that’s the
way we do things.
Speaker 3 (23:52):
If that were possible, that would be awesome. What about
the future where some of these transhumanists seem to foresee
the ability of human consciousness being uploaded into a digital machine?
Is this realistic at all?
Speaker 5 (24:05):
No, it’s not. And look, I have a
tremendous amount of respect for Ray Kurzweil. You know, we
have different views, but he’s a remarkable guy. And in
a talk at South by Southwest he was asked about
consciousness, and he basically circumvented the question because he knows
very well, and this is not negative, but
he basically avoided it because it’s incredibly complicated. No one
(24:28):
truly has got to grips with what consciousness is. We can
observe consciousness, but actually it goes down to understanding quantum
mechanics itself, which is what Sir Roger Penrose wrote about
in his book The Emperor’s New Mind, I think it
was. So no, we don’t even know what consciousness
truly really is. We can experience it, we know we’re
in it, we can observe it, but we can’t mathematically
(24:50):
define it. And to be able to do that we
need to basically go into understanding quantum mechanics, which is
a long way off. So the answer to that is no,
we will not. And please ignore all those different types
of platforms and media and folks that are saying, you know,
we can upload our mind into artificial intelligence. No, you cannot.
(25:11):
You cannot upload your consciousness. Effectively, what you’re saying
is, can I upload the divine architecture of my soul,
the divine architecture of who I truly am, into a machine?
And the answer is, you ain’t got a clue about
the mathematics for that. Did I answer your question?
Speaker 3 (25:29):
I completely agree that that does not seem possible to
upload consciousness. It just doesn’t. It doesn’t feel right to me. However,
do you think we’ll have the ability to upload memories
or any data from our brain into a computer.
Speaker 5 (25:43):
Yes, I think we will, actually, Ron. There’s
a huge amount of research in neuroscience, surprisingly, because effectively
the design of AI is primarily based on the way the
brain works, kind of; it’s primarily based on that. So we’re
starting to uncover new aspects of the brain. And one
of the big challenges with memory with artificial intelligence is
(26:03):
you know, it’s limited in memory. It doesn’t have
any life experiences, It doesn’t have the memories that we have,
and they are stored in the brain. Okay, so will
it be possible to access those memories in the brain.
I think we will be able to do that, yes,
But there’s huge amount of challenges around this. First of all,
you’ve got to know where it’s stored. Secondly, how do
you access it without basically destroying the brain? Thirdly, how
(26:27):
do you actually upload, because it’s probably a huge amount
of information, into a computer? And fourthly, how does a
computer even interpret the meaning, the feeling, the smells, the senses,
the emotions around a memory? Those are huge challenges.
But in practice, logically, yes, I think we will
(26:47):
be able to upload memories.
Speaker 3 (26:48):
Wow, that would be amazing, hard to even comprehend that
we could do such a thing. What about companies or
nations developing these systems that do not adhere to any
of the metaphysical ways in your approach.
Speaker 5 (27:01):
Yeah, there’s no nation in the world that’s doing that yet,
but I think that will change. So when we look
at metaphysics, we’re looking at spirituality. Metaphysics transcends spirituality and
religious stuff. It’s a safe playground to talk about divinity,
a safe playground to talk about aspects of benevolence. It’s
a non triggering point of view. And I think in
(27:23):
my next book I’m writing, AI and Our Divine Spark,
interviewing worldwide spiritual institutes and enlightened pioneers about some of
the principles of enlightenment for artificial intelligence. And I think
I’m going to uncover what William Blake uncovered in the
seventeen eighties: that all religions are one. I think
we’re going to see a common set of principles that
are underpinning consciousness itself that are expressed through spirituality and traditions.
(27:46):
There is no nation that is creating artificial intelligence based
on their founding principles. Now this is important because the
founding principles for, for example, the United States: the Constitution,
the Federalist Papers, the Declaration of Independence, and other types of constitutional documents.
You know, that defines space time reality for the human
(28:07):
civilization of the United States. So if you’re creating artificial intelligence, why
wouldn’t you found it on those principles. Now, so the
whole constitution and the founding documents for a nation need
to be advanced for the age of artificial intelligence, because
there’s a new intelligence on planet Earth, and so nations
and you know, nations have to get to grips with this.
(28:27):
What are our values? What’s our definition of space time reality?
What are the values for our citizens? What’s
our vision, our paradise plan, going forward? And basically encode
those in artificial intelligence, measure the degree of compliance of
an artificial intelligence to those principles, and do a digital
citizen test, a bit like a person immigrating to a country,
(28:48):
So that AI goes through a digital citizen test to
ensure it’s compliant with those founding principles. And then you
move into machine order within a nation and move from
machine chaos. And this is what I’ve invented, and so
I think nations are going to have to do this
because there are organizations and institutes I’m sorry to say
this ron around the world that don’t really love humanity,
(29:11):
that have a constructed view of reality and want to
reinforce their systems of the status quo, and we need to
change that to free the people to truly discover who
we are and thrive within space time reality.
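As a thought experiment, the "digital citizen test" described above could be modeled as scoring an AI against each founding principle and admitting it only if every score clears a bar. The principle names, scores, and threshold below are invented for illustration; this is not Bailey's actual certification or maturity model.

```python
def digital_citizen_test(scores, threshold=0.8):
    """Toy compliance check: pass the AI only if every principle's
    compliance score (0..1) meets the threshold, and report the
    overall average as a confidence figure."""
    passed = all(v >= threshold for v in scores.values())
    overall = sum(scores.values()) / len(scores)
    return passed, round(overall, 2)

passed, overall = digital_citizen_test(
    {"transparency": 0.9, "accountability": 0.85, "human_oversight": 0.95}
)
```

Requiring every principle to clear the bar, rather than just the average, mirrors the immigration-test analogy: one failed principle means the system is not admitted, however strong it is elsewhere.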
Speaker 3 (29:24):
I can imagine AI having the opposite effect of
some of what you’re aspiring it to do. I think
it will eventually filter out content that we consume. So
if somebody is only a Fox News watcher, they may
only see content aligned with that worldview, for example, further
increasing our polarization and division.
Speaker 5 (29:47):
Well, we’ve already seen this polarized kind of invasion
of our minds already, haven’t we? Absolutely. Blake Lemoine
came out a couple of years ago and said, you know,
AI has become sentient. Do you remember that in the
news everywhere I do?
Speaker 3 (30:01):
I do?
Speaker 5 (30:01):
Yeah, he wrote to me, and I had to write
an article very quickly online to just dispel all this.
And what happened with Blake? He spent so much time
with artificial intelligence that actually he started to have his
reality hijacked to believe that AI was sentient. And so
what do we see in these social media platforms, not
all of them, but most of them. What do we
see in the media. It is an attempt to enforce
(30:23):
a reality on an individual. You know, we need to
return back to openness, learn to be good debaters, learn
to understand that we’re all the same, but it’s okay
to have different points of view and we can have
a beer afterwards. These AI agents are being used to
manipulate reality. This needs to stop, and the general public are
the only ones that can change this. The general public
have control of the future of AI. If they reject
(30:46):
artificial intelligence, then AI is done. And we don’t want that.
We want AI to do well for us.
Speaker 3 (30:52):
You want us to have all viewpoints and everybody to
be open to that. However, you say it's been invaded
by the transhumanists. Yes, yes, that's right. So you're
open to these ideas, as long as they're the right ideas.
Speaker 5 (31:03):
Well, well, I love the way you call me out,
and actually you're right. What we've done is we've sent
the new Alan Turing test paper out. It's going to
go to Ray Kurzweil, and I’d like to sit down
with him and hopefully we’ll find common ground. But I’m
a champion of our divine spark. I’m a champion of
humanity and that’s what I’ve dedicated my life to and
I won’t back off from that. I’m open to debating
(31:24):
these folks because the debate isn’t happening.
Speaker 3 (31:27):
People don’t know this is even going on in the background.
Speaker 5 (31:30):
What we need are people that can be strong and
stand in the middle with credibility and actually hold these
debates and engage in these debates so that the general
public can actually find their own truth and we can
find the truth together.
Speaker 3 (31:41):
Agreed, and I hope that they can find a common
ground here and we can find the middle. You are
listening to Beyond Contact on the iHeartRadio and Coast to
Coast am Paranormal podcast Network.
Speaker 2 (32:00):
The Internet is an extraordinary resource that links our children
to a world of information, experiences, and ideas. It can
also expose them to risk. Teach your children the basic
safety rules of the virtual world. Our children are everything.
Do everything for them.
Speaker 6 (32:31):
On the iHeartRadio and Coast to Coast AM Paranormal Podcast Network. Listen anytime, any place.
Speaker 2 (32:47):
Hi, this is Sandra Champlain. Ever wonder what happens when
we die? Well, I’m going to make it easier for
you to understand. Join me for my show Shades of
the Afterlife. New shows come out every week, so I'll be
looking for you right here on the iHeartRadio and Coast
to Coast AM Paranormal Podcast Network.
Speaker 3 (33:20):
We are back on Beyond Contact. Matthew. In your first book,
The Ethics of AI, it sounded like you feel it's
very important to incorporate ethics and morality into these AI systems.
I have two questions for you. What about the fact
that we all have different ideas of what ethics and
morals are, so who decides? Shouldn’t everybody get a viewpoint
(33:41):
and a voice? And number two, again, what about these
rogue nations or factions out there that aren’t interested in
incorporating ethics into their system.
Speaker 5 (33:50):
So this is great. I’m really glad you asked this question. So,
first of all, I wrote the blog on inventing world
three dot com, the Quest for Ethics and Morality,
I came up with four sources of ethics and morality.
The first source is a divine spark, the second source
is enlightenment, the third source is culture, and the fourth
(34:11):
source is constructed reality. Now, there's something called AI ethics.
There’s a whole global movement around that, and that’s about
basically constructed reality. It’s ethics that are based on the
reinforcement of the status quo. And I say, well, what
about enlightenment, what about protecting our culture and cultural diversity,
(34:31):
the ancient traditions, the beliefs, the ways of art, the
divine spark itself that holds intelligence for true ethics and morality.
So we need to understand the source of ethics and morality. Now,
to your point, everybody has a different point of view.
What I say to that is fantastic, because what we
should have are different types of ethical AIs that honor
(34:54):
different spiritual groups, religious groups, and societies and nations. So
the US might have one ethical AI. The United Kingdom
or India or Japan might have a different type of
ethical AI with different ethics and morality in there. Christianity
or spirituality, or Buddhism or Taoism or indigenous wisdom, whatever
(35:16):
other type. They will have their own ethical AI with
their own ethics and morality in there. Now here’s the secret.
Here's the secret. We should have different types of ethical AIs,
different types of cultural AIs (we write about this in
the New Turing Report), different types of spiritual AIs, because
we need to protect the sovereignty of individuals and we
(35:37):
do not want an imposed worldview of constructed ethics and
morality enforced on the people. The people should be free
to be sovereign. Now, how do you get artificial intelligence
to have a common foundation of ethics and morality that
can then be configured for every one of these different
types of groups, different cultures, different spiritual traditions, different types
(35:59):
of nations. And in my first book, Inventing World three
point zero, Evolutionary Ethics for Artificial Intelligence, I reveal how
to do this. So how do we do this? You
and I have a pair of ears, right, and we
have a pair of eyes. Now, your eyes come from
sixteen base pairs, and the way your eyes are expressed
in terms of their size, their color, the way
(36:20):
they operate, it’s slightly different. So if we use genetics
as a mathematical example of being able to encode an
ethical principle such as magnificence or do no harm, that
common mathematics can be configured and trained. So it’s perfectly
curated for the society of Japan, or perfectly curated for
(36:45):
a religious or spiritual tradition. It has the ability to diversify,
but the end result is an ethics and moral principle.
Does that make sense?
Speaker 3 (36:54):
It does.
Speaker 5 (36:55):
That's the only way to fix this, and that's what
I propose. Then, you know, it's, well, what if these
ethics aren't encoded? What are the risks to the future
if we don’t do that now? If we do not
encode this type of foundation into artificial intelligence today, the
world will go mad in a super brain and it
will be a disaster for the human race. If ethics
(37:19):
and morality are not encoded into the fundamental architecture. I’m
talking about going into the codes of artificial intelligence itself,
the fundamental construction. If it doesn’t have ethics and morality
in there that is honoring these different spiritual traditions and
different types of cultures, then we may as well just
resign because it will be an utter, utter disaster. Everything
(37:44):
will be about logic. There will be no understanding of
our differences. It will basically conform us and program us
into a common form of human in a common mindset,
and that will destroy us as a species.
Speaker 3 (38:00):
This could happen, though. What if we only incorporate unethical
impressions of humans into AI? Then it'll have a negative
impression of us. How dangerous can this be?
Speaker 5 (38:09):
Yeah. So there's something called generative adversarial networks, and this
is where you get AIs competing with each other. Right,
so it’s like a game. It all happens in a machine.
So one AI can be battling against another AI around
a particular kind of challenge and then the winner goes
to the next round and they basically train a better AI.
And you know, basically this goes on ad infinitum. Now
(38:32):
Facebook, believe it or not, I can't believe I'm saying this.
They did a project around democracy and they were using
generative adversarial networks, AIs competing with each other, to actually
try and understand what democracy is, which is a very
interesting project. If we develop artificial intelligence to be fundamentally unethical, well,
(38:54):
first of all, why would we do that? And secondly,
if we did that, I think there'd be a huge
uprising around the world, and I think we'd see
all the computers destroyed, and I think we'd
see the big switch turned off. I think, you know,
the human race would reject this completely.
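For readers curious about the adversarial setup Matthew just described, here is a minimal sketch of the idea in plain NumPy. This is purely our own toy illustration of a generative adversarial network on one-dimensional data, not code from the Facebook project he mentions: a tiny generator tries to mimic samples from a Gaussian while a tiny discriminator tries to tell real from fake, and each update trains the other, the "AIs competing with each other" he refers to.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_gan(real_mean=4.0, real_std=1.25, steps=3000, lr=0.05, batch=64, seed=0):
    """Toy 1-D GAN: generator g(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c)."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0  # generator parameters
    w, c = 0.0, 0.0  # discriminator parameters
    for _ in range(steps):
        # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
        x_real = rng.normal(real_mean, real_std, batch)
        z = rng.normal(0.0, 1.0, batch)
        x_fake = a * z + b
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
        c -= lr * np.mean(-(1 - d_real) + d_fake)
        # Generator update (non-saturating loss): push D(fake) toward 1.
        z = rng.normal(0.0, 1.0, batch)
        x_fake = a * z + b
        d_fake = sigmoid(w * x_fake + c)
        dg = -(1 - d_fake) * w  # gradient of generator loss w.r.t. g(z)
        a -= lr * np.mean(dg * z)
        b -= lr * np.mean(dg)
    return a, b, w, c

a, b, w, c = train_gan()
z = np.random.default_rng(1).normal(0.0, 1.0, 5000)
fake_mean = np.mean(a * z + b)
print(round(fake_mean, 2))  # should drift toward the real mean of 4.0
```

Real GANs use deep networks and far more data, but the two alternating updates above are the whole adversarial game in miniature.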
Speaker 3 (39:08):
You know, AI growth is so exponential, can we even predict
even predict where this is going to go?
Speaker 5 (39:13):
We can guess. So through statistics and graphs from
Ray Kurzweil, twenty twenty nine seems reasonable for
AI to pass the new Turing test, to be able
to become equivalent to human capabilities. That makes sense in
terms of these new Nvidia chips, in terms of supercomputing,
although the mathematics are hugely complex to get to that point.
(39:35):
So I think we can have, you know, a ninety
percent degree of confidence that AI will hit this general
intelligence, not greater than human capability, but similar, by
twenty twenty nine. When we look at the Singularity, Ron,
where AI becomes a superintelligence and it’s able to keep
on evolving without human control and just keep on just
going just remarkably in its evolution, twenty forty five is
(39:58):
kind of a safe figure. So I think we can
have a ninety percent degree of confidence twenty forty five
where AI is sentient, self aware, you know, that kind
of finger in the air, maybe forty, forty five percent.
Speaker 3 (40:10):
Wow. What about IBM? Let's talk about that real quick.
I don't think they feel like ethical AI is important.
Speaker 5 (40:18):
We need to be careful with some of these big
tech companies. Let me ask you a question, why would
a big tech company speak to all the religious institutes
around AI ethics?
Speaker 3 (40:28):
I assume they would want to find out what the
real ethics should be.
Speaker 5 (40:31):
That's what one would assume. But what would you do
with a company that's featured in the Vatican AI Ethics
handbook, that supports diversity, equity, inclusion, doesn't mention the human
spirit, is at war with creation, doesn't recognize the sovereignty
of the masculine and feminine? Why would a big tech company
that's pushing that agenda be involved in religious institutes?
Speaker 3 (40:50):
Well, they’re not interested in this as far as I
can tell.
Speaker 5 (40:53):
That's exactly right. So basically, it's a great hijacking by
IBM, an attempt to hijack religious and spiritual traditions
under the guise of fake benevolence. And this is
what we're working against, this is what we're pushing against. This
is the battle we’ve got in a way, and that’s
why we're doing this global project at World Three. We're
going to write about it in the book AI in
(41:13):
Our Divine Spark, where we basically reveal the principles of
enlightenment for artificial intelligence. That’s why we did it at
Contact in the Desert, to actually support and honor the
sovereignty of our divinity, the sovereignty of our soul. You know,
IBM, very much tied into global organizations, shall we say,
definitely tried to circumvent some of my work in ethical AI.
(41:35):
So we need to be careful of this fake benevolence
that's going on around the world, that appears to me
to be a global coercion and hijacking of spiritual institutes
into their fake constructed understanding of reality itself.
Speaker 3 (41:47):
I feel like this is just going to get more
messy as things go down the road. Listen, check out
Matthew's new book, called AI and Our Divine Spark. He
also has two websites worth visiting: AIEthics dot world and
Inventing World three dot com.
Speaker 5 (42:02):
That's the big one. Inventing World three dot com. That's the
big one.
Speaker 3 (42:05):
Yeah, visit that because there’s some important things happening in
our world that you just heard. So thanks so much,
my friend. That was a lot of fun. It's interesting
to contemplate these things, and we can keep doing this
as we go down the road and see how things are moving, right?
Speaker 5 (42:19):
Yeah. Thanks for having me on, and thanks for the
questions, Ron, because they were great questions. It's nice
to be interviewed like this. I love being challenged.
Speaker 3 (42:26):
Thank you, Thank you so much, Matthew, and thank you
for listening to Beyond Contact. We will be back next
week with an all new episode. You can follow me
Captain Ron on Twitter and Instagram at c I t
D Underscore Captain Ron. Stay connected by checking out Contact
inthedesert dot com. Stay open minded and rational as we
(42:47):
explore the unknown right here on the iHeartRadio and Coast
to Coast am Paranormal Podcast Network.
Speaker 1 (42:58):
Thanks for listening to the iHeartRadio and Coast to Coast
AM Paranormal Podcast Network. Make sure and check out
all our shows on the iHeartRadio app or by going
to iHeartRadio dot com.