Episode 4: AI Breakthroughs in UFO Research with Mitch Randall

Jun 14, 2024

Captain Ron is joined by Mitch Randall, a member of the Galileo Project and an expert in Artificial Intelligence. Mitch discusses the exponential growth of AI, its role in UFO detection, and the future of AI in Ufology. They examine how AI technologies enhance the identification of UFOs and the implications of the impending singularity for extraterrestrial research.

Episode Transcript

Speaker 1 (00:02):
You’re listening to the iHeartRadio and Coast to Coast AM
and Paranormal Podcast Network, where we offer you podcasts of
the paranormal, supernatural, and the unexplained. Get ready now for
Beyond Contact with Captain Ron.

Speaker 2 (00:14):
Welcome to our podcast. Please be aware the thoughts
and opinions expressed by the host are their thoughts and
opinions only and do not reflect those of iHeartMedia, iHeartRadio,
Coast to Coast AM, employees of Premiere Networks, or their

(00:36):
sponsors and associates. We would like to encourage you to
do your own research and discover the subject matter for yourself.

Speaker 3 (00:56):
Hey everyone, it’s Captain Ron, and each week on Beyond Contact
we’ll explore the latest news in ufology, discuss some of
the classic cases, and bring you the latest information from
the newest cases as we talk with the top experts.
Welcome to Beyond Contact. I’m Captain Ron. Today we’re speaking

(01:18):
with Mitch Randall. Mitch is the CEO of Ascendant AI
and the innovator behind SkyWatch Passive Radar, a UAP tracking system.
SkyWatch was developed through the Galileo Project and hopes to
detect UAPs using reflected signals as an alternative to traditional
radar technology. Mitch, I’m so glad to see you, and
I’m so glad to have you on here today. I

(01:40):
find all of the stuff beyond fascinating, so welcome to
the show.

Speaker 4 (01:44):
Well, it’s a pleasure to be here.

Speaker 3 (01:45):
This is really great stuff. And I have to tell
you I’ve been doing a lot of these interviews lately,
and often it comes up that I’m the one that
brings this up. I don’t specifically say SkyWatch, but I
say that, you know, the prospects of using AI to
better discern what’s happening in our skies, to do a faster,
more comprehensive job in tracking these objects in the sky,

(02:07):
pointing out anomalies is what we need. So we’ve been
talking about this and all these different possibilities, but it
turns out you’re actually the guy who’s actually doing precisely
that with this SkyWatch project. Can you tell us, like
how this whole thing works?

Speaker 4 (02:27):
Oh, yeah, the SkyWatch radar is a passive radar. Let
me just back up a little bit and tell you how
regular radar works. Right, with regular radar, normally you have a big, expensive,
high power transmitter and a big dish, and it shoots
out a pulse into the sky. Let’s say
you point it at an airplane; it bounces off the
airplane and comes back. So that’s traditional radar. The problem

(02:50):
with that is it’s kind of expensive. You have this
huge antenna, you have this big old transmitter thing that’s
expensive and big. There’s another thing. It’s called passive radar.
So passive radar is a cute little trick where hey,
you’re just trying to measure the echoes off these objects
in the sky. You know what if you could use

(03:10):
instead of a big transmitter like that, use somebody else’s transmitter.
There’s tons of transmitters out there that are already transmitting power.
Perfect opportunity. There is FM radio transmitters. They have an
ideal signal for this kind of work. Plus they’re transmitting
hundreds of thousands of watts. So there you have it.
You have a high power transmitter. And what’s going to

(03:32):
happen in this case with passive radar is that FM
radio signal goes out into the sky. It’s also going
to bounce off whatever targets are up in the sky,
some airplane or some UFO or whatever it is. And
that signal is going to come back and you can
detect that on a receiver. The cute thing about this,
like I said, is not only do you not have to have
that transmitter, but the power that’s being transmitted by the

(03:55):
FM radio transmitter is so high that you don’t even
have to have a dish on your receiver end to
pick up the echo. So get rid of the dish.
And on top of that, this whole thing is basically
just an FM receiver. So this is something that nowadays
can be relatively inexpensive. You could have a receiver-type
hardware and, let’s say, you know, we could make something

(04:17):
that’s something like six inches by six inches by six inches.
So that gives you just an idea of what this
hardware would be like.
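For readers who want to see the idea in code, here is a minimal sketch of the passive radar processing Mitch describes: correlate a reference copy of the FM broadcast against a second, echo-listening channel over candidate delays and Doppler shifts, and look for peaks. The function name, parameters, and brute-force loop are illustrative assumptions, not the actual SkyWatch design.

```python
# Minimal sketch of bistatic passive-radar processing: correlate a reference
# copy of the FM broadcast with a "surveillance" channel over candidate delays
# (range) and Doppler shifts (radial velocity). Illustrative only.
import numpy as np

def cross_ambiguity(reference, surveillance, fs, max_delay_s, doppler_hz):
    """Return a (Doppler x delay) map of echo power; peaks are targets."""
    n = len(reference)
    max_lag = int(max_delay_s * fs)
    t = np.arange(n) / fs
    out = np.zeros((len(doppler_hz), max_lag))
    for i, fd in enumerate(doppler_hz):
        shifted = reference * np.exp(2j * np.pi * fd * t)   # candidate Doppler
        for lag in range(max_lag):
            # correlate the Doppler-shifted reference with the delayed echo channel
            out[i, lag] = np.abs(np.vdot(shifted[:n - lag], surveillance[lag:]))
    return out

# Synthetic demo: a noise-like "broadcast" plus an echo delayed 50 samples
# and Doppler-shifted by +100 Hz.
fs, n = 200_000, 20_000
rng = np.random.default_rng(0)
t = np.arange(n) / fs
ref = rng.standard_normal(n) + 1j * rng.standard_normal(n)
echo = np.roll(ref, 50) * np.exp(2j * np.pi * 100 * t)
amap = cross_ambiguity(ref, echo, fs, 100 / fs, np.arange(-200, 201, 50))
print(np.unravel_index(amap.argmax(), amap.shape))  # -> (6, 50): +100 Hz bin, lag 50
```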

Speaker 3 (04:24):
So would people have their own at their house? They
would have this, right? You’re talking about individual people having
it, right? And then would they be able to see
all the data that you guys would see?

Speaker 4 (04:36):
Well, let me back up for a second and just say,
let me tell you a little bit about the storm
chasing radar trucks, if you don’t mind. So I was
instrumental in the building of those, with the radar
receiver hardware that I built. The idea of those storm chasing
radar trucks, you know, they see tornadoes. You might have
seen them on TV, right. The idea there is, I
used to work at the National Center for Atmospheric Research

(04:58):
and we had, you know, these high quality, if not the best,
weather radars in the world there, but they were like
on a concrete pad. You know, we of course wanted
to know what’s going on in a tornado. But how
often does a tornado come by these you know radars
where you have one installed in Colorado, you have one
installed in Texas. I mean, you don’t. There’s not enough
statistics there to actually wait around for a tornado to come.

(05:20):
So the idea of those storm chasing trucks was, well,
the tornado’s in Kansas, we got to drive to Kansas.
That’s how we’re going to measure it. So we stuck
a radar on a truck. And you know, this is
a fundamental part of instrument design. It’s not just about
the specs of the instrument, but it’s also about the
cost of the instrument, the practicality, the reliability, and the

(05:40):
effectiveness of it. If you can’t go to the tornado,
you’re never going to be able to measure one. You could
wait one hundred years and never get a
tornado by your radar. Same thing here. So now we
have a technology that I can take a little box
that’s as complicated as a radio receiver, right, and measure,
you know, a UFO if it’s in the sky. But what
are the chances that UFO comes over your house? By

(06:02):
the way, these things can see out one hundred miles,
so that’s no problem. Still within one hundred mile radius,
what are the chances the UFO is going to come there?
So the idea behind the SkyWatch system is what we call
the SkyWatch network. We want to put these
in the hands of citizens and spread them all over
the country. That way, no matter where a UFO would
fly in the country, we’re going to see it, and

(06:25):
hopefully we’ll see it with multiple receivers at once.

Speaker 3 (06:28):
Now, how do you discern between an airplane and a helicopter?
Is this where AI comes in. Would that analyze the
data and pick out the anomalies that aren’t identified right
away as a plane, helicopter, satellite, whatever.

Speaker 4 (06:41):
Oh, that’s a great question. Well, one thing is that
airplanes do have onboard transponders, so an airplane is constantly
transmitting out there where its location is, what its,
you know, tail number is, and its velocity. So we
can receive that, and we can correlate that
to the dots that we see on our display, right,

(07:02):
and we can say, oh, that dot is actually due
to flight twenty one sixty two, so that’s not a UFO.
But there are definitely airplanes up there that don’t have that.
And then there’s, you know, this is what I think
we’re going to experience. There’s some things up
there that nobody knows what they are, and they’re not
telling us. You know, to distinguish what, you know, what

(07:24):
those last objects are. I mean, we definitely can apply AI,
but there’s also a couple easy ways to tell. If
something goes Mach 4 and makes a right angle turn at
two hundred and seventy five g’s, you know
it’s probably not a commercial airliner.
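A toy sketch of the transponder cross-check described above: radar detections that line up with an ADS-B report are labeled as known flights, and whatever is left over is kept as an uncorrelated track. The data layout and distance threshold are invented for illustration, not drawn from the SkyWatch software.

```python
# Toy sketch: rule out radar detections that match an ADS-B transponder report,
# leaving only uncorrelated detections as candidate anomalies.
# Data shapes and the distance threshold are illustrative assumptions.
from math import dist

# Radar detections: (x_km, y_km, altitude_km) in some local frame
radar_detections = [(12.0, 40.5, 10.2), (55.1, -8.3, 1.4)]

# ADS-B reports: tail number -> reported position in the same frame
adsb_reports = {"N2162": (12.1, 40.4, 10.2)}

def classify(detections, adsb, max_error_km=1.0):
    """Label each detection as a known flight or an uncorrelated track."""
    results = []
    for det in detections:
        # find the closest ADS-B aircraft to this radar detection
        match = min(adsb.items(), key=lambda kv: dist(det, kv[1]), default=None)
        if match and dist(det, match[1]) <= max_error_km:
            results.append((det, f"matches flight {match[0]}"))
        else:
            results.append((det, "uncorrelated -- keep for analysis"))
    return results

for det, label in classify(radar_detections, adsb_reports):
    print(det, "->", label)
```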

Speaker 3 (07:42):
And you know what I just thought of while you
were saying this: this obviously will be
getting recorded, correct? Oh, yeah, all right. So the idea
is, like we always, you and I have talked about
this before, how, you know, you never pull out your
phone in time to see the UFO or whatever. However,
this is going to be recorded constantly, so if we
did see something doing a ninety degree turn, it would be
recorded and we would have a record of that.

Speaker 4 (08:03):
Well, not only that, you know, the government’s had a
monopoly on radar data. We don’t get to see radar data.
We don’t have familiarity with radar data. It is so
much better than optical data for so many reasons. One
is it’s quantitative. It actually knows the position and velocity
of an object. So it’s not just oh, you know,
a light went across my screen. I don’t know if

(08:23):
it’s a bug that was two feet away or if
it was a huge, you know UFO that’s two hundred
miles away. You can’t tell that with a camera.
But the other thing that, you know, made
me think of this is, not only like you’re saying,
you know, this will be constantly, twenty four seven,
monitoring the sky. But another thing about UFOs is we
only talk about the luminous UFOs that we see at

(08:46):
night, at least I think so. There’s some daylight
sightings of UFOs that aren’t necessarily luminous, but what about
non luminous UFOs that are at night? How would you
even know that? You know, for all we know that’s
when they’re around. If this all comes out that there’s
no UFOs in the sky, great because that’s also a
scientific result.

Speaker 3 (09:06):
That’s not what I expect, but that’s equally important. It
would be great to show that no, there is nothing,
that that was an animal or whatever.

Speaker 4 (09:13):
Yeah, I mean, you know, it’s not conclusive because that’s
proving a negative, but it would be very interesting. But
you know, I don’t mind playing around with the assumption
that there are strange things up in the sky. So
that’s that’s where of course my passion comes from. And
that’s why I’m interested in this because I think I
have a feeling we are going to see things in
the night sky or a day sky that are doing strange,

(09:35):
crazy maneuvers, and now we’re going to be able to
quantify them.

Speaker 3 (09:38):
Well, no doubt about it. We could start with Kevin Day,
the radar guy from the military. There’s been many military
accounts of different UFOs on radar specifically. You know, we’ve
heard that from people working in aviation as well as
people in the military. You know, this system that you
want to create here could just support that data, and we all
will have access to the results, for one, right, which

(10:00):
would be great. Listen, We’re going to have to take
a quick break here, Mitch. Fascinating stuff about SkyWatch. We’re
going to come right back and talk more about that
with Mitch Randall. You’re listening to Beyond Contact on the
iHeartRadio and Coast to Coast AM Paranormal Podcast Network.

Speaker 5 (10:19):
Hey folks, producer Tom here reminding you to make sure
and check out our official Coast to Coast AM YouTube channel.
For many of us, YouTube is our go to place
for audio visual media, and we here at Coast to
Coast are happy to share free hour long excerpts of
Coast to Coast AM with you, our loyal fans and
new listeners. Our YouTube channel offers many different Coast to
Coast AM hour long pieces of audio on numerous topics

(10:42):
including ufology, extraterrestrials, conspiracies, strange creatures, prophecies, and.

Speaker 3 (10:48):
Much much more.

Speaker 5 (10:49):
There’s even a section that includes our most popular uploads,
such as many of the David Paulides shows on people
disappearing in national parks. To visit or subscribe, just go
to YouTube and type in Coast to Coast AM Official,
or you can simply go to the Coast to coastam
dot com website and click on the YouTube icon at
the top. It’s the official Coast to Coast AM YouTube channel.

(11:10):
You’re gonna love this. Just get on over to Coast
tocoastam dot com and start your free listening now.

Speaker 6 (11:18):
The Coast to Coast AM mobile app is here and
waiting for you right now. With the app, you can
hear classic shows from the past seven years, listen to
the current live show, and get access to the Art Bell
Vault where you can listen to uninterrupted audio. So head
on over to the Coast to coastam dot com website.
We have a handy video guide to help you get
the most out of your mobile app usage. All the
info is waiting for you now at Coast to coastam dot com.

(11:40):
That’s Coast to coastam dot com.

Speaker 1 (11:47):
Thanks for listening. Keep it here on the iHeartRadio and
Coast to Coast AM Paranormal Podcast Network.

Speaker 3 (12:03):
Okay, we’re with Mitch Randall here on Beyond Contact and
we’re talking about SkyWatch and this will be a way
that all of us can participate in monitoring the skies
and maybe through this new technology, be able to find
out what’s really happening up there, right, Mitch? Yeah, what
is the next step for this program? Do you guys
need to raise funding? What is the approach here?

Speaker 4 (12:25):
You read my mind? Something really big is happening. So
it turns out that this is just one piece of
something that’s called Operation SkyWatch. So Operation SkyWatch is a
very bold plan. Actually, can I tell you
what the mission statement of Operation SkyWatch is?

Speaker 3 (12:42):
I would love that, please.

Speaker 4 (12:44):
To establish the existence of UFOs by measuring their extreme accelerations
and maneuvers. And when I say we’re measuring their extreme
acceleration maneuvers, what I’m talking about is scientific quality data
of those extreme accelerations and maneuvers. You know, for decades
people have reported, oh, you know, I saw an object

(13:05):
and then it just disappeared, you know, and there’s really
nothing there, you know. Then you hear the debunkers say,
where’s the proof? Right? So here we are: this is
a system that can actually quantitatively measure these things and
there can be a scientific basis for it. Because of
course we’re going to write up everything about the SkyWatch

(13:27):
radar and publish it. So this is how
science works. You always write a paper about your instrument
so that when you get data from your instrument,
everybody knows what that data means. SkyWatch Radar is just
one third of Operation SkyWatch. So Operation SkyWatch includes the radar,
an app, a video recording app for triangulation, and a

(13:50):
national reporting database. And the idea here is there’s three
really important pieces of information that can come together. And
this is all in cooperation with a scientific group. So
I’m in talks with multiple scientific groups about this, and
I think we’re close to getting to a point where
this is I can make an official announcement of who

(14:13):
the group is. Oh, I have so much to talk
about here too, because let me just jump to that
National Reporting Center just for a second, okay.

Speaker 3 (14:22):
Sure, please. We could hear this for three days nonstop.
This is all fascinating stuff.

Speaker 4 (14:26):
How many times have you heard people say there’s no
evidence, and you think, wait a minute, there’s thousands of witnesses,
some of them so credible, that have come forward. Thousands
of witnesses and you’re telling me there’s no evidence? But
you know, if somebody in court had thousands of witnesses, you
could put someone behind bars with that. So what’s the
deal here? What is the scientific standard? One thing I

(14:49):
would like to see happen is there’s a scientific group
now involved, and the scientific group can say, again, if
you have an instrument, you write about the instrument’s quality, right,
and then when the instrument gathers data, you publish that.
Why don’t we do that with the reporting center. Why
don’t we talk about the quality of the data that
we get with the reporting center? Why don’t we talk

(15:09):
about what it means to have corroborated witness testimony about something?
Has that actually been done scientifically? Is there a paper
somewhere that says, oh, well, if you have four witnesses
to this event, then the probability is blah blah blah.
I mean, I really think that’s something that we need
to do as a society because otherwise we’re going to

(15:29):
say there’s no evidence, when in fact, I believe there
is a lot of evidence. But that’s just the reporting center.
And I’ve already talked about the radar. The third thing
is the video triangulation app. Part of the system now
is this video triangulation app, and if you’ve used a
stargazing app ever before, you know that the phone knows
where it’s pointed. That metadata can be attached to a video.

(15:53):
We could have a video that not only shows what
object is on the screen, but it shows exactly your location
and exactly your pointing angle. And if two people had
a video of the same object, boom, you can triangulate
that object and figure out its position in time. Therefore,
you know its velocity and acceleration. So that’s like really
super great.
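A minimal sketch of the two-phone triangulation Mitch describes, assuming each video contributes a camera position and a pointing direction from its metadata: the object is estimated as the point closest to both lines of sight. The function name and example numbers are illustrative only.

```python
# Minimal sketch of two-camera triangulation: each observation is a camera
# position plus a pointing direction, and the object is estimated as the
# midpoint of the shortest segment joining the two lines of sight.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest-point estimate for two sight lines p + t*d."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    # Solve for t1, t2 minimising |(p1 + t1*d1) - (p2 + t2*d2)|^2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b                 # ~0 means the sight lines are parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2   # closest points on each line
    return (q1 + q2) / 2                  # estimated object position

# Two observers 10 km apart both sighting an object near (5, 5, 3) km
est = triangulate([0, 0, 0], [5, 5, 3], [10, 0, 0], [-5, 5, 3])
print(est)   # ~ [5. 5. 3.]
```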

Speaker 3 (16:13):
Other people that had that app could then get an
Amber alert: this is headed your way, right?

Speaker 4 (16:19):
Well, well, you hit on a perfect point there. Sure.
How often do people pull out their phone when they
see a UFO? It turns out almost never. They’re just stunned
by what they’re looking at, and they don’t think to
pull out their phone and start fumbling through icons.

Speaker 3 (16:32):
Because you don’t know what’s coming. Now, you don’t know
what’s coming, right? You’re shocked. But
if you had the thing saying it’s going to be there
in ten minutes. Well, now I’m going to go outside,
set up my camera and wait for it. That’s a difference.

Speaker 4 (16:44):
That’s exactly right. That’s exactly right. Now, we have a
radar as part of the system. The radar can actually
give an alert to that person and say, hey, run outside,
because in two minutes over the southwest horizon there’s going
to be something. Get ready to push the record button.
And with a system like that, we very well could
get multiple cameras on the same object and do video triangulation.

Speaker 3 (17:06):
We’d have all of the radar data recorded as well,
which you’ve already talked about. You’d be reporting all of
that, so that would match the video. Now you really
have some really strong evidence that you could show the
rest of the world.

Speaker 4 (17:20):
Well, that would be called multisensor data. Right, that’s the
cream of the crop. But in addition, that app would
help people report it right to the database, so you’d
also have, you know, all the provenance. Basically, you’d have
somebody describing what they saw and the phone call coming
in at the time.

Speaker 3 (17:39):
So you have eyewitness accounts, you have video evidence, you
have data from the radar. Now you’ve got three forms
of evidence that all corroborate. That’s pretty exciting.

Speaker 4 (17:50):
And you have a scientific organization that’s going to take
that data and turn it into something that the rest
of the scientists agree with. So that’s why I
say this will establish the existence of UFOs by measuring
their extreme accelerations and maneuvers. This would be a slam dunk.

Speaker 3 (18:07):
You’re the hero that this community has been waiting for
for a long time. Mitch, how far off is
something like this?

Speaker 4 (18:14):
Oh, let me throw in a real quick
thing here. You know the five observables, right, you’ve heard
of the five observables. So there’s been reports of UFOs.
They do these crazy things and that makes them interesting.
But I have to say four of the five are
debunkable, something you can actually debunk. For example,
they say no control surfaces or visible propulsion. Well that’s

(18:34):
a balloon. A balloon doesn’t have control surfaces or propulsion, right,
and you hear that all the time. Oh, that’s a balloon.
Or you know, you can go down the list with
the other the other things. I just wanted to point
that out because we specifically picked the acceleration because it’s undebunkable.
If you can quantitatively measure an object and it has

(18:54):
an acceleration of fifteen hundred g’s, let’s say a right angle
turn going Mach 2, there’s no balloon. There’s no,
you know, natural phenomenon. There’s no, you know, if you
tried to make that happen with some kind of, you know, fakery,
you couldn’t do it. So undebunkable is part of this.
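To make the acceleration criterion concrete, here is an illustrative calculation of how g-loading would be estimated from a quantitative track: position fixes are finite-differenced into velocities and then accelerations. The sample numbers are invented to show the arithmetic, not real data or a claim about any actual sighting.

```python
# Illustrative only: estimating acceleration in g's from a quantitative track.
# Positions are finite-differenced twice; the numbers are invented (an object
# near Mach 2 making an abrupt right-angle turn between two fixes).
G = 9.81  # m/s^2

# (time_s, x_m, y_m) fixes from a hypothetical track, 0.5 s apart
track = [
    (0.0,     0.0,   0.0),
    (0.5,   343.0,   0.0),   # ~686 m/s (about Mach 2) heading +x
    (1.0,   686.0,   0.0),
    (1.5,   686.0, 343.0),   # abrupt 90-degree turn onto +y at the same speed
    (2.0,   686.0, 686.0),
]

def velocities(track):
    return [((x2 - x1) / (t2 - t1), (y2 - y1) / (t2 - t1), (t1 + t2) / 2)
            for (t1, x1, y1), (t2, x2, y2) in zip(track, track[1:])]

def accel_g(track):
    v = velocities(track)
    out = []
    for (vx1, vy1, t1), (vx2, vy2, t2) in zip(v, v[1:]):
        ax, ay = (vx2 - vx1) / (t2 - t1), (vy2 - vy1) / (t2 - t1)
        out.append(((ax ** 2 + ay ** 2) ** 0.5 / G, (t1 + t2) / 2))
    return out

for g_load, t in accel_g(track):
    print(f"t={t:.2f}s  acceleration ~ {g_load:.0f} g")
# The turn between fixes shows up as roughly 200 g here; the faster the turn
# completes relative to the sample spacing, the higher the implied g-loading.
```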

Speaker 3 (19:15):
Undebunkable is what we need. We need to have more
credible evidence like this. Do you think this is something
on the horizon? How far down the road is this
going to be?

Speaker 4 (19:23):
Yeah. So this is an engineering development effort for
these products, especially the radar, you know. But the
app could actually be available within like six to
eight months, wow, something like that. The radar, we have
our schedule at about twenty four months before the radar is
available for sale. So when we hit twenty four months,

(19:44):
you could click on the internet and go and order
one and the whole network will start to build out.

Speaker 3 (19:50):
That sounds very exciting to me, Mitch. This feels like
something that this community has wanted for a long time,
and it’s very promising, and it’s nice to see this
in the hands of the people, where, like you said,
the government owns the radar data, well, they won’t own
this and we will all have access to it, and
that’s kind of what we want. We’ve got to take
another break here, Mitch. We’re going to come back and

(20:12):
we’re going to move on and talk to Mitch a
little bit about AI too, because this stuff is even
more mind blowing. If you can believe that you’re listening
to Beyond Contact on the iHeartRadio and Coast to Coast
AM Paranormal Podcast Network.

Speaker 6 (20:31):
Are you looking for that certain someone who shares your
interests in UFOs, ghosts, bigfoot, conspiracy theories, and the paranormal, Well,
look no further than paranormal date dot com, the unique
site for like minded people. If you like the senior crowd,
try paranormal Date dot com slash seniors to meet like
minded people that are sixty plus. It all depends on
what you prefer. Paranormal Date dot com is great for everyone.

(20:54):
You can also tap into members that are sixty plus
at paranormal date dot com slash Seniors. Enjoy your search
and have some fun at paranormal date dot com.

Speaker 1 (21:09):
Hey folks, it’s easier than ever to become a Coast
to Coast AM insider and have access to past shows,
the Art Bell Vault with classic audio and interviews, and so
much more. And you can listen to the show live
or on demand. With your computer or cell phone, and
the audio streams are high quality and crystal clear. It’s
easy to become an insider. Just head on over to
Coast tocoastam dot com the website and you’ll find all

(21:32):
the info right there. That’s Coast tocoastam dot com, Coast
to coastam dot com.

Speaker 3 (21:44):
Hey, it’s producer Tom, and you’re right where you need
to be.

Speaker 5 (21:47):
This is the iHeartRadio and Coast to Coast AM Paranormal
podcast Network.

Speaker 3 (22:03):
We are back on Beyond Contact with Captain Ron and
I’m talking to Mitch Randall. I’d like to move on
a little bit now for a second. If I could
Mitch and talk about AI. This is incredible too. I’ve
heard you talk extensively about different AI processes and where
that’s headed, and I’d really like to jump into this
right in the beginning though. I don’t think people really

(22:23):
understand some of these new AI terms that are out there.
Maybe you could kind of straighten that out for us.
Your company is called Ascendant AI, and then
now all of a sudden, just when we think we
get our head around AI, there’s OpenAI, there’s generative AI,
there’s artificial general intelligence. Can you sort this out for

(22:44):
us mere mortals, so we know what this means.

Speaker 4 (22:48):
I think AI had some of its beginnings with images,
so you know, let’s try to understand if this image
has a dog or a cat, you know that kind
of thing. You know, those are the nice baby step
days back in the day with AI. And then they
also had you know, translators, so you could put some

(23:09):
text in there. It would turn it into let’s say,
from English to French. You know, these are tasks they
found out that AI can actually be helpful at, that are pretty
hard to do any other way. But then they had
a renaissance. A big change happened. This is right around
twenty twelve, where they came up with the idea of convolution.
For all of us mortals, convolution just was a

(23:33):
big step in how much they could compute with a
lot less compute power, so they could do a lot
more operations with a lot less compute power. So then
all of a sudden, the ability to tell the image
was a dog or a cat became something they could
actually do, and do it quite well. In fact, they
were beating humans at it. So they had you know,
data sets now and they had a heyday right

(23:56):
thereafter, you know, twenty twelve or so, all of
a sudden, the AI models start to get much much better.
Then they came up with the transformers. So you’ve heard
of ChatGPT. GPT stands for Generative Pre-trained Transformer. That’s
what the GPT stands for. So a transformer was a
new technique. Again, it’s like technology they’re throwing at the computations.

(24:18):
There’s definitely a philosophy or the design behind it. But
the transformers allowed now this translation to get way better.
And then they realized as they made those models bigger
and bigger, and when I say bigger and bigger, put
more and more layers on them. So a first layer
would kind of figure out, you know, decipher a little bit,
and then the next layer, and the next layer, they

(24:38):
would just literally make them deeper and deeper with more
and more layers. I don’t know if that’s making sense, but
that’s basically what they’re doing. And when they did that,
they started to realize, that’s funny, this thing actually
seems like it has some kind of ability to reason.
You know, it’s interesting because people haven’t been sitting
down going, like, hey, I’m going to make an AI
that reasons, because nobody knows how to do that. But

(25:01):
what they can do is, you know, they start out
with the translator and they realize, wow, if I just
make a translator that has a lot more layers, and I
train it on a lot more books and a lot
more input information, now sort of emerging out of
this is this ability to kind of reason.
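To put a little code behind the convolution step Mitch mentions: the efficiency win is that one small set of filter weights is reused at every position of the image instead of learning separate weights for every pixel. A minimal sketch, with an invented 3x3 edge filter:

```python
# Minimal sketch of 2D convolution: one small 3x3 filter is slid over the whole
# image, so the same nine weights are reused everywhere (the parameter-sharing
# idea that made image classifiers like the dog/cat example practical).
# The filter values are just an invented edge detector for illustration.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # same small weight matrix applied at every (i, j) position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # a simple vertical edge
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])     # responds strongly at vertical edges
print(conv2d(image, edge_filter))        # large values appear along the edge
```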

Speaker 3 (25:17):
Wasn’t it a language system where they were really just
guessing or predicting the next word that you would say?

Speaker 4 (25:24):
You are exactly right, Okay, that’s exactly what it is.
So again, nobody sat down and thought, I’m going to
make it reason by guessing the next word in some
fancy way, Like, nobody knows how to make that happen.
But what they could do is just make the model bigger,
and train more stuff on it, and then be
surprised that just by predicting the next word, they’re getting

(25:46):
out this kind of ability to reason. So it’s pretty
interesting to see how the whole thing evolved. It’s
not so much...

Speaker 3 (25:54):
Let me jump in here and ask you a quick question, Mitch,
because you just said the ability to reason, and I
want to ask you, is it really reasoning or is
it we’re assigning the notion of reasoning to AI because
these words in this order seem to make reasonable sense.

Speaker 4 (26:11):
Well, I love that question, because what is reason? Well,
they you know, for example, consciousness, They say, oh, that’s
a hard problem. And I don’t know if you’ve ever
heard that term before. They say, consciousness is the hard problem.
And I think what they’re saying is, you know, we
don’t know what it is, so how can we design
for it. You know, we can’t even describe it. We

(26:31):
don’t even know, we can’t even make a test for it.
Similar to what you’re talking about with reasoning. So let
me just turn it kind of back around at you.
Is the AI reasoning when it’s just predicting the next word?
Or let me ask you this, Are you reasoning when
you predict the next word?

Speaker 3 (26:47):
So no, I don’t think predicting the next word is reasoning.
That’s just math. Usually after good, the word morning comes,
so that’s just math, that’s the most frequent word
after good. Good morning. So no, I don’t think that’s reasoning.
I think that’s just math.

Speaker 4 (27:03):
You have to know, it’s not just predicting
the next word by looking at the last word. It’s
predicting the next word by looking at the last ten
thousand words. So if you have a conversation...

Speaker 3 (27:14):
My head just exploded. Sorry, yeah, oh okay, I didn’t realize
it’s that complex. Then maybe it’s more towards reasoning.

Speaker 4 (27:21):
Yeah, because when you have a conversation with AI, let’s
say you go back and forth five times with several words,
you know, and let’s say that was five hundred words.
When it predicts the next word in its answer,
it’s considering all those five hundred words, not to mention
the trillions of words it was already trained on. It’s
a little more complicated than just the word after good.

Speaker 3 (27:41):
No, no, it’s a lot more complex. Yeah, apparently.

Speaker 4 (27:45):
Yeah, and you can imagine too, there’s a lot of
computation that goes into that. It’s very compute heavy.
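A toy contrast of the point being made here: predicting the next word from the last word alone versus from a longer stretch of context. The tiny corpus and counting scheme are invented for illustration; a real transformer learns these statistics with attention over thousands of prior tokens rather than lookup tables.

```python
# Toy contrast between "predict from the last word only" and "predict from the
# wider context", echoing the discussion above. Purely illustrative.
from collections import Counter, defaultdict

corpus = ("good morning captain . good evening mitch . "
          "the radar saw a good target this morning .").split()

# 1) Bigram model: next word depends only on the single previous word
bigram = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram[prev][nxt] += 1

# 2) Context model: next word depends on the last three words together
trigram_plus = defaultdict(Counter)
for a, b, c, nxt in zip(corpus, corpus[1:], corpus[2:], corpus[3:]):
    trigram_plus[(a, b, c)][nxt] += 1

def predict_last_word(context):
    return bigram[context[-1]].most_common(1)[0][0]

def predict_with_context(context):
    key = tuple(context[-3:])
    if key in trigram_plus:                   # use the longer context if seen
        return trigram_plus[key].most_common(1)[0][0]
    return predict_last_word(context)         # otherwise fall back

ctx = "the radar saw a good".split()
print(predict_last_word(ctx))     # 'morning' -- most common word after 'good'
print(predict_with_context(ctx))  # 'target'  -- the wider context changes the guess
```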

Speaker 3 (27:52):
I’d love to talk to you about this idea of
sentience and the singularity. I’m not sure people even understand
what that means. Like, singular, it’s the one: me, it’s
my consciousness. And we are talking about computer AI systems
reaching the point so that they are in a sense
conscious and sentient. Thus they would be able to pass

(28:15):
the Alan Turing test, no problem, and a human wouldn’t
be able to distinguish the two, right? Is this even possible,
in your view?

Speaker 4 (28:23):
It’s so interesting because nobody knows what consciousness is, right,
so how can they design for it? And if they
train on, you know, a bunch of encyclopedias, you know,
how could that be consciousness? It doesn’t make any sense, right?
There’s more to it than that. These things, what they have
found in general with AI, for example, just starting with
the dog cat example: how, you know, how do you tell if an
image is a dog or a cat? How
(28:44):
image tell if it’s a dog or a cat? How
do you make a machine that does that? People didn’t
know exactly what was going to happen. And when it’s
all done and we say we have a machine that
can detect what’s a dog and what’s a cat, nobody
knows how that works. Nobody can actually say, well, oh yeah,
it’s because xyz No. They just trained it on a
million dogs and a million cats and now it knows.

(29:06):
So it’s a little bit of a mystery what’s going
on in there. And then we talked about that predicting
the next word thing, and it seems to have some
kind of reason, right, They’re just going to take it
further and further and consciousness could very well emerge from that.
And you know the question is, you know, did they
design it to do that? Well, actually, they’re kind of

(29:26):
getting to the point where they are kind of getting
a pulse on what to design for. Still nobody knows
exactly what consciousness is, but they do gather, you know,
an intuition for what needs to be done and where
they should focus. So they are pushing towards consciousness.
I think they’re going to hit it. I don’t think
there’s any doubt.

Speaker 3 (29:46):
I think it’s so hard. Like you said, we
don’t even know what consciousness is. That almost feels like
our soul or our higher self. And to say that
a machine is going to achieve that seems beyond my
tiny brain to be able to figure out. It is scary
to think about. We’re gonna have to take another quick
break here. When we come back, we’re going to find
out more from Mitch about this and whether or not AI can

(30:09):
remember talking to you. You’re listening to Beyond Contact right
here on the iHeartRadio and Coast to Coast AM Paranormal
Podcast Network.

Speaker 1 (30:19):
The Internet is an extraordinary resource that links our children
to a world of information, experiences and ideas, and also
can expose them to risk. Teach your children the basic
safety rules of the virtual world. Our children are everything,

(30:40):
Do everything for them before the...

Speaker 5 (30:54):
The Art Bell Vault has classic audio waiting for you. Now go
to Coast to coastam dot com for details. Take us with you anywhere.
This is the iHeartRadio and Coast to Coast AM Paranormal
podcast network.

Speaker 3 (31:23):
We are back on Beyond Contact with Mitch Randall. Hey,
let me ask you this question, Mitch. As smart as
AI is, isn’t it true that it doesn’t remember what
we talked about right before? Kind of like my mother
who loves to call me and ask me what
she asked me the day before? Is that true?

Speaker 4 (31:42):
Well, this kind of goes back to what I
was saying a little earlier. They actually now have an
idea, maybe, of the things they should be focusing on if they’re
going to make the kind of reasoning agent that,
you know, they’re looking for, for AGI, right.
I guess, Sora, you know what Sora is? No?
It’s a text to video generator. So you can say, hey,

(32:03):
you know, a mushroom grows and it has a frog
on it or something like that, and it makes a
little video of a mushroom growing with a frog on it.
So I thought that was cute and interesting, right,
but it turns out I heard Sora was actually a project
to do what’s called world modeling. So they understood, and
in my company we’ve been talking about this kind of
thing for a couple of years, so we kind

(32:25):
of understand like the pieces that would have to come
together to make, you know, an AGI, and clearly, you know,
the people at OpenAI are also there too, because
the idea is, if you make an agent that’s going
to make actions in this world, you can’t make an
action unless you know what’s going to happen. Like maybe
it just wants to take the next step, you know,
but there’s a cliff there, so it’s got to know, Hey,

(32:46):
if I take that step, I’m going to go flying
down that cliff. So Sora was actually a world model
to predict what’s going to happen based on what an
agent is proposing. You know. So an agent might say, well,
I know, you should, you know, go to the airport,
pick up your aunt first. That way you can drive to
the thing later, you know. So then you need some
kind of world model that shows what’s going to happen

(33:07):
in that scenario. But another really low hanging
fruit is memory. In fact, I’ve been saying for a
couple of years now that you could take just a
standard kind of boring LLM and give it memory so
that it could just look up things. And that’s easy
to do for AIs, by the way, and it could
even know what it’s supposed to look up. Right, that’s
not that hard to train into even a language model

(33:29):
that we’re already using. But imagine if you had
a backpack that had an LLM and that also had
a camera, and every second it took a picture and
recorded what was being said, right, maybe you’d go like, hey,
you know, remember that time we went to you know,
Moab, and we rode our bikes around and it goes, oh, yeah,
that was fun. You know we got a flat tire

(33:50):
that was a blast. I mean, that thing would turn
into your best friend. And it doesn’t have
to be much smarter than an LLM to be that
kind of companion, just by adding memory to it. So,
I mean, you know, I’ve always said that
LLMs are basically a reasoning engine. You could just
put it in a feedback loop, just kind of throw

(34:11):
some other stuff around it, and you have a pretty
good start on AGI.
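A minimal sketch of the "backpack with memory" idea: store time-stamped notes, retrieve the ones most relevant to the current question, and prepend them to the prompt. The ask_llm function is a stand-in for whatever language model would actually be called, and the keyword-overlap scoring is deliberately crude.

```python
# Minimal sketch of the "give an LLM memory" idea discussed above: keep
# time-stamped notes, retrieve the most relevant ones by crude word overlap,
# and prepend them to the prompt. ask_llm() is a placeholder, not a real API.
from datetime import date

memory = []  # list of (date, note) pairs

def remember(note, when=None):
    memory.append((when or date.today(), note))

def recall(question, k=2):
    """Return the k stored notes sharing the most words with the question."""
    q = set(question.lower().split())
    scored = sorted(memory, key=lambda m: len(q & set(m[1].lower().split())),
                    reverse=True)
    return scored[:k]

def ask_llm(prompt):
    # Placeholder: in a real system this would call a language model.
    return f"(model reply to: {prompt[:80]}...)"

def chat(question):
    notes = "\n".join(f"- {d}: {note}" for d, note in recall(question))
    prompt = f"Relevant memories:\n{notes}\n\nUser: {question}\nAssistant:"
    return ask_llm(prompt)

remember("Rode bikes around Moab, got a flat tire", date(2023, 5, 14))
remember("Watched the SkyWatch radar demo with Mitch", date(2024, 3, 2))
print(chat("Remember that time we rode our bikes in Moab?"))
```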

Speaker 3 (34:15):
I can imagine a time in the very near future
where kids will have a bear or whatever that has
AI built in and has memory, and they’ll talk
about growing up together. You know, like the kids that
you know, you hear them have invisible friends. Well, no,
we have a visible friend right here, and I share
everything with this bear. You can imagine how that could happen.

Speaker 4 (34:38):
Like I say, it’s low hanging fruit. This stuff is
not hard to pull off. You could throw that kind
of a memory onto an LLM overnight, or, you know,
wait a month and throw it onto an AGI, so
which one would you?

Speaker 3 (34:51):
Does that add the element where it appears to have
feelings or emotions, kind of like HAL nine thousand, that
kind of thing.

Speaker 4 (34:59):
You don’t hear people talking about that except me. So
I’m telling you that once consciousness emerges, so will emotions.
Nobody talks about this. My wife says, Mitch, don’t say that.
People will think you’re dumb. But I’m telling you, nobody’s going

Speaker 3 (35:18):
To think you’re dumb.

Speaker 4 (35:19):
Please God, I’m just telling you right now.

Speaker 3 (35:22):
Again, only a wife would be able to say that.

Speaker 4 (35:24):
Okay, if it has consciousness and all, you know,
it’s not going to have consciousness unless it has agency,
and if it has agency, it’s going to have emotion.
That emotion is, you know, this is only Mitch’s theory, okay,
but emotion is an emergent, very latent response to an

(35:50):
underlying, you know, an underlying weight function, but nobody
knows what that is. But you’re going to
give the AGI some kind of a directive, you know,
when you build it. You’re gonna say, you’re, you know,
you’re a robot, so you’re going to serve humans and
you’re gonna, you know, every night, you’re gonna plug in
your charger at seven pm. So that’s really the

(36:13):
only thing you’ve got to do. But that’s your deal. Okay. Now,
what happens if the human stands in front of the
charger and says, no, I’m not gonna let you do it,
and the robot’s going to go, what do you mean?
So it’s gonna be confused. That’s an easy emotion, but
it may actually get kind of angry, like, wait a minute,
you’re doing this just to bug me, you know,

(36:33):
move over, it’s going to start getting agitated. So you know,
I think that once it actually has something to do,
that’s kind of a directive that’s been programmed. And when
I don’t really mean programmed, because some people have a
misconception that AI is quote programmed, but it’s not. AI
is trained. AI is an architecture that you put together

(36:55):
and then you train it. So there’s no programming to it,
but, you know, when it has agency, you
do have to give it a directive, and that’s some
kind of function that says, hey, I’m happy, or, you know,
I get good feedback, you know, when I plug
in my charger, and I get negative feedback if
I don’t plug in my charger. So that’s what emotion is.

(37:16):
It’s that positive or negative feedback.
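As a toy rendering of the directive-plus-feedback picture Mitch sketches (his informal theory, not an established design), here is a simple scoring function that rewards being plugged in by the deadline, penalizes failure, and nudges an internal agitation level upward when the goal is blocked. Everything here is invented for illustration.

```python
# Toy rendering of "directive + positive/negative feedback": the agent's only
# goal is to be charging by 7 pm; it gets positive feedback when it is and
# negative feedback when it isn't, and repeated blocking raises an internal
# "agitation" level. Purely illustrative, not a real design.

class ChargerDirective:
    def __init__(self):
        self.agitation = 0.0

    def feedback(self, hour, plugged_in, path_blocked):
        """Return a feedback score and update the agent's internal state."""
        score = 0.0
        if hour >= 19:                       # directive: be charging by 7 pm
            score = 1.0 if plugged_in else -1.0
        if path_blocked and not plugged_in:  # someone standing in front of the charger
            self.agitation = min(1.0, self.agitation + 0.25)
            score -= self.agitation          # blocking makes the feedback worse
        elif plugged_in:
            self.agitation = max(0.0, self.agitation - 0.5)
        return score

agent = ChargerDirective()
for hour, plugged, blocked in [(18, False, False), (19, False, True),
                               (19, False, True), (20, True, False)]:
    print(hour, agent.feedback(hour, plugged, blocked), f"agitation={agent.agitation}")
```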

Speaker 3 (37:18):
But you can imagine just extrapolating that out. You know,
how important is it that I get to the charger?
You know, at some point do I have to go
through this human? Are they? Are they the obstacle for
me getting to the charger? Do I kill the human?
You know what I mean? Right, it’s just science fiction stuff,
but it’s a real dilemma.

Speaker 4 (37:38):
Right, I think you’re
going to start seeing emotion coming out of these models,
and I think people are going to be surprised.
I won’t be surprised, but I think people will be surprised.

Speaker 3 (37:48):
I think the fact that we assign reason to this
now, that we already feel. I’ve heard, I think it
was Jason Martel, has an AI system, and I heard
him talking to it, and I’ll be darned if it
didn’t sound like he was having a conversation with a
friend in the room. Yeah, you know what I mean,
you wouldn’t know. I mean, we’re so close to that.

(38:09):
Now we’re going to do an Alan Turing test at
Contact and that’s going to be obsolete.

Speaker 4 (38:15):
Momentarily I thought that was already obsolete.

Speaker 3 (38:18):
Actually, well, there you go, I mean, isn’t it.

Speaker 4 (38:21):
So they’ve taken the large language models that you can
access right now. They’ve taken those, and they’ve carefully
put them in a box so that you know, for example,
if you say are you you know, do you think
are you conscious? Or you know, do you have reasoning
abilities or something, it’ll say, Hi, I’m just a large
language model, I just predict the next word. You know,

(38:42):
it’s been already kind of like neutered in that way.
But actually ChatGPT 3 wasn’t quite so neutered. And
a friend of mine spent quite a bit of
time talking to it about, you know, the origins of,
you know, just all these, like, esoteric philosophical things, and
it would engage in a really surprising way. And that

(39:03):
was just ChatGPT. It could have even been version two
he was playing with, I think. So if you took
all those guardrails off of the current, you know,
GPT-4o, who knows what you’d get. Of course,
they’re seeing that back in you know, back in the labs.
They’re not releasing it to the public, but they know
what it’s like. And you’ve heard those engineers say, oh
my god, it’s conscious, you know, and then everybody argues that,

(39:25):
of course it’s not. But anyway, they’re seeing stuff that
we’re not seeing, even with the simple language models.

Speaker 3 (39:32):
Incredible. Conscious, still, that’s still hard for me to
even digest that a machine could become conscious. I would
love to lie to you and say that we have
another four hours to go, so that you could continue
to tell us these fascinating things, but unfortunately we are
out of time. But thank you so much, Mitch. This
is really really fascinating stuff. I hope we can do

(39:54):
it again and thanks to everyone for listening to Beyond Contact.
We will be back next week with an all new
episode. You can follow me, Captain Ron, on Twitter
and Instagram at CID Underscore Captain Ron. Stay connected by
checking out Contact inthedesert dot com. Stay open minded and
rational as we explore the unknown right here on the

(40:15):
iHeartRadio and Coast to Coast am Paranormal Podcast Network.

Speaker 1 (40:28):
Thanks for listening to the iHeartRadio and Coast to Coast
AM Paranormal Podcast Network. Make sure and check out
all our shows on the iHeartRadio app or by going
to iHeartRadio dot com
