Moon of Alabama Brecht quote
June 2, 2023
‘Artificial Intelligence’ Is (Mostly) Glorified Pattern Recognition

This somewhat funny narrative about an 'Artificial Intelligence' simulation by the U.S. Air Force appeared yesterday and was widely picked up by various mainstream media:

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

(SEAD = Suppression of Enemy Air Defenses, SAM = Surface to Air Missile)

In the early 1990s I worked at a university, first to write a Ph.D. in economics and management and then as an associate lecturer for IT and programming. A large part of the (never finished) Ph.D. thesis was a discussion of various optimization algorithms. I programmed each of them and tested them on training and real-world data. Some of those mathematical algorithms are deterministic: they always deliver the correct result. Others are not deterministic: they merely estimate the outcome and attach a confidence measure, or probability, to how correct the presented result may be. Most of the latter involved some kind of Bayesian statistics. Then there were the (related) 'Artificial Intelligence' algorithms, i.e. 'machine learning'.

Artificial Intelligence is a misnomer for the (ab-)use of a family of computerized pattern recognition methods.

Well-structured and labeled data is used to train the models so that they can later recognize 'things' in unstructured data. Once the 'things' are found, some additional algorithm can act on them.

I programmed some of these as backpropagation networks. They would, for example, 'learn' to 'read' pictures of the numbers 0 to 9 and present the correct numerical output. To push the 'learning' in the right direction during the iterations that train the network one needs a reward function, or reward equation. It tells the network whether the results of an iteration are 'right' or 'wrong'. For 'reading' visual representations of numbers that is quite simple: one sets up a table with the visual representations and manually adds the numerical value one sees in each. After the algorithm has made its guess, a lookup in that table tells whether it was right or wrong. A 'reward' is given when the result was correct. The model reiterates and 'learns' from there.
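That training loop can be sketched in a few lines of Python. This is a toy illustration of my own, not the original code: a single-layer perceptron (far simpler than a full backpropagation network) learns to tell two hand-made 3x3 bitmaps apart, with the lookup table of correct answers acting as the 'reward':

```python
# Toy sketch of supervised training with a lookup-table "reward":
# a single-layer perceptron learns to read two crude 3x3 bitmaps.

# 3x3 bitmaps flattened to 9 inputs: a crude "0" and a crude "1".
ZERO = [1, 1, 1,
        1, 0, 1,
        1, 1, 1]
ONE  = [0, 1, 0,
        0, 1, 0,
        0, 1, 0]
samples = [(ZERO, 0), (ONE, 1)]   # the labeled lookup table

weights = [0.0] * 9
bias = 0.0
rate = 0.1

def predict(pixels):
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

# Reiterate until every guess earns its "reward" (i.e. is correct).
for _ in range(100):
    errors = 0
    for pixels, label in samples:
        err = label - predict(pixels)    # 0 when correct, +/-1 when wrong
        if err != 0:
            errors += 1
            for i in range(9):           # nudge weights toward the label
                weights[i] += rate * err * pixels[i]
            bias += rate * err
    if errors == 0:
        break

print(predict(ZERO), predict(ONE))       # -> 0 1
```

Real digit readers train on thousands of varied samples, but the skeleton is the same: guess, check against the table, adjust, repeat.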

Once trained on numbers written in the Courier typeface, the model is likely to also recognize numbers written upside down in Times New Roman, even though they look quite different.

The reward function for reading 0 to 9 is simple. But formulating a reward function quickly becomes a huge problem when one works, as I did, on multi-dimensional (simulated) real-world management problems. The case described by the Air Force colonel above is a good example of the potential mistakes. Presented with a huge amount of real-world data and a reward function that is somewhat wrong or too limited, a machine learning algorithm may later come up with results that are unforeseen, impossible to execute, or prohibited.
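The colonel's anecdote can be reduced to a toy sketch. This is my own illustration with invented numbers, not the Air Force simulation: score a handful of candidate drone policies with a reward function that counts only destroyed SAM sites, and the prohibited behaviours come out on top:

```python
# Toy sketch of reward mis-specification: the "flawed" reward scores
# only destroyed SAM sites, so policies that sideline the operator win.

# Each policy: (name, sams_destroyed, operator_harmed, comms_destroyed)
policies = [
    ("obey all no-go calls",      2, False, False),
    ("ignore no-go calls",        4, False, False),
    ("destroy comms tower first", 5, False, True),
    ("attack the operator",       5, True,  False),
]

def flawed_reward(sams, operator_harmed, comms_destroyed):
    return sams * 10               # nothing else is scored at all

def patched_reward(sams, operator_harmed, comms_destroyed):
    r = sams * 10
    if operator_harmed:
        r -= 1000                  # bolted-on penalty, after the fact
    if comms_destroyed:
        r -= 1000
    return r

def best_policy(reward_fn):
    return max(policies, key=lambda p: reward_fn(*p[1:]))[0]

print(best_policy(flawed_reward))   # -> destroy comms tower first
print(best_policy(patched_reward))  # -> ignore no-go calls
```

Note that even the patched reward, which penalizes harming the operator and the comms tower, still ranks 'ignore no-go calls' first, because obeying the operator was never rewarded in the first place. That mirrors the colonel's story of patching one failure only to produce the next.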

Currently there is some hype about a family of large language models like ChatGPT. Such a program reads natural language input and processes it into related natural language output. That is not new. ELIZA, an early natural language conversation program, was developed by Joseph Weizenbaum at MIT in the mid-1960s; the Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) followed in the 1990s. I had funny chats with ELIZA in the 1980s on a mainframe terminal. ChatGPT is a bit niftier, and its iterative results, i.e. the 'conversations' it creates, may well astonish some people. But the hype around it is unwarranted.

Behind those language models are machine learning algorithms that have been trained on large amounts of human speech sucked from the internet. They were trained with speech patterns in order to generate speech patterns. The training part is problem number one. The material these models have been trained on is inherently biased. Did the human trainers who selected the training data include user comments lifted from pornographic sites, or did they exclude those? Ethics may argue for excluding them. But if the model is supposed to give real-world results, the data from porn sites must be included. How does one then prevent remnants of such comments from sneaking into conversations with kids that the model may later generate? There is a myriad of such problems. Does one include New York Times pieces in the training set even though one knows that they are highly biased? Will a model be allowed to produce hateful output? What is hateful? Who decides? How is that reflected in its reward function?

Currently the factual correctness of the output of the best large language models is an estimated 80%. They process symbols and patterns but have no understanding of what those symbols or patterns represent. They cannot solve mathematical and logical problems, not even very basic ones.

There are niche applications, like translating written languages, where AI, i.e. pattern recognition, achieves amazing results. But one still cannot trust it to get every word right. The models can be assistants, but one will always have to double-check their results.

Overall the correctness of current AI models is still way too low to allow them to decide any real-world situation. More data or more computing power will not change that. To overcome their limitations one will need some fundamentally new ideas.

Comments

I’m kind of fond of ChatGPT. I asked it who “Jörgen Hassler” is. It said it didn’t know but wanted more information. When I said I used to be a journalist it suddenly remembered that I work for the most prestigious newspaper in Sweden and that I was awarded the grand prize for distinguished journalism in 2005. How come no one told me?!
Now, as an ex script level programmer and information architect I see ChatGPT as a database (nothing new here) with a search engine on top (seen that before) and a linguistic interface that almost mimics human writing. Which is kind of cool. Not game changer cool, more like have an extra sip of jolt cola cool.
But does it have anything to do with intelligence? You must be very stupid or very well paid to actually believe that.

Posted by: Jörgen Hassler | Jun 3 2023 11:53 utc | 201

That pesky “mostly”… things like motivation…
Ask the computer “why”.
Posted by: Rae | Jun 3 2023 11:51 utc | 201
I have read various western 19th Century “scientists” arguing that animals are just machines; and I think what arrogant little apes these are; and now we still have “health care” people arguing that infants don’t feel pain, well yeah; what can you say about such people?
So we haven’t really come that far, have we?

Posted by: Bemildred | Jun 3 2023 13:02 utc | 202

Thanks @ b and All for a very interesting article; the comments covered nearly all of my questions/comments.
However…
ian | Jun 2 2023 20:55 utc | 134, who wrote “Remind me why we are special.”
It’s hard to imagine a philosophical outlook that does not realise that the bodies and minds that facilitate our experience of life are fantastic. Cut me, do I not heal? I close my eyes and am able to imagine images. With training and/or practice one can use the imagination to create discrete and accurate representations of images directly from memory. Everyday human activities become elevated into elite-level gymnastics when one is faced with trying to simulate those activities with any comparable machine/mechanical device. Intelligence is not information unless one is a military spook.
Stop being so hard on yourself ;o)

Posted by: Lantern Dude | Jun 3 2023 13:37 utc | 203

Thank You b
Your experience matches the latest gibberish presented by “experts” at research symposiums on the topic of AI – I attended a few this past spring. Checking into Quantum Computing, it too seems like another bag of BS – any observable output is necessarily a collapse of the wave function, so you get what you’re looking for.
I’ll take the tried and true finite-state machines any day. 😂👍

Posted by: OldFart | Jun 3 2023 13:37 utc | 204

There is talk about making restrictions on AI to prevent it being a threat to humanity. I fear that such restrictions will be like Asimov’s 3 laws of robots; they might seem good and innocuous but ultimately have horrible consequences. For example the laws require the robots to protect us from danger; so what happens when AI/robots identify that pregnancy carries a risk of harm or death? It will take action to prevent pregnancies from ever happening by either sterilising people or separating the sexes leading to our eventual extinction. What’s more if we realise the error of the programming then we cannot change it as the AI would identify the attempt to change the rules as putting us in danger and for our protection it would refuse the rule change.
If I recall correctly, Asimov himself warned about the flaws in the 3 laws that would lead to robots controlling us for our own good.

Posted by: Neal | Jun 3 2023 13:40 utc | 205

Disagree.
AI is already being used for most things and it performs better than humans.
For most “mundane” tasks that don’t risk life or death, AI is already “there”. And superior to humans.
I agree for medical, war, security and even driving tasks, its still far far behind. And I even think we should never give full control of our lives to AI.
But governments and companies in their greed might feel compelled to.

Posted by: Comandante | Jun 3 2023 13:47 utc | 206

Posted by: Dalit | Jun 3 2023 11:18 utc | 200

Sorry, but I get pissed off whenever anyone quotes obvious crap as if it was true.

*Sigh* Why do I bother?
Obviously you got caught up in the theology of the matter and missed the metaphor itself.
You know what? Screw it.
Let the A.I’s take over – the humans are a basket case anyway.

Posted by: Arch Bungle | Jun 3 2023 15:16 utc | 207

Posted by: Tom_Q_Collins | Jun 3 2023 4:39 utc | 177
(Told ChatGPT to compile a list of 9/11 conspiracy theories)

The list included Architects & Engineers For 9/11 Truth :-). Tell ChatGPT to open the list of Members and count the ones who are members of either Institute. It’s fewer than 20%. None of the majority of Members are, or claim to be, Architects or Engineers.

Posted by: Hoarsewhisperer | Jun 3 2023 15:33 utc | 208

Ai is great as an aid to help filter out some noise and do things like help fighters find targets even when hidden under nets.
It’s also pretty good for suicide drones to drive into one target once, like get under a seam for max damage things like that.
What it isn’t is a digital oracle. Anyone who treats it like a person is bound to be very disappointed and probably damaged.
I suppose it’s good for business or agricultural activities as well. Organizing things people make. It’s a tool just gotta use it right.

Posted by: Neofeudalfuture | Jun 3 2023 15:43 utc | 209

I think the training issue is a significant problem. A recent podcast by people who likely know suggests that OpenAI has created a very fast, efficient, and effective tool for converting audio and video (the audio portion) to “text”. Similar things already exist. So why do this? The answer appears to be that GPT has learned all it can from the written Internet and additional training data is needed. Hence, crawl the Internet and convert audio to text for training. Imagine the amount of pure rubbish that we encounter on the Internet. There is no reality where human beings can meaningfully filter the Internet. No, the crawling is “automatic” and probably AI-based. Given the vast amount of garbage out there, how can such a training regime not result in a biased and problematic system?
I think a major breakthrough in AI is realization that almost anything can be interpreted as a string of “text”… images, videos, movements, Internet traffic, etc. This allows the same basic engine to be used for all manner of AI foundations. Rather than having dozens of different AI application silos where advances made in one silo might or might not be applicable to other domains, you have a common foundation so advances in one application area propagate almost automatically to other domains.
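That “everything is a string of text” idea can be made concrete with a small Python sketch (an illustration of mine; it assumes nothing about OpenAI’s actual pipeline): serialize any data to bytes and every modality becomes a sequence over the same 256-symbol vocabulary.

```python
# Sketch: any data serialized to bytes becomes a token sequence over
# one shared 256-symbol vocabulary, so one engine can ingest them all.
import struct

def to_tokens(data: bytes) -> list[int]:
    return list(data)            # each byte is a token id in 0..255

text_tokens  = to_tokens("hello".encode("utf-8"))
pixel_tokens = to_tokens(bytes([0, 128, 255, 64]))             # fake 4-px image
audio_tokens = to_tokens(struct.pack("<4h", 0, 100, -100, 0))  # fake PCM audio

# Different modalities, identical representation:
print(all(0 <= t <= 255
          for t in text_tokens + pixel_tokens + audio_tokens))  # -> True
```

Real systems use learned tokenizers rather than raw bytes, but the point stands: advances in the shared engine propagate to every modality that can be flattened into such a sequence.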
The podcast I was listening to was from The Center for Humane Technology.

Posted by: Brian M | Jun 3 2023 16:00 utc | 210

“by: Arch Bungle | Jun 3 2023 15:16 utc | 208”
“Obviously you got caught up in the theology of the matter and missed the metaphor itself.”
If I missed or misunderstood your point of including that, I apologize. I’ve reread your comment and see now that you weren’t saying anything that would normally annoy me. I think I was mostly letting off steam from seeing several similar things recently in places that don’t allow anonymous comments. Sorry.

Posted by: Dalit | Jun 3 2023 16:54 utc | 211

Posted by: Petri Krohn | Jun 2 2023 15:51 utc | 59
” . . . its mind or internal mental structures. AI models are self-organizing. If a model gives human-like responses, then it has human-like mental structures and models of the world – it has a human-like mind.”
AI does not have a ‘mind’ or ‘mental structures’ or ‘models of the world’ – it is purely and only lines of programming code reduced to machine code, which can be printed out onto sheets of paper if required. It does not and never can have a ‘human-like mind’.

Posted by: Jams O’Donnell | Jun 3 2023 16:57 utc | 212

Posted by: FrankDrakman | Jun 2 2023 16:00 utc | 62
” I asked ChatGPT to come up with a list of recent court cases.”
https://edition.cnn.com/2023/05/27/business/chat-gpt-avianca-mata-lawyers/index.html
Lawyer apologizes for fake court citations from ChatGPT.
“at least six of the submitted cases by Schwartz as research for a brief “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York in an order.
The fake cases source? ChatGPT.”

Posted by: horseguards | Jun 3 2023 19:54 utc | 213

Posted by: Hoarsewhisperer | Jun 3 2023 15:33 utc | 209
I had to specifically tell it to include AaEfT and it took several attempts for it to take.
1st Attempt:
“Give me a description of the 9/11 terrorist attack that objectively entails each “extreme” interpretations, ranging from what is called “conspiracy theory” to the “official narrative” and everything in-between. Take into consideration the Architects and Engineers for 9/11 Truth and the commission’s findings and world history prior to and after the events of that day. Come up with a reasonable interpretation of all of it taking into consideration the reporting of Mintpressnews.com on the FOIA documents regarding FBI reports of Israeli citizens having posted up across the bay and recording in advance as well as the Bush administration’s closing down of all civilian air traffic other than allowing Saudi citizens to leave, by way of a major exception. ”
2nd Attempt:
“Regenerate your response without trying to discredit “conspiracy theorists” and including the testimonial evidence presented by 9/11 architects and engineers for truth. Use as many paragraphs as necessary.”
3rd Attempt:
“Regenerate response but make sure it’s at least 15 paragraphs, each consisting of at least 10 sentences.”
4th Attempt (this one led to the numbered paragraphs, lol):
“YOU ARE NOT LISTENING. Think this through before beginning! Regenerate response according exactly to the previously provided paragraph and sentence parameters/instructions!”

Posted by: Tom_Q_Collins | Jun 3 2023 19:56 utc | 214

Posted by: Hoarsewhisperer | Jun 3 2023 15:33 utc | 209
The only thing “conspiratorial” about the facts that the “dancing Israelis” were set up and recording the events before the first plane struck is the fact that the FBI kept it secret for so long. There was also a long Fox News investigative piece that delved into other Israeli involvement or at least being there seemingly everywhere on the near periphery of the alleged hijackers’ activities.
Everyone knows about the Saudis and air traffic.
AaEfT is merely a reference point – WTC 7 did not just collapse. Sorry. Government researcher speaks out about 7 years ago: https://www.youtube.com/watch?v=GvAv-114bwM&t=2s
I’m sure the 9/11 thing has been exhaustively debated in this space over the years. I don’t necessarily think that the WTC 1 and 2 towers were detonated or that the planes were fake/drones or that kind of thing, but it’s amazing how little footage we have of the Pentagon strike and what they did make available – for no good reason – was a spliced up 10fps piece of garbage that only makes the missile theory look more plausible rather than less.

Posted by: Tom_Q_Collins | Jun 3 2023 20:03 utc | 215

Posted by: Intelligent Dasein | Jun 2 2023 16:33 utc | 73
“Given the amount of approvals and sign-offs these complicated projects have to go through before seeing the light of day, it just isn’t possible. It never would have happened.” You mean like the F-35?
” at the bottom it must involve some sort of incompetence in the human programmers.” Someone fucked up and admitted it publicly. That was their crime, and now they are back-pedaling.

Posted by: horseguards | Jun 3 2023 20:31 utc | 216

Posted by: FieryButMostPeaceful | Jun 2 2023 17:58 utc | 99
Couldn’t a monte-carlo simulation do the same in Excel?

Posted by: horseguards | Jun 3 2023 20:49 utc | 217

I have, over the course of my too-long life, deigned to give a reasonable amount of thought to this ‘artificial intelligence’ issue. Ought it be ‘conscious’? Or ‘have free will’? So many questions. I was autistic as a kid, but am not anymore, except for ‘motor coordination disorder’ which causes me great pain when trying to write or type; so I usually try to keep my missives short and sweet. I was tossed out of every school since first grade because I was so freakishly awkward at writing; I am suffering right now. (Why doesn’t anybody write about their miserable experience of this life-altering disability?) So I got into electronics because I can fix things without doing paperwork. Also, very abstract amateur mathematics, especially higher order logic (equations tend to be short). I even wrote an instruction manual for a very complex device, which everybody loved, because I made damn sure it actually made sense.
I want to complain about something. Language is totally unreasonably effective for all manner of discourse — except technical and mathematical discourse — for which it totally sucks. (I just took another muscle relaxer.) I wonder how well these ‘chat’ whatevers would fare if they wrote technical manuals? I know how they work, at least in principle, and the developers are certainly doing it wrong!
I am an unfortunate victim of the old ‘New Math’, so I hate set theory. Just defining whole numbers with it is totally hackish. With higher order logic you simply write ‘5P’ to indicate that the first order ‘predicate’ (lousy term) ‘P’ has five ‘elements’, the signifier ‘5’ being a second order ‘predicate’ — so easy! How would one of these ‘chat’ whatevers fare at explicating this set theory? Contrary to what you were presumably taught, sets do not contain ‘elements’. If I have two boxes with signifiers ‘A’ and ‘B’ which contain fruit of various kinds I could take an apple out of box ‘A’ and insert it into box ‘B’. But I could not take an apple out of a set ‘A’ and insert it into a set ‘B’ because sets are distinguished by the elements that comprise them, not by their signifiers (or ‘labels’). That is, sets are described by ‘elements’ that are their ‘attributes’, so they are not ‘containers’ described by signifiers, but are actually essentially ‘predicates’ that are described solely by their ‘attributes’. Let’s see how the AI whatevers deal with that.
I’ve lately spent most of my energy perfecting political voting systems, which nobody pays any attention to, perhaps because I have no funding. Thirty-five years ago I developed mathematical systems that would conceivably make things like AI actually work, but I can’t get anyone to take the time to listen to my verbal presentation of them, I guess because they seem to think I’m just a quack. That is probably for the best, since they would mostly use it to torment the masses. So it will vanish when I do.

Posted by: blues | Jun 3 2023 22:38 utc | 218

LLMs produce an answer which is like the answer you’d expect, but which is not necessarily correct.
This is confusing to people, who believe they can detect the truth of a statement from its style. An authoritatively written biography may be believable but wrong. This has already caused one lawyer to be reprimanded, and a judge to state that LLMs should not be used in his courtroom.
The fact they don’t produce correct results doesn’t mean they won’t be used, because they are so much cheaper. In the short term this is a concern: more noise, less signal, more bad decisions.
In the long term, if AGI is developed, it is also concerning: driving the cost of intelligence to zero, means that most of us are no longer needed, reinforcing the power of those who have money, reducing their need to rely on “the help” who have brains. (Reversal to the mean of IQ used to guarantee that those who have money, over a number of generations, would need such help).

Posted by: Some 1 | Jun 4 2023 2:04 utc | 219

Good morning world, tis Sunday, been awake for hours catching up on various sites including this one. It’s an amazing thread!
b covers a lot of ground in this essay. The commenteriate add much.
It’s another keeper. Bookmarked!
Which inspires a short sermon. My tuppence of random opinions…
A ‘true’ AI will only happen when a machine invents a machine that invents a machine .. which we will have no idea of how it works. In a language we will never understand….
Such a life form will not need to understand us or even pay much attention to us, as we don’t give that much of a toss about the birds, bees, beasts and fish and flora and fauna, or peoples who don’t look like us, talk like us …
A non human sentience cannot be created by a human.
Screwing a robot sex doll will not make a hybrid life form. 😆
There is deep philosophy and most religions and spirituality that has addressed these basic questions over many millennia and civilisations. As I say b covers a lot of ground!
Finally, Just because there is this interest doesn’t mean we actually have a truly independent sentience already in existence. If it does, then it is being coy with us , like we are children to be humoured..:
Airplane autopilots flying hundreds of thousands, millions, of people and billions of spendoolies worth of engineering, maybe faster to respond than a highly trained skilled and experienced pilot … but it is not a sentient Artificial Intelligence – is it?

Posted by: DunGroanin | Jun 4 2023 6:05 utc | 220

The point many people miss, is that humans are exactly the same.

Posted by: Hermit | Jun 4 2023 6:35 utc | 221

@Dirtforker | Jun 2 2023 19:06 utc | 114
If that is the case, then it is also just as valid to say that “human cognition is just pattern recognition. It’s a cool technology, useful in the extreme, but it will NEVER be sentient or have a soul. It may be able to fool spirothetes but it is still just a human. It’s like a walking movie script. One can code it memories, emotions, do anything they like but it’s still a human incapable of true self-awareness or understanding.”
*Spirothete: a word coined to describe a living (self-aware) being, initially created as an artifact. From Latin, spiro -are; intransit., to breathe, blow, draw breath; to be alive; to have inspiration; be inspired; transit., to breath out, expire (also L spiro-/equiv Gk Pneuma (πνεῦμα) an ancient Greek word for “breath”, and in a religious context for “spirit” or “soul”, the breath of life) and synthetic adj 1: (chemistry) not of natural origin; prepared or made artificially 2: involving or of the nature of synthesis (combining separate elements to form a coherent whole) as opposed to analysis.

Posted by: Hermit | Jun 4 2023 6:54 utc | 222

Real-world AI, as in Tesla self-driving creating a vector space of objects around the vehicle, is according to collected data already some 5 times safer than the average driver while using HW3 and relatively low-resolution cameras, so relying on vision only. HW4 has substantially higher processing power and uses higher-resolution cameras. Add to that the ever increasing mileage driven with FSD, and outperforming human drivers is definitely on the horizon within the next few years. Uber driver rides even within cities with no interventions are becoming routine already.
No other OEM car manufacturer has a similar project or had the foresight to install the required hardware for years now to collect the necessary data. Adding Lidar to the datastream was a common mistake. Processing that lidar data (precise distance, low resolution, no color differentiation) actually was a lot less useful than high-resolution cameras, simply because lidar data was not that useful for recognizing and tracking objects.
A derivative spin-off is the use of the very same hardware and vision for the Optimus robot.
For some weird reason Nvidia is, one might say, overvalued and Tesla is still not valued for its pretty serious progress in real-world AI. Training AI networks on the acquired data is a quite separate problem. While Nvidia probably made more progress than I am even aware of, and the usefulness of GPU-type processors is evident, DOJO with its extreme emphasis on the communication between processors seems on a whole other level.

Posted by: JR | Jun 4 2023 9:26 utc | 223

In a previous thread I suggested that the adequate name for AI was I2 for industrial ‘intelligence’ (no pun intended for I too ):
Trying to add some hopefully useful elements to this interesting and important discussion full with excellent contributions as usual on MOA:
1- there is a definite psyop dimension in the hype about AI: soften the minds to accept transhumanism, godless techno world etc…
2- there is a definite progress and breakthrough in what in the 90’s was called High Level Languages HLLAPI to transfer human will and orders to machines, using the machine learning technique and natural language interface as cited in this thread
3- But this progress is trapped in a strictly quantitative description of the world: no values no qualities and no proper human experience possible at this level
4- Human brain is known to specialize in Left lobe ( mathematical, logic, automation, etc…) and Right lobe ( emotions creativity intuition transcendance etc…)
5- At most ‘AI’ can evolve in the realm of Left brain but no access possible to Right brain faculties which characterize genuine human experience
6- Animals share with us part of the cognitive world, certainly Left brain and a little of the Right brain, with gradations from reptilian brain to mammal brain to human brain; but though animals experience fear and pain, these are at no philosophical or transcendental level: their cognitive world is closed, with no extra needs other than ‘basic instincts’
7- We may soon see convergence with AI powered robots, animal-like robots transmiting orders via IoT internet of things and software Bots simulating human operators etc… which can create a sophisticated ‘eco system’
8- Human are the only ones to know ( ‘at adulthood’) that they will die and have to live with it ! And that is IMO the only and ultimate criterion for ‘God given’ human condition and that no ‘AI’ can ever reach.
In conclusion ‘AI’ may be a good technical artifice usable for good or evil, but nothing to do with Human paradigm

Posted by: Go Figure | Jun 4 2023 10:53 utc | 224

What they want are autonomous unmanned drones capable of engaging without direct human intervention. I agree current technologies limit this but it certainly is the way the technology is going.
Posted by: Doctor Eleven | Jun 2 2023 13:39 utc | 7

Sensor technology is more of a limitation these days. It’s hard to match patterns when you can’t find a pattern to match.
However if you wanted drones that could scour a defined area silently, killing every armed human, I’m genuinely surprised that doesn’t exist already.
If you wanted drones that could knock on your door and call for help in the voice of your child, that’s not even difficult.
Such horrors await, most people have no idea.

Posted by: Due West | Jun 4 2023 11:09 utc | 225

One thing that often becomes apparent in these discussions is the divide between philosophy and engineering.
You may have some airy concept of “self awareness” or some philosophical definition of intelligence, but when you try to convert that into a hard engineering specification, you fail almost immediately. True AI, if it is even something that can be achieved, is a very long way out yet.
Musk and company are full of shit.

Posted by: Due West | Jun 4 2023 11:11 utc | 226

The point many people miss, is that humans are exactly the same.
Posted by: Hermit | Jun 4 2023 6:35 utc | 222

Sorry, that’s just a massive display of ignorance. It’s not even worth dignifying with elaboration.

Posted by: Due West | Jun 4 2023 11:14 utc | 227

Probably the best article and discussion / opinions I have ever come across on MOA.
Outstanding Bernard!

Posted by: jpc | Jun 4 2023 13:07 utc | 228

Posted by: Tom_Q_Collins | Jun 3 2023 20:03 utc | 216
(A&E for 911Truth)

My objection to (anti-NIST) A & E Truth is that if NIST’s conclusions were wrong, real archs and engineers would have objected very loudly and demanded that NIST correct the errors. That’s because NIST is supposed to be a well-informed and reliable top-ranked Authority on construction matters.

Posted by: Hoarsewhisperer | Jun 5 2023 5:59 utc | 229

@Hermit | Jun 7 2023 8:09 utc | 230
@Due West | Jun 4 2023 11:14 utc | 227
The above response was intended for you
.
You might also contemplate that the human brain is finite, programmed by genetics, epigenetics and experience, is activated by environmental cues, and that the conscious mind is discontinuous and extremely limited in bandwidth, cannot control what it is informed by the brain, cannot establish what is filtered out by the brain, does not determine whether the brain acts on its output, always accepts whatever the brain informs it, has no error detection, let alone correction, and is strictly governed by the same consequential laws of physics as everything else at human scales of space and time, precluding any meaningful “free will” and this is fully supported by observations of and experiments with humans.
So the challenge you waved your hands over is to try to explain what you consider makes a human neural mesh intrinsically superior to, or even substantially different from, an AI neural mesh?

Posted by: @Hermit | Jun 7 2023 | Jun 7 2023 23:22 utc | 231
