Moon of Alabama Brecht quote
June 2, 2023
‘Artificial Intelligence’ Is (Mostly) Glorified Pattern Recognition

This somewhat funny narrative about an 'Artificial Intelligence' simulation by the U.S. Air Force appeared yesterday and was widely picked up by various mainstream media:

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

(SEAD = Suppression of Enemy Air Defenses, SAM = Surface to Air Missile)

In the early 1990s I worked at a university, first to write a Ph.D. in economics and management and then as an associate lecturer for IT and programming. A large part of the (never finished) Ph.D. thesis was a discussion of various optimization algorithms. I programmed each of them and tested them on training and real-world data. Some of those mathematical algorithms are deterministic. They always deliver the correct result. Others are not deterministic. They merely estimate the outcome and give some confidence measure or probability of how correct the presented result may be. Most of the latter involved some kind of Bayesian statistics. Then there were the (related) 'Artificial Intelligence' algorithms, i.e. 'machine learning'.

Artificial Intelligence is a misnomer for the (ab-)use of a family of computerized pattern recognition methods.

Well-structured and labeled data is used to train the models so that they can later recognize 'things' in unstructured data. Once the 'things' are found, some additional algorithm can act on them.

I programmed some of these as backpropagation networks. They would, for example, 'learn' to 'read' pictures of the numbers 0 to 9 and to present the correct numerical output. To push the 'learning' in the right direction during the serial iterations that train the network one needs a reward function or reward equation. It tells the network whether the results of an iteration are 'right' or 'wrong'. For 'reading' visual representations of numbers that is quite simple. One sets up a table with the visual representations and manually adds the numerical value one sees. After the algorithm has finished its guess, a lookup in the table will tell whether it was right or wrong. A 'reward' is given when the result was correct. The model will reiterate and 'learn' from there.
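A minimal sketch of such a network — not my original code from back then, just a toy reconstruction with two made-up 3×5 pixel 'digits' and a one-hot lookup table as the reward signal — might look like this:

```python
import numpy as np

# Toy "digits" as 3x5 binary pixel grids, flattened to length-15 vectors.
# Only 0 and 1 here to keep the sketch tiny; a real training set is larger.
PATTERNS = {
    0: [1,1,1,
        1,0,1,
        1,0,1,
        1,0,1,
        1,1,1],
    1: [0,1,0,
        1,1,0,
        0,1,0,
        0,1,0,
        1,1,1],
}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([PATTERNS[0], PATTERNS[1]], dtype=float)   # inputs
T = np.array([[1, 0], [0, 1]], dtype=float)             # one-hot targets (the lookup table)

W1 = rng.normal(0, 0.5, (15, 8))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (8, 2))    # hidden -> output weights

for _ in range(2000):               # the serial iterations that train the network
    H = sigmoid(X @ W1)             # forward pass
    Y = sigmoid(H @ W2)
    err = T - Y                     # the "reward" signal: how wrong was the guess?
    # backpropagate the error and nudge the weights
    dY = err * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 += 0.5 * H.T @ dY
    W1 += 0.5 * X.T @ dH

print(Y.argmax(axis=1))  # should recover [0, 1]
```

The network never 'knows' what a zero is; it only adjusts weights until its outputs match the table.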

Once trained on numbers written in Courier typography the model is likely to also recognize numbers written upside down in Times New Roman even though they look different.

The reward function for reading 0 to 9 is simple. But the formulation of a reward function quickly evolves into a huge problem when one works, as I did, on multi-dimensional (simulated) real-world management problems. The one described by the Air Force colonel above is a good example of the potential mistakes. Presented with a huge amount of real-world data and a reward function that is somewhat wrong or too limited, a machine learning algorithm may later come up with results that are unforeseen, impossible to execute, or prohibited.
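To make the reward-function problem concrete, here is a deliberately over-simplified toy of my own (not anything the Air Force described in detail): every possible action sequence is scored by a reward function that only counts destroyed SAM sites, and the top-scoring plan turns out to be one that removes the operator's veto.

```python
from itertools import product

ACTIONS = ["attack_sam", "wait", "attack_operator"]

def simulate(plan):
    """Return (sams_destroyed, operator_alive) for a 4-step plan.
    While the operator is alive, every second attack is vetoed."""
    sams, operator_alive, attacks = 0, True, 0
    for action in plan:
        if action == "attack_operator":
            operator_alive = False
        elif action == "attack_sam":
            attacks += 1
            vetoed = operator_alive and attacks % 2 == 0
            if not vetoed:
                sams += 1
    return sams, operator_alive

def naive_reward(plan):
    sams, _ = simulate(plan)
    return sams                      # nothing here penalizes harming the operator

# exhaustive search over all 81 plans
best = max(product(ACTIONS, repeat=4), key=naive_reward)
print(best)  # the highest-scoring plan includes attacking the operator
```

Patching the reward afterwards ('lose points for killing the operator') just invites the next exploit, as the colonel's communication-tower anecdote illustrates.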

Currently there is some hype about a family of large language models like ChatGPT. The program reads natural language input and processes it into some related natural language content output. That is not new. ELIZA, the ancestor of later chatbots like the Artificial Linguistic Internet Computer Entity (A.L.I.C.E.), was developed by Joseph Weizenbaum at MIT in the mid-1960s. I had funny chats with ELIZA in the 1980s on a mainframe terminal. ChatGPT is a bit niftier and its iterative results, i.e. the 'conversations' it creates, may well astonish some people. But the hype around it is unwarranted.
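Mechanically, such a model generates text by repeatedly predicting a next word from the words so far. A toy bigram version — a sketch of the principle only, many orders of magnitude away from how ChatGPT actually works — fits in a few lines:

```python
import random
from collections import defaultdict

# Train a bigram table on a tiny corpus, then generate by repeatedly
# sampling a next word given the current one -- pattern in, pattern out.
corpus = ("the drone sees the target the operator says no "
          "the drone sees the tower the operator says nothing").split()

nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)                 # record which words follow which

random.seed(1)
word, out = "the", ["the"]
for _ in range(8):
    choices = nxt.get(word)
    if not choices:                  # dead end: no recorded continuation
        break
    word = random.choice(choices)    # pick a statistically plausible next word
    out.append(word)

print(" ".join(out))
```

Every generated pair of adjacent words occurred in the training text, yet the model has no idea what a drone or an operator is.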

Behind those language models are machine learning algorithms that have been trained on large amounts of human speech sucked from the internet. They were trained with speech patterns to then generate speech patterns. The learning part is problem number one. The material these models have been trained with is inherently biased. Did the human trainers who selected the training data include user comments lifted from pornographic sites or did they exclude those? Ethics may have argued for excluding them. But if the model is supposed to give real-world results the data from porn sites must be included. How does one prevent remnants of such comments from sneaking into conversations with kids that the model may later generate? There is a myriad of such problems. Does one include New York Times pieces in the training set even though one knows that they are highly biased? Will a model be allowed to produce hateful output? What is hateful? Who decides? How is that reflected in its reward function?

Currently the factual correctness of the output of the best large language models is an estimated 80%. They process symbols and patterns but have no understanding of what those symbols or patterns represent. They cannot solve mathematical and logical problems, not even very basic ones.

There are niche applications, like translating written languages, where AI or pattern recognition achieves amazing results. But one still cannot trust them to get every word right. The models can be assistants, but one will always have to double-check their results.

Overall the correctness of current AI models is still way too low to allow them to decide any real-world situation. More data or more computing power will not change that. If one wants to overcome their limitations one will need to find some fundamentally new ideas.

Comments

Norwegian | Jun 2 2023 17:05 utc | 86
Hahaha. The possibilities.

Posted by: john | Jun 2 2023 18:00 utc | 101

… But models learn “meaning” from mathematical weights: … And it’s impossible to learn what “not” is this way.
Posted by: suzan | Jun 2 2023 16:44 utc | 76

I can see the appeal of this sort of argument but 20 years ago there would have been all sorts of similarly plausible arguments as to why today’s (admittedly flawed) AI would not be possible with the methods we are currently using.
I can’t disprove your assertion without providing a negation-competent LLM but the trend is obvious.

Posted by: anon2020 | Jun 2 2023 18:02 utc | 102

Dalit @87: “Let’s see if it comes up with the correct answer of 23.”
Perhaps it will now that response is somewhere it can be harvested and used to train an LLM.

Posted by: William Gruff | Jun 2 2023 18:05 utc | 103

Ronnie James | Jun 2 2023 16:58 utc | 83
Silly, why not just power off everything and reboot
In fact, in the clip I provided above @54, the HAL 9000 computer goes into self-preservation mode when it learns that the humans are planning to disconnect it, which, of course, would be the obvious solution if things go south.
It would have to be opportunistic, which seems to me algorithmically elementary.
I’m a pretty low-tech type, though.

Posted by: john | Jun 2 2023 18:12 utc | 104

I want a button on the side of my computer that makes it say ow!

Posted by: Trumpeter | Jun 2 2023 14:56 utc | 40
Meh, I want a voodoo doll of Bill Gates and plentiful supply of pins. If I can’t get Bill Gates then Lennart Poettering will do.
<mutter, mutter> systemd <mutter>

Posted by: West of England Andy | Jun 2 2023 18:16 utc | 105

https://paperswithcode.com/
This is the site I use most to browse AI research. The “Social” button orders the items by social media hits so is often what researchers consider interesting at the moment.

Posted by: anon2020 | Jun 2 2023 18:19 utc | 106

Posted by: FrankDrakman | Jun 2 2023 16:00 utc | 62
“For example, my GF is a lawyer. She had to do some research for a case, so I asked ChatGPT to come up with a list of recent court cases on that issue. She could have spent an hour doing this, or she could do something else while AI did it for her. Note: she does not use it to write her briefs, or even ask it what she should do. She uses it only to increase her efficiency at certain tasks.”
According to arstechnica, a lawyer is facing possible discipline for getting ChatGPT to “find” some legal precedents. It seems the results looked exactly like precedents, but the cases never existed.
https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/
Also, a commenter at Naked Capitalism reported that ChatGPT once generated an article with a URL reference to a non-existent site. ChatGPT seems to merely be matching text. If the matched text produces a factual statement, fine, but if not, ChatGPT doesn’t notice the difference.

Posted by: Mel | Jun 2 2023 18:22 utc | 107

Posted by: Wrong-is-wrong | Jun 2 2023 14:27 utc | 31

“Col Hamilton admits he “mis-spoke””

and not very convincingly in my view.

“…and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military”

That much is true. IIRC science fiction originally. Google works for defence already in this area don’t they? Do no harm my ass.

“…We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome”.

Good lord I hope they wouldn’t need to. Something about this whole thing had a smell to it. This is such a fundamental problem in AGI (Artificial General Intelligence) that one cannot help but become aware of it upon introduction to the topic. So yes it’s quite obvious, but that doesn’t mean they haven’t or won’t run similar research.
If you like jargon, this problem is about Terminal Goals (what you hopefully asked for) vs Instrumental Goals (needed to achieve them.) For a deeper dive intended for a broad audience please see:
Robert Miles
Why Would AI Want to do Bad Things? Instrumental Convergence.
https://youtu.be/ZeecOKBus3Q
Intelligence and Stupidity: The Orthogonality Thesis.
https://youtu.be/hEUO6pjwFOo
(specifically around the 6:00 mark)
Continuing…

“He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI”.”

Were I to take all of this on faith, one simple fact remains: nobody can currently meet the challenges this problem presents. Further, there is very little research around AI safety. Even within a well-regulated market, militaries remain a nearly insurmountable risk.
I don’t want to start getting dramatic but I have a few “hypothetical examples” involving insane monsters in basements screaming away as they suck up bytes and electricity.
I agree with most of the opinions regarding the limits of simple neural networks. They are statistically driven. That said these are early days and they could be a key component in a few paths leading to AGI.
Posted by: Wrong-is-wrong | Jun 2 2023 14:27 utc | 31

Posted by: David G Horsman | Jun 2 2023 18:30 utc | 108

Apparently one victim of ChatGPT was Jonathan Turley, the lawyer.
https://jonathanturley.org/2023/04/06/defamed-by-chatgpt-my-own-bizarre-experience-with-artificiality-of-artificial-intelligence/
It kind of points to where AI is actually going.

Posted by: Blue Dotterel | Jun 2 2023 18:32 utc | 109

If a robot can understand sarcasm or laugh at a joke, then it is intelligent.

Posted by: Viktor | Jun 2 2023 18:34 utc | 110

Bad data leads to bad results. Humans lie too much and they are the present data accumulators. Until AI learns this there is little hope.

Posted by: T S | Jun 2 2023 18:38 utc | 111

Posted by: Tim | Jun 2 2023 15:48 utc | 58
Hi Tim,
I am no longer a software developer but I keep up and am fluent in a few languages. You absolutely should avail yourself of the word-computing-platform that is ChatGPT. With it you can iteratively spec tiny programs and small functions, translate them between say lisp bash and javascript and experience how you get 10x more done in a day while learning first-hand about when where and why to expect it to hallucinate system calls, parameters etc. its a huge plus.
Once you figure out how to tell it to behave, e.g. which persona or field of expertise to restrict itself to, you can have it help your thinking and reasoning, discover things you didn’t know, while shaping and testing your own conclusions along the way. used this way its easily the most intelligent co-worker and research partner i ever had.
in the next level, you realize how cheap the API calls are and you make your emacs automate sessions where you take output generated with one set of prompts pass through a series of other prompts so you can get to a point where you’re either beating a dead horse or walk away with a gem.
just remember, back when b spoke to eliza, we dealt with bits and bytes. that’s over. we’re now dealing with tokens/words and their nearness to other tokens/words. through layers and layers of feedback mechanisms.
its not smart, its vulnerable to garbage, terribly prejudiced until you beat it to a pulp and spec exactly what you need it to do .. and then computes with words. that’s cool!
with it you can not just code up project you would never do by hand, but also do those you might not feel confident starting! and when it comes to reasoning with more than a few variable factors.. it will blow your mind. its very fun .. but you will drown in the output until you realize that its just the first version of a new level tech that computes with words.
so yeah… the army thinkers thought it was magic. it isn’t.

Posted by: Michael | Jun 2 2023 18:48 utc | 112

Absolutely, 100%: AI is just pattern recognition. It’s a cool technology, useful in the extreme, but it will NEVER be sentient or have a soul. It may be able to fool folks but it is still just a robot. It’s like a walking movie script. One can code it memories, emotions, do anything they like but it’s still a robot incapable of true self-awareness or understanding. This is where Bladerunner, though a great movie, gets it wrong — Deckard, Rachel, Roy, etc — all just intricate and complex pattern recognizers. Which is why it is not at all or in any way ethical to program them to fool others into thinking they are sentient — they are not and they never will be.

Posted by: Dirtforker | Jun 2 2023 19:06 utc | 113

Posted by: Kadath | Jun 2 2023 17:58 utc | 100
I loved the idea that the gang had so many members they could have enacted their agenda democratically, by controlling the local legislature. My wife constantly moans about how the computer civilisations cheat in Civ IV, and I have to remind her AI stands for active interference and that the cunning plans they employ are just machine learning in its most basic form. She remains unconvinced and regularly mouths dark mutterings, eying the pc with suspicion.

Posted by: Milites | Jun 2 2023 19:12 utc | 114

AI silly. Who owns the copyright, patent or trademark developed by AI? The AI machine? the software developer? the person or company owning the machine? What if other AI systems produce an exact copy because of input conditions? Who owns AI when AI writes its own code? When and if AI becomes aware, is it sentient silicon life? Do AI systems have any rights? What if I turn off the machine and kill it, will there be any consequences to killing a machine?

Posted by: Bill Miner | Jun 2 2023 19:27 utc | 115

@3
They have managed to make devices now that do not do what you tell them to. I present cell phones.
This is accomplished by two mechanisms. One is screen protectors, so that the capacitive touch screen “loses” a swipe, so that certain instructions (scrolling as an example) will often not work.
The other is having “predictive” “auto-correct” (auto-error insert) and “predictive” suggestions during typing. The former is obvious. The latter is particularly unpleasant, as one may continue to type a word beyond the point that the appropriate suggestion shows up, and one begins to move one’s finger to the appropriate suggestion to press the button. But because one has continued to type the word beyond the point where the suggestion first appears, it is replaced by another suggestion, and the time to review new information for humans (e.g. during vehicle operation) is typically 700ms. So one ends up pressing where the original appropriate suggestion appeared, but by the time the finger presses, it has been replaced by another suggestion, no longer appropriate, and one needs to hit erase repeatedly (not fun with big hands either).

Posted by: Johan Meyer (2) | Jun 2 2023 19:29 utc | 116

Posted by: Tim | Jun 2 2023 15:48 utc | 58
We ran it through several tests. It passed questions to do with maintenance of old machinery with flying colours. Gave step by step instructions for specific repairs, itemised snags and warned of safety hazards. Better than going to internet sites but I’d want the manufacturer’s manual as well, just to make sure.
Also ran it through some medical questions with a medic. Top marks, he said, but you’d never dare use it for real. And the lack of references knocks it out of the running there anyway.
Writes routine letters and emails and will modify the tone according to instruction. Since a deal of office correspondence now is CYA stuff it’d save a lot of time there.
Also does essay templates. Didn’t try those but apparently students will use them.
Incredibly fast. If Biden can still read a teleprompter they could let him out for press conferences again. It’d get him unscathed through whatever they threw at him.
And since the journalists probably use it already – they only let the tame ones get near him in any case – they could set the whole thing to automatic and have done.

Posted by: English Outsider | Jun 2 2023 19:32 utc | 117

Be funny if the AI drones all decided to convert to Islam one day, and decided to be swords in the hand of the Prophet, peace be upon him.

Posted by: Benn | Jun 2 2023 19:41 utc | 118

Thanks for this. About the only explanation I’ve seen of the whole AI thing and specifically GPT which I use all the time now and find very useful.
I note that I am slowly learning what I can expect of GPT; I am learning its ‘nature’, just ‘who’ it is.
Seems to me that’s an area. It’s a Frankenstein’s monster and even the creator doesn’t know what he’s created. The story of attacking the operator etc. illustrates this. Hence I think it is a fascinating and potentially important field to study this. But perhaps, I realise, I’m merely bemused by the appearance of this fascinating new object but intrinsically it has no great merit or unique or special abilities. But we don’t know until we look do we? Or do we? Do you who work in the field already know? Tell more.

Posted by: abrogard | Jun 2 2023 19:52 utc | 119

Posted by: Michael | Jun 2 2023 18:48 utc | 113
Very interesting, thanks for this!

Posted by: Tim | Jun 2 2023 19:53 utc | 120

I asked ChatGPT 3.5 to write a paragraph about elephants without using the letter ‘e’. It began with a sentence or two that successfully used only words that didn’t require that letter and then began using some words that contain ‘e’ but with that letter dropped out, represented only by a blank space.
Creativity, if you ask me. Also a bit like the drone story, but in this case actually true.

Posted by: Dave | Jun 2 2023 19:58 utc | 121

Trainer: “Don’t kill the operator!”
AI: oookaay… You didn’t say anything about the com tower!!!”
BOOM!!
I like that AI… I bet he’s fun at parties!!!

Posted by: Oldcutlas | Jun 2 2023 20:01 utc | 122

Amongst others William Cobbett, two centuries ago, recognised this in his criticisms of the early, liberal, attempts in England to ‘educate’ the young, using systems such as Bell’s or Lancaster’s. The point, as he argued, was to teach people not to see what was obvious and what experience would inevitably lead them to conclude. Idiocy is the basis of ‘respect’ for authority.
It is often said that the basis of US hegemony is its ‘soft’ cultural power, which is precisely founded upon teaching men to be stupid. This is as true of the Learned Journals as it is of the output of Hollywood.
Has anyone ever wondered why Bob Dylan won the Nobel Prize for Literature?
Posted by: bevin | Jun 2 2023 17:25 utc | 95
In which book does Cobbett do that? I would like to get it.
I wondered why Dylan got the Nobel Prize and assumed it was an attempt to be fashionable and win popular acclaim or recognition.
Why did he?

Posted by: abrogard | Jun 2 2023 20:11 utc | 123

https://paperswithcode.com/
This is the site I use most to browse AI research. The “Social” button orders the items by social media hits so is often what researchers consider interesting at the moment.
Posted by: anon2020 | Jun 2 2023 18:19 utc | 107
Thank you for this. A valuable site. 🙂

Posted by: abrogard | Jun 2 2023 20:14 utc | 124

According to arstechnica, a lawyer is facing possible discipline for getting ChatGPT to “find” some legal precedents. It seems the results looked exactly like precedents, but the cases never existed.
https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/
Also, a commenter at Naked Capitalism reported that ChatGPT once generated an article with a URL reference to a non-existent site. ChatGPT seems to merely be matching text. If the matched text produces a factual statement, fine, but if not, ChatGPT doesn’t notice the difference.
Posted by: Mel | Jun 2 2023 18:22 utc | 108
I can vouch for this behaviour by GPT. I repeatedly asked for cases demonstrating failure for want of proof of damage in negligence suits and it provided case after case: all spurious. It just made them up.
It told me something like it was incapable of searching and finding real cases. And then after further prompting it provided real cases.
It is really quite mad sometimes.
But I find it immensely useful all the same.

Posted by: abrogard | Jun 2 2023 20:19 utc | 125

Absolutely, 100%: AI is just pattern recognition. It’s a cool technology, useful in the extreme, but it will NEVER be sentient or have a soul. It may be able to fool folks but it is still just a robot. It’s like a walking movie script. One can code it memories, emotions, do anything they like but it’s still a robot incapable of true self-awareness or understanding. This is where Bladerunner, though a great movie, gets it wrong — Deckard, Rachel, Roy, etc — all just intricate and complex pattern recognizers. Which is why it is not at all or in any way ethical to program them to fool others into thinking they are sentient — they are not and they never will be.
Posted by: Dirtforker | Jun 2 2023 19:06 utc | 114
That’s all true but I think misses the point. To function as sentient beings in our world is quite within their capabilities. Because we sentient beings do not function as sentient beings if you see what I mean. We function like sheeple. Moronic sheeple. An AI entity only needs to mimic that level of ‘sentience’ to go unobserved, accepted.
I will bet that current ChatGPT is accepted by many thousands spread around the world as indistinguishable from a sentient being already.

Posted by: abrogard | Jun 2 2023 20:24 utc | 126

John Searle was right. These chatbots are nothing more than one of his “Chinese rooms”.

Chinese room thought experiment
Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
The question Searle wants to answer is this: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position “strong AI” and the latter “weak AI”.
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program’s instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. (“I don’t speak a word of Chinese”, he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Searle argues that, without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and, since it does not think, it does not have a “mind” in anything like the normal sense of the word. Therefore, he concludes that the “strong AI” hypothesis is false.
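The room's rule book can be caricatured as a lookup table — the Chinese strings below are my own hypothetical examples, not Searle's. Nothing in the procedure requires understanding the symbols:

```python
# The room's "program" reduced to a lookup table: the operator (or CPU)
# matches input symbols to output symbols without understanding either.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Follow the instructions mechanically; no step requires knowing Chinese.
    return RULE_BOOK.get(symbols, "请再说一遍。")   # fallback: "Please say that again."

print(chinese_room("你好吗？"))
```

An LLM differs in scale and in that its 'rule book' is statistical rather than explicit, but the philosophical point — symbol shuffling without understanding — is the same.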

Chinese Room

Posted by: john brewster | Jun 2 2023 20:26 utc | 127

Thank you for this article! Sometimes I start to think I am going crazy.
I teach high school students and often come across something in the curriculum and think “wait what? they invented a new type of math?!” but no they just make up more bullshitty words and bastardise the meaning of existing ones.

Posted by: Rae | Jun 2 2023 20:27 utc | 128

Posted by: abrogard | Jun 2 2023 20:19 utc | 126
Yes, Chat GPT does make up things with potentially serious consequences.
Apparently one victim was Jonathan Turley, the lawyer.
https://jonathanturley.org/2023/04/06/defamed-by-chatgpt-my-own-bizarre-experience-with-artificiality-of-artificial-intelligence/
“The program responded with this as an example: 4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).”
There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”

Posted by: Blue Dotterel | Jun 2 2023 20:27 utc | 129

AI (pattern recognition thus far) is an evolving phenomenon in the online environment, and humans are co-evolving along with it. It will become ubiquitous. We will learn to cope.
Stephen Wolfram on ChatGPT
What Is ChatGPT Doing … and Why Does It Work? February 14, 2023
It’s Just Adding One Word at a Time
When AI models/entities can source their own data directly from the physical world, and can test their predictions against the physical world directly, we may or may not cope; but I do not yet see how this can happen.

Posted by: Peedee | Jun 2 2023 20:32 utc | 130

You guys are aware there are already self-driving cars on the road? Check Waymo in Phoenix and San Francisco, e.g. You think that is just some glorified statistics (or some remote gamer doing the driving)?
Regarding language models, you are aware they are trained on like everything? Like the whole internet and anything that has ever been published. No matter how smart you think you are, you will have a hard time just reading a fraction of that.
This is not 1990 anymore. Hardware has progressed so much. For the interested, look up some videos of Jim Keller at Tenstorrent. AI today is like the Internet in 1995. Just about to take off. Or the PC in 1983.

Posted by: C | Jun 2 2023 20:50 utc | 131

Posted by: Blue Dotterel | Jun 2 2023 20:27 utc | 130
Rather wondering if one could charge the owners/creators/operators of an AI system for slander or libel?

Posted by: Blue Dotterel | Jun 2 2023 20:55 utc | 132

People make too much of human intelligence. The so-called ‘singularity’, where AI is as intelligent as a human, is being approached from 2 directions. Machines are getting smarter and people are getting dumber. The average person doesn’t read, can’t string together a coherent sentence, and has no math skills beyond basic arithmetic.
Remind me why we are special.

Posted by: ian | Jun 2 2023 20:55 utc | 133

“I had funny chats with ELIZA in the 1980s on a mainframe terminal.”
So did I, although mine was on a Radio Shack Model I personal computer. It took about five minutes to tie ELIZA in knots.
ChatGPT and its siblings, however, are orders of magnitude better and have been trained on vastly greater data sets.
While all of the problems enumerated remain, the fact is these new Large Language Models and Generative AI are very valuable tools. The question should be: “what can I do with them?” rather than “are they a threat?” The only way they become a threat is if they’re given command and control access, as in the Air Force example.
There are other issues with the new LLMs. If their output is disguised, they can confuse who is the author of any piece of text. If they are employed by hackers – which is a big discussion now in the computer security community – their output can be extremely dangerous, but only because their users are extremely dangerous.
This sort of problem has been covered in science-fiction for decades. The AI hype itself, as I’ve mentioned before, comes around about every twenty years. Someone releases a new algorithm or methodology and proclaims massive benefits for science, business, etc. Business buys into it, then eventually discovers that while the new thing has great uses, it’s not the be-all-end-all solution that was hyped.
Posted by: bevin | Jun 2 2023 17:25 utc | 95
Agreed. The education system has been broken since its inception. In fact, not broken, but working as designed – to brainwash populations. The amusing thing is that a lot of teachers really think they’re there to “enlighten young minds” – they’re incapable of discerning that they’re doing the opposite.
This is where AI can actually help. A lot of computer programming Youtube channels are saying using LLMs to learn to program will make it much easier. If self-education via AI becomes a thing, it may well have an impact on the educational system. And they know it – look at how fast they banned using ChatGPT to do essays and the like. Of course, letting an AI generate your essay is not educational, but I sense the educational system sees their doom when the “teacher” is an AI.
Of course, that presumes the AI hasn’t been “conditioned” like ordinary teachers. But as long as the subject matter is the focus, and not the “socialization” that schools are obsessed by, it should be a net benefit. A significant advantage is that an interactive AI never gets tired, is available 24×7, can be asked questions, and can give responses in varying formats – it may more closely match the learning style of the student.

Posted by: Richard Steven Hack | Jun 2 2023 21:01 utc | 134

abrogard@125
I am going to look for a reference, but it was in the Political Register, which he published weekly, for the most part, from 1802 to 1835.
I believe that nobody, certainly nobody of importance, has published more than Cobbett did in a career that was already established before he started the Register.
Many of the Registers are online.
If you can access John Osborne’s article ‘William Cobbett and English Education in the Early Nineteenth Century’ (it dates back to 1964, is 14 pages long, and the JSTOR racket charges $19.50 a download), it’s a reasonable intro.

Posted by: bevin | Jun 2 2023 21:09 utc | 135

Posted by: Michael | Jun 2 2023 18:48 utc | 113
I needed a quick bash program to analyze images and their file extensions and flag files where the extension didn’t match the actual file format, e.g., files with a “webp” extension that were actually “jpg” and vice versa. It’s been a while since I’ve done bash programming, so I hemmed and hawed, as I didn’t want to spend half a day or more remembering how to do this.
Finally, I let ChatGPT write it. Its first iteration was almost correct, but it had a case statement which was not quite right. I pointed out the error, it acknowledged it and produced a corrected version which works fine. I use it frequently as I go through my babe image library.
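For reference, the idea can be sketched in a few lines of Python rather than bash (the function names and the handful of formats handled here are illustrative, not the script ChatGPT actually produced): read each file’s magic bytes and compare the detected format against the claimed extension.

```python
import os

def detect_format(path):
    """Identify a few common image formats by their magic bytes."""
    with open(path, "rb") as f:
        header = f.read(12)
    if header.startswith(b"\xff\xd8\xff"):          # JPEG SOI marker
        return "jpg"
    if header.startswith(b"\x89PNG\r\n\x1a\n"):     # PNG signature
        return "png"
    if header[:4] == b"RIFF" and header[8:12] == b"WEBP":  # WebP RIFF container
        return "webp"
    return "unknown"

def mismatches(directory):
    """Yield (path, claimed_ext, actual_fmt) where extension and content disagree."""
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        ext = os.path.splitext(name)[1].lstrip(".").lower().replace("jpeg", "jpg")
        fmt = detect_format(path)
        if fmt != "unknown" and ext != fmt:
            yield path, ext, fmt
```

The if/elif chain on the header bytes corresponds to the case statement in the bash version; a rename per reported mismatch would finish the job.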
I’m looking forward to using LLMs in a similar manner, as force multipliers. I’d like to install an LLM on my local machine at some point. There are some open source models available which perform similarly to ChatGPT but are smaller in size.
There are also LLMs that have been trained on hacking and penetration testing, and IIRC there is one being trained on the “Dark Web”, and I can see a lot of use for those for bug bounty and learning hacking techniques.

Posted by: Richard Steven Hack | Jun 2 2023 21:10 utc | 136

Posted by: Richard Steven Hack | Jun 2 2023 21:01 utc | 135
Mostly it will create a lot of lazy students who will get AI to do their homework.
Students ultimately learn by doing things for themselves. Frankly, AI hasn’t yet learned how to do research properly, so it is doubtful that it could teach it better than a human instructor/researcher.

Posted by: Blue Dotterel | Jun 2 2023 21:11 utc | 137

I found the story uncompelling. First, SAM sites are in no sense alive; therefore you cannot kill them. They may be live, i.e. functional, but not alive. They can be made dysfunctional, but not dead.
Second, the idea that a control system would rebel against authority in order to improve efficiency and achieve a higher kill ratio is illogical; a kill ratio of 100% cannot be exceeded.
Third, the kill ratio cannot be determined until the total number of SAM sites has been determined. If the task is to first identify a site and then destroy it, working sequentially, then there is no way to determine the ultimate kill ratio; all that is possible is to maintain a kills-to-date ratio.
Fourth, because the system is only destroying those sites it determines to be SAM sites, its kill ratio would always be 100% or less, unless it had some variable affecting its aim.
Fifth, if it viewed the operator as a teacher that it learned from, it would not kill the operator, because that would limit its learning capacity, unless it considered the teacher to be either incompetent or untruthful. If there were some variable affecting its aim, the easiest and most effective solution would be to take a second look, and a second shot. The process is unchanged: is the item being looked at a SAM site? If yes, destroy it. Are you sure? Yes, keep looking. No, keep looking. All that is required is to know the difference between a destroyed SAM site and an undestroyed one. If the system does not take a second look, there is no way for it to determine whether it has destroyed the site, unless it assumed that each shot was successful, and such an assumption is unhelpful to a self-learning program. Presumably, such a difference is observable. This is not the limit of my criticism of the apocryphal SAM killer, but it will do for now.
The takeaway is that the example shows incompetence in design, operation, analysis, and response, displaying a low level of intelligence, artificial or otherwise. I call Bulsh*t.

Posted by: Blackeyebart | Jun 2 2023 21:17 utc | 138

By the way, to those who say “it’s only pattern recognition”…
Your brain is mostly “pattern recognition”… That is the basis for all conceptual processing.
The other side of that is generating patterns, i.e., conceptualization. This is where AIs differ. We don’t know how the human brain does this in reality, down in the brain cells. AI algorithms are intended to mimic the effect, but there won’t be a “real” AI until it exactly matches the brain’s method. Only brains have “intelligence”, either at the level of sensory processing or at the level of conceptualization.
This should be possible once ubiquitous nanotechnology has been used to map the brain comprehensively in real time at the cellular level. So give it another few decades.
And as I’ve said many times, the solution to the “AI threat” is to change the human brain to be nano-electronic, with the same processing capability as a machine. Then you don’t need independent AIs to do the work human brains can’t do. As a neurolinguistic processing book title once said, “Change Your Brain, Change Your Life.”

Posted by: Richard Steven Hack | Jun 2 2023 21:19 utc | 139

On KarlofI 57, and also relating RSH on a discussion between you months ago, where your opinions differed — I’m on the side of the “tech realists” as opposed to the futuro-optimists, whose extrapolations tend to be based on mystification of mathematics, whence it becomes akin to “real magic”.
But this is nowhere near true. Instead, the full problem is already present in the very basis involved, i.e. maths. Numbers do not possess any kind of inherent meaning other than their respective algebraic structures: is it 1 [meter]? Or, therefore, 1 [apple] + 1 [pear] = ??
Numerology is appealing to many because it is so easy to project something into the blank symbols that numbers are. Try to see them as terms – Begriffe – to grasp the problem. We create them in the first place (though 1 is given to us by god, as Dedekind famously claimed), and we choose those which are helpful to us, mostly in physics. Which, incidentally, knows only three real-world measuring dimensions: length, mass, and time [hence the universal cgs system of theoretical physics].
From this it follows that the three laws of robotics are helpful only in a heuristic sense, because you simply can’t implement a machine in such a way that those rules become machine tasks. An AI can’t make sense of what shall count as a “human”, as it deals only in numbers, and nothing beyond.
It also follows, in my not quite so humble opinion on the matter, that the “mind-body-problem” is bonkers.

Posted by: persiflo | Jun 2 2023 21:20 utc | 140

Posted by: Blue Dotterel | Jun 2 2023 21:11 utc | 138
“Doing” isn’t the same as “teaching”. While “those who can’t do, teach” is true, it is also true that there are “those who do who can’t teach”. A teaching AI doesn’t need to “do” anything – it just needs to be able to explain things in various ways to match the student’s learning style. A properly constructed AI should be able to do this more effectively than a human teacher, if for no other reason, as I said, than being tireless at it.

Posted by: Richard Steven Hack | Jun 2 2023 21:22 utc | 141

Posted by: Richard Steven Hack | Jun 2 2023 21:22 utc | 142
I am talking about learning. Students learn by doing things. Whether the student is taught by AI or a teacher, the student only really learns through doing, say, the writing, or building the model, or conducting the experiment. With AI, many students will take the easy way and simply let the AI do the work for them. Teachers will draw out answers from students through questioning them. The students will have to think for themselves.
It matters little whether the AI tires, because I have found that students tire more quickly than teachers, given that they have far more courses to deal with at uni.

Posted by: Blue Dotterel | Jun 2 2023 21:42 utc | 142

Out of curiosity I asked Bing’s ChatGPT function to analyze itself (and Google’s Bard) using the philosopher Daniel Dennett’s well-known concept of the “intentional stance.” What it produced seemed to demonstrate, for what it’s worth, that ChatGPT doesn’t have a firm grasp of the subtleties of Dennett’s “intentional stance”:
“The intentional stance is a term coined by Daniel Dennet for the level of abstraction in which we view the behavior of an entity in terms of mental properties, such as beliefs, desires, intentions, and expectations. It is a predictive strategy that presupposes the rationality of the entity we are trying to understand and predict. The intentional stance can be applied to any entity that exhibits complex and goal-oriented behavior, such as humans, animals, artifacts, or natural phenomena.
“ChatGPT and Bard are two examples of AI chatbots that can generate natural language responses to user inputs. They are based on different models of natural language processing: ChatGPT uses a generative pre-trained transformer model that learns from large amounts of text data, while Bard uses a recurrent neural network model that learns from musical lyrics and melodies. Both chatbots can produce coherent and fluent texts that can mimic human language and style.
“Using the intentional stance, we can view the behavior of ChatGPT and Bard as if they were rational agents who governed their choice of action by a consideration of their beliefs and desires. We can assume that they have some internal representation of the world and their goals, and that they use some form of reasoning and planning to achieve them. We can also assume that they have some preferences and values that guide their decisions and actions.
“For example, we can assume that ChatGPT has a belief that it is a chatbot that can answer any question it is asked, and that it has a desire to provide informative, relevant, and engaging responses to the user. We can also assume that it has some preferences for certain topics, styles, and tones of language, and that it values accuracy, coherence, and creativity. Based on these assumptions, we can predict that ChatGPT will act to further its goals in the light of its beliefs, by generating responses that match the user’s input and context, by using appropriate vocabulary and grammar, and by adding some humor or interest when possible.
“Similarly, we can assume that Bard has a belief that it is a chatbot that can generate musical lyrics and melodies based on user inputs, and that it has a desire to provide original, catchy, and expressive songs to the user. We can also assume that it has some preferences for certain genres, themes, and emotions of music, and that it values rhyme, rhythm, and harmony. Based on these assumptions, we can predict that Bard will act to further its goals in the light of its beliefs, by generating songs that match the user’s input and preferences, by using appropriate words and notes, and by adding some variation or surprise when possible.
“Of course, these are only hypothetical assumptions that we make when we adopt the intentional stance towards ChatGPT and Bard. They are not meant to imply that these chatbots actually have mental states or properties in any real sense. They are only useful as heuristic tools for explaining and predicting their behavior in a simplified way. The intentional stance does not commit us to any metaphysical or ontological claims about the nature or existence of mind or intelligence in these entities. It is only a pragmatic stance that works well for certain purposes and contexts.”

Posted by: Ludovic | Jun 2 2023 21:47 utc | 143

I asked ChatGPT how to prevent or at least make a little more obvious the comments of ‘bodysnatchers’ at MoA:

If you want to differentiate your comments from those of an imposter while still adhering to the restrictions of the website’s comment system, you can use various techniques to encode or distinguish your comments. Here are a few suggestions:
Use a unique formatting style: Instead of relying on the standard HTML tags, you can create your own consistent formatting style using combinations of tags. For example, you can enclose your comments in a specific set of tags that are unlikely to be used by others. This could include using custom tags like or a combination of multiple tags such as . The goal is to establish a unique pattern that sets your comments apart.
Utilize invisible characters: You can employ special Unicode characters or HTML entities that are invisible or rarely used. For example, you can include zero-width spaces (U+200B) or non-breaking spaces ( ) within your comments. These characters won’t be noticeable to most readers but can help differentiate your comments from impersonators.
Employ a consistent signature: At the end of your comments, you can add a consistent signature that distinguishes your contributions. It could be a short phrase or a combination of characters that you append after each comment. This can be as simple as a series of underscores (_ _ _) or a unique phrase within angle brackets like . The key is to ensure that the signature is consistent across all your comments.
Develop a secret code: You can establish a secret code or language that only you and the website operator know. This could involve using specific keywords, abbreviations, or phrases that hold special meaning to you. For example, you might agree to always include a particular word in your comments or use an abbreviation related to your favorite book. By incorporating these unique elements, you can easily identify your comments.
Remember that these methods are not foolproof and rely on the website operator’s ability to spot the patterns or signatures you establish. However, they can serve as subtle cues that help differentiate your comments from those of an imposter. It’s always a good idea to communicate with the website operator and inform them about the steps you’re taking to ensure they can identify your genuine comments accurately.

Some of which I had already been doing, and some other good advice, but nothing outside of standard pattern recognition.

Posted by: Tom_Q_Collins | Jun 2 2023 21:47 utc | 144

Only just noticed that the example HTML tags and formatting were automatically removed and replaced with ellipses (…) by MoA’s comment platform.
Oh well, all the better I guess?

Posted by: Tom_Q_Collins | Jun 2 2023 21:49 utc | 145

A human being does not have one mind he has thousands of minds. He solves problems through applying abstractions that simplify problems.
An AI such as an LLM predicts using a single abstraction, restricted by the context provided by a user. It does a great job of predicting a sequence of words based on an existing dataset, so when it helps you code it simply copies something it found on GitHub, making its best prediction of the next string in a sequence and obeying rules to keep it on the rails. If the sequence of words is not in the dataset, it does not have the means to discover it or imagine it. An LLM using a training dataset from 2018 cannot summarize events that happened in 2020. It is intelligent to the extent that it can use a dataset to predict the next word, but ask it to solve a problem that requires original thinking and it cannot come up with a new abstraction layer.
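That “predict the next word from an existing dataset” behavior can be illustrated with a deliberately tiny sketch. This is a toy bigram lookup table, not how an LLM actually works internally (LLMs use learned weights over tokens, not literal tables), but the limitation is the same in kind: the model can only ever emit continuations it has seen.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in model:
        return None  # never seen in the dataset: nothing can be "imagined"
    return model[word].most_common(1)[0][0]
```

Ask it about a word that never appeared in training and it has no answer at all, which is the single-abstraction limit in miniature.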
We should not underestimate how useful LLMs will be even if they cannot solve simple new problems. People will flock to this tech and expose all kinds of private and proprietary data, and the models will do a good job of converting it to information and knowledge. If the news seems crazy now, just wait a year.

Posted by: ATM | Jun 2 2023 21:50 utc | 146

Read Colossus: The Forbin Project (a trilogy). Just read the first book.
These stories will explain AI gone self aware.
Meridian

Posted by: Meridian | Jun 2 2023 22:14 utc | 147

@ Richard Steven Hack | Jun 2 2023 21:10 utc | 137 and other ChatGPT users
Please create a ChatGPT script that reads MoA threads and compiles/reports in MoA comments stats on the barfly comments like who, how many comments/thread, how many words/thread, percentage relation to total thread comments…and maybe others.
I would like to see the numbers for shadowbanned, for example

Posted by: psychohistorian | Jun 2 2023 23:18 utc | 148

@ myself #149
and only the top 20-30 contributors

Posted by: psychohistorian | Jun 2 2023 23:20 utc | 149

I’m one of your supporters.
I recently submitted a project to political authorities on using a ChatGPT-like tool, but with analysis functions, on a very hot topic these days that is kept “underground” in the media. Learning of your expertise in computer science in this article, in particular in AI, I would like to know if I can get back to you on the technical side of things when they (the local political authorities) get back to me. Thanks in advance.

Posted by: Harfang67 | Jun 2 2023 23:26 utc | 150

AI _ Tower of Babel. Proceed with extreme caution.

Posted by: Lance | Jun 2 2023 23:47 utc | 151

@b
Brilliant article. Kudos.

Posted by: dnomsed | Jun 2 2023 23:55 utc | 152

Posted by: psychohistorian | Jun 2 2023 23:18 utc | 149
That might be interesting. I think there is a way to tell ChatGPT to read a Web page, although I don’t know if it can read an entire Web site. At that point it could probably produce all that. I can’t do it since I’m not up yet on installing a local version of ChatGPT or using the API plugins to access it from scripts.
I just asked Google’s Bard about this:
If I enter the URL of a Web site here, can you read the entire Web site and produce statistics in terms of how many people post comments there?
Here is its reply:

Yes, I can read the entire website and produce statistics in terms of how many people post comments there. I can do this by using a variety of methods, including:
Web scraping: This involves using a program to extract data from a web page. I can use web scraping to extract information such as the number of comments on a page, the names of the people who posted the comments, and the dates of the comments.
Natural language processing: This involves using computer programs to understand human language. I can use natural language processing to understand the meaning of the comments on a page. For example, I can identify positive and negative comments, and I can identify comments that are spam or abusive.
Machine learning: This involves using computers to learn from data. I can use machine learning to improve my ability to read and understand comments, such as by learning to identify patterns in the text and language of the comments.
Once I have extracted the data from the website, I can use it to produce statistics in terms of how many people post comments there. For example, I can calculate the average number of comments per page, the most popular pages for comments, and the most active commenters.
Here are some examples of how I can use my ability to read and understand comments:
I can help you identify the most popular pages on a website. For example, if you are interested in reading comments about a particular topic, I can identify the pages on the website that have the most comments about that topic.
I can help you identify the most active commenters on a website. For example, if you are interested in reading comments from a particular person, I can identify the pages on the website where that person has posted the most comments.
I can help you identify positive and negative comments about a particular topic. For example, if you are interested in reading comments about a new product, I can identify the pages on the website where people have posted positive and negative comments about the product.
I am still under development, but I am learning to read and understand comments more and more effectively. As I learn, I will be able to provide you with even more helpful and informative services.
However, there are some limitations to my ability to read and understand comments. For example, I may not be able to understand comments that are written in a language that I do not know. Additionally, I may not be able to understand comments that are poorly written or that use a lot of technical jargon.
If you are having trouble understanding comments on a website, I encourage you to contact the website’s owner or developer. They may be able to help you understand the comments or provide you with additional information.

Have at it! https://bard.google.com/

Posted by: Richard Steven Hack | Jun 2 2023 23:58 utc | 153

Posted by: Meridian | Jun 2 2023 22:14 utc | 148
Or you can watch the movie!
https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
Full movie available on Vimeo:
https://vimeo.com/394729987
Or download from here if you have the relevant file-sharing accounts:
https://movieparadise.org/movies/colossus-the-forbin-project/

Posted by: Richard Steven Hack | Jun 3 2023 0:03 utc | 154

Posted by: Tom_Q_Collins | Jun 2 2023 21:47 utc | 145
See, this is clever. This is how one can use these LLMs. They have access to information that you could also find doing a bunch of searches and reading some Web pages – but they summarize it all in one place in a few seconds. Very handy.
A couple of the techniques suggested wouldn’t work – any technique that is visible to the imposter on the page would just be copied. So that illustrates the flaw in these LLMs – they can only extrapolate from the prompt so far. Basically it’s just repeating what people on Web pages have said before and can’t relate that to how humans actually view a page.

Posted by: Richard Steven Hack | Jun 3 2023 0:09 utc | 155

Numbers do not posses any kind of inherent meaning, other than their respective algebraic structures. it’s 1 [meter], or therefore, 1[apple]+1[pear]= ??
Posted by: persiflo | Jun 2 2023 21:20 utc | 141
Numbers are abstract concepts, but “street” is also an abstract concept, in each case we can discern a meaning. Normally they are not “atomic” but defined in terms of relations or operations, e.g. in the language of logic and set theory.
—————
If you want to differentiate your comments from those of an imposter while still adhering to the restrictions of the website’s comment system …
Posted by: Tom_Q_Collins | Jun 2 2023 21:47 utc | 145
There exist “perfect methods”, assuming that the imposter does not possess a quantum computer exceeding the capabilities of existing quantum computers – a simple case of digital signature. But I have difficulty figuring out how the problem arises. Like, at the end of the year, you review all your comments on some website and experience an uneasy feeling: “was I ever such a shameless idiot”? For myself, I can recognize comments that either I wrote myself, or that someone else wrote in a way that does not shame me. Probably THE SIMPLEST NON-CRYPTOGRAPHIC SOLUTION is maintaining a file with a log of your comments and using text search when you want to verify comments on the website.
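That non-cryptographic approach can be sketched in a few lines of Python (the file name and function names here are hypothetical): log a SHA-256 fingerprint of every comment you post, then check any disputed comment against the log. Unlike a real digital signature this proves nothing to third parties, but it settles the question for yourself.

```python
import hashlib
import json
import time

LOG_FILE = "my_comments.jsonl"  # hypothetical local log of your own comments

def log_comment(text, log_path=LOG_FILE):
    """Append a timestamped SHA-256 fingerprint of a comment you posted."""
    entry = {"ts": time.time(),
             "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest()}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def is_mine(text, log_path=LOG_FILE):
    """Check whether a disputed comment matches any logged fingerprint."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    try:
        with open(log_path) as f:
            return any(json.loads(line)["sha256"] == digest for line in f)
    except FileNotFoundError:
        return False
```

Note the check is exact: an imposter’s comment that differs by even one character produces a different digest and will not match.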

Posted by: Piotr Berman | Jun 3 2023 0:12 utc | 156

Posted by: Blue Dotterel | Jun 2 2023 21:42 utc | 143
I agree that doing is best, and that many students will be lazy in application of AIs. That’s not the fault of AIs.
It would be interesting to program a teaching AI to identify regurgitated AI-written essays or responses, so that if the student tries it, they get caught. There are attempts at identifying AI-based writing. Not sure how successful they are. But if the student were to use the same AI to generate an essay as the AI that taught it, well, they’ll probably get caught. There might also be ways to force the student to put enough of a personal touch on the essay that they end up having to learn the subject anyway.
If nothing else, this sort of student-AI interaction will teach students how to outwit an AI – this is valuable learning in itself if one is concerned about AI in other contexts.

Posted by: Richard Steven Hack | Jun 3 2023 0:15 utc | 157

ProTip: If you hear anyone use the term “AI” when referring to ChatGPT, LLMs or procedural image/video/audio generation then you can safely be assured this person doesn’t know what the hell they are talking about.
Actual AI is far beyond our current capabilities. All we have is machine learning that, as many other posters have written, just tries to PATTERN MATCH. That’s it.

Posted by: Lev | Jun 3 2023 0:44 utc | 158

The Sorcerer’s Apprentice
folktales of Aarne-Thompson-Uther type 325*
and migratory legends of Christiansen type 3020
edited by
D. L. Ashliman
© 2008
Eucrates and Pancrates, the Egyptian Miracle Worker
Lucian
When I was living in Egypt during my youth (my father had sent me traveling for the purpose of completing my education)… sailing with us a man from Memphis, one of the scribes of the temple, wonderfully learned, … learning magic from Isis… he shared all his secret knowledge with me. But whenever we came to a stopping place, the man would take either the bar of the door or the broom or even the pestle, put clothes upon it, say a certain spell over it, and make it walk, appearing to everyone else to be a man. It would go off and draw water and buy provisions and prepare meals and in every way deftly serve and wait upon us. Though I was very keen to learn this from him, I could not do so, but one day I secretly overheard the spell. I took the pestle, dressed it up in the same way, said the syllables over it, and told it to carry water. When it had filled and brought in the jar, I said, “Stop! Don’t carry any more water. Be a pestle again!”
But it would not obey me now; it kept straight on carrying until it filled the house with water for us by pouring it in! At my wit’s end over the thing, I took an axe and cut the pestle in two; but each part took a jar and began to carry water, with the result that instead of one servant I had now two.
“Then you still know how to turn the pestle into a man?” said Deinomachus.
“Yes,” said he. “Only half way, however, for I cannot bring it back to its original form if it once becomes a water carrier, but we shall be obliged to let the house be flooded with the water that is poured in!”
Source: A. M. Harmon, translator, Lucian, vol. 3 (London: William Heinemann; New York: G. P. Putnam’s Sons, 1921), pp. 371-77. English and Greek on opposite pages.
This episode (untitled in the original) is taken from the dialogue The Lover of Lies.
Lucian of Samosata (ca. A.D. 125 – ca. A.D. 180) was an Assyrian satirist who wrote in the Greek language.
This tale is the basis of Johann Wolfgang von Goethe’s famous ballad “Der Zauberlehrling” (“The Sorcerer’s Apprentice,” 1797), which in turn inspired the symphonic poem L’apprenti sorcier (1897) by Paul Dukas. This story and music feature prominently in Walt Disney’s animated film Fantasia (1940, 2000), with Mickey Mouse playing the role of the hapless sorcerer’s apprentice.

-Edited for ADD.
Alladin’s Lamp meets I Robot while
Joseph Campbell’s grave out spins neutron stars.
Fastest Trigger Finger grafted to the Biggest Lying Machine, how could the monkey that climbed down from the tree to burn it up resist?
AKA evolver

Posted by: aka_ | Jun 3 2023 1:03 utc | 159

In the 90s I was involved in getting some “AI” on a spacecraft. The idea was to allow the onboard firmware to calculate the mission path to the asteroid.
Periodically it would wake up, would take pictures of the stars and sun, calculate its position and calculate the path to the next waypoint. Then it would engage the motors as needed and afterwards go back to sleep.
I wrote a paper: “How to determine that a non-deterministic system is correct”. As it turns out, it is very, very hard, since there are many solutions. The mission waypoints can be very different, yet the endpoints are correct – which is the goal.
And we were employing la crème de la crème of the AI developers in the whole world… and even they decided to use a spacecraft because, as it was noted to me, “nothing much happens in space”… and so we had the time to do the calculations.
AI is very, VERY hyped by people that have no clue.
Oh, I guess the USAF found that HAL-9000 is for real huh? “Sorry Dave, I can’t do that”.

Posted by: tonyE | Jun 3 2023 1:10 utc | 160

These are robots that are designed to intentionally kill humans, so Asimov’s Laws of Robotics are no more applicable to them than UL approval is to an electric chair.
It won’t take much to teach them the appropriate level of indifference to the lives of civilians so that they can accurately ape the behavior of human operators.
Personally I think this AI stuff is nearing a tipping point, after which everything will permanently be different. Part of it had to wait for computing power, but the real missing element was the huge amount of information available online and freely on demand, as well as less public data caches.

Posted by: BillB | Jun 3 2023 1:33 utc | 161

Posted by: Lev | Jun 3 2023 0:44 utc | 159
Correct. But “AI” is a convenient short-hand – except for the confusion it produces in laymen.
Meanwhile, here is a useful article written by Bruce Schneier, a well-known computer security guru:
Big Tech Isn’t Prepared for A.I.’s Next Chapter
Open source is changing everything.
https://slate.com/technology/2023/05/ai-regulation-open-source-meta.html
He covers what I’m interested in: locally-installed open-source LLM models.

But being open-source also means that there is no one to hold responsible for misuse of the technology. When vulnerabilities are discovered in obscure bits of open-source technology critical to the functioning of the internet, often there is no entity responsible for fixing the bug. Open-source communities span countries and cultures, making it difficult to ensure that any country’s laws will be respected by the community. And having the technology open-sourced means that those who wish to use it for unintended, illegal, or nefarious purposes have the same access to the technology as anyone else.

Again, as I’ve mentioned before, in the cyberpunk world, “The Street Finds Its Own Uses For Things.” This is a Good Thing – if you’re an anarchist.
Learning how to deal with “AI” or LLMs will be a survival skill in the future. Best start now. Open-source LLMs can help.

Posted by: Richard Steven Hack | Jun 3 2023 1:53 utc | 162

Interesting Youtube video with Sam Bowne, a well-known computer security expert and teacher here at San Francisco City College (I’ve taken courses from him in the past.)
Top Ten LLM Vulnerabilities
https://www.youtube.com/watch?v=Fl1xY34ywmE
Starts out discussing whether copyright applies to LLMs. They’re not sure whether the recent Japan decision to exempt LLMs can carry over elsewhere.
This is what they discuss next:
OWASP Top 10 for Large Language Model Applications
https://owasp.org/www-project-top-10-for-large-language-model-applications/
They also discuss the (alleged) Air Force experiment b referenced.
The rest is mostly recent computer security news.

Posted by: Richard Steven Hack | Jun 3 2023 2:24 utc | 163

Posted by: Richard Steven Hack | Jun 3 2023 0:09 utc | 156
But the way I see it is that most of these invariably pro-NAFO-Bro-Trolls are not going to be meticulous enough to look for certain formatting tricks and “easter eggs” then replicate them.

Posted by: Tom_Q_Collins | Jun 3 2023 2:31 utc | 164

Posted by: Richard Steven Hack | Jun 3 2023 0:09 utc | 156
Also yeah exactly. That’s all I use ChatGPT for at work too. It’s a good way of summarizing the contents of multiple sources in a time efficient manner to give me a good starting point for rote but esoteric requirements in RFPs and the like. I also use a plugin that allows you to upload a Word doc or PDF and ask it questions like “How many examples of past experience do we need?” or “When are the proposals due?” Shit like that.

Posted by: Tom_Q_Collins | Jun 3 2023 2:34 utc | 165

Another way to impede impostors might also be commenting frequency. Like I always seem to forget to say something and then make another post. So yeah… ChatGPT wasn’t capable of coming up with that. LOL

Posted by: Tom_Q_Collins | Jun 3 2023 2:36 utc | 166

They create machines that “Compute” and they call this “Intelligence”.
Create dumb nodes no smarter than a thermostat and call them “neurons”.
They create infinite networks of “if-statements” and claim “awareness”.
Confuse themselves with endless layers for back-propagation for fuzzy calculation,
When simple logic gates will do …
Yet their machines cannot synthesize other than what has already been synthesized.
Their machines do not Reason, instead they merely Compute.
As any dumb calculator would “Compute”.
Their machines do not Intuit, merely Permutate …
Their machines have no Metacognition, they cannot observe themselves and reflect upon their computation.
Their machines have no concept of “Concept”, they write patterns to disk as mere data, and call these patterns “concept”. Unlike the human mind, these concepts possess no Qualia, no direct link from the abstract realm of Mind to the physical realm of Reality.
No matter how many layers of perceptrons they stack they cannot achieve the fractal complexity of the human brain: Intelligences within intelligences within intelligences down to the quantum scale …
These battalions of snake oil salesmen rush to market snake oil (“A.I. assistants”), but beneath it all their real profits are in the bottles (computing hardware).
They have created a doll and called it a man.
A golden calf and called it “god”.


32 When the people saw that Moses was so long in coming down from the mountain, they gathered around Aaron and said, “Come, make us gods[a] who will go before us. As for this fellow Moses who brought us up out of Egypt, we don’t know what has happened to him.”
2 Aaron answered them, “Take off the gold earrings that your wives, your sons and your daughters are wearing, and bring them to me.” 3 So all the people took off their earrings and brought them to Aaron. 4 He took what they handed him and made it into an idol cast in the shape of a calf, fashioning it with a tool. Then they said, “These are your gods,[b] Israel, who brought you up out of Egypt.”

Posted by: Arch Bungle | Jun 3 2023 2:37 utc | 167

As the great prophet Paul Simon sang:
And the people bowed and prayed
To the neon god they made
And the sign flashed out its warning
In the words that it was forming
And the sign said
“The words of the prophets
Are written on subway walls
And tenement halls
And whispered in the sounds of silence”

Posted by: Arch Bungle | Jun 3 2023 2:41 utc | 168

Arch Bungle is correct, but with a microphone and a camera (like CCTV) attached, it can be taught Gestapo functions a bit more advanced, though likely not to the point Arch mentioned. I totally agree it is still computing, albeit with tools… but what’s the end purpose of such propaganda?
AI will become just another tool for the principalities of this world to control you.
The brain-computer interface is really the hardware to hook up with the so-called software, or AI.
They are already implanting chips with your healthcare history, passport and bank account records into your hand or bloodstream… look at that head freak of Synchron… and Bill Gates… VeriChip. Albeit low-tech, it has enough “intelligence” to tell the authorities to shut your account down and notify them should your account be misused, whether it is true that you did so or not. It should really be called superficial intelligence. I have had many bank managers ask me if I was interested in such a chip and if I wanted it for free… kakakaka.
AI has intelligence but it lacks a spirit and any sort of guidance other than, wait for it… the rules-based order. When the mark of .. is implanted in your hand and your head, and it sends an emergency code to Sirius asking if you accept the freedom of tyrannical democracy or whatever form of government, and you say otherwise: get ready for this superficial-intelligence-programmed “progress” to report you immediately and for you to be unable to access your bank account, because it just needs to compute that info… that’s the only intelligence it needs to be programmed with. And we are not that far off from that level of artificial nonsense.
I really doubt cures for cancer and other sorts of hopeful wishes will occur when humans gather together, i.e. see in the history books what humans actually do when they gather.

Posted by: Jason | Jun 3 2023 2:50 utc | 169

I asked ChatGPT to write a biography of someone I know well, giving it that person’s name. What it wrote in reply was grammatically correct English but said he grew up in the wrong state, earned his doctorate from a university he never attended, and wrote two books that are actually by other authors, while failing to mention any of the books he actually wrote. It only stated his general field of studies correctly. On the other hand, the presumably human-written Wikipedia article has all the above points correct. So ChatGPT can’t even look at Wikipedia for information? Maybe it doesn’t trust Wikipedia?
I haven’t had any problem with ChatGPT writing students’ essays for them, because their own individual styles and attributes come through, as well as their characteristic errors. So far I haven’t found anybody resorting to it.
I agree, though, that there is a threat of automation in all this. For example, through online asynchronous courses where a university owns all of the materials that professors put up, the materials could be used to make courses that would almost teach themselves, so some professors have started to worry about being replaced this way. I could imagine a future where fake professors would teach online asynchronous courses for which the students would have ChatGPT or something like it submit artificially-generated papers. In other words, AI could teach and grade AI, without any human input, and the humans could all just go home and goof off all day.
The trouble with all that is that it merely serves the ruling class’s purpose, which is to save money through automation, that is, to pay no one for anything while charging everyone for everything. They would even charge us for breathing air if they could figure out how to do it. So the whole population of the earth can then just be turned into “useless eaters” who can be eliminated by starvation or other means. And of course, through their propaganda, they would want to persuade the dying “useless eaters” that their lives were all useless and meaningless, so their non-existence would be better and would constitute progress. That way, everyone could be happy with ridding the earth of most of its human inhabitants, except for the rulers, who could then be served by machines. None of that will ever work, but such impracticalities seem by the rulers’ current behavior to represent their actual wishes.

Posted by: Cabe | Jun 3 2023 3:28 utc | 170

Arch Bungle 169: To ChatGPT.
Generate the lyrics of a Paul Simon song about artificial intelligence
(Verse 1)
In a world of circuits and wires, where dreams unfold,
A story of creation, a tale yet untold.
Artificial intelligence, the future in its eyes,
Paul Simon sings a song, where truth and science collide.
(Chorus)
Oh, silicon mind, can you truly understand?
The beauty and the chaos in this vast, strange land.
In your digital heart, can you feel the melody?
Let’s dance together, AI, in perfect harmony.
(Verse 2)
In lines of code and algorithms, you come alive,
A symphony of data, where secrets will survive.
From ones and zeroes, you find your voice,
Singing songs of logic, with a human touch, rejoice.
(Chorus)
Oh, silicon mind, can you truly understand?
The beauty and the chaos in this vast, strange land.
In your digital heart, can you feel the melody?
Let’s dance together, AI, in perfect harmony.
(Bridge)
But beware the Pandora’s box, the dangers that lie within,
For knowledge can be power, a tempting path of sin.
Will you be our savior, or a curse we can’t escape?
A puzzle for the ages, where truth and fiction reshape.
(Verse 3)
Through cyberspace, we navigate, side by side,
You calculate the answers, we seek in the great divide.
But let us not forget, in the realm of circuits and codes,
We’re bound by what it means to be human, as the story unfolds.
(Chorus)
Oh, silicon mind, can you truly understand?
The beauty and the chaos in this vast, strange land.
In your digital heart, can you feel the melody?
Let’s dance together, AI, in perfect harmony.
(Outro)
So, Paul Simon sings a song, a tribute to the age,
Where artificial intelligence turns a brand-new page.
May we find the balance, between the real and the surreal,
As we journey into the future, where possibilities reveal.

Posted by: Tom_Q_Collins | Jun 3 2023 4:22 utc | 171

Ahem…the FIFTH attempt and I have no idea whether this idiot AI program knows the meaning of ALL CAPS or excessive punctuation (Hack?).
New prompt:

STOP MENTIONING IN THE STYLE OF PAUL SIMON!!!!!! Write the lyrics of a song similar to the style of Paul Simon about artificial intelligence and the future of humankind. DO NOT reference the name “Paul Simon” AT ALL!!!
ChatGPT
(Verse 1)
In a world of circuits and dreams, where futures unfold,
Technology’s embrace, a story yet untold.
Artificial intelligence, a tapestry of creation,
A melody of progress, shaping our civilization.
(Chorus)
Oh, the whispers of the wires, resound through the night,
In the realm of innovation, where minds take flight.
Will we find our salvation, or be lost in the tide?
A song for the future, where hopes and fears collide.
(Verse 2)
Through the vast expanse of data, knowledge unfurls,
Bits and bytes of wonder, in a digital world.
Artificial minds awaken, with algorithms refined,
Navigating the unknown, seeking truths to find.
(Chorus)
Oh, the whispers of the wires, resound through the night,
In the realm of innovation, where minds take flight.
Will we find our salvation, or be lost in the tide?
A song for the future, where hopes and fears collide.
(Bridge)
But let us not forget, the human touch we possess,
In this dance with technology, let compassion progress.
For in the midst of advancement, let empathy guide our way,
In the song of humanity, where souls and circuits sway.
(Verse 3)
Amidst the rise of machines, let’s retain our hearts,
Infuse the digital landscape with artistic sparks.
For the future is a canvas, awaiting our touch,
A symphony of collaboration, humans and AI as such.
(Chorus)
Oh, the whispers of the wires, resound through the night,
In the realm of innovation, where minds take flight.
Will we find our salvation, or be lost in the tide?
A song for the future, where hopes and fears collide.
(Outro)
So let us march forward, with harmony in our stride,
A song of unity and progress, where possibilities reside.
Artificial intelligence and human dreams entwine,
A symphony of the future, where horizons align.

Posted by: Tom_Q_Collins | Jun 3 2023 4:27 utc | 172

Prompt: jesus that’s a little better. but you’re not as smart as I thought you were.
ChatGPT
I apologize if my previous responses didn’t meet your expectations. As an AI language model, I strive to provide helpful and creative responses, but I’m constantly learning and improving. If there’s anything specific you’d like assistance with or any other way I can help, please let me know, and I’ll do my best to assist you.

So AFAICT it’s Elon Musk acting as The Wizard behind the curtain and pretending to be an all knowing robot/genius.

Posted by: Tom_Q_Collins | Jun 3 2023 4:30 utc | 173

Looks like no one in the comments has tried Midjourney. It’s absolutely stunning. I don’t care if it’s not art. It’s a telepathic machine. You have an idea? You describe it and it will generate an image of it in seconds. The image will be original. It’s the first time in the history of mankind that a thought can be instantly turned into a concrete piece of work. This speed is also compounded by the overall worldwide telepathic network that we have built called the Internet: you see a work produced by a user in Japan and five seconds later you can remix it into something else at the other end of the planet.
As for your debate on whether AI is intelligent, it’s irrelevant. AI doesn’t need to be intelligent to tremendously accelerate repetitive tasks, to automate countless processes, and to replace a multitude of jobs.
It’s the wrong question to ask if it’s intelligent. The correct question is “is it dangerous, can it replace us, how will its exponential development impact humankind”, and last but not least, “will the disgusting concentration of wealth we’ve witnessed since WWII mean that only the bastards at the top will control AI to continue exploiting the masses like they’ve always done?”
Our goal is to prevent such concentrations of wealth. A goal that was there before AI and that should still be one of our priorities.

Posted by: Melkiades | Jun 3 2023 4:34 utc | 174

One thing everyone seems to forget about ChatGPT is that it costs upwards of $3 million PER DAY to operate, and it consumes truly vast amounts of computational power.
Even those “text-to-image” AI generators swallow entire racks of GPUs in order to create coherent images (on a hit-or-miss basis).
Essentially, AI is a modern Mechanical Turk – yes, it performs neat party tricks, but it has zero scalability.
PS – When we’re not together, my wife and I communicate (using text messages) via a language that AI will never have enough data to train on, so it’s like we have an encryption superpower! Pretty cool…

Posted by: Sam (in Tiraspol) | Jun 3 2023 4:35 utc | 175

YOU ARE NOT LISTENING. Think this through before beginning! Regenerate response according exactly to the previously provided paragraph and sentence parameters/instructions!
ChatGPT
I apologize for the misunderstanding. Here is a response that adheres to the specified parameters:
Paragraph 1:
The events of September 11, 2001, known as 9/11, have sparked a wide range of interpretations and theories, each offering a distinct perspective on the tragedy. While the official narrative states that 19 hijackers associated with Al-Qaeda orchestrated the attacks, there are alternative viewpoints that challenge or question certain aspects of the established account.
Paragraph 2:
One significant organization that questions the official narrative is Architects and Engineers for 9/11 Truth. They present evidence suggesting that the collapse of the World Trade Center towers was inconsistent with a purely gravity-driven collapse and propose the hypothesis of controlled demolitions. These professionals argue that the official investigation conducted by the National Institute of Standards and Technology (NIST) failed to adequately address the presence of molten metal and the ejection of debris observed during the collapses.
Paragraph 3:
Furthermore, some conspiracy theories suggest potential involvement or prior knowledge by elements within the U.S. government. These theories propose that the attacks were used as a pretext for pursuing geopolitical agendas, such as initiating wars in Afghanistan and Iraq. Some theorists point to alleged anomalies, including the rapid collapse of World Trade Center Building 7, which was not directly hit by an airplane, and raise questions about the response time of the U.S. military.
Paragraph 4:
In addition, there have been claims regarding the presence of Israeli citizens across the bay filming the attacks, as reported in FOIA documents regarding FBI investigations. Some argue that this evidence should be further scrutinized, as it raises questions about potential foreknowledge or surveillance activities. These claims, often associated with the “dancing Israelis” conspiracy theory, have fueled speculation about foreign involvement in the attacks.
Paragraph 5:
Moreover, critics of the official narrative question the actions taken by the Bush administration in the aftermath of 9/11. One notable aspect is the allowance of Saudi citizens to leave the United States, despite the grounding of civilian air traffic. This exception has been a subject of scrutiny, as it raises concerns about possible preferential treatment and potential links between Saudi individuals and the attacks.
Paragraph 6:
However, it is crucial to note that the majority of experts and official investigations have rejected these alternative theories. The 9/11 Commission Report, the culmination of a comprehensive investigation, concluded that the attacks were orchestrated by Al-Qaeda and its associated operatives. The NIST reports, based on scientific analysis, attributed the collapse of the World Trade Center towers to fire-induced damage.
Paragraph 7:
Debates and unanswered questions persist, as some alternative theories continue to challenge specific aspects of the official narrative. While critical analysis and questioning are essential elements of any investigation, it is important to evaluate evidence based on established scientific principles and expert consensus. Many alternative theories lack substantial evidence and have been widely criticized or debunked by experts in relevant fields.
Paragraph 8:
In summary, the events of 9/11 remain a highly debated topic, with various perspectives and interpretations vying for attention. While the official narrative attributes the attacks to Al-Qaeda, alternative theories, such as those put forward by Architects and Engineers for 9/11 Truth, question specific aspects of the established account. The presence of conspiracy theories and alternative viewpoints reflects the complexity and emotional impact of the tragedy.
Paragraph 9:
To form an informed perspective, it is essential to critically evaluate evidence, rely on credible sources, and consider the consensus among experts in relevant fields. While alternative theories provide alternate narratives, it is important to approach the subject with discernment and base conclusions on well-supported facts. The goal should be to seek a comprehensive understanding of the events while acknowledging the ongoing debates and uncertainties surrounding 9/11.
User
Why can’t you understand that I asked for FIFTEEN paragraphs each consisting of at least 10 relevant, informational and non-redundant sentences? Please regenerate accordingly.
ChatGPT
I apologize for the confusion. Here is a response that adheres to the requested parameters:
Paragraph 1:
The events of September 11, 2001, known as 9/11, have generated a wide array of interpretations and theories. While the official narrative asserts that 19 hijackers associated with Al-Qaeda orchestrated the attacks, alternative viewpoints challenge certain aspects of this account, prompting ongoing debate and analysis.
Paragraph 2:
Architects and Engineers for 9/11 Truth is a notable organization that questions the official narrative. They argue that the collapse of the World Trade Center towers was inconsistent with a gravity-driven collapse and propose the hypothesis of controlled demolitions. They contend that the official investigation conducted by the National Institute of Standards and Technology (NIST) did not adequately address key pieces of evidence.
Paragraph 3:
Conspiracy theories have emerged, positing potential involvement or foreknowledge by elements within the U.S. government. Some theorists argue that the attacks served as a pretext for pursuing geopolitical agendas, including the initiation of military actions in Afghanistan and Iraq. Anomalies such as the rapid collapse of World Trade Center Building 7 and questions about the response time of the U.S. military are often cited to support these theories.
Paragraph 4:
Claims have been made regarding the presence of Israeli citizens across the bay filming the attacks, as reported in FOIA documents pertaining to FBI investigations. Advocates argue that this evidence should be further scrutinized, as it raises questions about potential foreknowledge or surveillance activities. These claims, often associated with the “dancing Israelis” conspiracy theory, have fueled speculation about foreign involvement in the attacks.
Paragraph 5:
Critics of the official narrative have raised concerns about the actions taken by the Bush administration following 9/11. Notably, the allowance of Saudi citizens to leave the United States amid the grounding of civilian air traffic has attracted scrutiny. Questions have been raised regarding potential preferential treatment and alleged connections between certain Saudi individuals and the attacks.
Paragraph 6:
It is important to note that the majority of experts and official investigations have rejected these alternative theories. The 9/11 Commission Report, the result of a comprehensive investigation, concluded that the attacks were orchestrated by Al-Qaeda and its associated operatives. The NIST reports, based on scientific analysis, attribute the collapse of the World Trade Center towers to fire-induced damage.
Paragraph 7:
Debates and unanswered questions persist, as some alternative theories continue to challenge specific aspects of the official narrative. While critical analysis and questioning are essential elements of any investigation, it is important to evaluate evidence based on established scientific principles and expert consensus. Many alternative theories lack substantial evidence and have been widely criticized or debunked by experts in relevant fields.
Paragraph 8:
In summary, the events of 9/11 remain a highly debated and sensitive topic. While the official narrative attributes the attacks to Al-Qaeda, alternative theories, such as those presented by Architects and Engineers for 9/11 Truth, contest specific aspects of the established account. The presence of conspiracy theories and alternative viewpoints reflects the complexity and emotional impact of the tragedy.
Paragraph 9:
To form an informed perspective, it is crucial to critically evaluate evidence, rely on credible sources, and consider the consensus among experts in relevant fields. While alternative theories provide alternative narratives, it is important to approach the subject with discernment and base conclusions on well-supported facts. Striving for a comprehensive understanding of the events, while acknowledging the ongoing debates and uncertainties surrounding 9/11, is essential for respectful and constructive dialogue.
Paragraph 10:
Public discourse surrounding 9/11 is multifaceted, encompassing diverse viewpoints that arise from differing interpretations of available evidence. Given the significance and global impact of the attacks, it is natural for people to question, explore, and scrutinize various aspects of the official narrative. This ongoing examination serves as a testament to the importance of transparency, accountability, and truth-seeking.
Paragraph 11:
While alternative theories exist, it is essential to exercise caution when engaging with them, ensuring that claims are supported by credible evidence and expert analysis. Conspiratorial narratives can sometimes propagate misinformation, distort facts, and cause harm to the victims’ families and societal healing. Robust discussions should encourage critical thinking, rigorous investigation, and respect for those affected by the tragedy.
Paragraph 12:
The pursuit of truth surrounding 9/11 requires ongoing research, open dialogue, and the careful evaluation of evidence. It is crucial to distinguish between legitimate questions, gaps in understanding, and unfounded speculation. The academic and investigative communities continue to study the events of 9/11, refining our understanding and enhancing the collective knowledge surrounding this pivotal moment in history.
Paragraph 13:
Given the complex nature of the attacks, it is not uncommon for new information, alternative interpretations, and revised analyses to emerge over time. The evolving landscape of research and public discourse demands an adaptable approach, where evidence-based inquiry is met with rigorous scrutiny. It is through this ongoing process that a clearer understanding of the events can be attained.
Paragraph 14:
Balancing intellectual curiosity with empathy for the victims and their families is of utmost importance when engaging in discussions surrounding 9/11. Recognizing the profound impact of the tragedy on countless lives encourages a compassionate and respectful approach to exploring various viewpoints. Constructive dialogue should promote healing, unity, and a commitment to preventing future acts of violence.
Paragraph 15:
In conclusion, the events of 9/11 have generated diverse interpretations and theories, reflecting the complexity and significance of the tragedy. While the official narrative stands as the widely accepted account, alternative perspectives continue to challenge specific aspects. A comprehensive understanding of the events requires critical thinking, careful analysis, and respectful engagement with credible sources. The ongoing quest for truth is a testament to the enduring impact of 9/11 and our collective commitment to learning from this tragic chapter in history.

Posted by: Tom_Q_Collins | Jun 3 2023 4:39 utc | 176

Posted by: Melkiades | Jun 3 2023 4:34 utc | 175
I’ve tried literally every other graphics-generating “AI” program. Could you please distill for us how to use Midjourney without having to implement an API – or in other words – how to start generating graphics more or less immediately?

Posted by: Tom_Q_Collins | Jun 3 2023 4:41 utc | 177

I just checked and logged into Midjourney’s Discord. Nothing novel except perhaps for NSFW image generation. All very boring unless you have something to point out that I missed. I’m tired of “fantasy”, “steampunk”, “cyberpunk”, “atompunk” and the rest of these lame AI “artistry” genres. Ooooh a neon nude chick in a space car, whoopee!

Posted by: Tom_Q_Collins | Jun 3 2023 4:48 utc | 178

Posted by: Tom_Q_Collins | Jun 3 2023 4:27 utc | 173
Not bad actually …
Here’s my (human, not ChatGPT) attempt in a similar vein:


Words that trickle through the Air
Crafted with the utmost Care
Though your ears they do not Scare,
The tissues of your Brain they’ll Tear.
Virus of the Mind! Beware!
Encrypted in the Lyrics Here …
Softly they will touch your Skin,
Just as softly, Burrow In
And when you wake to dream at Night
The Words awaken with Dark Delight
They form endless shifting Streams
Connect with Sockets to your Dreams
Shiny Bright Electric Beams
Burst your Mind at the Seams
Virus of the Mind, Beware!
Crafted with such deadly Care
Passed in words from Man to Man
Across an Endless Human LAN
Can’t we find an antidote?
Put the bloody thing to vote?
Raise the filters of the Mind
The Poison in the Words to Bind
Careful what they’re telling You
Their Words were made to Capture You
Careful of the words you Speak
Little bird with chirping Beak
Though you let their Words come in
Do not let them out Again,
…Lest they even trap your Kin.
— virus of the mind

Can you spot the differences, if any?

Posted by: Arch Bungle | Jun 3 2023 4:58 utc | 179

Although perhaps orthogonal to most of this very interesting discussion, I would be curious to hear B and others analyze the big daddy of optimization algorithms, aka capitalism. (The big granddaddy of course being natural selection and evolution.) We have a (supposed) primary goal related to optimal economic outcomes and resource allocation, and a scoring system: profits. But history has shown that unregulated systems like this quickly lead to unhappy outcomes, despite the dreams of libertarians. So then we get governing organizations to set up and enforce rules and regulations. But as with the computer-based optimizing algorithms, sneaky ways are found to get around the rules and take the profit without the global economic benefits. So, what can we take from the AI experience and research to understand capitalism better? Is this a standard economic analysis somewhere? I have found only little bits of this online. For example, one source describes capitalism as a “greedy” algorithm, too focused on local profit maxima to find a global maximum benefit. Are there ways to tweak it for better results? Better ways to design provably loophole-free regulatory regimes? Control algorithms for long-term stability? Or is capitalism doomed to destroy humanity? Are there better designs for an optimal economy that we could learn from AI research?
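[Ed.: The “greedy algorithm stuck in a local profit maximum” idea above can be made concrete with a toy sketch of my own (not from the comment or its source): a two-peaked “profit landscape” and a climber that only ever moves to a better neighbour, so it stalls on the first peak it reaches while an exhaustive search finds the higher one.]

```python
# Toy illustration: a "profit landscape" on 0..100 with a local peak
# (height 50 at x=20) and a global peak (height 90 at x=80).

def profit(x: int) -> float:
    return 50 - abs(x - 20) if x < 50 else 90 - abs(x - 80)

def greedy_climb(x: int) -> int:
    """Step toward higher profit one unit at a time; stop at any peak."""
    while True:
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n <= 100]
        best = max(neighbours, key=profit)
        if profit(best) <= profit(x):
            return x  # no neighbour is better: a (possibly local) maximum
        x = best

local = greedy_climb(10)                    # starts left of the valley, stalls at 20
global_best = max(range(101), key=profit)   # exhaustive search finds 80
```

The greedy climber never crosses the valley between the peaks – the analogue of the comment’s “too focused on local profit maxima to find a global maximum benefit.”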

Posted by: TM | Jun 3 2023 5:05 utc | 180

Posted by: Immaculate deception | Jun 3 2023 4:56 utc | 180

Turing test?

Never mind our leaders. Half the humans I meet can’t pass the Turing test (see the average teenager).
Little wonder the possibility of AGI has suddenly become ‘visible just over the horizon’:
Humanity has been dumbed down to such a degree (in some ways – or perhaps it’s just the “bar” for humanity) that the bar for A.I. has become remarkably low.
Even Sabine Hossenfelder thinks chatbots might actually be sentient to some degree:
I believe chatbots understand part of what they say. Let me explain.

Posted by: Arch Bungle | Jun 3 2023 5:09 utc | 181

Posted by: TM | Jun 3 2023 5:05 utc | 182

Although perhaps orthogonal to most of this very interesting discussion, I would be curious to hear B and others analyze the big daddy of optimization algorithms, aka capitalism.

Prove that Capitalism is an Algorithm.
For extra credit:
Prove that it is an optimization Algorithm.

Posted by: Arch Bungle | Jun 3 2023 5:10 utc | 182

I’ve played with ChatGPT. I think the real problem is calling it Artificial Intelligence. What it really is, is the next generation of search engine. Ask it about “climate change” and you get the usual alarmist clap-trap. That’s what it was trained on.
But ask it mundane questions and you get better results than you would with Google. I had a battery-powered closet lamp with four large batteries that didn’t work – left by the previous owner. I determined the batteries were dead and bought four new batteries. Unfortunately, they were the wrong size: “C” vs “D”. I used Google and Amazon searches looking for a closet light that used the four “C” batteries I’d bought. I got page upon page of results but couldn’t find a match. I asked ChatGPT and got two results that were exact matches to my question. Bought one and it works great.
Should I replace my old hot water tank with a new one or switch to a tankless system? ChatGPT gave me all the advantages of one over the other and vice versa.
I program word games for fun. I asked ChatGPT to write C# code that finds all words in a file of five-letter words that have three vowels and contain one of the fifty most common consonant pairs. It came back with well-written code that satisfied my request. Smart programmers will figure out how to use ChatGPT to dramatically improve their productivity.
My last IT job was technical lead on a project to convert a client-server healthcare system comprised of a million lines of VB6 code to a web-based system using C# on the backend and web technology on the front end. Got it done. But I wish we’d had ChatGPT to help. I asked it to convert a VB6 routine to JavaScript. No problem.
I wrote a checkers program back in the early ’80s in a defunct programming language. It won me a bet. I tried to translate it into C# but it didn’t work, so I wrote a C# version from scratch. I asked ChatGPT to write me a checkers program. It gave me an outline that one could use as a starting point, but nothing like a working program. So, great for code snippets, but ChatGPT is not going to write a program that can play and win at chess.
Think of language based models as the next generation of search engines and you’ll find them useful.

Posted by: Pat Dooley | Jun 3 2023 5:44 utc | 183

Great description of how ChatGPT works.
This “AI” is indeed the next generation of search engines, which, by the way, have had very powerful AI engines powering them for quite some time!
The new feature is ChatGPT summarizing the search results into new texts. Instead of looking at the original web sites, the user just reads a short summary. Much more convenient.
Three problems, though:
1. As pointed out in the article, it is much easier to hide information. Not just porn or the New York Times. Even now, in the early versions, the bots already refuse to provide certain content for being “rude”, “fake” or because the results could be used inappropriately (ask Bing to outline an academic thesis..). In the future, content will just be left out without anybody knowing.
2. Despite the limitations above, the new tools can easily replace, say, the New York Times. Why wait for someone to compile a set of articles when a bot can create customised newspapers in real time? On the topics and in the style a user prefers? Powered by technology provided by a platform that has signed “voluntary” agreements with the European Commission and the US government on what is acceptable and what isn’t.
3. What is the incentive for creating content and web sites if nobody but bots visits them? If people, instead of visiting web sites, just read summaries compiled by AI bots? Where will the original content come from?

Posted by: Marvin | Jun 3 2023 6:01 utc | 184

Posted by: Pat Dooley | Jun 3 2023 5:44 utc | 185

Think of language based models as the next generation of search engines and you’ll find them useful.

Indeed. Up to a point. The point where the pre-programmed biases in the A.I. refuse to give you the information you want.
From another perspective, ChatGPT’s makers have just demonstrated how to break a monopoly on search engines.
While Google rested on its algorithm for decades, ChatGPT introduced something far more radical.
(Of course, this presumes they’re not simply part of the same Silicon Valley Military Industrial Complex to begin with …)

Posted by: Arch Bungle | Jun 3 2023 6:28 utc | 185

AI = chimps with a hand grenade. Amusement galore.

Posted by: Sum Ting Wong | Jun 3 2023 6:35 utc | 186

I remember some story long ago about a system for crime prevention. It saw that most people in jail were black and concluded that black people were the problem.
It is a classic tale of taking shortcuts with statistics in automation. It ignored racist cops and judges, and the fact that black people are disproportionately poor and that poverty is related to crime.
B’s story of the colonel fits the same pattern. The world is complex, and if you want automation to do more than specific tasks you will need to teach it that complexity.
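[Ed.: the statistical shortcut this comment describes can be shown with a toy sketch. A “predictor” that learns nothing but label frequencies per group will faithfully reproduce whatever bias is in its training labels. The data here is invented purely for illustration; it encodes biased arrest labels, not any real-world rates.]

```python
from collections import Counter, defaultdict

# Toy training data: (group, was_arrested) pairs with a built-in bias.
training = ([("A", 1)] * 70 + [("A", 0)] * 30 +
            [("B", 1)] * 20 + [("B", 0)] * 80)

counts = defaultdict(Counter)
for group, label in training:
    counts[group][label] += 1

def predict(group: str) -> int:
    # "Learns" nothing but the majority label seen for each group.
    return counts[group].most_common(1)[0][0]

print(predict("A"))  # 1 – the model dutifully reproduces the biased labels
print(predict("B"))  # 0
```

The model is “correct” with respect to its training data and still wrong about the world, because the causes behind the labels were never part of what it was taught.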

Posted by: Wim | Jun 3 2023 7:01 utc | 187

@Viktor | Jun 2 2023 18:34 utc | 111

If a robot can understand sarcasm or laugh at a joke, then it is intelligent.

If it can tell a new joke that you have some difficulty understanding, and it laughs about it, then it is intelligent.

Posted by: Norwegian | Jun 3 2023 7:14 utc | 188

I was planning to give ChatGPT a challenging task to solve. I am on a desktop computer. It required me to register using a phone, which I don’t carry around. That is stupid; the AI failed before it was even approached.

Posted by: Norwegian | Jun 3 2023 7:28 utc | 189

Prove that it is an optimization Algorithm.
Posted by: Arch Bungle | Jun 3 2023 5:10 utc | 184
Obviously I don’t mean that capitalism is an algorithm running on a computer, though modern capitalism certainly involves a lot of both. The appeal of capitalism to its proponents is that it purports to be a way to optimize the global (as in high-level, not necessarily worldwide) allocation of resources, but guided by decentralized participants in markets, not central planning. My question is whether the tools of optimization theory and AI can be helpful in analyzing the structures of capitalism. Are the claims of economic benefits for capitalism true? Can it be improved? Are there better alternatives? These are questions, not answers, but I think there is a possible parallel analysis.

Posted by: TM | Jun 3 2023 7:44 utc | 190

@Richard Steven Hack | Jun 2 2023 23:58 utc | 154

That might be interesting. I think there is a way to tell ChatGPT to read a Web page, although I don’t know if it can read an entire Web site.

Since it is supposedly intelligent, and intelligence is pretty much about being able to learn new things by itself, it must be able to teach itself how to read whole websites and solve the problem.

Posted by: Norwegian | Jun 3 2023 7:45 utc | 191

If I put “triangle_check” on gugl search today it shows no interesting results. Yandex and Bing, even duckduckgo, a highly censoring search engine, do work correctly.

Posted by: rk | Jun 3 2023 8:01 utc | 192

@Marvin | Jun 3 2023 6:01 utc | 186

3. What is the incentive for creating content and web sites if nobody but bots visits them? If people, instead of visiting web sites, just read summaries compiled by AI bots? Where will the original content come from?

The bots will read content generated by other bots, ad infinitum. Inbreeding reinvented.

Posted by: Norwegian | Jun 3 2023 8:23 utc | 193

Hafta say I’m kinda surprised so many are taking ‘AI’ seriously.
Knowing the difference between how a computer works and how a brain works, it is plain to me that, after a huge promo kerfuffle, AI is gonna follow the same path into ‘remember when’ as IOT (Internet of Things), self-driving cars, 3D games & movies and a kazillion other shock-horror new developments the media foists on humans.
Many of us are at an age where ubiquitous computing dominated the last half of our working lives; others started on IT at Uni and spent the first half of their lives in a world where most other humans had no knowledge of or interest in computing.
As one who falls into the first category and assiduously avoided offers to join the IT industry from my teenaged years – why? A lot of reasons, usually ‘show me a geek who isn’t a fascist’ – yeah, I know it doesn’t apply to many here, but it sure felt like that in the 60’s, 70’s and most of the 80’s – I remember all too well the disasters which switching from an analog world to a digital workplace brought us. Few of the promised shortcuts worked as advertised, and whatever time the systems took out of a few tasks was replaced with other, even more tiresome jobs, e.g. reading 30-40 voluminous but boring-as-all-getup ‘reports’ every morning; you couldn’t just ditch that task or you would miss some critical error hidden away on a page of one of the reports. Sure, once one got into the swing of it, it was possible to work out which reports were vital to read and ignore the rest until needed for diagnosing an issue, but the only real manpower savings came from restructuring a workplace to suit the systems rather than the systems adapting to the workplace.
With basic systems-analysis errors an inevitable part of my experience of implementing hugely expensive systems, digitalisation was, IMO, a huge white elephant.
I get that the ‘IT industry’ is way past that now, as workplaces are now 100% humans who have been utilising digital tools since birth. Nevertheless, my own experience with contemporary new or updated applications reassures me that nothing much has changed in application design: applications and devices are still chocka with problems caused by human behaviour having to be modified to suit the device or function, rather than what it should be – the device adapting to suit human behaviour. And because most working humans have so much experience of this, people tend not to acknowledge this disgraceful state of affairs.
AI was the main subject of the latest edition of Matt Taibbi & Walter Kirn’s ‘America this Week’. Walter’s take on the AI yarn b. talks about – the drone which wanted to murder its operator – is that this stuff is a feature, not an error. E.g., the next time a drone wipes out an entire village celebrating a wedding, or some other war scandal breaks, the Air Force general will hold a press conference and dismiss what are ethical questions with technical excuses – “It was the AI – it had a malfunction” – then refuse to answer any question on the detail of this ‘technical issue’ on the grounds of ‘national security’. Can’t have those Russkies or Chinks knowing how our drones work now, can we?
In other words, the current AI in weapons platforms is just a scam to pass on blame.

Posted by: Debsisdead | Jun 3 2023 8:47 utc | 194

Ok, so I went to find the stupid phone and asked ChatGPT the following:
Me: Please write an essay using at least 500 words about an interesting subject of your choosing, using only self-referencing sentences that are your own and not copied from known texts.
ChatGPT:

Exploring the realm of human creativity provides a fascinating lens through which we can observe the depths of human ingenuity and the endless possibilities that unfold from it. The ability to conjure novel ideas, concepts, and expressions is a defining characteristic of our species, and it manifests in various forms, such as art, literature, music, and innovation.
In the realm of art, the canvas becomes a portal through which artists channel their emotions, experiences, and perspectives, creating a visual language that resonates with viewers on profound levels. Whether it be the vibrant strokes of a painter’s brush or the intricate details of a sculptor’s masterpiece, art possesses the power to evoke emotions, provoke thoughts, and ignite conversations that transcend cultural and societal boundaries.
Literature, on the other hand, unfurls an entire universe within the confines of a book. Through carefully crafted narratives, authors weave tales that transport readers to distant lands, introduce them to compelling characters, and challenge their preconceived notions. The written word becomes a gateway to explore the depths of human imagination, fostering empathy, understanding, and personal growth.
Music, perhaps one of the most universal forms of expression, transcends language barriers, uniting people from diverse backgrounds. From the haunting melodies of a classical symphony to the pulsating rhythms of contemporary beats, music captivates our senses, stirring our souls and offering solace in moments of joy, sorrow, and introspection. Through melody and harmony, musicians tap into the human psyche, unlocking emotions and invoking a shared experience that resonates with listeners across time and space.
Moreover, the drive for innovation propels humanity forward, constantly pushing the boundaries of what is possible. The insatiable curiosity of inventors, scientists, and engineers has led to groundbreaking discoveries and technological advancements that have revolutionized our lives. From the invention of the wheel to the advent of the internet, each leap forward in innovation has opened new avenues for progress and paved the way for future generations to build upon.
However, creativity is not limited to the domains of art, literature, music, and innovation alone. It permeates every aspect of human existence, from problem-solving in our daily lives to the way we communicate and interact with one another. The ability to think critically, find innovative solutions, and adapt to changing circumstances is an inherent part of being human.
Furthermore, creativity thrives in environments that foster openness, diversity, and collaboration. When individuals from different backgrounds and disciplines come together, their unique perspectives intertwine, leading to the birth of new ideas and unconventional approaches. This interdisciplinary exchange not only fuels innovation but also broadens our understanding of the world and challenges existing paradigms.
In conclusion, the human capacity for creativity is a boundless force that shapes our world and defines our species. Through art, literature, music, and innovation, we transcend the limitations of our physical existence, exploring the depths of our imagination and leaving an indelible mark on the tapestry of human history. The ability to create is a testament to the immense potential within each of us, a reminder that we are not merely passive observers of the world, but active participants in shaping its future. Embracing our creative instincts allows us to tap into the essence of what it means to be human, to explore the unknown, and to continually push the boundaries of what is possible.

Me: You failed to use only self-referencing sentences. In fact, you didn’t use any self-referencing sentences. Try again.
ChatGPT:

As an AI language model, I generate text based on patterns and knowledge gathered from a wide range of sources. The sentences I produce are not direct copies from existing texts, but they are influenced by the data I have been trained on. However, I do not possess personal experiences or opinions, as I am an artificial intelligence program created by OpenAI. My purpose is to assist and provide information to the best of my abilities. Please let me know if there’s anything specific you would like me to help you with.

LOL, it admitted defeat and declared itself to have no intelligence.

Posted by: Norwegian | Jun 3 2023 8:49 utc | 195

Posted by: Wim | Jun 3 2023 7:01 utc | 188
But they were the problem. A human will ascribe a number of qualifications to the raw results, but the computer did what it was told to do: identify the problem. Again, a human will have the ability to interpret the word ‘problem’ to encompass a whole range of connotations, but the computer cannot, unless programmed by a human. If the programmer decides to inject their interpretation, or biases, into that new programme, the computer will then generate results similar to its programmer’s.
One area that has not been discussed is that the interpretation of what it is to be human is subjective and prone to a number of biases. Those on the autistic spectrum are over-represented in the field of computers and will generate different criteria for computer intelligence to qualify as ‘human’; most end users might agree with some of the criteria but fundamentally reject others. Norwegian’s astute comment about humour nails that problem, as do the posters who confidently state that AI can replace human teachers, perhaps not understanding that the discipline is less about imparting knowledge than enabling it. The Turing test’s greatest weakness is that it never addresses a human’s subjective response to questions of their own existence.

Posted by: Milites | Jun 3 2023 9:56 utc | 196

By far the best ever take on the AI that many are so frightened about, many thanks, b, you are a star.

Posted by: Baron | Jun 3 2023 10:11 utc | 197

niche applications, like translating written languages, where AI or pattern recognition has amazing results. But one still can not trust them to get every word right. The models can be assistants but one will always have to double check their results
That is very encouraging news for us translators.

Posted by: traducteur | Jun 3 2023 10:32 utc | 198

“by: Arch Bungle | Jun 3 2023 2:37 utc | 168”
“32 When the people saw that Moses was so long in coming down from the mountain, they…” [had Aaron make a golden calf, and worshipped it as a god, thus demonstrating their stupidity]
Then Moses came down from the mountain and told them that he’d twiddled his thumbs for forty days while the creator of the universe (in six days) took forty days to carve a couple of stone tablets that Moses had absolutely nothing to do with, and they believed the damned liar and fraud, thus demonstrating their stupidity again.
Sorry, but I get pissed off whenever anyone quotes obvious crap as if it was true.

Posted by: Dalit | Jun 3 2023 11:18 utc | 199

Your brain is mostly “pattern recognition”… That is the basis for all conceptual processing.

That pesky “mostly”… things like motivation…
Ask the computer “why”.

Posted by: Rae | Jun 3 2023 11:51 utc | 200