Monday, January 02, 2023

Yet Another Crappy Analysis of AI

Whenever some new breakthrough in AI (artificial intelligence) is made, you can depend on ignorant naysayers to break out the usual clichés.

We will be told that this program or that one isn't "really" creative (but no testable definition of "creative" will be given). We will be told that some new system doesn't "really" understand what you are saying (but no testable definition of "understand" will be forthcoming). We will be told that computers cannot have "minds" (but no test for having or not having a "mind" will be provided). We will be told that computers aren't "really" reading and understanding what they translate, even though no definition of what "really reading and understanding" means will be given.

So it doesn't surprise me at all that there has been an enormous amount of ignorant nonsense written about ChatGPT and related models. This dreck from Gary N. Smith, a professor of economics at Pomona College, is pretty typical of the genre.

Smith claims that "while GPT-3 can string words together in convincing ways, it has no idea what the words mean". Yet he does not provide any testable definition of "knowing what words mean". Does it really have no idea? I conducted a little experiment of my own, by asking ChatGPT, "What is a cow?" And here is the response:

A cow is a large, hoofed animal that is often kept on a farm for its milk, meat, and other products. 
Cows are part of the bovine family and are known for their docile nature and distinctive mooing 
sound. They are typically reddish-brown or black in color and have long horns on their head. 
Cows are used for a variety of purposes, including the production of milk, cheese, butter, and 
other dairy products. They are also raised for meat, leather, and other products. Cows are 
important sources of food and other resources for people around the world.

Does that really look to you like ChatGPT doesn't know what the word "cow" means?

So ChatGPT does indeed seem to be able to define words. But can it reason correctly about them? Next, I asked, "Can a cow fly over the moon?" and ChatGPT responded as follows:

No, cows are not able to fly over the moon or any other celestial body. Cows are terrestrial 
animals and do not have the ability to fly through the air. They are able to walk and run on 
land, but they do not have wings or other body parts that allow them to fly. In fact, most 
animals are not able to fly, and those that are able to fly typically have specific adaptations, 
such as wings, that allow them to do so.

I was not satisfied with this, so I followed up: "Then how did the cow fly over the moon in 'Hey Diddle Diddle' if it is impossible?" And here is how ChatGPT responded:

"Hey Diddle Diddle" is a popular nursery rhyme that has been passed down through the 
generations. It is a simple, nonsensical poem that is meant to be fun and entertaining, 
rather than literally true. The rhyme tells the story of a cow jumping over the moon, 
but it is not intended to be taken literally. Cows do not have the ability to fly, and 
the rhyme is not meant to be taken as a factual account of events. It is simply a 
lighthearted, playful rhyme that has been enjoyed by children and adults for many years.

Seems pretty accurate to me.

Smith justifies his claim with an example of a conversation about how many pennies there are in each group if you start with 47 pennies, throw away 44, and divide the remainder into three equal-size groups. In his example, ChatGPT gave the wrong answer, with incorrect reasoning to match. But that conversation isn't any evidence at all that ChatGPT didn't know the meaning of the words "pennies" or "group" or "47". If anything, it shows that ChatGPT was unable to do logical problem-solving in that particular domain; but that's a completely separate issue from knowing the meaning of words.

There's nothing easier than taking a chatbot and getting it to produce something nonsensical. After that, you can just claim it doesn't "really" understand English. But human speakers of English produce a flood of nonsense every day, as anyone reading Salon can verify. Does that mean that the writers don't "really" understand English?

I tried Smith's example, and here is what I got:

If you start with 47 pennies and throw away 44 of them, you are left with 47-44=3 pennies.
If you divide these 3 pennies into three groups, each group will have 3/3=1 penny.
So each group will have 1 penny.

Looks pretty good to me! So can we conclude that ChatGPT really does understand the meaning of words now?
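Just to spell out the arithmetic being tested, here is the same computation in plain Python (a trivial sanity check, nothing more):

# The pennies problem from Smith's example, checked in plain Python.
pennies_left = 47 - 44      # start with 47 pennies, throw away 44
groups = 3                  # divide the remainder into three groups
per_group = pennies_left // groups
print(pennies_left, per_group)  # prints "3 1", matching ChatGPT's answer above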

I wish the people commenting on AI would demonstrate a little more seriousness about what it means to "really" understand utterances. Or any realization that "understanding" is not a binary; one can have "understanding" to a greater or lesser degree. Does a five-year-old understand the nuances of a word like "duty" the way a retired member of the military does?

I think the right way to view ChatGPT is that it does exhibit some understanding of what words mean, as evidenced by its ability to define them and use them correctly in context in many conversations. That doesn't mean that everything ChatGPT says is correct, nor that it is free of mistaken beliefs.

I am sure we will see many more crappy pieces like Smith's in the near future.

Friday, September 10, 2021

The Robert Marks Evasion: 7-year anniversary

Well, it seems so soon, but the 7-year anniversary of Robert Marks's complete failure to provide any evidence for his claims about information is upon us.

You may remember it was back on September 9, 2014 that I first asked the illustrious Marks for some calculation justifying the following claim of his:

  Dear Prof. Marks:

Here

http://humanevents.com/2014/08/19/biological-information-new-perspectives-from-intelligent-design/

you claimed

"we all agree that a picture of Mount Rushmore with the busts of four US Presidents contains more information than a picture of Mount Fuji".

I asked you there for the details of the calculation that would show this, but you did not reply on that page, so I'm asking again.

Could you please provide me with your calculation to justify this claim?

Regards,

Jeffrey Shallit

I asked again after 3 months, 6 months, 1 year, 2 years, 3 years, 4 years, 5 years, 6 years, and now 7 years.

Still no response.

That is the nature of intelligent design creationism. Lots of wild claims, lots of bluster, but no actual evidence.

Science deserves better than this kind of nonsense. A lot better.

Saturday, February 27, 2021

The Fake "Science" of Intelligent Design

The fake "science" of intelligent design claims to provide a reliable methodology for determining if something was "natural" or created through the intervention of an intelligent agent.

Yet somehow its proponents never actually apply it to cases of genuine interest, like this one.

Why is that?

Thursday, September 10, 2020

Robert Marks: The Six-Year Anniversary

Six years ago, the illustrious Baylor professor Robert Marks II made the following claim: "we all agree that a picture of Mount Rushmore with the busts of four US Presidents contains more information than a picture of Mount Fuji".

I didn't agree, so I asked the illustrious Marks for a calculation or other rationale supporting this claim.

After three months, no reply. So I asked again.

After six months, no reply. So I asked again.

After one year, no reply. So I asked again.

After two years, no reply. So I asked again.

After three years, no reply. So I asked again.

After four years, no reply. So I asked again.

After five years, no reply. So I asked again.

Now it's been SIX years. I asked again. Still no reply from the illustrious Marks.

This is typical behavior for advocates of intelligent design. They do not feel any scholarly obligation to produce evidence for their claims. That's one way you know that intelligent design is pseudoscience.
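For what it's worth, attempting such a calculation isn't even hard. One crude but testable proxy for the information content of an image is its size after lossless compression, which gives an upper bound on its Kolmogorov complexity. It is certainly not the only possible measure, and Marks has never said which measure he had in mind; that is exactly the problem. Here is a minimal sketch in Python, with hypothetical file names:

import gzip

def compressed_size(path):
    """Upper-bound a file's Kolmogorov complexity by its gzip-compressed size."""
    with open(path, "rb") as f:
        raw = f.read()
    return len(gzip.compress(raw, compresslevel=9))

# Hypothetical raw bitmaps of the two scenes; they would need comparable
# resolution and format for the comparison to mean anything.
for name in ["rushmore.bmp", "fuji.bmp"]:
    print(name, compressed_size(name))

Whether Rushmore actually beats Fuji under this measure depends entirely on the photographs chosen, which is one more reason the claim needs an actual calculation behind it.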

Friday, March 13, 2020

Yet More Unsubstantiated Claims by Egnor


I realize that most people have better things to do currently than read the ravings of a creationist neurosurgeon, but Michael Egnor is at it again, making completely unsubstantiated claims about the minds of people and animals.

Here he claims that "abstract thought (as classical philosophers pointed out) is inherently an immaterial ability and thus it cannot arise from the brain or from any material organ". Actually, there's no evidence at all for this claim. As far as we know, abstract thought is no different from any kind of brain activity, carried out by our neurons and synapses. And if it does not "arise from the brain", what could it possibly arise from?

Abstract reasoning is actually not significantly different from any other kind of reasoning, a point of view espoused for the specific case of mathematical reasoning by George Lakoff and Rafael Núñez in their book Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being.

Egnor claims that "Mental activity always has meaning—every thought is about something. Computation always lacks meaning in itself." This is a classic blunder, made by people who have little understanding of the nature of computation. Of course computations have meaning. When we sum the infinite series 1 + 1/4 + 1/9 + ... with a program such as Maple (by typing sum(1/n^2, n=1..infinity);), who can reasonably deny that the answer π²/6 it produces has meaning? This classic error was debunked as long ago as 1843, when Ada Lovelace wrote, "Many persons who are not conversant with mathematical studies, imagine that because the business of the engine is to give its results in numerical notation, the nature of its processes must consequently be arithmetical and numerical, rather than algebraical and analytical. This is an error. The engine can arrange and combine its numerical quantities exactly as if they were letters or any other general symbols; and in fact it might bring out its results in algebraical notation, were provisions made accordingly." That is an abstract example, but if you want examples related to the real world, just consider the data collected and processed to produce weather predictions. If those computations had no meaning, how is it that short-term weather forecasts are so accurate?
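The Maple one-liner is easy to reproduce. Here is the same computation sketched in Python with the sympy library (my substitution for Maple, for readers who want to try it themselves):

# Summing the series 1 + 1/4 + 1/9 + ... symbolically with sympy,
# a stand-in for the Maple command sum(1/n^2, n=1..infinity);
from sympy import Sum, symbols, pi, oo, simplify

n = symbols('n', integer=True, positive=True)
total = Sum(1/n**2, (n, 1, oo)).doit()
print(total)                      # pi**2/6
print(simplify(total - pi**2/6))  # 0, confirming the closed form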

Egnor goes on to justify his bogus claim by saying, "A word processing program doesn't care about the opinion that you're expressing when you use it." But what does this have to do with anything? A secretary who types up letters also probably doesn't care about the content of the letters the boss dictates; does this mean he or she has no mind? How did we get from "meaning" to "caring"? It's a huge non sequitur that Egnor doesn't bother to explain.

In another screed, Egnor repeats for the n'th time his bogus claims about the minds of animals. He writes, "No animal (except man) can do statistics, because statistical reasoning is abstract and only human beings are capable of abstract thought." But, as usual, he ignores the evidence against his claim, and provides not a shred of evidence in favor of it. All he does is assert. (Three links: one, two, three. I can produce many more.)

He closes with this, which is one of the least self-aware claims I've ever seen: "Only human beings can reason abstractly because only human beings have rational souls. Rational souls have an immaterial aspect—a spiritual aspect, because we are created in the Image of our Creator, who is a Spirit. That's a scientific inference."

No, that's just religious babble.

Monday, October 14, 2019

Robert George on Mill and Newman


Every so often, the Catholic Church goes through the bizarre process of elevating one of its adherents to the status of saint. This absurd spectacle demands that the wannabe saint be responsible for at least two miracles. Now it's the turn of John Henry Newman (1801-1890), a British theologian. For some inexplicable reason, although Newman died almost 130 years ago, it's only quite recently that prayers invoking his name have had the desired effect.

Jack Sullivan supposedly had back pain, and he claims to have been cured after praying to Newman. Well, it's not like spontaneous remission of back pain ever happens, right? It must have been a miracle!

Melissa Villalobos supposedly had internal bleeding while pregnant. She also prayed to Newman, and claimed to be healed. It must have been a miracle! No one could possibly come up with any other explanation, right?

Recently on Twitter, Princeton professor Robert George celebrated this momentous event by recalling his paper on John Stuart Mill and John Henry Newman. I have to admit, I am not usually in the habit of reading papers published in obscure religious journals, but I was intrigued. So I read it.

That was a mistake.

It is pretty bad. Here, very briefly, are just a few of the things wrong with it: it's sloppily proofread; it uses private redefinitions of basic terms; it doesn't so much argue as simply make assertions; it's full of bafflegab; it doesn't adequately support its main contention; and it fails to be a scholarly contribution.

Sloppy proofreading: I'll just cite two instances (there are others): "defenses f freedom" in the very first paragraph! Then, later on, "neither to each other not to some common substance" ("not" instead of "nor"). Did anyone -- author or publisher -- make even the most cursory effort here?

Makes assertions instead of argues: "Christian philosophical anthropology ... has proved to be far more plausible and reliable than the alternative that Mill, quite uncritically, accepted". No actual argument or citation provided.

Private redefinitions of basic terms: religion is defined as "the active quest for spiritual truth and the conscientious effort to live with integrity and authenticity in line with one’s best judgments regarding the ultimate sources of meaning and value, and to fulfill one’s obligations (spiritual and moral) in both the public and private dimensions of one's life". A dishonest rhetorical ploy: define "religion" so broadly it encompasses nearly every action by an ethical person.

Bafflegab: top, p. 42: George uses 17 lines to make the trivial observation that happiness and human flourishing are functions of multiple variables with no obvious way to compare or weight them, in order to achieve a maximizing outcome everyone will agree with. Then why not just say that?

More bafflegab: "the dignity of human persons" (p. 44). "Dignity" is the ultimate weasel word; what you regard as essential to human dignity (e.g., forbidding contraception) I could just as easily regard as an example of human indignity.

Very few citations: e.g., George mentions criticism of Mill by Hart (but doesn't bother to give a citation). This is not scholarly behavior.

The main point is not adequately supported: Why exactly do duties automatically confer rights? Adherents of the religion of Christian Identity believe black people are subhuman and one has a duty to subjugate and exterminate them. How does this confer a right to do so?

Let's face it: the Christian account of morality is completely unsupported and incoherent. Some philosophers still have a medieval view of man's nature that is completely unmoored from modern discoveries in evolution and psychology.

Man is not a "rational creature" as George claims, and this absurdly bad essay is proof of that. In my field, junk as bad as this just could not get published in a reputable journal, and if it somehow did, everyone would laugh.

Sunday, September 22, 2019

Robert Marks - Five Years Later, Still No Answers!


Five years ago, the illustrious Baylor professor Robert Marks II made the following claim: "we all agree that a picture of Mount Rushmore with the busts of four US Presidents contains more information than a picture of Mount Fuji".

I didn't agree, so I asked the illustrious Marks for a calculation or other rationale supporting this claim.

After three months, no reply. So I asked again.

After six months, no reply. So I asked again.

After one year, no reply. So I asked again.

After two years, no reply. So I asked again.

After three years, no reply. So I asked again.

After four years, no reply. So I asked again.

Now it's been five years. I asked again. Still no reply.

This is typical behavior for advocates of intelligent design. They do not feel any scholarly obligation to produce evidence for their claims. That's one way you know that intelligent design is pseudoscience.

I wish some brave Baylor student would have the courage to ask Marks in one of his classes why he refuses to answer.