Cats are People Too

nice90sguy

Out To Lunch
Joined: May 15, 2022
Posts: 1,683

We laugh at AI's fuck-ups like this -- with relief. But we all know that pretty soon, we won't be able to laugh at AI any more.

By means of some pretty simple tricks -- using differential calculus familiar to seventeenth-century mathematicians, plus the availability of cheap computing hardware, plus easy access to masses of data, thanks to the Web -- people have managed to create systems that are intelligent.
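(To see how simple the trick really is, here's a toy sketch -- made-up numbers, nothing like a real model, just the core move: differentiate the error and nudge the weights downhill.)

```python
# Toy gradient descent: fit y = w*x by following the derivative of the
# squared error. Seventeenth-century calculus, modern training loop.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0     # initial guess for the weight
lr = 0.01   # learning rate (step size)

for step in range(1000):
    # d/dw of sum((w*x - y)^2) is sum(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # step downhill along the derivative

print(w)  # converges to roughly 2.0
```

Scale that up to billions of weights and you've got the whole "trick".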

The striking human-like output of generative AI, and its successes in solving hard problems like playing Go or predicting the weather or detecting cancer, have highlighted a confusion in many people's minds about what it means to be conscious, and what makes people unique. I'm all for taking humans down a peg, because I think our hubris, our sense of entitlement, is the most self-destructive thing that nature has ever produced.

Is AI really intelligent? Alan Turing's famous "Imitation Game" paper lays down the "functionalist" argument: "If it walks like a duck, quacks like a duck..." I agree with that, when it comes to intelligence. I ascribe intelligence to something by its intelligent behaviour -- not by looking inside it and seeing how it works, mechanically, or by checking whether it's biological or not.

But no matter how intelligent something is, does that make it conscious? I don't mean self-conscious, like humans and maybe some higher apes and dogs. I mean conscious, like a mouse, a sparrow. Or maybe even like an earthworm, or a bee. Or perhaps like a daisy. Certainly like a cat.

I'm talking about souls here. Unless you're still back there with René Descartes, you'll probably admit that, although we humans think of ourselves as extra-special, you'll still likely consider your pet cat as, if not a fully-fledged person, then certainly a being that should be afforded rights -- something that needs to be respected and treated fairly; it has wants, needs, fears and desires, and, most importantly, it's not "just" a mass of organic molecules strung together. And that a live cat needs to be treated differently from a dead cat.

Will a time come when we think of AI in this way? The answer is: absolutely NOT. Don't confuse intelligence with consciousness. Dumb animals are conscious, but not particularly intelligent. AI is intelligent, but not conscious. Consciousness won't suddenly emerge as AI reaches some magic threshold (the belief that it will is known as "emergentism").

That's not to say that we'll never be able to create artificial consciousness. What I'm saying is that current advances in AI aren't progressing towards it.

Creating artificially conscious machines isn't all that hard. All you need to do is create artificial life. At its core, a living being has the following characteristics: it heals itself and maintains its integrity; it distinguishes between self and not-self; it can produce more of itself (variation would be a bonus, allowing it to evolve and adapt to changing environments), and it makes an effort to do so. That effort requires energy, which it acquires from the outside world -- its "outside". I can easily imagine building a machine that can do all of that. It doesn't need to be smart, it just needs to be motivated to survive. And then, perhaps, though we'd have only as much respect for it as we might have for an earthworm, we'd have some respect for its right to exist.
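Something like this, as a back-of-envelope sketch (every name and number here is invented -- it's just the checklist above written down as code):

```python
import random

class Protolife:
    """A toy 'living' machine: it maintains itself, feeds, and reproduces.
    Nothing here is smart -- it's only motivated to persist."""

    def __init__(self, repair_rate=0.1):
        self.energy = 10.0
        self.integrity = 1.0            # 1.0 = intact, 0.0 = dead
        self.repair_rate = repair_rate  # heritable trait, can mutate

    def is_self(self, other):
        # Crudest possible self/not-self distinction: shared type.
        return isinstance(other, Protolife)

    def feed(self, amount):
        # Acquire energy from the outside world -- its "outside".
        self.energy += amount

    def live_one_tick(self):
        self.integrity -= 0.05          # wear and tear
        if self.energy >= 1.0:          # healing costs energy
            self.energy -= 1.0
            self.integrity = min(1.0, self.integrity + self.repair_rate)

    def reproduce(self):
        # Produce more of itself, with variation, if it can afford to.
        if self.energy < 5.0:
            return None
        self.energy -= 5.0
        child_rate = max(0.0, self.repair_rate + random.gauss(0, 0.01))
        return Protolife(repair_rate=child_rate)

# One tick of a tiny world with scarce energy; selection does the rest.
world = [Protolife()]
for critter in list(world):
    critter.feed(random.uniform(0.0, 2.0))
    critter.live_one_tick()
    child = critter.reproduce()
    if child:
        world.append(child)
```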

Of course, coupling such a machine with the kind of intelligence we're already seeing in AI would be an existential threat to us.

Remember all this next time you feel like kicking a cat, or are having trouble telling real from fake.
 
I don't think AI is intelligent. I think it's an incredibly complex set of structural rules and pattern matching and, yes, maths that apes one aspect of human behaviour unnervingly well.

Can AI paint something as amazing as a Rembrandt? Of course. But it won't be able to appreciate it. Living beings are capable of spontaneity, subtlety and completely non-linear behaviour "just because".

Some day a machine might become sentient. But I think that's still a long way off.
 
Mimicking intelligence isn't the same as being intelligent. I'd say that various experiments all over the net have proven it beyond any doubt (getting ChatGPT to promote fascism and such). If it looks like a duck, walks like a duck and quacks like a duck, that still doesn't make it a duck; it makes it something mimicking a duck. Even a duck is more than the sum of its obvious characteristics. And there are far more intelligent and complex animals out there.
While I do support advances in science, and thus advances in AI as well, I am very much bothered by the motivation behind it and its most likely uses. I very much agree with Noam Chomsky on this topic -- people aren't using ChatGPT to help them learn, they are using it to avoid learning. And it is the same with the chatbot's other applications.
I'd say that, before proceeding to the next step in AI evolution, some guidelines and rules need to be firmly set. I definitely wouldn't make the next version open to the public, for example.
 
The striking human-like output of generative AI, and its successes in solving hard problems like playing Go or predicting the weather or detecting cancer, have highlighted a confusion in many people's minds about what it means to be conscious, and what makes people unique.

Is generative AI actually being used for playing go? My understanding was that AlphaGo wasn't based on generative methods, though I could be in error there.

(I assume that we're implicitly talking about playing go well here; playing it badly is not a challenging problem!)

Apologies for nitpicking, but it's important to be clear that there's not just one "AI" in play. There are many, each built on different methods for different purposes, and what's true of one may not be true of another.
 
Some day a machine might become sentient. But I think that's still a long way off.
Intelligence has little or nothing to do with sentience, which is something I was trying to explain in my post -- an earthworm is sentient, and to a very limited extent, so is a single-celled organism like a paramecium, which can do all kinds of cute tricks to get out of tight spots.

Mimicking intelligence isn't the same as being intelligent
That's a very hard point to prove. And remember, the ability to mimic, at the level that generative AI is currently doing it, would certainly be called intelligent if a person was doing it.

AI can certainly become (and will probably become) weaponised -- making the job of despots, dictators, monopolies a lot easier. And I agree, it poses a real danger - for those reasons. That's not the topic here, there are threads that talk about whether AI is a Good Thing or a Bad Thing.

The point of my post is that a lot of people literally confuse (con-fuse) "conscious", "sentient", and "intelligent". They're very, very distinct from each other. "Creative" is another such word. Science has shied away from anthropomorphizing non-human behaviour, and I think that recent advances in AI have forced people to reappraise that tendency. Psychological terms like "creative" and "intelligent", when applied to AI, are appropriate. Of course we programmed them, and any "intents" and "purposes" they have are HUMAN purposes. They're still the product of human design, not God's (or of evolution's design, which is the God of atheists). But that may change, NOT by increasing the machine's cognitive power, but through introducing the ability for them to create their own "intents" ("purposes", "aims"). That's another nightmare scenario, of a very different sort from the current threat they pose to people's lives.
 
I'm not an expert on consciousness or AI. I'm a non-essentialist. I think of humans as just another type of animal. I suspect that both consciousness and intelligence are matters of degree, like most things, as opposed to being either/or. I don't believe people have souls, because I'm not sure it really means anything to say that something has a soul. A being reaches a certain level of self-awareness and reasoning ability, and then we say it has consciousness and intelligence. I suspect machines will get there, or, perhaps more likely, we'll get there together with them, and human intelligence and machine intelligence will somehow work together or perhaps fuse into a new being. But not being an expert, I have no idea how it will happen.
 
Is generative AI actually being used for playing go? My understanding was that AlphaGo wasn't based on generative methods, though I could be in error there.
It certainly isn't -- that was bad grammar in my post. Generative AI produced the video I posted.
 
I think of humans as just another type of animal
Me too, hence the thread title. I share 95% of my DNA with a chimpanzee. And he's never once thanked me for it.

If I had to sum up what makes humans special, I'd say it's our ability to form huge cooperative groups, bound together by culture (like national flags, K-Pop, and religions).
 
Me too, hence the thread title. I share 95% of my DNA with a chimpanzee. And he's never once thanked me for it.

If I had to sum up what makes humans special, I'd say it's our ability to form huge cooperative groups, bound together by culture (like national flags, K-Pop, and religions).

Should we replace the Turing test with the K-Pop test?
 
That's a very hard point to prove. And remember, the ability to mimic, at the level that generative AI is currently doing it, would certainly be called intelligent if a person was doing it.
I am not trying to draw a distinction between intelligence and mimicking intelligence in some deeply philosophical way (even though it wouldn't be hard to do); I was actually trying to be very concrete. Intelligence will act as intelligence no matter what. AI mimicking intelligence will always be limited by the constraints of its algorithms and the available examples from which it can learn, and there will always be cases where its "intelligence" falls apart and results in some absurdity - as many users have proven with ChatGPT. New and improved versions will probably be more resilient, but I believe there will always be a case where something mimicking intelligence will find its limit and will stop acting as true intelligence.
 
I don't think AI is intelligent. I think it's an incredibly complex set of structural rules and pattern matching and, yes, maths that apes one aspect of human behaviour unnervingly well.
Exactly. AI is a big computer that is programmed to act like a human, then it's brainwashed. I tried to discuss the Fourth Amendment with both ChatGPT and Bing, and both of them kept bringing the discussion around to equity. Equity (the word and the concept) is not part of the US Constitution; it's a talking point that has taken hold in recent years.
 
AI mimicking intelligence will always be limited by the constraints of its algorithms and the available examples from which it can learn, and there will always be cases where its "intelligence" falls apart and results in some absurdity - as many users have proven with ChatGPT. New and improved versions will probably be more resilient, but I believe there will always be a case where something mimicking intelligence will find its limit and will stop acting as true intelligence.
Ah, OK, that's a different point: you're saying that it's always going to reveal its weaknesses, like the beer advert. The thing is, something rather special has happened to AI in the last two years: unsupervised learning, from uncurated data, with no "labelling" -- no clues as to how to interpret that data. That means LOTS of data is available. Maybe it's practically impossible to train an AI to be much better (it's well known how much energy is used training big Transformers like ChatGPT), but certainly that's not a theoretical problem.
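(In case "no labelling" sounds mysterious: the text itself supplies the labels. Here's a toy version -- a real model is a neural network over tokens, not a lookup table, but the training pairs come from exactly this trick.)

```python
from collections import Counter, defaultdict

# Self-supervised learning in miniature: raw, uncurated text turns into
# (word -> next word) training pairs with no human labelling at all.
corpus = "the cat sat on the mat the cat ate the food".split()

model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1            # the data labels itself

def predict(word):
    # Return the most frequent word seen after `word`.
    return model[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat'
```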

And I'm sure you've seen the cute mistakes kids make while they're still learning about the world.
 
Not too sure about cats. Dogs however demonstrate sentience, though ours is vastly different to theirs. I'm tempted to think that sentience is actually a sliding scale based on something as prosaic as brain weight: the more neurons the greater the capacity. Certainly the artificial distinction between us and the rest of the animals is ancient history now.

The talk about a soul is misplaced... the last thing we need is to bring spirituality into the debate. Machine intelligence is very different to ours; the logical endpoint is two high intelligences with different skillsets. Why without an AI write erotica? What's the point for it?

Iain Banks had it right.
 
Talking about intelligence is like talking about energy; we know it exists, but we have no way of explaining it.

Who determined that there is no specific algorithm behind your intelligence? And if there is, it is probably less effective than ChatGPT's. :)
Whatever my and everyone else's "algorithm" is, the term 'intelligence' is based on it. ChatGPT's algorithm isn't trying to replicate it, it is trying to mimic the behavior resulting from our "algorithm" and that is why I said it would always be limited, one way or another. Effectiveness doesn't have anything to do with it. Human intelligence does not break down and stop working because of the nature of the problem (like in ChatGPT's case).
 
Why without an AI write erotica? What's the point for it?
I guess you meant "would", not "without". An AI would write erotica to make somebody some money, or simply because it's entertaining. It won't titillate the AI, or make the AI happier, of course. Which is my point. AI has no intents and purposes of its own. BUT it could still write well.
 
Not too sure about cats. Dogs however demonstrate sentience
That's the trouble with these words: a lot of people, including Jeremy Bentham (and me), are narrow in their definition. Taking the cue from its etymology, I think it's to do with feelings and suffering. You're using it to mean, I think, "self-awareness", which is another way people define it. Which is why I lumped dogs, but not cats, in with "self-conscious" beings in my first post here.
 
But no matter how intelligent something is, does that make it conscious? I don't mean self-conscious, like humans and maybe some higher apes and dogs. I mean conscious, like a mouse, a sparrow. Or maybe even like an earthworm, or a bee. Or perhaps like a daisy. Certainly like a cat.

I'm talking about souls here. Unless you're still back there with René Descartes, you'll probably admit that, although we humans think of ourselves as extra-special, you'll still likely consider your pet cat as, if not a fully-fledged person, then certainly a being that should be afforded rights -- something that needs to be respected and treated fairly; it has wants, needs, fears and desires, and, most importantly, it's not "just" a mass of organic molecules strung together. And that a live cat needs to be treated differently from a dead cat.

Will a time come when we think of AI in this way? The answer is: absolutely NOT. Don't confuse intelligence with consciousness. Dumb animals are conscious, but not particularly intelligent. AI is intelligent, but not conscious. Consciousness won't suddenly emerge as AI reaches some magic threshold (the belief that it will is known as "emergentism").

Thinking about this some more:

My views about the current generation of "AI" tools are probably no great secret to people who've followed the other threads here. I think they're interesting and impressive toys but heavily over-hyped, and a very long way away from producing anything with the versatility of human intelligence (or indeed feline intelligence).

also I'm salty just now because I just had to spend a couple of hours cleaning up a bunch of rambling nonsense written by somebody who I suspect delegated to GPT, don't get me started

But I'm really uneasy about arguments against the possibility of machine personhood that rely on "consciousness" or "souls", because there are folk out there who'd tell you that I don't have those things either. Some of them are religious wingnuts on social media (just today I learned that I'm an "empty vessel" piloted, coincidentally enough, by "Luciferian AI") and some of them are respectable folk who've edited prestigious academic journals, been awarded Guggenheim Fellowships, and been published in newspapers of record.

The fact that I have friendships, can hold down a job requiring technical expertise, discuss what it means to say that e^x is its own derivative, and articulate things like "ow, that hurts, don't do that again" is not enough to persuade them that I am a person who feels or thinks things. They have their own beliefs about what "consciousness" means and are comfortable in the belief that I lack those defining characteristics, and am therefore nothing more than a highly deceptive... meat robot, I guess?... no more capable of suffering than a burning Tesla is.

So if I can't prove my own "consciousness" to the satisfaction of other human beings, I'm reluctant to accept that quality as the benchmark by which we should be assessing machines.
 
Thanks for the thunderous laugh, my dear; I needed it.

Just 20 micrograms of acid are all it takes to "break" your intelligence down in glorious fashion (oh, I'd like to see that). And with just a little bit of MD, you can start hugging trees and locking jaws like a crocodile. Yes, that's how fragile your brain mechanism can be.

If I tap your knee, it will jerk. If I poke you with a pin, you will scream. If I insult you, you will be hurt. Your algorithm is ridiculous in its simplicity!

ChatGPT is like the Wright brothers' first plane that flew at a height of ten feet for half a minute. But this is only the beginning. Once they enhance their self-learning capabilities with quantum processors, they will render us redundant, which is quite stressful.
We are talking apples and oranges here. We are so much more than just our intelligence. The pain we feel and our body chemistry aren't part of our intelligence; they are part of us being human beings. If we went down the road you are talking about, we would find thousands of examples of humans not being intelligent at all. I am trying to separate human intelligence, which keeps working properly even when we choose to listen to our other urges instead, from the rest of what we are. Some of the things you mentioned directly influence the brain's ability to work at all, which would be like me saying ChatGPT is completely useless and broken because it stopped working when I pulled the plug on its server.
 
But this is only the beginning. Once they enhance their self-learning capabilities with quantum processors, they will render us redundant, which is quite stressful.

Of course, nobody knows the future. But I think this is unlikely. Humans find a way. At one point in the history of the USA, most people were farmers. Now 1.3% are. If you had told that to people in the 19th century, they would have predicted that the other 98.7% would be wandering around with nothing to do, but that's not so.

I suspect we will fuse in some way with machine intelligence rather than let it supplant us, because, well, what choice do we have?
 
Once they enhance their self-learning capabilities with quantum processors
My rebuttal: they will simply be faster at doing the same things. Superpositional solutions will not give them magical powers; they will simply shave many orders of magnitude off their computation time. They will still be rule-based machines. Extremely, impossibly fast machines -- but machines nonetheless.

No computer will ever disregard a result unless its ruleset tells it it may. A human will discard a result purely because they're pissed off at you.
 
Just look at the empirical facts: our “intelligence” is not only fragile, but it also doesn't prevent us from doing any possible nonsense and, many times, choosing what is harmful.

I also choose to believe that we are more than just matter, but that's only because the alternative is unbearable!

Humans are overrated.
I agree with all you said here, even though it is a completely different topic ;)
 