Use of AI as a reviewer.

He'd have been well advised to do what the small print below his bot advised. Current bots aren't expert systems and don't claim to be. But it's already claimed that Gemini is approaching the performance of an expert system, albeit still hallucinating on occasion, though rather less. Given the speed of evolution I can't imagine how they'll perform in five years.
Saw a short documentary on the trouble Sports Illustrated is in. Have to agree with the reporter. The scary thing is the general public's increasing inability to tell the difference.
 
Given the speed of evolution I can't imagine how they'll perform in five years.
There are others who doubt that language-based AI (LLMs) will improve much. In the meantime it has been established that language modelling gives you too little understanding of reality. Even Google's Gemini does not have an intrinsic understanding of time, space and physics, or of all the real, tangible properties of every object around us. What is a hard surface, what is soft? What are aspects like reflections, echoes, noise, taste, smoothness and roughness really? An AI can only fake knowledge of all these elementary facts by deriving them from some text that has been fed into it to train it.

I believe that the AI engines are the biggest scam in IT history, born out of Microsoft's desperation that Google dominates the search engine market.
 
An AI can only fake knowledge of all these elementary facts by deriving them from some text that has been fed into it to train it.
Agreed. Why it's called artificial intelligence I really don't know. There's zero "intelligence", merely a prediction of the most likely next word, done really fast. It's got no more intelligence than a game of Scrabble. It's not "faking knowledge"; it's a fast typewriter, is all it is.
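That "most likely next word" mechanism can be sketched with a toy bigram model (illustrative only: real LLMs use neural networks over subword tokens and huge corpora, not word counts over one sentence):

```python
from collections import Counter, defaultdict

# Toy "next most likely word" predictor: count which word follows which
# in a tiny corpus, then always pick the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Most common word seen after `word`; None if never seen.
    successors = follows.get(word)
    return successors.most_common(1)[0][0] if successors else None

print(predict("the"))  # "cat" (seen twice, vs "mat"/"fish" once each)
```

The point stands either way: nothing here "knows" what a cat is; it only knows which word tended to come next.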

And don't get me started on its drawing attempts. Damn thing can't count, for starters, and what's with all those freaky claws? What surprises me is the number of people who don't seem to see those anomalies - that makes me wonder how those people actually perceive reality.

A tiny bee brain is smarter, remembering scents, the direction of the sun, how to dance.
 
Agreed. Why it's called artificial intelligence I really don't know. There's zero "intelligence", merely a prediction of the most likely next word, done really fast. It's got no more intelligence than a game of Scrabble. It's not "faking knowledge"; it's a fast typewriter, is all it is.

And don't get me started on its drawing attempts. Damn thing can't count, for starters, and what's with all those freaky claws? What surprises me is the number of people who don't seem to see those anomalies - that makes me wonder how those people actually perceive reality.

A tiny bee brain is smarter, remembering scents, the direction of the sun, how to dance.
This is nothing new for many people who work in the language industry. I'm an editor in the professional services. About 25 years ago companies started touting machine translation as "nearly there, almost as good as a human translator." It wasn't. Now, 25 years later, with all the latest technology, it's still not.

MT is fine for one thing: understanding the general gist of a text. It's for information *consumption*, not information *production*.

The problem is that the companies developing MT tools put so much effort into marketing their products, and at a glance the quality is OK. Sentence by sentence, it looks fine. But these tools can't consider the text as a whole - not even a paragraph.

I've seen texts where the same word was translated three different ways in the space of four sentences. I've seen texts where whole blocks weren't translated, for whatever reason. I've seen texts where the tone and style shift from one sentence to the next.
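The first failure mode above, one source term rendered several different ways, is mechanically checkable. A simplified sketch (real translation QA tools use termbases and proper alignment; the segments and glossary here are invented for illustration):

```python
# Toy terminology-consistency check: given sentence-aligned source/target
# pairs and candidate renderings of a source term, flag any term that
# appears with more than one translation across the document.
from collections import defaultdict

# Hypothetical aligned segments (German -> English) for illustration.
segments = [
    ("Der Vertrag wurde unterzeichnet.", "The contract was signed."),
    ("Der Vertrag läuft zwei Jahre.", "The agreement runs for two years."),
    ("Der Vertrag kann gekündigt werden.", "The treaty can be terminated."),
]

# Renderings to track for one source term.
renderings = {"Vertrag": ["contract", "agreement", "treaty"]}

seen = defaultdict(set)
for source, target in segments:
    for term, options in renderings.items():
        if term in source:
            for option in options:
                if option in target.lower():
                    seen[term].add(option)

for term, used in seen.items():
    if len(used) > 1:
        print(f"'{term}' translated {len(used)} ways: {sorted(used)}")
```

A check like this catches the symptom sentence by sentence, which is exactly the level MT operates at; it still can't tell you which rendering the whole text should have used.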

The result is a mess: loose pieces of information with no internal cohesion or coherence. And all these AI generators suffer exactly the same problem. At a glance it looks fine; it's only when you look closer, or if you need to rely on what they produce, that you realise it's all superficial.
 
@StillStunned For machine translation there is at least the original text that gives directions. Generating a story from a prompt goes all over the place after a few sentences.
 
Agreed. Why it's called artificial intelligence I really don't know. There's zero "intelligence", merely a prediction of the most likely next word, done really fast. It's got no more intelligence than a game of Scrabble. It's not "faking knowledge"; it's a fast typewriter, is all it is.
No one has to call it Intelligence; the terms LLMs and bots are available. There's a certain amount of overclaiming about what bots can/could do, which makes some afraid. That's to be expected: its developers, scientists and users are people, and people come in the full dimension of neuroticism, from abnormally fearful to abnormally robust.

With anything or anyone, you need to look at both what it can do and what it can't. LLMs are limited by the affordances they can be trained with: either 0 or 1. A bee has a physical embodiment and senses which provide it with affordances. It can smell, it can feel temperature, it can 'see' the sun, it is social and can learn from other bees. Simply by being in the environment, it's exposed to affordances far more extensive than 0 or 1. It's animate and can make honey, but it can't do what an assembly of inanimate matter can. I can't see how the saying 'as dumb as a rock' will ever be made redundant, though it might.

A bee can't produce text, not even bad text; it can produce honey, and come after you and sting you if you try to take the honey. An LLM can produce bad text; it can't produce honey, nor can it come after you and sting you. Depending on one's ability to produce text, one's employer may come after one and fire one, if one's in the business of text production. However, like the bee we're animate, the product of natural selection and the survival of the fittest; we're the apex competitor. We've tamed the bees and use them as a tool to produce honey; I'm sure we can outcompete the bots and use them as mere tools.
 
No one has to call it Intelligence,
So why is every headline about Artificial Intelligence?

The designers of these tools, from the reading I've done, are claiming there's some mimicking going on, trying to emulate human thought.

Otherwise they'd be calling it Fast Acting Scissors, or some such, since all it's doing is predicting the most likely next word. It's nothing like sentient.
 
So why is every headline about Artificial Intelligence?

The designers of these tools, from the reading I've done, are claiming some mimicking thing going on, trying to emulate human thought.

Otherwise they'd be calling it Fast Acting Scissors, or some such, since all it's doing is predicting the most likely next word. It's nothing like sentient.
AI is as much a popular idea of what these tools are as it is a marketing term. They are machine learning systems, or neural networks. Research on them has been ongoing for decades now.

No one who knows how they work thinks that there is any intelligence involved.
 
So why is every headline about Artificial Intelligence?
Precisely. But you're not obliged to use their words. They use a neural network model to mimic human learning: a neuron either fires or it doesn't, and binary code is 0 or 1. As a means of machine learning it's impressive, but no one has suggested how it could ever become sentient. That's science fiction.
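The "a neuron either fires or it doesn't" idea is literally the classic perceptron. A sketch of that original model (modern networks replace the hard threshold with smooth activation functions so they can be trained by gradient descent):

```python
# Classic perceptron: weighted inputs, a threshold, binary output.
# This is the "fires or doesn't fire" model of a neuron.
def perceptron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # fire / don't fire

# Weights chosen by hand so the unit computes logical AND.
and_weights, and_bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], and_weights, and_bias))  # 1
print(perceptron([1, 0], and_weights, and_bias))  # 0
```

"Learning" in this model just means nudging the weights and bias until the outputs match the training examples; nothing about that process implies sentience.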
 
AI is useful for grammar tidbits, like walking 'passed' or walking 'past.'

Little things like that, AI helps with; it also helps by suggesting ways to rephrase confusing sentences. Ask, and it will do it.
 
An AI can be programmed to “self-learn” and improve its ability to accomplish tasks.

In a simple example, an AI thermostat can “learn” how a home can be strategically heated or cooled for better energy efficiency based on information it gathers from a variety of inputs, including the comings and goings of the inhabitants, the ability of the heating or cooling system to change the temperature, and the weather, both in real time and as forecast.
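A minimal sketch of that kind of “learning” thermostat: it observes when occupants arrive and how fast the house heats, then schedules heating to hit the target just in time. All the numbers and the update rule here are invented for illustration:

```python
# Minimal "learning" thermostat sketch: observes arrival times and the
# measured heat-up rate, then works backwards to a heating start time.
class LearningThermostat:
    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp
        self.arrival_times = []   # observed arrival hours (e.g. 18.0 = 6pm)
        self.heating_rate = 2.0   # learned degrees C per hour (initial guess)

    def observe_arrival(self, hour):
        self.arrival_times.append(hour)

    def observe_heating(self, degrees_gained, hours):
        # Exponential moving average of the measured heat-up rate.
        measured = degrees_gained / hours
        self.heating_rate = 0.8 * self.heating_rate + 0.2 * measured

    def heating_start(self, current_temp):
        # Predict arrival as the mean of past arrivals, then back off
        # by the time needed to close the temperature gap.
        expected_arrival = sum(self.arrival_times) / len(self.arrival_times)
        warmup_hours = (self.target_temp - current_temp) / self.heating_rate
        return expected_arrival - warmup_hours

t = LearningThermostat()
for hour in (17.5, 18.0, 18.5):   # occupants usually home around 18:00
    t.observe_arrival(hour)
t.observe_heating(degrees_gained=3.0, hours=2.0)  # house heats slowly
print(round(t.heating_start(current_temp=16.0), 2))
```

Every input here is a physical measurement with a clear objective (comfort at minimum runtime), which is exactly what the paragraph below argues is missing for writing.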

An AI can improve the efficiency of an internal combustion engine based on a variety of input sensors and controls, and it can do it better than any human at regulating these functions.

What programming inputs and “sensors” does AI have to “self-learn” about its writing skills? It isn’t limited to trying to simulate the work of human authors: it can be programmed to consider the feedback it receives, from all kinds of feedback sources, and since it ‘hallucinates’ it can get feedback on novel output that is not based on previous input. It isn’t purely regurgitating or trying to emulate existing material, and it is being programmed to “learn”.
 