Theory vs. Practice
Diagnosis is not the end, but the beginning of practice.
Artificial Intelligence (AI), the Great (De)Illusion
I was contacted and asked to write an article about AI – and, since I did not hear back from them, I published it here (they finally published it, more than a month later).
In contrast with the disastrous quality of the articles enthusiastically published to promote AI, my text compared today's instruments to those in use 30 years ago, and introduced new concepts of the kind so badly missing to make progress.
Having followed the "AI" players for 43 years, I indeed have some insights to share.
Theoretical and practical arguments are presented that are much needed to make progress in a discipline that, a few months ago, was in a state of "freezing" according to its specialists.
That was before a new wave of hype erased this "perception" with ChatGPT (a chatbot, something called Eliza 60 years ago) as the only word worth spelling in town... despite world experts having criticized ChatGPT in graphic terms:
Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity. ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.
So, if you are wondering what "AI" is in reality, or if you want to discover new ways to make (real) progress, keep reading to discover my article first titled "Making an Artificial Super Intelligence (ASI)".
Today, the "AI Winter" curse seems to have been warded off by a new wave of chatbot hype.
Yet, as The New York Times has emphasized, the hundreds of millions of dollars spent on marketing annually do not convince the most competent among us. We will explain in this article why – and how we, collectively, have managed to reach such a level of inconsistency.
I wrote an Eliza chatbot during my last year of high school... more than three decades ago. This program, running on a Pocket-PC (4KB of RAM, 768kHz CPU), was so effective at ridiculing my philosophy teacher (who knew nothing about computers but nevertheless claimed, with erroneous arguments, that AI would never exist) that he permanently spared me the obligation to attend his atrociously boring classes (a win-win deal). As a French humorist said:
We always think we have enough intelligence, because that's what we judge with.
My version of Eliza had an impact by exposing the unsafe nature of a lazy and self-complacent human intelligence, which, in reaction, excluded me from the discussion (the very nature of Philosophy). Why?
Violence is the last refuge of the incompetent.
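For readers curious about what such a program amounts to, here is a minimal Eliza-style sketch in Python. The rules below are illustrative placeholders, not the author's original program: Eliza is nothing more than ranked pattern matching with canned reflections.

```python
import re

# A minimal Eliza-style chatbot: an ordered list of (pattern, response) rules.
# The first matching pattern wins; the last rule is a catch-all fallback.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
    (r".*", "Please tell me more."),  # fallback: no understanding whatsoever
]

def respond(sentence: str) -> str:
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "I see."

print(respond("I need a holiday"))  # Why do you need a holiday?
```

The point is that the program contains no model of meaning at all – which is precisely why its ability to fool people says more about the people than about the program.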
And today, the AI challenge remains despite the current battles opposing "deep learning" to "symbolic cognition", with new expressions like "neuro-symbolism" surfacing to mask our ignorance (the exact same battle existed under a different name thirty years ago).
Thirty years ago, we already had commercial "Expert Systems" (collections of rules written by human experts of a given field, made available to non-experts), and these products were mostly useful as reminders for those who already knew the field quite well.
Why? Because, like an encyclopedia supposedly "containing all human knowledge", an Expert System requires several university degrees to decipher the domain-specific jargon abundantly used by its authors... mostly to hide their ignorance: "Jargon is the last refuge of the incompetent" – a fact highlighted a couple of times in "Avatar: The Way of Water" (2022).
Since "Science" is a human organization, "knowledge is power" has quickly been transformed into "money is power" – explaining why (1) so many research documents written by public researchers (paid by the taxpayer) are not publicly available for free, and why (2) so many researchers are scared of irritating their hierarchy and risking not being funded and/or published anymore:
If you think that only social sciences, law, history and literature are hijacked, you have not been paying attention. Think that mathematics and physics are free? Think again.
When money tops everything, the afflicted activity invariably becomes fraudulent: see how politics, justice, sports, media, universities, arts, health care, philanthropy, etc. have diverged from their mission to enrich a very few entirely dedicated to marketing junk – at the expense of the many merely trying to advance the state of society (the famous "common good").
Back to AI. At the time, this new field of research posed an interesting question: how will a computer program start to add new (and preferably better) rules? And, beyond addressing a particular problem, how will it start to become self-conscious?
Today, "Weak AI" is a massive collection of human behaviors picked by algorithms made by human experts. That's why Europeans complain that they cannot compete with China – which has access to a much larger pool of human behaviors (because the Chinese are more numerous, and there is no attempt to limit the mass-collection of everything they do and say... in the presence of a nearby smartphone equipped with cameras, microphones, and dozens of other sensors absolutely useless to end-users).
So "Weak AI" is, in reality, closer to

"Big Brother Is Watching You"

than to anything related to an artificial ability to think like humans. And its purpose seems limited to feeding the "social credit" machinery with enough data about everyone... for the "elites" to control their own population now that the legitimacy of the "authorities" (under the control of the forces behind the "free and open markets" that have never existed) is crumbling.

Shoshana Zuboff's book "The Age of Surveillance Capitalism" documents the unprecedented power of the GAFAM, which are funded by governments (that is, by the taxpayers targeted by this surveillance) to predict and control human behavior. Google's Chief Economist, Hal Varian, in his own words, lists the following priorities:
1. increasing data extraction and analysis,
2. contractual forms relying on surveillance,
3. customized services depending on users,
4. continual experiments on consumers.
This has led many observers to suggest that the underlying pursued goal was simply a complete privatization of democracy – a system where mega-corporations (all owned by a very few) would replace the Nations and their governments.

"Although Mark Zuckerberg has not set a goal for us, we understand that there are some application areas that are important to the company: text understanding, translation, image recognition and, in particular, face recognition. [...] Much of the hateful content that Facebook removes is removed before it is posted, thanks to automatic AI detection. The detection of videos or images of terrorist propaganda [...] are reported as soon as they are posted and will be added to a blacklist of items to be banned."
– Yann LeCun, Turing Award, New York University professor, Facebook fundamental research director, "When the Machine Learns"

Privatised censorship:
"The problem: The action points agreed with online companies in secret meetings of the Forum may have a direct negative impact on our freedom of expression. Why? Because one of the topics that is being discussed is the censoring of online content by private companies – without any judicial process.
Many case studies highlighted by onlinecensorship.org have shown that private companies regularly violate fundamental rights in the online space, flouting the principle that restrictions on civil and human rights must be based on law. This practice is now being encouraged and pushed by the EU Commission.
Additionally, the EU Commission repeatedly denied us access to the documents that are being discussed by the IT Forum. The reason for our requests is simple: The EU Commission has a very bad record of keeping such projects in line with fundamental rights."
The diversity of human behaviors sometimes gives superior results compared to the strict rules written by experts (still in use, but as safety guards now) – especially if the pursued goal is to make a turtle dance or talk in a convincing way.
How far away is Artificial General Intelligence (AGI)?
Conceptually, we did not make progress: "Weak AI" is an incremental extension of "Expert Systems" introduced by search-engine technology (mere syntactic and semantic analysis – the contents are not understood at all).
This "intelligence" is still 100% human, and human behaviors, while often accomplishing successful tasks, notoriously involve logic only sparingly (habits and accidents better define mankind).
To make progress, we must stop faking AI with things that are neither "artificial" nor even "intelligent".
Insights are acquired – not by "copy & paste" – but rather by being involved in problem resolution.
"Artificial neural networks" (pattern matching) find their theoretical roots in the 1670s-1920s and were first implemented on computers in the 1960s. They convert a dataset into a few floating-point numerical values, facilitating classification since a deviation from the canonical value is measurable.
Hashing functions also convert a dataset into a numeric output, but any bit modification in the input dataset is expected to change a large number of bits in the resulting hash (the goal is to uniquely identify each dataset without disclosing anything about the input).
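This avalanche property is easy to demonstrate with a standard hash function; here is a minimal sketch using Python's hashlib (the two input strings are arbitrary examples differing by a single character):

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"dataset").digest()
d2 = hashlib.sha256(b"dataseu").digest()  # one character changed in the input

# Roughly half of the 256 output bits flip: nothing about the
# similarity of the inputs survives in the output.
print(bit_difference(d1, d2))
```

This is exactly the opposite of what a classifier needs, which is why neural networks aim at the inverse property: nearby inputs must produce nearby outputs.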
In contrast, neural networks provide a measure of likeness so that, as with the method of least squares, similar datasets will provide similar output values.

Similar: "alike though not identical."
– The American Heritage Dictionary of the English Language, 5th Edition
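The contrast with hashing can be sketched with a least-squares style distance, which stays small for similar inputs and grows with dissimilarity (the vectors below are arbitrary illustrative values):

```python
def sum_squared_error(a, b):
    """Least-squares style distance: small for similar inputs, large otherwise."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

reference = [1.0, 2.0, 3.0, 4.0]   # canonical values for a class
similar   = [1.1, 2.0, 2.9, 4.2]   # "alike though not identical"
different = [9.0, 0.0, 7.0, 1.0]

print(sum_squared_error(reference, similar))    # small deviation
print(sum_squared_error(reference, different))  # large deviation
```

The measurable deviation from the canonical value is what makes classification possible.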
In the 1990s I used "back-propagation" (using the measured errors to adapt the model) to perform OCR on bank checks processed in real-time by motorized hand-scanners at supermarket cash registers. The remaining character-recognition errors (some checks were torn or tarnished) were corrected by checking for typos against a scanned yellow-pages database from which duplicated entries had been removed (to speed up lookups). I used the 1790s "method of least squares" (the mother of all artificial neural networks) to authenticate down-sampled scans of hand-written signatures (with pretty good results).
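The typo-correction step described above can be sketched as a nearest-entry lookup by edit distance. The database and the OCR output below are hypothetical stand-ins, not the original yellow-pages data:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def correct(word: str, dictionary: list[str]) -> str:
    """Replace a noisy OCR output by the closest dictionary entry."""
    return min(dictionary, key=lambda entry: edit_distance(word, entry))

# Hypothetical deduplicated name database standing in for the yellow pages.
names = ["DUPONT", "DURAND", "MARTIN"]
print(correct("DUP0NT", names))  # DUPONT
```

Deduplicating the database, as mentioned above, matters because lookup cost grows linearly with the number of entries.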
"Deep learning" is a family of such machine-learning techniques with their improvements and specialized versions over time. Here, "deep" means that multiple layers are involved in the network (yet another incremental enhancement). In image processing, lower layers will process and identify edges, and higher layers will attempt to identify characters or other objects.
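A toy forward pass makes concrete what "multiple layers" means; the weights below are arbitrary placeholders, not a trained model, and the edge/object comments only mirror the roles described above:

```python
import math

def layer(inputs, weights):
    """One fully-connected layer: weighted sums followed by a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(x, layers):
    """Stack the layers: 'deep' simply means more than one of them."""
    for weights in layers:
        x = layer(x, weights)
    return x

x = [0.5, -1.0]                   # e.g. two pixel intensities
layers = [
    [[0.2, -0.4], [0.7, 0.1]],    # lower layer (edge-like features)
    [[1.0, -1.0]],                # higher layer (object score)
]
print(forward(x, layers))
```

Each extra layer is just another matrix of weights to store and multiply – which is where the incremental enhancement, and the growing energy bill, come from.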
So, "artificial neural networks" and "deep learning" are not artificial intelligence at all – they are part of artificial perception (using image processing algorithms to generate arithmetic values that attempt to distinguish one object from another, the difficulty being to do it reliably from different points of view, and with partially visible objects).
Furthermore, these techniques are not new – what is new is ever-increasing funding, which allows researchers to venture into ever-growing complexity (hence ever-growing energy consumption). That's not how biology works because, in Nature, living creatures must be able to feed themselves while traversing long periods of scarcity, spending energy to find and catch food (which is often unwilling to please its predator).
So, if the pursued goal is really to make progress, we must go back to the drawing board.
What is intelligence?
For me it's "the capacity to get away from reality – while staying relevant and preferably useful".
For humans, going too far is called craziness. It might still involve intelligence, but its basis is no longer a sharable reality, so the behaviors are seen as (and often are) inadequate.
- Robots can't have intelligence: not strictly following orders is considered a malfunction.
- Insects have an instinct (expert system) and little capacity to evolve at the individual level.
- Humans have an instinct that can be bypassed by their capacity to innovate, dream, experiment... and they can better share experiences (emulation, language, writings, video, education).
Search (Weak AI), heuristics, logic, inference – all of them have a weight in what we call imagination (which is rarely exploring new ways randomly).
But intelligence, which depends on but is often confused with perception, is merely a capacity. It can contribute to, but does not generate, a personality.
What is consciousness?
Consciousness is the self-made guide of the museum of your personal experience.
It starts when intelligence inhabits a (finite, perishable) body and faces the challenge of interacting with a universe.
A paralytic newborn will enjoy fewer interactions than others, but a personality will nevertheless emerge – and some skills will develop in areas that are neglected by those enjoying wider mobility.
Unlike intelligence, consciousness feels, wants, hopes, doubts, misses and fears. In a nutshell, that's me, you, anyone. And, remarkably, its balance depends on its depth:
The more we are conscious, the less we will feel the need to destroy others to protect ourselves – because (a) our past successes make us confident of the outcomes, whatever happens and (b) our past failures make us accept an inevitable deadly failure in our never-ending quest for reaching the best possible state of adequacy to the challenges of life.
We all become what we do.
That's why long-term over-doers have higher levels of perception, intelligence and consciousness than those forever trying to avoid facing reality, because reality is, for the impotent, an insurmountable obstacle to reaching their goals. Satisfying ambitions without capacities has to rely on "narratives" (plain lies) aiming at weakening people's "perception" (insights) so that they can be misled and abused.
The fact that today's powers rely on negating reality with doublespeak is not encouraging:

Doublespeak: "Deliberately ambiguous and contradictory language used to mislead and manipulate the public. A mode of talk by politicians and officials using ambiguous words to deceive the listener."
– The American Heritage Dictionary of the English Language, 5th Edition
Tricks and treachery are the practice of fools, that don't have brains enough to be honest.
So... why don't we yet have an AI that is Intelligent and Conscious?
Because today a computer is merely a fixed-size (and fixed-shape) abacus.
"AI" researchers have (collectively chosen to be?) fooled (by) themselves (for the sake of personal interest?) into believing (or pretending) that mere arithmetic and/or derived symbolic layers can resolve everything – despite constant evidence of the contrary.
The main problem is not the self-complacency of the very few in charge – but rather the inability for the rest of us to stop them – despite ever-diminishing returns:
The ultimate result of shielding men from the effects of folly is to fill the world with fools.
A 5-year-old human "central nervous system" needs around 21 watts – one millionth of the 21 megawatts consumed by the best supercomputers, which are unable to process complex tasks like handling new problems.
The 36 trillion cells of the human body communicate to repair themselves (with sane-cell descriptions), share data (about threats) and collaborate (to maintain our body and mind in a functional state).
Doing the real thing requires a massively decentralized and parallelized highly reconfigurable self-organization.
The form is the function. And, as mother Nature has shown, this can only take place wirelessly. Rigid silicon boards are... inadequate.
"Deep learning" requires a lot of computing power and therefore energy to crunch large datasets. Graphical Processing Units (GPUs) are performing better than CPUs because they enjoy many more tiny cores and a lot of dedicated faster memory. Yet, managing the large number of required GPUs is very expensive, and inefficient to scale (bringing data to GPUs is horribly slow).
But there's a second major mistake done in this context: pretending that the shadows we watch on the wall of the cavern are alive.
Counting by night, from far away on a hill, the enlightened windows of a tower building will not let you guess what the people are doing there.
Yet, that's what we pretend to do when we claim to have identified in our brains the areas and mechanisms involved by a given task.
Even worse, the progress made at degrading brain capacity (by interfering with it) has encouraged researchers to erroneously believe that they understand what they are doing.
We proudly conclude that cutting the legs off a flea makes it deaf to our commands to jump.
Correlation is not causality.
We must recognize the very nature of our own human capacity to build on that (so far successful) basis.
Let's invoke things that are established but still not well explained:
- How come we can, rarely but indisputably, remotely feel the exact moment of the loss of those we love?
- How come groups synchronize? (the biological cycles of women living in religious communities automatically converge)
- How come ideas spread all over the planet at the same time... even among animals? (the knowledge of isolated wild monkey communities spreads from one continent to another even without communication)
Radio waves and microwaves (not involved in biology) can't do that – they can't even cross a mountain or bad weather. And they can only degrade the way our cells and brains work.
The real transmission waves involved in our biology are of another nature – they don't fear distance and obstacles – and they are key to maintain our body and mind alive.
No real progress will take place until we responsibly learn how to use these powerful waves for our own good.
Then, we will be able to make Artificial General Intelligence (AGI) that will inevitably turn into Artificial Super Intelligence (ASI) IF AND ONLY IF we provide enough "brain" volume for it to develop that much.
At least, we will be able to control that part – up to the point where we will discover, at our own expense, the scale of our ignorance.
That's what life is for: widening its reach. And if we must die while doing that, then this will have been a life worth traversing.