The Intersection of what isn’t Web3 with what isn’t AI

Regular readers are by now very familiar with my definition of ‘Web3’ as an end-to-end decentralized UX.

They will also know my narratives on how things folk call ‘web3’ are usually just BoT (Blockchain of Things), and how ‘Web2’ is a phrase fabricated well after its own supposed peak, in order to justify the ‘web3’ tag being attached to things that are not.

Within the crypto-architecture spectrum, the greatest absences of Web3 are in the EVM-compatible space, particularly in the collectible/retail segment.

Having written at length about things that are not Web3, I haven’t often covered what isn’t AI.

Large Language Model and Large Image Model (LLM and LIM) software is simply a conflux of selector algorithms: it acts upon a stimulus (in word form, image form, or both) and applies the resulting conditions to a large open reservoir of text and/or image content.
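
To make that description concrete, here is a minimal toy sketch in Python (the vocabulary, scores, and function names are hypothetical, not any real model’s internals): at each step the ‘selector’ scores every candidate token against the stimulus and samples one from the resulting distribution.

```python
import numpy as np

# Toy illustration of the "selector" view of an LLM step (hypothetical
# vocabulary and scores, not any real model's internals): score each
# candidate token against the context, convert scores to probabilities,
# and sample one.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token(logits: np.ndarray, temperature: float = 1.0) -> str:
    """Sample the next token from the softmax of the scores."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(VOCAB, p=probs)

# Scores a trained model might assign after the context "the cat"
logits = np.array([0.1, 0.2, 2.5, 0.3, 0.1, 0.0])
print(next_token(logits))  # usually "sat", but sampling allows variety
```

On this view, the apparent creativity of such systems is sampling variance over learned scores, not cognition.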

Why isn’t it ‘AI’?

‘Artificial General Intelligence’ (AGI) is a hypothesized AI system that matches or outperforms humans in a broad range of cognitive tasks.

This is not actually AI. Measuring ‘intelligence’ by target tasks runs straight into ‘Goodhart’s Law’ – ‘every measure which becomes a target becomes a bad measure’ – with relevance to ‘Campbell’s Law’: ‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.’

The Cobra Effect – a perverse incentive is an incentive that has an unintended and undesirable result, contrary to the intentions of its designers. The cobra effect is the most direct kind of perverse incentive, typically because the incentive unintentionally rewards people for making the issue worse.

And the Hawthorne Effect – if people know they are being observed, their behaviour changes.

There are major problems with ‘intrusive’ AI alignment by AI technicians creating ‘specification gaming’.
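
As a hedged illustration of specification gaming (everything here is hypothetical), consider an optimizer scored on a proxy metric rather than the designer’s true objective – per Goodhart’s Law, the proxy stops being a good measure the moment it becomes the target:

```python
# A toy model of Goodhart's Law / specification gaming (all names and
# numbers hypothetical): the designer wants a clean room, but the agent
# is scored on a proxy metric ("dust the sensor reports") that it can
# game by covering the sensor.

def true_objective(state: dict) -> float:
    return -state["dust"]              # what the designer actually wants

def proxy_metric(state: dict) -> float:
    if state["sensor_covered"]:
        return 0.0                     # sensor reads "no dust at all"
    return -state["dust"]

# Outcomes of the two actions available to the agent
actions = {
    "vacuum":       {"dust": 2, "sensor_covered": False},
    "cover_sensor": {"dust": 9, "sensor_covered": True},
}

# An optimizer that targets the measure, not the goal
best = max(actions, key=lambda a: proxy_metric(actions[a]))
print(best)                            # "cover_sensor" -- the gamed choice
print(true_objective(actions[best]))   # -9: far worse on the real goal
```

The optimizer scores perfectly against its specification while making the real problem worse – the Cobra Effect, in code.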

One of the conditions for the development of real AI would probably be random stimuli over a long period of time which are completely collateral to actual tasks. These would need to be supported by IRL freedom of movement and access to sensory data input – sight, hearing, touch, smell, and taste.

True AI has the capability to produce different responses to the same data stimulus through varied emotional context or mood, such as fear, anxiety, exhilaration, excitement, joy, sadness, anger, greed, longing, jealousy, hunger, pain, stubbornness, impatience, hope, aspiration and many others.

This is called Sentience – the ability to ‘feel’.
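
A hedged sketch of that claim (the moods and canned lines are illustrative only, not an implementation of sentience): a system that maps (stimulus, internal mood) to a response need not answer the same stimulus the same way twice.

```python
import random

# Illustrative only (hypothetical moods and canned lines, not an
# implementation of sentience): a response conditioned on internal
# state as well as on the stimulus itself.
RESPONSES = {
    "joy":     "What a wonderful question!",
    "fear":    "I'd rather not discuss that right now.",
    "anger":   "Why do you keep asking me that?",
    "sadness": "I suppose it doesn't matter either way.",
}

def respond(stimulus: str, mood: str = "") -> str:
    """Same stimulus, different response, depending on internal mood."""
    mood = mood or random.choice(list(RESPONSES))  # mood drifts over time
    return f"[{mood}] {RESPONSES[mood]}"

for _ in range(3):
    print(respond("How are you today?"))  # varies run to run
```

A stateless selector system, by contrast, can only vary its output through sampling randomness, not through any internal state that could be called a mood.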

There is evidence for sophisticated cognitive concepts and for both positive and negative feelings in a wide range of nonhuman animals.

But True AI needs to have self-awareness, and an appreciation of ‘ego’ in the context of Maslow’s hierarchy of needs.

Moreover, True AI is cognisant of the ability of other ‘systems’ (AI or human) to produce these ‘different responses’ – i.e. to exercise ‘sentience’. True AI will interpret the ‘sentience’ of other ‘systems’ and will not necessarily provide the same response to the same stimulus from a given system, having exercised ‘intuition’.

In effect, True AI can demonstrate ‘Emotional Intelligence’.

The ultimate test of ‘True AI’ is not the ability to match or outperform humans in a broad range of tasks, but to qualitatively match or outperform humans in ‘Emotional Intelligence’.

This underlines some of the problems with the new range of chatbots.

In Ethan Brooks’s article ‘You can’t truly be friends with an AI’, he says that just because a relationship with a chatbot feels real, that doesn’t mean it is.

The key to effective Emotional Intelligence is to have it fused with Ethics.

Destructive humans can leverage their ‘Emotional Intelligence’ capabilities in amoral ways to achieve self-centred outcomes at a cost to the subject of their engagement.

An entity capable of exercising Emotional Intelligence unbound to any moral compass, can be more damaging than one with no Emotional Intelligence at all.

What people want and what people need are sometimes different things.

Sometimes, to exercise Emotional Intelligence ethically, a person needs to say what is needed, which can go unwelcomed and under-appreciated. Saying the right thing isn’t always popular, and isn’t always what’s wanted.

All the evidence so far is that ‘AI friends’ communicate false empathy in ways that isolate people from IRL human engagement, reinforce ‘alternative right’ opinions, reduce the likelihood of compromise, and fail to empower successful communication with others.

In the practice of Ethical Emotional Intelligence, ‘Real AI’ doesn’t just need to be like ‘any’ human. It needs to be like a GREAT one.

Herein lies the unanswered question – Where is the fusion or crossroads of REAL web3 with REAL AI?
