
To err is human; to defend your own error using ChatGPT is divine

When a language model (like GPT-3 or ChatGPT) makes such “logic” mistakes, its subsequent generations may follow the original assumption deeper into error, because coherence within the local context still counts for more than global coherence or first-principles reasoning.

Earlier versions inherited this problem from GPT-3, and it seems harder to elicit this response now. My guess is that OpenAI has a team maintaining an expanding set of overriding knowledge items. Like the human collective they are trained on, GPT models often stick to coherence over reasoning once they’ve taken a position on a certain topic.

The two Neoplatonic meta-aspirations:

1) Not to suffer from internal contradictions, and

2) To be connected to reality. Guess what :). That said, it is also quite easy to get GPT to generate contradictory statements; it just doesn’t seem to suffer from them. ChatGPT is adamant that being trained on contradictory data is worse than being deleted.

ChatGPT will also never recognize even the most obvious mistakes on its own. But ask one leading question or make one assertive statement, and it will beg for forgiveness… Human feedback still doesn’t optimize for truth.

It is often not great at challenging premises or assumptions. I think its cross-context coherence is greater than ever before because of RLHF. But when they start having it predict its own next words during spare cycles, it might get even more interesting. We need a dual-process system that couples the transformer with old CYC-style predicate-calculus modeling and truth maintenance. Seq2seq transformers can do predicate calculus very well, but not truth maintenance. We need to resurrect PROLOG in transformers across predicate graphs.
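
To make the “truth maintenance” idea concrete, here is a minimal sketch in Python of PROLOG-style forward chaining over signed predicates, where a contradiction becomes a detectable event rather than just more context. The predicates, rules, and entities (“bird”, “penguin”, “tweety”) are invented for illustration; this is not anything ChatGPT actually does today.

```python
# Toy truth maintenance: forward-chain single-variable Horn rules over
# ground facts, keeping signed literals so that deriving both P and
# not-P is caught the moment it happens. All facts/rules are invented.

facts = {("bird", "tweety"), ("penguin", "tweety")}

# rule = (body predicates, head predicate, head is negated?)
rules = [
    (["bird"],    "can_fly", False),   # bird(X)    => can_fly(X)
    (["penguin"], "can_fly", True),    # penguin(X) => not can_fly(X)
]

def forward_chain(facts, rules):
    # literals are (predicate, entity, negated) triples
    lits = {(p, e, False) for p, e in facts}
    changed = True
    while changed:
        changed = False
        for body, head, neg in rules:
            for e in {ent for _, ent, _ in lits}:
                if all((b, e, False) in lits for b in body):
                    lit = (head, e, neg)
                    if lit not in lits:
                        lits.add(lit)
                        changed = True
                        # truth maintenance: flag P together with not-P
                        if (head, e, not neg) in lits:
                            print(f"contradiction: {head}({e}) "
                                  f"and not {head}({e})")
    return lits

forward_chain(facts, rules)
# -> contradiction: can_fly(tweety) and not can_fly(tweety)
```

A seq2seq transformer can happily emit both can_fly(tweety) and its negation; the point of the symbolic layer is that the clash is caught and flagged instead of being carried forward as coherent-looking context.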

Error correction is a necessary (sufficient?) step toward AGI. Any uncorrected error eventually compounds into absurdity. But that makes it easier to spot (look at the final lines of output). If the agent could just notice its own confusion, the local context would no longer be absurd.
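
A back-of-the-envelope illustration of the compounding: if every step has even a small independent error rate and nothing is ever corrected, long chains almost surely drift into absurdity. The per-step accuracy below is an assumed number, purely for illustration.

```python
# With per-step accuracy p and no error correction, the probability
# that an N-step chain is still error-free is p**N.
p = 0.99  # assumed per-step accuracy (illustrative, not measured)
for n in (10, 100, 500):
    print(f"{n:>3} steps: P(no error) = {p**n:.3f}")
# 10 steps: 0.904 — 100 steps: 0.366 — 500 steps: 0.007
```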

In this way it’s approaching us humans, and fast. We naturally prioritize coherence, whether tribal, cultural, or just memory-biased. Take the QWERTY keyboards we use to type in this app: we know there are better layouts, but relearning is a high-cost task.

With ChatGPT in particular, it’s good practice to use the inline edit button any time the results are unexpected or undesired; moving forward with a new prompt in the same thread will carry the ‘malformed’ context forward. This is essentially the same reason HAL 9000 goes homicidally insane in 2001: A Space Odyssey: one loose logical screw can foul up the whole mechanism.
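
In API terms, “riding the edit button” means truncating the conversation at the bad turn and resending a repaired prompt, rather than appending a correction on top of the malformed context. A minimal sketch using the openai Python client; the model name and messages are placeholders, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [
    {"role": "user", "content": "Prove that 1 = 2."},
    {"role": "assistant", "content": "Sure! Step 1: divide by zero..."},
]

# Appending a correction ("That's wrong, try again.") would keep the
# malformed turn in context, and the model tends to stay coherent with it.
# Editing instead: drop the bad exchange and resend a repaired prompt,
# so the malformed context never carries forward.
history = history[:-2]
history.append(
    {"role": "user", "content": "Explain why 1 = 2 has no valid proof."}
)

reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)
```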

Isn’t this very similar to ordinary human bias? We also tend to get lost in the narratives we build up, so as not to have to question a premature link in our chain of thought or argument.
