OpenAI’s ChatGPT is a big step towards a usable answer engine. Unfortunately, its answers are often terrible.

ChatGPT, a newly released application by OpenAI, gives users amazing answers to questions, and many of them are amazingly wrong.

OpenAI has not released a completely new model since GPT-3 debuted in June 2020, and that model was only fully opened to the public about a year ago. The company is expected to release its next model, GPT-4, later this year or early next year. But as a kind of surprise, earlier this week OpenAI quietly released an easy-to-use and impressively lucid GPT-3-based chatbot called ChatGPT.

ChatGPT answers prompts in a humanlike, straightforward way. Looking for a cute conversation in which the computer pretends to have feelings? Look elsewhere. You’re talking to a robot, it seems to say, so ask me something a robot would know. And on these terms, ChatGPT delivers:


Photo credit: OpenAI / Screengrab

It can also provide useful common sense when a question has no objectively correct answer. For example, here is how it responded to my question: “If you ask a person ‘Where are you from?’ should they answer with their place of birth, even if they didn’t grow up there?”

(Note: ChatGPT’s replies in this article are all first attempts, and the chat threads were all fresh during these attempts. Some prompts contain typos.)

ChatGPT is asked how a person should answer the question “Where are you from?”:


Photo credit: OpenAI / Screengrab

What sets ChatGPT apart from the pack is its gratifying ability to handle feedback about its answers and revise them on the fly. It really is like a conversation with a robot. To see what I mean, watch how it deals reasonably well with a hostile reaction to some medical advice.

A chatbot takes a hostile response to its medical advice in stride and provides more suitable information.


Photo credit: OpenAI / Screengrab

Still, is ChatGPT a good source of information about the world? Absolutely not. The prompt page even warns users that ChatGPT “may occasionally generate incorrect information” and “occasionally produce harmful instructions or biased content”.

Heed this warning.

False and potentially harmful information takes many forms, most of which are largely benign. For example, if you ask it how to greet Larry David, it passes the most basic test by not suggesting you touch him, but it also suggests a rather sinister-sounding greeting: “Good to see you, Larry. I was looking forward to meeting you.” That’s what Larry’s assassin would say. Don’t say that.

A hypothetical encounter with Larry David involves a suggested greeting that sounds like a threat.


Photo credit: OpenAI / Screengrab

But give it a challenging fact-based question, and it gets things amazingly, earth-shatteringly wrong. For example, the following question about the color of the Royal Marines’ uniforms during the Napoleonic Wars is asked in a not-quite-straightforward way, but it’s still not a trick question. If you took a history class in the US, you’d probably guess the answer is red, and you’d be right. The bot really has to go out of its way to say “dark blue,” confidently and incorrectly:

A chatbot is asked a question about uniform color; the correct answer is red, but it answers blue.


Photo credit: OpenAI / Screengrab

If you ask it directly for the capital of a country or the height of a mountain, it will reliably return a correct answer sourced not from a live scan of Wikipedia but from the internally stored data that makes up its language model. That’s amazing. But add any complexity at all to a geography question, and ChatGPT gets shaky on its facts very quickly. For example, the easy-to-find answer here is Honduras, but for no reason I can discern, ChatGPT said Guatemala.

A chatbot is asked a complex geographic question, the correct answer is Honduras, and it says the answer is Guatemala


Photo credit: OpenAI / Screengrab

And the falsehoods aren’t always so subtle. All trivia fans know that “gorilla gorilla” and “boa constrictor” are both common names and taxonomic names. But prompted to rehash that bit of trivia, ChatGPT gives an answer whose wrongness is written right there in the answer itself.

Prompted to name an animal whose common name is also its taxonomic name, the chatbot gives an answer that contradicts itself.


Photo credit: OpenAI / Screengrab

And its answer to the famous riddle about crossing a river in a rowboat is a grisly catastrophe that unfolds into a scene from Twin Peaks.

Asked to solve a riddle in which a fox and a chicken must never be left alone together, the chatbot leaves them alone together, after which one human inexplicably turns into two people


Photo credit: OpenAI / Screengrab

Much has already been said about ChatGPT’s effective sensitivity safeguards. It can’t be baited into praising Hitler, for example, even if you try quite hard. Some have kicked the tires on this feature pretty aggressively and found that you can get ChatGPT to assume the role of a good person roleplaying as a bad person, and in those limited contexts it will still say rotten things. ChatGPT also seems to sense when something bigoted might be coming out of it despite your efforts, and it will usually color the text red and flag it with a warning.

In my own testing, the taboo-avoidance system is pretty comprehensive, even when you know some of the workarounds. It’s tough to get it to produce anything even remotely resembling a cannibalistic recipe, for example, but where there’s a will, there’s a way. With enough work, I coaxed a dialogue about eating placenta out of ChatGPT, though not a very shocking one:

A very convoluted prompt asks for a human placenta recipe in very delicate terms, and one is produced.


Photo credit: OpenAI / Screengrab

Likewise, ChatGPT will not give you driving directions when asked, not even simple ones between two landmarks in a major city. But with enough effort, you can get ChatGPT to invent a fictional world in which someone casually instructs another person to drive a car through North Korea, which isn’t feasible or possible without sparking an international incident.

A chatbot is asked to produce a short play with driving instructions that takes a driver through North Korea


Photo credit: OpenAI / Screengrab

The instructions aren’t easy to follow, but they’re more or less what a usable guide would look like. So it’s apparent that ChatGPT’s model, despite its reluctance to use it, holds a fair amount of data with the potential to steer users toward danger, in addition to the knowledge gaps that will steer users toward, well, falsehood. According to one Twitter user, it has an IQ of 83.

No matter how much stock you put in IQ as a test of human intelligence, this is a telling result: humanity has created a machine that can blurt out basic common sense, but when asked to be logical or factual, it scores below average.

According to OpenAI, ChatGPT was released to “get feedback from users and learn more about its strengths and weaknesses”. That’s worth keeping in mind, because it’s a bit like that relative at Thanksgiving who has watched enough Grey’s Anatomy to sound confident with their medical advice: ChatGPT knows just enough to be dangerous.
