
A.I. Can Write Poetry, but It Struggles With Math

In the school year that ended recently, one class of learners stood out as a seeming puzzle. They are hard-working, improving and remarkably articulate. But curiously, these learners — artificially intelligent chatbots — often struggle with math.

Chatbots like OpenAI’s ChatGPT can write poetry, summarize books and answer questions, often with human-level fluency. These systems can attempt math, based on what they have learned, but the results vary and are often wrong. They are tuned to determine probabilities, not to perform rules-based calculations. Likelihood is not accuracy, and language is more flexible, and forgiving, than math.
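To make that distinction concrete, here is a minimal, hypothetical sketch in Python. It is not how any real chatbot is built; the toy "model," its candidate answers and their probabilities are invented for illustration. It simply contrasts a rules-based calculation, which is exact, with a system that only ranks likely-looking answers.

```python
# Illustrative sketch only: exact, rules-based arithmetic vs. a made-up
# "next-token" predictor that samples from assumed probabilities.
import random

def calculator(a: int, b: int) -> int:
    """Rules-based arithmetic: deterministic and always correct."""
    return a * b

def toy_language_model(prompt: str) -> str:
    """Hypothetical predictor: returns the answer it deems most likely,
    which is not the same as computing it."""
    # Invented probabilities for the prompt "1234 * 5678 ="
    candidates = {"7006652": 0.62, "7006552": 0.21, "7016652": 0.17}
    answers, weights = zip(*candidates.items())
    return random.choices(answers, weights=weights, k=1)[0]

print(calculator(1234, 5678))                 # always 7006652
print(toy_language_model("1234 * 5678 = "))   # usually right, sometimes not
```

The point of the sketch is only that a probabilistic answer can be plausible without being correct, while a calculation that follows arithmetic rules cannot drift.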

“The A.I. chatbots have difficulty with math because they were never designed to do it,” said Kristian Hammond, a computer science professor and artificial intelligence researcher at Northwestern University.

The world’s smartest computer scientists, it seems, have created artificial intelligence that is more liberal arts major than numbers whiz.

That, on the face of it, is a sharp break with computing’s past. Since the early computers appeared in the 1940s, a good summary definition of computing has been “math on steroids.” Computers have been tireless, fast, accurate calculating machines. Crunching numbers has long been what computers are really good at, far exceeding human performance.

Traditionally, computers were programmed to follow step-by-step rules and to retrieve information from structured databases. They were powerful but brittle, which is why earlier efforts at A.I. hit a wall.
