
Facebook taught its AI to speak math

Andrew Tarantola
·Senior Editor
·3-min read

I speak two languages, English and Bad English. My understanding of math is significantly worse. In fact, I had to redo Calculus 2A four different times in college in order to graduate, mostly because I could never properly calculate a ladder's rate of acceleration as it fell away from a wall. You know, the sort of theoretical quandaries that really matter in our day-to-day lives.

My numerical idiocy aside, Facebook has trained an AI to solve the toughest of math problems. Real superstring stuff. In effect, FB has taught their neural network to view complex mathematical equations "as a kind of language and then [treat] solutions as a translation problem for sequence-to-sequence neural networks."

This is actually quite a feat, since most neural networks operate on approximation: they can figure out whether an image is of a dog or a marmoset or a steam radiator with a reasonable amount of certainty, but precisely manipulating the symbols in a problem like b² - 4ac = 7 is a whole different kettle of fish. Facebook managed this by not treating the equation like a math problem but rather like a language problem. Specifically, the research team approached the issue using neural machine translation (NMT). In short, they taught an AI to speak math. The result was a system capable of solving equations in a fraction of the time that algebra-based systems like Maple, Mathematica, and Matlab would take.

"By training a model to detect patterns in symbolic equations, we believed that a neural network could piece together the clues that led to their solutions, roughly similar to a human's intuition-based approach to complex problems," the research team wrote in a blog post released today. "So we began exploring symbolic reasoning as an NMT problem, in which a model could predict possible solutions based on examples of problems and their matching solutions."

Essentially, the research team taught the AI to unpack mathematical equations in much the same way that we parse complex phrases. Instead of breaking out the verbs, nouns and adjectives, the system silos the individual variables and operators, turning each equation into a sequence of tokens the model can "translate."
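To make that concrete, here's a minimal sketch of the idea (my illustration, not Facebook's actual code): using SymPy to flatten an equation's expression tree into a prefix token sequence — the "sentence" a sequence-to-sequence model would then translate into a solution.

```python
# Sketch: serialize a symbolic expression as a token sequence,
# the way a phrase is tokenized for machine translation.
import sympy as sp

def to_prefix(expr):
    """Recursively flatten a SymPy expression tree into prefix tokens."""
    if expr.is_Symbol or expr.is_Number:
        return [str(expr)]          # leaves: variables and constants
    tokens = [type(expr).__name__]  # operator first, e.g. 'Add', 'Mul', 'cos'
    for arg in expr.args:
        tokens.extend(to_prefix(arg))
    return tokens

x = sp.symbols('x')
print(to_prefix(sp.cos(x**2) + 3*x))
# e.g. ['Add', 'Mul', '3', 'x', 'cos', 'Pow', 'x', '2']
```

Prefix (operator-first) order is handy here because, unlike the usual infix notation, it needs no parentheses, so every expression maps to exactly one flat token string.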

The researchers focused primarily on symbolic integration and differential equations but, because those two flavors of math don't always have a solution for a given problem, the team had to get tricky in generating training data for the machine learning system.

"For our symbolic integration equations, for example, we flipped the translation approach around: Instead of generating problems and finding their solutions, we generated solutions and found their problem (their derivative), which is a much easier task," the team wrote (and which I vaguely understand). "This approach of generating problems from their solutions — what engineers sometimes refer to as trapdoor problems — made it feasible to create millions of integration examples."
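The trick described above can be sketched in a few lines — again, my own rough illustration with SymPy, not the team's pipeline. Differentiation is mechanical and always succeeds, so you generate a random expression, call it the *answer*, and differentiate it to get the integration *problem*:

```python
# Sketch of the "trapdoor" data-generation trick: build random
# solutions, then differentiate them to obtain matching problems.
import random
import sympy as sp

x = sp.symbols('x')

def random_solution(depth=2):
    """Build a small random expression to serve as the answer."""
    leaves = [x, sp.Integer(random.randint(1, 5))]
    if depth == 0:
        return random.choice(leaves)
    op = random.choice(['add', 'mul', 'unary'])
    if op == 'add':
        return random_solution(depth - 1) + random_solution(depth - 1)
    if op == 'mul':
        return random_solution(depth - 1) * random_solution(depth - 1)
    return random.choice([sp.sin, sp.cos, sp.exp])(random_solution(depth - 1))

def training_pair():
    solution = random_solution()
    problem = sp.diff(solution, x)  # differentiating is the easy direction
    return problem, solution        # the model learns: problem -> solution

prob, sol = training_pair()
print("integrate:", prob, " ->  answer:", sol)
```

Run in a loop, this churns out as many (integral, antiderivative) pairs as you like without ever having to integrate anything yourself, which is the whole point of the trapdoor.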

Still, it apparently worked. The team achieved a success rate of 99.7 percent on integration problems and 94 percent and 81.2 percent, respectively, for first- and second-order differential equations, compared to Mathematica's 84 percent on the same integration problems and 77.2 percent and 61.6 percent on the differential equations. It also took FB's program just over half a second to arrive at its conclusion, rather than the several minutes existing systems needed to do the same.