Deep Learning for Symbolic Mathematics
Typically, when you think of applications of deep learning neural networks, you think of the kinds of things we've been discussing here: image recognition, audio analysis and synthesis, language completion or translation, and so on, problems usually framed in terms of statistical modeling or manifold learning. You don't typically think of them for symbolic math problems.
I recently came across an interesting paper that addresses this problem. Guillaume Lample and Francois Charton of Facebook's AI research group propose a way to restructure symbolic math expressions so that they can be used to train seq2seq neural nets.
In effect, they recast the symbolic expression problem so that the same kind of deep neural nets used for language translation could be applied to it.
What they did was restructure mathematical expressions as trees: operators and functions are internal nodes, their operands are children, and numbers, constants, and variables are the leaves. This clever restructuring of the representation is what allows the problem to be used to train a neural net.
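To make the idea concrete, here's a minimal sketch in Python of how an expression tree can be flattened into a prefix-notation token sequence, the kind of "sentence" a seq2seq model can consume. The Node class and helper names are my own illustration, not the authors' code.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# an expression tree whose internal nodes are operators/functions
# and whose leaves are numbers, constants, or variables, flattened
# into a prefix (Polish) notation token sequence.

class Node:
    def __init__(self, value, children=None):
        self.value = value              # operator, function, number, or variable
        self.children = children or []  # empty list for leaves

def to_prefix(node):
    """Flatten an expression tree into a prefix token list."""
    tokens = [node.value]
    for child in node.children:
        tokens.extend(to_prefix(child))
    return tokens

# Example: 2 + 3 * (5 + 2)
expr = Node('+', [
    Node('2'),
    Node('*', [Node('3'), Node('+', [Node('5'), Node('2')])]),
])
print(to_prefix(expr))   # ['+', '2', '*', '3', '+', '5', '2']
```

Prefix notation is unambiguous without parentheses, so the resulting token sequence can be fed to a translation-style model just like a sentence in a source language, with the "translation" being, say, the sequence for the expression's integral.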
There's a nice, easy-to-read write-up of this work in Quanta Magazine.
You can read the original paper on Deep Learning for Symbolic Mathematics here.
So the takeaway message is that how you go about representing your data can be just as important as what kind of neural network you use to solve a particular problem.
The neural net for symbolic learning actually outperformed Mathematica on some integration problems. Of course, the usual criticisms were leveled at the neural net system: "the neural net doesn't really understand the math", and so on.
Some of you might remember John Koza's work in the '90s on evolving software using genetic programming. In that work, software expressions were also modeled as tree structures, and the trees were then evolved.
There were a number of papers around that time that also used tree structures to represent and manipulate mathematical expressions. Karl Sims' work on artificial evolution for computer graphics comes to mind.