In-Browser demo
Simple physics simulation: a Recurrent Neural Network predicts the flight of a plane and takes the blowing wind into account. There are 3 versions of the network, which predict:
- the position of the plane (1 point)
- the positions of a few key vertices of the plane (6 points)
- the positions of every vertex of the original plane model (14 points)
The networks were trained with different sample lengths, i.e. the number of keyframes used to predict the next one: 1, 3, 5, 7, 9 or 11 frames were used.
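For orientation, below is a minimal Keras sketch of such a next-frame network. The LSTM layer and its size are assumptions, the actual architecture in plane.ipynb may differ, and the wind inputs are omitted for brevity:

```python
# Minimal sketch of a next-frame prediction RNN in Keras.
# SAMPLE_LEN and NUM_POINTS mirror the variants described above;
# the layer type and sizes are assumptions, not the exact architecture of plane.ipynb.
import tensorflow as tf

SAMPLE_LEN = 5             # keyframes used to predict the next one
NUM_POINTS = 6             # 1, 6 or 14 tracked vertices
FEATURES = NUM_POINTS * 3  # x, y, z per vertex

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SAMPLE_LEN, FEATURES)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(FEATURES),  # positions of all tracked points in the next frame
])
model.compile(optimizer="adam", loss="mse")
```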
To train the networks, a number of plane flight simulations were generated in Houdini with Vellum. A turbulence force was added so the plane would not take the same path on every flight and the paths would be more interesting. There is also a wind that blows for a random duration, starting at a random frame.
Every flight was saved as a separate .npy file containing an array with the positions of the plane's vertices at each frame.
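As an illustration, sliding windows of consecutive frames can be cut from such files to form training pairs. The `data/` directory and the `(num_frames, num_points, 3)` array shape below are assumptions about the file layout:

```python
# Sketch: turn saved flights into (previous frames -> next frame) training pairs.
# The directory name and array shape are assumptions.
import glob
import numpy as np

SAMPLE_LEN = 5  # number of keyframes used to predict the next one

inputs, targets = [], []
for path in glob.glob("data/*.npy"):
    flight = np.load(path)                    # assumed shape: (num_frames, num_points, 3)
    flight = flight.reshape(len(flight), -1)  # one flat position vector per frame
    for i in range(len(flight) - SAMPLE_LEN):
        inputs.append(flight[i:i + SAMPLE_LEN])  # SAMPLE_LEN consecutive keyframes
        targets.append(flight[i + SAMPLE_LEN])   # the frame to predict

X = np.asarray(inputs)   # (num_samples, SAMPLE_LEN, num_points * 3)
y = np.asarray(targets)  # (num_samples, num_points * 3)
```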
The network did manage to learn how wind or hitting the ground affects the shape of the plane.
.hip file
- A plane simulation is generated in the `/obj/generate_data` geometry node.
- Trained models are used in `test_single_point` or `/obj/test_all_points` to generate the animation. To do this, use the interface of a `create_keyframes` node to set the desired parameters and evaluate it. To play the animation, use the `switch1` node.
- To generate samples, use `/out/wedge1`. It randomizes parameters and creates samples. Multiple instances of Houdini can be used to do this faster: the PID of the Houdini process is used in the file name, so instances of the software do not overwrite files created by other instances.
- Use `plane.ipynb` to train the models; a rough sketch of the training step follows below.
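A hedged sketch of the kind of training call plane.ipynb presumably makes, reusing `model` and the `X`, `y` arrays from the sketches above; the epoch count, batch size and file name are assumptions:

```python
# Train on the prepared samples and save the result for later conversion to tf.js.
# Reuses `model`, `X` and `y` from the sketches above; hyperparameters are assumptions.
model.fit(X, y, epochs=100, batch_size=64, validation_split=0.1)
model.save("plane_model.h5")
```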
In-Browser demonstration
A demonstration of the networks that predict the positions of key vertices is available at http://www.pkowalski.com/demo
It was written in JavaScript and uses several JS libraries.
Keras models were converted for tf.js with the command:
```
tensorflowjs_converter --input_format keras src_model trg_dir
```
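The same conversion can also be done from Python with the tensorflowjs package; the file names below are placeholders:

```python
# Sketch: convert a saved Keras model to the tf.js layers format from Python
# instead of the CLI. Paths are placeholders.
import tensorflow as tf
import tensorflowjs as tfjs

model = tf.keras.models.load_model("plane_model.h5")
tfjs.converters.save_keras_model(model, "trg_dir")
```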
How to use
The demonstration should start after the page has loaded. To change simulation parameters, use the UI that appears in the top-right part of the page. To apply the selected parameters and restart the simulation, click Apply.
A graph in the top-left part of the screen shows the time taken by the neural network to predict the next frame.
Comment
The first N frames are generated by offsetting the start position, where N is the length of the sample that the network uses to predict the next frame. This is why there is a sudden change in the shape of the plane.
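A rough sketch of how that seeding could look; the linear offset and the per-frame step are assumptions about the demo's implementation:

```python
# Sketch: build the first N keyframes by offsetting the start pose so the network
# has a full window to predict from. The linear offset scheme is an assumption.
import numpy as np

N = 5                             # sample length used by the network
start_pose = np.zeros((6, 3))     # key-vertex positions at the start frame (placeholder)
step = np.array([0.0, 0.0, 0.1])  # assumed per-frame displacement of the whole plane

# Synthetic first N keyframes: the start pose shifted a bit further each frame.
seed_frames = np.stack([start_pose + i * step for i in range(N)])  # shape (N, 6, 3)
# Real predictions begin at frame N, which is where the visible jump in shape occurs.
```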
Due to the small number of samples that the networks were trained on, some combinations of parameters cause the animation to break.