Neural Network Snake Game Controller in Java
This is a Java implementation of the classic Snake game controlled by a neural network trained with Q-learning, a reinforcement-learning algorithm. No pre-defined training data is required: the network is trained by letting the snake play the game, learning from its own experience and improving with each game played.
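To make the idea concrete, here is a minimal, self-contained sketch of the Q-learning update such an agent relies on. The class and method names (`QUpdateSketch`, `update`, `getQ`) are illustrative, not this project's API, and a table stands in for the neural network that would approximate the Q-function.

```java
import java.util.HashMap;
import java.util.Map;

public class QUpdateSketch {
    static final double ALPHA = 0.1;   // learning rate
    static final double GAMMA = 0.9;   // discount factor

    // Q-values keyed by "state|action"; in the real project a neural
    // network would replace this lookup table.
    static Map<String, Double> q = new HashMap<>();

    static double getQ(String state, int action) {
        return q.getOrDefault(state + "|" + action, 0.0);
    }

    // Standard Q-learning update:
    // Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    static void update(String s, int a, double reward, String sNext, int numActions) {
        double best = Double.NEGATIVE_INFINITY;
        for (int a2 = 0; a2 < numActions; a2++) {
            best = Math.max(best, getQ(sNext, a2));
        }
        double target = reward + GAMMA * best;
        double old = getQ(s, a);
        q.put(s + "|" + a, old + ALPHA * (target - old));
    }

    public static void main(String[] args) {
        // Hypothetical step: the snake moved toward food and was rewarded.
        update("foodAhead", 0, 1.0, "ateFood", 4);
        System.out.println(getQ("foodAhead", 0)); // value nudged toward the reward
    }
}
```

The key point is that the reward signal alone drives learning, which is why no pre-labeled training data is needed.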
Usage
- Clone this repository
- Compile and run the main class
- The game will start automatically, and the snake will begin to move
As the snake moves, the neural network learns from its actions and improves its performance over time.
Training Data and Game Logs
Neither the training data nor the game logs exist until the code is run. Training data is generated from the snake's movements as it plays, and the game logs, written during each session, record the snake's movements and the game's state.
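The exact log format depends on the code, but a per-step record could be sketched as follows. The file name, CSV fields, and `writeLog` helper here are assumptions for illustration, not this project's actual format.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class GameLogSketch {
    // Writes a CSV header plus one line per game step; returns lines written.
    // The columns (game, step, head position, action, reward) are hypothetical.
    static int writeLog(Path log) throws IOException {
        List<String> lines = List.of(
                "game,step,headX,headY,action,reward",
                "1,0,5,5,UP,0.0",
                "1,1,5,4,UP,1.0"); // positive reward: the snake ate food
        Files.write(log, lines);
        return lines.size();
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("snake-game", ".csv");
        System.out.println(writeLog(log) + " lines written to " + log);
    }
}
```

Logging state, action, and reward per step is enough to reconstruct a training trajectory after the fact.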
Performance Improvement
Without changing any hyperparameters, the neural network should show significant improvement within about 50 games. As the snake continues to play, the network learns from its mistakes, so the longer it plays, the better it becomes at controlling the snake.
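One common reason Q-learning agents improve steadily over the first few dozen games is a decaying epsilon-greedy policy: early games explore randomly, later games exploit what the network has learned. The schedule below is a hedged sketch with assumed constants, not necessarily the one this project uses.

```java
public class EpsilonDecaySketch {
    // Exponentially decaying exploration rate, floored at a minimum.
    // start, min, and decay are illustrative hyperparameters.
    static double epsilonFor(int game, double start, double min, double decay) {
        return Math.max(min, start * Math.pow(decay, game));
    }

    public static void main(String[] args) {
        // Early games act almost entirely at random; by game 50 the
        // policy is mostly greedy with occasional exploration.
        System.out.printf("game 0:  eps=%.3f%n", epsilonFor(0, 1.0, 0.05, 0.95));
        System.out.printf("game 50: eps=%.3f%n", epsilonFor(50, 1.0, 0.05, 0.95));
    }
}
```

With these assumed constants, exploration drops from 100% to under 10% by game 50, which lines up with the observation that improvement becomes visible around that point.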
Contributing
Contributions are welcome. If you would like to contribute, please submit a pull request with any suggestions or improvements that could make this project better.