this post was submitted on 02 Jun 2024
52 points (93.3% liked)

  • the neural network is trained with deep Q-learning in its own training environment (a rough sketch of what such a training loop can look like is below)
  • it controls the game with twinject
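
for anyone who hasn't seen deep Q-learning before, here's a rough sketch of what a training loop like this can look like (PyTorch; the state encoding, action space and hyperparameters are placeholder assumptions for illustration, not the project's actual values or API):

```python
# minimal deep Q-learning sketch; STATE_DIM, N_ACTIONS and the hyperparameters
# below are assumptions for illustration, not this project's actual code
import copy
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 32    # e.g. positions/velocities of the nearest bullets (assumed)
N_ACTIONS = 9     # 8 movement directions + staying still (assumed)
GAMMA = 0.99      # discount factor for future reward
EPSILON = 0.1     # exploration rate for epsilon-greedy action selection
BATCH_SIZE = 64

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
target_net = copy.deepcopy(q_net)   # periodically re-synced copy for stable targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=100_000)      # replay buffer of (s, a, r, s', done) tuples

def select_action(state):
    # epsilon-greedy: mostly follow the current Q estimates, sometimes explore
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step():
    if len(replay) < BATCH_SIZE:
        return
    states, actions, rewards, next_states, dones = zip(*random.sample(replay, BATCH_SIZE))
    states = torch.as_tensor(states, dtype=torch.float32)
    actions = torch.as_tensor(actions, dtype=torch.int64)
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    next_states = torch.as_tensor(next_states, dtype=torch.float32)
    dones = torch.as_tensor(dones, dtype=torch.float32)

    # Q(s, a) for the actions actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bellman target: immediate reward plus discounted best future value
        targets = rewards + GAMMA * target_net(next_states).max(dim=1).values * (1 - dones)

    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```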

demonstration video of the neural network playing Touhou (Imperishable Night):

it actually makes progress up to the stage boss, which is fairly impressive. it performs okay in its training environment but poorly in an existing bullet hell game, where it makes a lot of mistakes.

let me know your thoughts and any questions you have!

[–] 100@fedia.io 8 points 5 months ago (3 children)

one problem I've seen with these game AI projects is that you have to constantly tweak them and reset training because they eventually end up in a loop of bad habits and don't progress

so is it even possible to complete such a project with this kind of approach, when it seems to take too much time to get anywhere without insane server farms?

[–] zolax@programming.dev 3 points 5 months ago (2 children)

> one problem I've seen with these game AI projects is that you have to constantly tweak them and reset training because they eventually end up in a loop of bad habits and don't progress

you're correct that this is a recurring problem with a lot of machine learning projects, but it's more of an issue with some evolutionary algorithms (simulating evolution to create better-performing neural networks), where the randomness of evolution usually leads to unintended behaviour and an eventual lack of progress. this project uses deep Q-learning instead.
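
to make the contrast concrete: a typical neuroevolution step just perturbs the network's weights at random and keeps whichever copies score best, roughly like this (a toy illustration, not code from this project):

```python
# toy neuroevolution-style mutation step, for contrast only
import copy

import torch

def mutate(net, sigma=0.02):
    # return a copy of `net` with Gaussian noise added to every weight;
    # selection then keeps the best-scoring copies, so progress relies on
    # lucky mutations rather than gradient updates
    child = copy.deepcopy(net)
    with torch.no_grad():
        for p in child.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return child
```

deep Q-learning instead updates a single network with gradient descent towards targets computed from the rewards it actually receives (see the sketch in the post).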

the neural network is scored based on the total distance between itself and every bullet. so while the neural network doesn't perform well in-game, it actually scores very well (better than me in most attempts).
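
as a rough sketch of that scoring (the project's actual reward shaping may differ):

```python
# distance-based scoring sketch; larger is better
import math

def frame_score(player_pos, bullet_positions):
    # sum of distances from the player to every on-screen bullet;
    # a larger value means the agent is keeping further away from bullets overall
    px, py = player_pos
    return sum(math.hypot(bx - px, by - py) for bx, by in bullet_positions)
```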

> so is it even possible to complete such a project with this kind of approach, when it seems to take too much time to get anywhere without insane server farms?

the vast majority of these kinds of projects, including mine, aren't created to solve a problem. they just investigate the potential of such an algorithm, both as a learning experience and as something for others to learn from.

the only practical applications for this project would be replacing the "CPU" in 2-player bullet hell games and maybe automatically gauging a game's difficulty. programs already exist to play bullet hell games automatically, so the applications are quite limited.

[–] 100@fedia.io 2 points 5 months ago (1 children)

i mean, if you could in the future make an AI play long games from start to finish, it would be very useful for testing games with thousands of instances running at once

[–] zolax@programming.dev 1 points 5 months ago

definitely. a game's difficulty is usually calculated with algorithms (e.g. in osu!, a rhythm game), so there's definitely a practical application there