We need a more primitive way to detect user input than the Jump/Walk/Sprint/etc. input events. This lack of raw input detection is really holding a lot of rooms back, including some of my own.
Use cases and why current input detection isn’t sufficient.
Let’s create a scenario: say I want the player to perform a short dash in a given direction by tapping the Shift key. Simple enough: detect when the sprint button is pressed and move the player a bit.
Here’s the problem: on controller, the sprint button is mapped to clicking down the joystick, which feels horrible, especially when doing it quickly in succession. On mobile it’s even worse, since sprinting happens automatically once the touch stick is pushed far enough.
Ok, fine, what if I detect when the jump button is pressed instead? Well, now I jump and dash at the same time. So let’s restrict it so I can only dash while in the air; but now I’m forced to jump just to dash. Wait, now I want to add a double jump too. I could move the dash to the crouch input, but then I run into the same problem of it feeling bad on controller and mobile.
This could all have been avoided had I been able to check the raw inputs of the user and map them accordingly for each platform (which we can already detect).
Mockup
Here are a few chips I believe would be useful:
Input Button Pressed Event - Sends an execution when the configured button is pressed.
Input Button Released Event - Sends an execution when the configured button is released.
Input Last Button Pressed Event - Sends an execution as well as outputs a string of the last button pressed.
Input Last Button Released Event - Sends an execution as well as outputs a string of the last button released.
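To make the idea concrete, here’s a rough sketch of how the proposed Input Button Pressed Event chip could drive the dash example above. Everything here is hypothetical: the `InputButtonPressedEvent` class, the button names, and the per-platform mapping are all invented for illustration, not an existing API.

```python
# Hypothetical per-platform mapping of a raw button to the dash action.
# Platform detection already exists; the button names are made up.
DASH_BUTTON = {
    "pc": "Shift",        # tap Shift to dash
    "controller": "B",    # avoid clicking the joystick
    "mobile": "DashPad",  # a dedicated on-screen button
}

class InputButtonPressedEvent:
    """Sketch of the proposed chip: sends an execution when the
    configured raw button is pressed."""
    def __init__(self, button):
        self.button = button
        self.handlers = []

    def connect(self, handler):
        self.handlers.append(handler)

    def simulate_press(self, button):
        # Only the configured button triggers the connected handlers.
        if button == self.button:
            for handler in self.handlers:
                handler()

def make_dash_handler(state):
    def dash():
        state["dashed"] = True  # move the player a short distance here
    return dash

# Configure the event for the detected platform.
platform = "controller"
event = InputButtonPressedEvent(DASH_BUTTON[platform])
state = {"dashed": False}
event.connect(make_dash_handler(state))

# Pressing the mapped button dashes; any other button does nothing.
event.simulate_press("B")
```

The point is that the dash no longer collides with jump, crouch, or sprint: each platform binds the action to whatever raw button actually feels good there.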
Conclusion
The addition of raw user input detection could open up a lot of possibilities for advanced creation, letting creators build even more enjoyable experiences that keep players engaged!