One of the things I've been thinking a lot about recently is writing software that, instead of strictly defining things, can infer a user's intent. As I pondered the solution above, it occurred to me that defining a bounding box exactly 34 pixels in size was doing just that ... limiting the software's ability to infer the user's intent in touching the screen. What if the touch point is 35 pixels away from the intended target?
Instead, what if you compare the user's touch point against the entity positions and simply select the closest one (regardless of whether it's 35 or 36 pixels away)?
The method below does just that, given an array of entities and a touch location:
This assumes single-finger data entry, meaning that it's probably best served for things like menus and options.

private static Entity FindNearestEntity(Entity[] entities, ref TouchLocation touch)
{
    Entity selectedEntity = null;
    float currentDistance = float.MaxValue;

    // Track the entity with the nearest position to the touch point.
    // DistanceSquared avoids a square root; since we only compare
    // distances against each other, the squared values work just as well.
    for (int i = 0; i < entities.Length; i++)
    {
        var entity = entities[i];

        float distance = Vector2.DistanceSquared(touch.Position, entity.Position);
        if (distance < currentDistance)
        {
            selectedEntity = entity;
            currentDistance = distance;
        }
    }

    return selectedEntity;
}

Note that currentDistance starts at float.MaxValue so the first entity examined always becomes the initial candidate.
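To give a sense of how this fits into a game loop, here's a rough usage sketch against XNA's TouchPanel API. The entities array and the reaction to a selection are placeholders for whatever your game actually tracks; this is just one way it might be wired up inside Update:

// A minimal usage sketch (assumes the Microsoft.Xna.Framework.Input.Touch
// namespace; `entities` is a placeholder for the game's own Entity array).
TouchCollection touches = TouchPanel.GetState();
if (touches.Count > 0 && touches[0].State == TouchLocationState.Pressed)
{
    TouchLocation touch = touches[0];
    Entity selected = FindNearestEntity(entities, ref touch);
    if (selected != null)
    {
        // React to the selection here, e.g. highlight the chosen menu item.
    }
}

Checking for TouchLocationState.Pressed means the selection fires once per tap rather than on every frame the finger stays down.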
It's pretty exciting that XNA now has these touch APIs available. I am looking forward to doing more research in this space and playing around with different "natural user interfaces".
For some great additional resources, be sure to drop by the Natural User Interface Group on the web at: http://nuigroup.com/