First attempts at porting to mobile - February 23, 2019 by Steve
This week, I decided to see about porting Minotaur Maître D' over to the iPad. Originally for this week's update I was going to do some more 3D printing, or maybe a project on the laser cutter. However, the printer decided to die on me, and is currently en route to the RMA processing center. The laser cutter also had a ribbon cable go bad, and we're now waiting for a replacement. At least the laser cutter worked pretty well for over a year before breaking, unlike the 6 weeks I got out of the 3D printer.
With all the other maker tools out of commission, the first thing I did in order to feel productive was set up Unity on the MacBook Pro and get Xcode installed. I'm not going to go into detail on setting up the build process, since Unity provides instructions for that. At this point I had two goals: first getting the game to display properly, then adding in touch input.
When I wrote the first version of the game, I targeted playing in a browser. This meant I could make the player fit whatever
aspect ratio I wanted. Because the maze is square, I made the camera viewport square, then I just had to adjust the orthographicSize
of the camera to match the overall size of the maze.
Rect viewPort = new Rect {width = 1f, height = 1f};
Camera.main.rect = viewPort;
Camera.main.orthographicSize = (mazeSize * 1f) / 2;
From the documentation, "The orthographicSize is half the size of the vertical viewing volume", hence dividing the mazeSize by two.
Now when you move over to a mobile device, or allow the game to run full screen, you lose control over the aspect ratio. Since the orthographic size controls the vertical dimension, running my game on a mobile device in portrait mode meant the maze spilled over the sides. To fix this, I needed to check which dimension is larger and then choose the orthographic size appropriately.
Rect viewPort = new Rect {width = 1f, height = 1f};
Camera.main.rect = viewPort;
// For mobile devices:
float aspect = (float) Screen.width / (float) Screen.height;
if (aspect >= 1f) {
    Camera.main.orthographicSize = (mazeSize * 1f) / 2f;
} else {
    Camera.main.orthographicSize = (mazeSize * 1f) / aspect / 2f;
}
If the aspect ratio is at least 1, the width is at least as large as the height, so we proceed as we did before. However, if the aspect is smaller than 1, the screen is narrower than it is tall, and the maze would spill over the sides. Dividing the orthographic size by the aspect ratio enlarges the vertical viewing volume just enough that the horizontal viewing volume matches the maze width, so the maze fits horizontally with whitespace above and below.
This is a quick-and-dirty fix for now: the UI elements don't line up correctly anymore, and it's generally a mess. The maze fits on the screen, though, and that's enough to test with.
At this point, I just wanted to see if playing by touch was possible, so I didn't bother diving into free assets for virtual joysticks or anything like that yet. (Honestly, I'll probably go that route in the end; why reinvent the wheel?)
So I went with a very simple control scheme: you touch anywhere on the screen, then drag your finger in the direction you want the character to run. Programmatically this is very simple. Touches in Unity have several phases. Of these, we only really care when a touch is in Began, and after that, only that it hasn't Ended or been Canceled. That is to say, we need to know when the touch (and we're only going to deal with the first touch) starts, and while it is still down (moving or not).
Every FixedUpdate() we'll get all Input.touches and take a look at the first one. If the touch's phase is TouchPhase.Began, the player has just touched the screen, so we capture that point. In the next FixedUpdate(), if that touch is still there (i.e. its phase is not TouchPhase.Ended or TouchPhase.Canceled), we compare the touch location with the start location, obtaining a Vector2 that gives us a heading and a distance.
Touch[] touches = Input.touches;
if (touches.Length > 0) {
    // We'll use the first touch.
    Touch myTouch = touches[0];
    if (myTouch.phase == TouchPhase.Began) {
        touchStart = myTouch.position;
    } else if (myTouch.phase != TouchPhase.Ended && myTouch.phase != TouchPhase.Canceled) {
        moveHorizontal = normalizeTouchInput(myTouch.position.x - touchStart.x);
        moveVertical = normalizeTouchInput(myTouch.position.y - touchStart.y);
    }
}
Vector2 movement = new Vector2(moveHorizontal, moveVertical);
You'll see a normalizeTouchInput() method being used on the difference in x and y positions. How that method works can really change how your input feels. If you have it output 1, 0, or -1, you'll get controls that feel almost like a directional pad: you're pushing right, left, or neither. It's very similar to hitting keys on the keyboard. Except that it feels nothing like that. To me at least, there is a disconnect between dragging your finger some distance and having the movement go all or nothing. If you do decide to go this way, set a minimum threshold for the movement: make sure the finger has to move some distance away from its start point before normalizing to 1, otherwise you'll get lots of movement without really moving your finger at all. This whole scheme also feels weirder the further you drag your finger away from the start.
// All or nothing, like pressing a key.
private float normalizeTouchInput(float moveVal)
{
    if (moveVal >= touchNorm) {
        return 1f;
    } else if (moveVal <= (-1f * touchNorm)) {
        return -1f;
    }
    return 0f;
}
Now, another way to normalize the output is to do something very similar, but rather than using touchNorm as a minimum movement amount, you make it a maximum. If the movement is below that amount, rather than returning zero, return a proportional value between -1 and 1.
// Proportional return.
private float normalizeTouchInput(float moveVal)
{
    if (moveVal >= touchNorm) {
        return 1f;
    } else if (moveVal <= (-1f * touchNorm)) {
        return -1f;
    }
    return moveVal / touchNorm;
}
This feels a bit more joystick-like and gives a bit finer control. I still had a bit of an issue playing like this, though, because I'd lose track of where the start point was, and found myself drifting further away from it the longer I kept my finger down. A bit of visual feedback might help with that, and I feel like a dead zone around the start point might help with control as well.
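That dead zone idea could be folded into the proportional version by ignoring any drag shorter than some radius. As a rough sketch (deadZone here is a hypothetical field, in pixels, alongside the existing touchNorm):

```csharp
// Sketch: proportional return with a dead zone around the touch start point.
// deadZone is a hypothetical field; drags shorter than it are ignored.
private float normalizeTouchInput(float moveVal)
{
    if (Mathf.Abs(moveVal) < deadZone) {
        return 0f; // small wobbles around the start point produce no movement
    }
    if (moveVal >= touchNorm) {
        return 1f;
    } else if (moveVal <= (-1f * touchNorm)) {
        return -1f;
    }
    return moveVal / touchNorm;
}
```

Applying the dead zone per axis like this makes it a square rather than a circle around the start point, which is probably fine for a quick test.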
At this point, the game is playable. The touch control code fits in just after reading the keyboard inputs and before the call that adds force to the player's Rigidbody2D. The full FixedUpdate() is below; you can see how the touch input simply overrides the Input.GetAxis() values.
void FixedUpdate() {
    if (!GameController.instance.isWaitingForNewLevel()) {
        float moveHorizontal = Input.GetAxis("Horizontal");
        float moveVertical = Input.GetAxis("Vertical");
        Touch[] touches = Input.touches;
        if (touches.Length > 0) {
            // Use the first touch.
            Touch myTouch = touches[0];
            if (myTouch.phase == TouchPhase.Began) {
                touchStart = myTouch.position;
            } else if (myTouch.phase != TouchPhase.Ended && myTouch.phase != TouchPhase.Canceled) {
                moveHorizontal = normalizeTouchInput(myTouch.position.x - touchStart.x);
                moveVertical = normalizeTouchInput(myTouch.position.y - touchStart.y);
            }
        }
        Vector2 movement = new Vector2(moveHorizontal, moveVertical);
        rb2d.AddForce(movement * speed);
    }
}
The physics-based movement does not work as well as I would like. In fact, I've started to agree with the comments I received on my entry that the controls need to be tightened up. I'm also pretty sure I'll end up using a virtual joystick asset rather than rolling my own, just to get the visual feedback.
I'm working on changing the player control to something that isn't just AddForce() on the Rigidbody2D, which will allow a bit more control over how the player moves as well. I'll cover that in the next post, though, as I ended up making a simple project that lets me quickly experiment with different schemes. I want to clean that up a little and put it up on GitHub so anyone can pull it down and play with it.
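As a rough sketch of the direction (the details will be in that next post), one common alternative is to set the body's velocity directly instead of applying a force, which makes starts and stops immediate. This is only an illustration, not necessarily what the final controls will look like; rb2d and speed are the same fields used in FixedUpdate() above:

```csharp
// Sketch: drive the Rigidbody2D by velocity instead of AddForce().
void FixedUpdate() {
    float moveHorizontal = Input.GetAxis("Horizontal");
    float moveVertical = Input.GetAxis("Vertical");
    Vector2 movement = new Vector2(moveHorizontal, moveVertical);
    // Velocity is in units per second, so there's no force accumulation
    // or drag tuning; releasing the input stops the player immediately.
    rb2d.velocity = movement * speed;
}
```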