When I “signed up” for this project, it seemed to me that my goal would be to focus predominantly on the visual elements: the shaders and particle effects.
As it turned out, Tim Volp was to be the other programmer on this project, and I know from previous projects that those are his goals as well.
The visual elements were split down the middle, with Tim looking after the shaders (he had some really great ideas to follow up) while I looked after the fog particles.
The way things ended up, I was also looking after the character controllers and the camera.
My main problem with the character controllers was not knowing that the CharacterController component has an isGrounded check built in. Careful reading of the APIs would have alerted me to that in the first place. It seems that one of my main problems is searching for solutions to what I want something to do; if I started my search with the APIs instead, I would have a better understanding of what a component can actually do, and what terms to use for better Google searches. As it was, I was probably keeping my characters too far off the ground using a raycast, instead of the pre-existing isGrounded check.
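For reference, here is a minimal sketch of how the built-in check can replace a manual raycast. The class and field names are my own placeholders, not the project's actual code:

```csharp
using UnityEngine;

// Sketch: using CharacterController.isGrounded instead of a manual raycast.
// "moveSpeed" and "gravity" are placeholder values, not the project's real tuning.
public class SimpleGroundedMover : MonoBehaviour
{
    public float moveSpeed = 5f;
    public float gravity = 20f;

    private CharacterController controller;
    private float verticalVelocity;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // isGrounded is valid after the previous Move() call, so no raycast
        // is needed to keep the character resting on the ground.
        if (controller.isGrounded)
        {
            verticalVelocity = 0f;
        }
        else
        {
            verticalVelocity -= gravity * Time.deltaTime;
        }

        Vector3 move = transform.forward * Input.GetAxis("Vertical") * moveSpeed;
        move.y = verticalVelocity;
        controller.Move(move * Time.deltaTime);
    }
}
```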
The fog wasn’t much of a challenge, just getting the right particles to do the trick. I did learn quite a bit about the Shuriken particle system, though. It was the first time I have used this system, having opted for the Legacy Particle System previously. The way I found to get the particles to size up adequately was to create the system when the x scaling of the emitter was at 0.3, then recreate it at a scale of 10, and again at a scale of 30. The different attributes of the fog that could be controlled through script were then plotted against the scaling, and a formula was derived for each that would plot the curve of the fog elements.
This led to unusual looking formulae within my code that looked a bit like this:
fogEmitter.startLifetime = (0.0016f * xScale * xScale) + (0.145f * xScale) + 10.49f + startLifeTimeOffset;
fogEmitter.startSpeed = (-0.0014f * xScale * xScale) + (0.1139f * xScale) + 0.549f + startSpeedOffset;
fogEmitter.startSize = (-0.0005f * xScale * xScale) + (0.0998f * xScale) + 0.514f + startSizeOffset;
fogEmitter.maxParticles = (int)((366.97f * xScale * xScale) + (339.25f * xScale) + 7581f + maxParticlesOffset);
fogEmitter.emissionRate = (-0.226f * xScale * xScale) + (92.663f * xScale) + 656.68f + emissionRateOffset;
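For anyone curious, the fitting step itself can also be done in code rather than in a spreadsheet. This is just a sketch of the same maths, not the tool I actually used: it recovers the quadratic coefficients exactly from three sampled (scale, value) pairs via Lagrange interpolation.

```csharp
// Sketch: fit y = a*x^2 + b*x + c exactly through three sampled points,
// e.g. the emitter snapshots taken at x scales of 0.3, 10 and 30.
public static class QuadraticFit
{
    public static void Fit(
        float x1, float y1, float x2, float y2, float x3, float y3,
        out float a, out float b, out float c)
    {
        // Lagrange interpolation denominators for each sample point.
        float d1 = (x1 - x2) * (x1 - x3);
        float d2 = (x2 - x1) * (x2 - x3);
        float d3 = (x3 - x1) * (x3 - x2);

        a = y1 / d1 + y2 / d2 + y3 / d3;
        b = -y1 * (x2 + x3) / d1 - y2 * (x1 + x3) / d2 - y3 * (x1 + x2) / d3;
        c = y1 * x2 * x3 / d1 + y2 * x1 * x3 / d2 + y3 * x1 * x2 / d3;
    }
}
```

Feeding in the three emitter snapshots for each attribute yields coefficients of the kind shown in the formulae above.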
My biggest challenge for the project was the dynamic camera. This was an unusual camera setup whereby the player was never to have control of the camera. The theme of the game was trust, and the players needed to trust that we would be doing the right thing with the camera. Let me start by saying that 5–6 weeks is not nearly enough time to establish a decent camera system for a “Journey”-like game. The guy who was brought in to work on the Journey cameras was at it full time for 2–3 years. Unfortunately, the camera wasn’t up to the task, even for the gallery, but I was close. So close, but no cigar. I think I have now found the problem with it.
Because there are two players affecting the movement of the camera, I was looking at different ways to drive it, mainly by following the rotational direction of a target between the two players. I started using Quaternions, but the camera would switch around way too fast and jitter in certain places. I settled for using a distant target and trigger boxes that could swap the targets around when there was a specific purpose, but there were many complaints about the camera and its angles.
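The between-the-players idea can be sketched like this. This is a simplified version with hypothetical field names, not the actual trigger-box system, and the Slerp damping shown here is one way to tame the jitter I was seeing:

```csharp
using UnityEngine;

// Sketch: a camera that follows a point between two players and turns slowly.
// Slerping toward the target rotation, rather than snapping, is one way to
// damp the fast switching and jitter that raw Quaternion targets produced.
public class MidpointCamera : MonoBehaviour
{
    public Transform playerA;
    public Transform playerB;
    public float followDistance = 10f;
    public float height = 4f;
    public float turnSpeed = 1.5f; // lower = slower, calmer camera

    void LateUpdate()
    {
        Vector3 midpoint = (playerA.position + playerB.position) * 0.5f;

        // Ease toward a position behind and above the midpoint.
        Vector3 desiredPos = midpoint - transform.forward * followDistance
                             + Vector3.up * height;
        transform.position = Vector3.Lerp(transform.position, desiredPos,
                                          Time.deltaTime);

        // Turn gradually toward the midpoint instead of snapping.
        Quaternion desiredRot = Quaternion.LookRotation(midpoint - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, desiredRot,
                                              turnSpeed * Time.deltaTime);
    }
}
```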
I did some more research into the camera and thought I had the solution. The tutorial area is supposed to be fun and unstructured, so the players can faff around and get used to the controls without any pressure. I believed I had achieved this and even play-tested the new camera with my son, but somehow it too failed me. It was “OK” for the first part of the level, but halfway through it was pointing in the wrong direction.
I was devastated, for we were now at the gallery. Some very quick and nasty trigger boxes were set up to try and get everything finalised, but Savik and I were not very happy with the outcome.
This weekend, I believe I came up with another solution. For some weird reason, I had always thought that a player’s magnitude was the same as his velocity. I was experimenting with their velocity when I realised that I had been overthinking the problem (as usual). I just needed to find the average of their individual transform.forwards and use that for the target rotation.
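A sketch of that idea, assuming two player transforms (the field names are placeholders):

```csharp
using UnityEngine;

// Sketch: derive the camera's target rotation from the average of the two
// players' facing directions, rather than from their positions or velocities.
public class AveragedForwardCamera : MonoBehaviour
{
    public Transform playerA;
    public Transform playerB;
    public float turnSpeed = 2f;

    void LateUpdate()
    {
        // Sum the two forward vectors; their normalized sum is the average facing.
        Vector3 sum = playerA.forward + playerB.forward;

        // Guard against the degenerate case where the players face opposite ways
        // and the sum collapses to (near) zero.
        if (sum.sqrMagnitude > 0.0001f)
        {
            Quaternion targetRotation = Quaternion.LookRotation(sum.normalized);
            transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation,
                                                  turnSpeed * Time.deltaTime);
        }
    }
}
```

Note that this only sets the rotation; in practice it would sit alongside whatever positional follow logic the camera already uses.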
I will be play-testing this with Tim today (play-testing being something that was sadly missing from this project, but will be rectified in future efforts), but I think I have finally done well with this iteration of the camera movement.