
Postmortem on Pulse Monitoring Plug-in for Designer’s Game

I found that this was an unusual process.  To date I have been deeply immersed in most of the games that I have collaborated on.  The closest comparison was my involvement with “Valour”, where the programmers went in hard at the start, set a lot of things up and then had much less contact with the designers as they advanced their ideas through the gameplay.

Even mentioning “Valour” sends shivers down my spine, and it is no comparison to the final product of “Be Brave”, created by Savik Fraguella.  My involvement was even less, in the scheme of things.  I created the software to get Unity to accept an incoming pulse beat from the wearer and feed that pulse rate into a system that would emit a “Fear Factor” rating that Savik could then use to influence the content of his game.

I completed this task quickly, over the space of about 2-3 weeks, and handed it off to Savik, then waited for any problems that might emerge.  There was one situation where the program would quit and then restart as the level was changed, but this was quickly overcome by making it a persistent singleton that could start during the main menu and remain through the levels, but destroy itself when either quitting or completing the level(s).
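For reference, a minimal sketch of that kind of persistent singleton in Unity is below.  The class and method names are illustrative, not the actual project code.

```csharp
using UnityEngine;

// Minimal sketch of a persistent singleton MonoBehaviour, along the lines of
// the fix described above. Names are illustrative, not the project code.
public class PulseReader : MonoBehaviour
{
    public static PulseReader Instance { get; private set; }

    void Awake()
    {
        // If an instance already exists (e.g. after a level change), destroy
        // the duplicate so only one reader survives.
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject);
            return;
        }

        Instance = this;
        // Survive scene loads so the serial connection isn't torn down
        // between the main menu and the levels.
        DontDestroyOnLoad(gameObject);
    }

    // Called when quitting or when the level(s) are completed.
    public void Shutdown()
    {
        Instance = null;
        Destroy(gameObject);
    }
}
```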

Savik, at one stage, did borrow the equipment needed to test the game, but he either didn’t install the FTDI drivers, or there was a problem installing them.  I referred him to my blog post regarding the project for installing them.

As we got closer to Open Day, where the game would be displayed, I regularly checked with Savik about whether he wanted the Arduino to test the game, but he wasn’t interested, focusing instead on the artwork of the game.  It wasn’t until the day before Open Day that I realised there were some problems.

The pulse rate wasn’t getting through to the Unity build.  (It still processes the information quicker in the editor than in the build, but that is another story.)  I narrowed the information coming into Unity so that it wouldn’t clog the build’s ability to run.  Instead of collecting all the information, I narrowed the Arduino down to sending only the information I could use, which was the actual beat rate.
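A rough sketch of the Unity side of that, assuming the Arduino now writes a single beat-rate value per line over serial, might look like the following.  The port name and baud rate are placeholders, and Unity needs the full .NET API compatibility level for System.IO.Ports.

```csharp
using System.IO.Ports;
using UnityEngine;

// Illustrative sketch only: reads one BPM value per line from the Arduino,
// assuming the sketch has been cut down to send nothing but the beat rate.
public class PulseSerial : MonoBehaviour
{
    SerialPort port;
    public int LatestBpm { get; private set; }

    void Start()
    {
        port = new SerialPort("COM3", 115200);  // placeholder port/baud
        port.ReadTimeout = 10;                  // don't block the main thread for long
        port.Open();
    }

    void Update()
    {
        try
        {
            // Each line is expected to be just the beat rate, e.g. "72".
            string line = port.ReadLine();
            int bpm;
            if (int.TryParse(line.Trim(), out bpm))
                LatestBpm = bpm;
        }
        catch (System.TimeoutException)
        {
            // No new reading this frame; keep the previous value.
        }
    }

    void OnDestroy()
    {
        if (port != null && port.IsOpen) port.Close();
    }
}
```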

Video showing the Pulse rate through a simple GUI and the Fear Rating being influenced by the pulse rate.  (I must apologise for spelling “Serial” wrong)

With some help from my tutor, who performed some Unity black magic, I was able to get the build to work properly.  When I find out what black magic was performed, I will amend this post with the information.

Savik showed some quite advanced scripting skills that impressed me, both in knowing that these techniques existed in the first place and in applying them in a game setting.  Using Prim’s Algorithm to randomly generate a level so that the exit was not close to the entrance was among the concepts that I found interesting.  I would do well to sit with Savik and discuss how he came to implement them, and other ideas such as, apparently, using the fear state to assist in the generation of the level.
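For anyone unfamiliar with the technique, here is a generic sketch of randomised Prim’s growth over a grid.  This is only an illustration of the idea, not Savik’s implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of randomised Prim's algorithm for level generation:
// grow the open floor outward from a start cell so the exit tends to end up
// far from the entrance. true = open floor, false = wall.
public static class PrimMaze
{
    // Width and height should be odd so the cell/wall pattern lines up.
    public static bool[,] Generate(int width, int height)
    {
        var open = new bool[width, height];
        var frontier = new List<Vector2Int>();
        var dirs = new[] { Vector2Int.up, Vector2Int.down, Vector2Int.left, Vector2Int.right };

        var start = new Vector2Int(1, 1);
        open[start.x, start.y] = true;
        AddFrontier(start, width, height, open, frontier, dirs);

        while (frontier.Count > 0)
        {
            int i = Random.Range(0, frontier.Count);
            Vector2Int cell = frontier[i];
            frontier.RemoveAt(i);
            if (open[cell.x, cell.y]) continue;  // already carved via another path

            // Connect this frontier cell to a random already-open neighbour
            // two cells away, carving the wall between them.
            var neighbours = new List<Vector2Int>();
            foreach (var d in dirs)
            {
                var n = cell + d * 2;
                if (n.x > 0 && n.y > 0 && n.x < width - 1 && n.y < height - 1 && open[n.x, n.y])
                    neighbours.Add(n);
            }
            if (neighbours.Count == 0) continue;

            var link = neighbours[Random.Range(0, neighbours.Count)];
            open[cell.x, cell.y] = true;
            open[(cell.x + link.x) / 2, (cell.y + link.y) / 2] = true;
            AddFrontier(cell, width, height, open, frontier, dirs);
        }
        return open;
    }

    static void AddFrontier(Vector2Int cell, int width, int height,
                            bool[,] open, List<Vector2Int> frontier, Vector2Int[] dirs)
    {
        foreach (var d in dirs)
        {
            var n = cell + d * 2;
            if (n.x > 0 && n.y > 0 && n.x < width - 1 && n.y < height - 1 && !open[n.x, n.y])
                frontier.Add(n);
        }
    }
}
```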

What I have learned through this project is not to adopt a “set and forget” mentality.  Had I remained closer to Savik as he progressed through the making of the game, I would have been aware of the problems with my component much earlier, and I would also have gained a better understanding of how he utilised and adapted the scripts that left me with such a positive impression.

 


Gallery showing Post-mortem.

What a random placement schmozzle.

Learning from the past, we actually organised a play testing session after we believed we had fixed all the bugs from the previous play test.

The 3ds Max bug that affected my laptop last time was still present, so I could not build from my computer.  For some odd reason, Tim could still see the missing assets on his laptop.  I could not explain this, and still can’t.

We were satisfied until we got to the venue and did a quick speed run of the level.  That is when the camera gave up the ghost.  It worked perfectly fine in the tutorial area and then had severe problems in the later stages of the level.

We tried to quickly troubleshoot this problem and then decided, for most of the final stretch of the level, to include another of Savik’s trigger boxes to try and keep the players aimed in the right direction.

We did another build and had to call it quits at that point.

Then Unity took over and nothing seemed the same again.  Camera angles that were working previously now didn’t work.  Fog that had spawned in the exact same place since the beginning suddenly spawned in one of two places: the place where it was designed to spawn, or somewhere that seemed to be outside the temple.  Target points for the cut scenes suddenly seemed to move of their own volition, and occasionally the camera couldn’t make it back to its starting position.  The camera’s check for hitting objects was hit and miss, occasionally leaving it way up in the air.  Most of these weird bugs could possibly be attributed to some error in the quick fix for the camera at the end, except for the random spawning of the fog and the sudden movement of the camera targets for the cut scenes at the shrines.

The game seemed to be pretty well received and it was a huge improvement over the first effort put up for play testing, but in the end, it still left a bitter taste in my mouth.

The secret of a successful camera is that it attracts no feedback at all: it becomes such a part of the game that it isn’t noticed.  I realise that 6 weeks is hardly enough time to develop a fully evolved camera system, but I feel that this should have been better … at the very least, it should have pointed in the direction it was intended to.  My only solace is that the guy brought in to work on the camera for Journey spent almost two years getting it right.

For the first play test, my responsibility was the wind turbine trigger (which was never used in any iteration of the game), the mechanics of the headbutt trigger button, the player controllers, the fog and the camera.  The main problem I had with the player controllers for the first play test was probably that I didn’t realise the controllers had an isGrounded property built in.  I was achieving this through a raycast that was probably not quite grounding the player enough.  After the first play test, my main focus became the camera and, to a lesser degree, the fog mechanics.
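A small sketch of the difference (names and values are illustrative, not the project code):

```csharp
using UnityEngine;

// Sketch of the grounded check described above: a hand-rolled raycast versus
// the CharacterController's built-in isGrounded.
[RequireComponent(typeof(CharacterController))]
public class GroundedCheck : MonoBehaviour
{
    CharacterController controller;
    public float extraRayLength = 0.1f;   // illustrative value only

    void Awake()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // What I was doing: a short ray down from the controller's centre.
        // Too short a ray (or the wrong origin) and the player never quite
        // registers as grounded.
        bool groundedByRay = Physics.Raycast(
            transform.position, Vector3.down,
            controller.height * 0.5f + extraRayLength);

        // What the controller already provides, updated by its own Move() call.
        bool groundedBuiltIn = controller.isGrounded;

        Debug.Log("ray: " + groundedByRay + "  built-in: " + groundedBuiltIn);
    }
}
```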

It wasn’t until after the second play test, when I was trying to explain how my camera was supposed to be operating, that I realised a serious mistake in how I was working out my height and what the camera was looking at: I wasn’t applying a base y position for those calculations.
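For what it’s worth, a minimal sketch of the corrected idea, assuming the player’s y value is the base that the heights hang off (names and numbers are illustrative only):

```csharp
using UnityEngine;

// Illustrative sketch: camera height and look-at height are offsets from a
// base y value (here, the player's y), not absolute world-space heights.
public class CameraHeightExample : MonoBehaviour
{
    public Transform player;
    public float heightAboveBase = 4f;
    public float lookHeightAboveBase = 1.5f;

    void LateUpdate()
    {
        float baseY = player.position.y;   // the reference the heights hang off

        Vector3 pos = transform.position;
        pos.y = baseY + heightAboveBase;
        transform.position = pos;

        Vector3 lookAt = new Vector3(player.position.x,
                                     baseY + lookHeightAboveBase,
                                     player.position.z);
        transform.LookAt(lookAt);
    }
}
```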

After the second play test, my main focus was on the camera and the cut scenes at the shrines.

Things I have learned from this project:

Making time to play test as soon as you have a viable core game loop happening.  That way you can identify bugs early and work to correct them sooner than we did in this project.

Clear guidelines from the outset as to the number of levels and the arrangement of assets/puzzles within those levels.  This way, early testing of things such as cameras and character controllers can be devised and implemented before making it into the levels.

Just because a system will work in one part of the level doesn’t mean that it will work in all parts of the level.

Did I mention that full play testing will reveal what works and where it doesn’t?  So make time for the team members to play test the systems within the game.

Second Play Test Post-mortem

What a schmozzle (again).

There were so many problems with this play test that it is hard to know where to start.

For a root cause, I would probably start with the lack of private play testing beforehand.  Again, most of the bugs would have been identified in advance.

For a second problem, we had no way of building again on the morning of the play test.  The designer had incorporated .max files in the repo, which meant that no one was able to build from their laptops, and as we were using Unity 5, we couldn’t use the college computers to create another build.

As a result, we were stuck with the build that was created the night before, that we knew had serious bugs.

The player controllers worked well, but the camera required trigger boxes that changed the settings for the camera and its target.  The problem with these boxes was that they recorded the previous target, but when boxes overlapped, the original target was overwritten and eventually the distance target was lost completely.
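One possible way around this (a sketch only, with hypothetical names, not the code we shipped) is for the boxes to share a list of active targets rather than a single “previous target” field, so an exiting box only removes its own entry:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: each trigger box adds its target to a shared list on enter and
// removes it on exit, so overlapping boxes cannot wipe out each other's
// "previous target".
public class CameraTargetBox : MonoBehaviour
{
    static readonly List<Transform> active = new List<Transform>();

    // The most recently entered box wins; when it is exited, the earlier
    // target is still in the list and becomes current again.
    public static Transform Current
    {
        get { return active.Count > 0 ? active[active.Count - 1] : null; }
    }

    public Transform targetForThisBox;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            active.Add(targetForThisBox);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            active.Remove(targetForThisBox);
    }
}
```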

What I really needed for this game was a camera that could free roam, without the wild erratic swings when the players changed direction, combined with the camera box triggers that would direct the players to the right area.  The ideal would be to have triggers that load up points of interest within the scene and give a weighting to them, but I doubt there would have been enough time, and I wonder about the effectiveness of this method when the ideal, in the tutorial area, is for the players to experience the fun of an unorganised area where there is no pressure on them, and where they can enjoy getting to know the mechanics of the game and learn what can be done with them.

The other main problem was with the shrines themselves.  I needed to take absolute control of the camera and create a cut scene so that no other actions, either by the player or from any other source, could affect the camera at this delicate stage.

The fog will also need more effects to hide its actual spawning and its transformation from spawn mode to full-scale fog.

I didn’t learn anything new from this play test, except that shit can go wrong at the worst possible time.  We repeated many of the mistakes from the last play test.

First Play Test Post-mortem

What a schmozzle.

While people did have a lot of fun playing our game, it was for all the wrong reasons.  When the player controller hit the edge of a mesh collider, it had the possibility of becoming airborne and bouncing across the scene on the edge of the cloak mesh.  In conjunction with this, semi-skillful use of the jump mechanics would make the controller float across the terrain, to the point where one piece of feedback called it a NASA sim.

The “Tutorial” area of the game was buggered because, while the test elements worked on our individual test levels, they didn’t work in the game proper.  Tim and I hadn’t accounted for the puzzles being childed to one or more empty game objects: we were using transform.position to move our puzzle elements when we should have been using transform.localPosition.
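A small sketch of the difference (illustrative names, not our actual puzzle scripts):

```csharp
using UnityEngine;

// Sketch of the bug described above. When a puzzle element is childed to an
// empty GameObject that isn't at the world origin, setting transform.position
// moves it in world space and it ends up nowhere near its parent, whereas
// transform.localPosition moves it relative to that parent.
public class PuzzleMover : MonoBehaviour
{
    public Vector3 openOffset = new Vector3(0f, 2f, 0f);
    Vector3 startLocalPos;

    void Awake()
    {
        startLocalPos = transform.localPosition;
    }

    public void Open()
    {
        // What we were doing (only works if the parent sits at the origin):
        // transform.position = startLocalPos + openOffset;

        // What we should have been doing:
        transform.localPosition = startLocalPos + openOffset;
    }
}
```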

This would have been identified had we organised play-testing beforehand.  There was also miscommunication about how many levels there would be in the game.  I was working under the impression that there would be only one level and that the fog would be moving, or chasing, the players through a “valley”.  I found out on the Thursday or Friday night that there would be 3 different levels and that the fog would be chasing the players over open areas.

This caused a major revision of how the camera and fog would have to operate.  As a stop gap, I had the camera always facing towards the end of the level.  I then compensated by having a List of targets that the camera would move towards, and a function to receive a message that would move the distant target to a new location.  One problem with this was that the targets were set up around the players gaining a collectable to activate the shrines, but this collectible wasn’t ready for the game.  The camera would move on once the players were within a certain distance of the collectible, but as that radius was short, the camera often wouldn’t move on at all.  The feedback was that the players wanted control of the camera, which was never going to happen: part of the camera mechanics was used for jumping, and there is the question of who should have control of the camera.  Two players battling for control of the camera would tend to break the Trust mechanic that was the intention behind the design of the game.
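For reference, a rough sketch of that stop-gap target list (illustrative names only, not the code used on the night):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the stop-gap described above: the camera aims at the
// current entry in a list of targets, and a message/trigger moves it on to
// the next one.
public class EndOfLevelCamera : MonoBehaviour
{
    public List<Transform> targets = new List<Transform>();
    int current;

    // Called (e.g. via SendMessage from a trigger) when the players reach
    // the current target and the camera should aim at the next one.
    public void AdvanceTarget()
    {
        if (current < targets.Count - 1) current++;
    }

    void LateUpdate()
    {
        if (targets.Count == 0) return;
        Vector3 toTarget = targets[current].position - transform.position;
        Quaternion aim = Quaternion.LookRotation(toTarget);
        transform.rotation = Quaternion.Slerp(transform.rotation, aim, Time.deltaTime * 2f);
    }
}
```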

Things that I have learned from this play-test: there needs to be a very clear understanding of the game, its design and how the levels are going to be created and laid out.  This understanding needs to be gained in the very early stages of development.  That way, I would have had a clearer understanding of the level well in advance of the play-test.  Any changes from that design need to be made in consultation with, and agreed to by, all the members of the team.

As has been a problem with other projects in the past … play testing is vital before trying to demonstrate our game.  This iteration was never play-tested by us as a group before we tried to impose the first play-test on our “public”.  Many of the bugs in this game would have been easily identified and could have been corrected if we had had the opportunity to play test.  In that way, we, or at least I, would not have been so embarrassed by the bugs that became apparent during this test.

Post Mortem on Second Robot Battle

The story of the second battle is a case of an incorrect strategy. I had several points loaded into the map and the aim was for the bot to pathfind its way to the nearest point, then to sit there and camp until another bot turned up, whereupon my bot would fight it.

I named the bot “It’s a legit strategy” and it could well be, but I did not account for someone else (a) having the same strategy as me, or (b) having no pathfinding at all.

It was another unmitigated failure in yet another bot challenge. The first thing I noticed was that I obviously had a variable that wasn’t being reset, even though I seemed to have no problem with it in pre-testing. Depending on the first point it was using as a camping position, the “look at” spot remained the same, meaning that regardless of where it was camped, there was only one position where it was looking in the correct direction. The rest was just my inadequate strategy.

This is causing me a bit of bother, because in both cases, I have spent a lot of time on the bots and have had a negative result on both occasions.

What I should have done was to plot a course that took in the entire map.  Any contact with the enemy would be met with a battle.  A failure to meet the enemy means that it, too, is moving around the map.  That could possibly be the opportunity to begin “camping”.

If the enemy was to run away, chase them or plot an intercept course. If they move towards me while attacking, plot a course where I can slip around a blind corner and shoot from ambush. Rinse and repeat the strategy.

I was also impressed with Tom’s choice of using rays to steer clear of the walls. I can see that it could be a very effective way of negotiating the map with a little bit of speed.

I am “re-building” my bot to incorporate most of these features, but after this weekend it will probably have to remain a “spare” time project, as there will be a lot of extras that will need to be done before the end of this trimester.

Another one bites the dust : Post Mortem on my first Killbot battle.

I spent way too much time on this assessment, and ended up last in the round-robin tournament.  I think the thing that saved me from finishing last in the tournament proper was that I started immediately in round two and then, after losing my first match, went immediately into round two of the repechage.

The immediate flaw was my accuracy.  I was only taking it from the enemy’s last known position and its current position.  As a result, many of my bullets were wasted on complete misses, especially when fired from longer ranges.

The next flaw was hitting the wall in the top right hand corner.  I have gone over my code and have seen what that issue was.  If the enemy was firing bullets at me, I was calculating the perpendicular angle to the bullets and moving along that vector.  I had some conditions set up where, if the enemy was within a certain range of my x or y value, I would invert the relevant axis of the perpendicular vector so that I wouldn’t get trapped against the wall.  The problem was in the overlap where x < 8 and y > 26: I had conflicting versions of where the bot was supposed to go and, as a result, it crashed into the wall and continually lost health bumping against it.
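To illustrate the idea (plain Vector2 maths only, not the actual killbot API or my bot code): the bot sidesteps along the perpendicular to an incoming bullet, and rather than a separate flip rule per wall, which is where my corner conflict came from, one option is to always flip towards the centre of the arena.

```csharp
using UnityEngine;

// Sketch of the dodging idea described above: move along the perpendicular
// to the bullet's direction of travel, flipping so the sidestep heads towards
// the centre of the arena instead of using per-wall conditions that can
// conflict in a corner.
public static class Dodge
{
    public static Vector2 Direction(Vector2 myPos, Vector2 bulletVelocity,
                                    float minX, float maxX, float minY, float maxY)
    {
        // Perpendicular to the bullet's travel direction.
        Vector2 side = new Vector2(-bulletVelocity.y, bulletVelocity.x).normalized;

        // Flip so we always sidestep towards the centre of the arena.
        Vector2 centre = new Vector2((minX + maxX) * 0.5f, (minY + maxY) * 0.5f);
        if (Vector2.Dot(side, centre - myPos) < 0f)
            side = -side;

        return side;
    }
}
```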

The third immediate flaw was that the movement wasn’t random enough, as was shown by the unmerciful treatment by Quickshot26.  Chris’s accuracy was uncanny, and by his own admission, was really set up to detect linear movement.

There are several things I can do to improve the accuracy of my bot.  The first is that I had been guessing my bullet speed from earlier e_bullet scans and dividing by the number of frames I thought had occurred between the scans.  I could have been using input.bulletSpeed to get an accurate velocity for the bullets.  The same goes for my bot’s velocity: I was calculating that from my previous position and my current position.  Again, I should have been using input.velocity to accurately calculate my velocity.
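Assuming an API along those lines (exact bullet speed plus the scanned enemy’s position and velocity), a minimal lead-the-target sketch looks something like the following.  It is a first-order prediction only, not my actual bot code.

```csharp
using UnityEngine;

// Sketch: aim where the enemy will be by the time the bullet covers the
// current distance, using the exact bullet speed rather than a guess.
public static class Aiming
{
    public static Vector2 LeadTarget(Vector2 myPos, Vector2 enemyPos,
                                     Vector2 enemyVelocity, float bulletSpeed)
    {
        float travelTime = Vector2.Distance(myPos, enemyPos) / bulletSpeed;
        return enemyPos + enemyVelocity * travelTime;
    }
}
```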

This will be the main thing I will concentrate on as I work out my pathfinding algorithms for the maze battle.

Postmortem on “Valour” .. my latest group project.

This was meant to be the highlight of our trimester.  We had 6 weeks to produce a game, with the only stipulation being that it couldn’t be from any of the favourite genres of anyone involved in its development.  For the most part, we had to rule out first person shooters and first and third person RPGs.  Really, it wasn’t much of a restriction, because who wants to make one of those games over 6 weeks while still having to work on production code as well?  And who would want to play one made over that period?  It couldn’t be that good with that time constraint.

I found that I was more vocal during the design stage of this project, but also found that my views were not being listened to.  The original idea was for something that I would not consider a game: I thought it was very soft on gameplay elements and more like a movie.  I have since learned that the best way to approach this is to call in one of the lecturers and have the designers pitch the idea to them.  It feels gratifying when some of your ideas are later incorporated into the design: health bars, and health drops (which weren’t included in the final “game”).  I felt my idea of having the health drops scripted to spawn just out of view if it looked like you were going to die would have helped give the appearance of some sort of gameplay.

This is a game where the main character moves through levels, releases some prisoners and then dies … only for it to be revealed that he was actually a toy being manipulated by a boy in a sand pit … only for it to be revealed that maybe he could have been real after all, with his mother being told of his death.  Lovely twisty story, but Journey it isn’t.

The early days of development went well, with one of the designers sourcing a 2D side-scrolling game that we could look at appropriating the character from, along with some parallax scrolling scripts.  The main aim of my brief was to look after the special effects: particle effects, explosions, and any shaders that could be used in the game.

About 3 weeks in, I was told that we were no longer using the parallax scrolling scripts.  The designers didn’t like the fact that the player was stationary and the scenery was moving.  Jamie and I didn’t like the sound of this move, but as the designers were looking after most of the level design, I believed that whatever made their job easier would be better for the team and this project.  I had nothing to do over the last week of the project, after getting rid of all the arrays, checkpoints etc. from the game, as we were making small levels that only had the one checkpoint at the start.  This gave me the chance to focus on production code and some side work on shaders, gaining an understanding of how they worked.

I was getting worried by the Tuesday before the first draft of this assignment was due, especially after being warned that Gibbs tends to leave things till the last minute, and after Nick suddenly lost all ability to access Unity or GitHub.  Gibbs didn’t turn up on the Wednesday that the fallback version was due, and ended up submitting all the levels, slowly, over the Wednesday morning.  My first look at the levels was Wednesday afternoon.  I find it hard to understand how the game moved so far away from the original design to what was submitted.  This was done without consultation and obviously on the fly at the very last minute.  The levels looked nothing like what was planned for.  The character was walking on box colliders in front of a solid piece of scenery; the original design had several layers of foreground, background and middle ground.

It appears that this was the easiest way out for a designer who did nothing towards the art for the first 5 and a half weeks of this project.  This is reflected in the pushes to the repo: 55 pushes up until last Friday, then nothing until Tuesday afternoon, then 19 pushes over the last 2 days of the project.

I believe I must also accept a lot of responsibility for the way this project ended up.  Gibbs was the team leader and seemed very keen.  I should also have been a “back up” team leader and made sure that the art was being done.  I believe that I shouldn’t have to be a parent, but it is obvious that someone had to be in charge.  I am just as responsible as everyone else in the group for the mess of this project.

To think that I was bitching about the last project because there was no leader.  There was, however,  great communication between the working members.  With “Valour”, there was a leader, but no communication.

Communication, I now believe, is the key to a project’s success or failure.  Moving forward, I believe that for my next group project there needs to be an understanding from the start that the group members will have several Skype check-ins between classes, checking how everyone’s work is progressing.  This can be brought down to daily as deadlines approach.  Everyone can relate how they are going and whether they are having problems with their work.  Anyone who reports that they have done nothing, or whose work doesn’t match what they claim to have completed, needs to be cautioned and, in the event of repeated failures, needs to be either evicted from the group or brought before the lecturers for counselling.  Anyone who does not report can be assumed to have not done their work and treated in a similar manner.  This means that the documentation needs to be well thought out, well published, followed up on and amended if necessary.  The Gantt chart needs to be reasonably accurate, and time MUST be made to allow play testing after the prototype has been completed and also before release of the final version.  Simply put, what the lecturers have been expecting from us.

A full list of my personal credits for this project:

Modifying the existing player controller script (from the ActionGame2d package out of the Unity Asset Store) and the parallax scrolling script that was later abandoned from the project.

Creating the prefab and script for the mine.  It had two “lights” that flashed on top, an area trigger that started a countdown and flashed the lights at a rate that became quicker and quicker until the mine exploded, and a collider that blew the mine up instantly when touched.
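A rough sketch of that behaviour is below (names and timings are illustrative, not the shipped script): entering the area trigger starts a countdown that flashes the lights faster and faster until the mine explodes.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch of the mine behaviour described above.
public class MineExample : MonoBehaviour
{
    public GameObject[] lights;          // the two flashing "lights" on top
    public float startInterval = 0.5f;   // seconds between flashes at the start
    public float countdown = 3f;

    bool armed;

    void OnTriggerEnter(Collider other)
    {
        if (!armed && other.CompareTag("Player"))
        {
            armed = true;
            StartCoroutine(FlashAndExplode());
        }
    }

    IEnumerator FlashAndExplode()
    {
        float interval = startInterval;
        float remaining = countdown;

        while (remaining > 0f)
        {
            foreach (var l in lights) l.SetActive(!l.activeSelf);  // toggle flash
            yield return new WaitForSeconds(interval);
            remaining -= interval;
            interval *= 0.8f;            // flash quicker and quicker
        }
        Explode();
    }

    void Explode()
    {
        // Spawn the explosion/smoke/spark particle effects here, then remove the mine.
        Destroy(gameObject);
    }
}
```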

I created the particle effects for when it blew up.  There was the explosion, smoke and bright sparkly bits that resembled pieces of the mine.

mine particle effects


I created particle effects that generated puffs of dust from the player moving.

Player generating dust puffs from footsteps.


I also created 4 different types of smoke to be used in the scenes and in the backgrounds; however, these were not utilised in the final game, probably due to the lack of depth provided by the final art.

Smoke effect indicating High smoke that is close to the player.


 

Smoke effect indicating High smoke that is further away from the player.


 

Smoke effect indicating low thick smoke that is close to the player.


 

Smoke effect indicating low thin smoke that is close to the player.

Smoke effect indicating low thin smoke that is further away from the player.

Created a shader that drives the health bar of the player (previously revealed in this blog post: https://richarddanielstein.wordpress.com/2014/12/01/yay-shaders-a-breakthrough/)

Created the Game Director that handles the explosions, the player’s health and some sound.

Created a prototype health pickup that restored the player’s health but was not utilised in the final project.

Created a lot of Lists and objects to handle having various checkpoints within a level.  This also had functions that destroyed all the assets and then re-instantiated them when the player re-spawned.  I had some fun with that when I was trying to destroy the assets in the scene and Unity told me that if I wanted to destroy them, I had to use DestroyImmediate(object, true).  I was an obedient servant of Unity and followed this instruction … only to have all of my prefabs deleted from their folder.  I then created a throwaway list to keep the clones in and deleted them, if they existed.  But that is all moot, because the levels were changed down to simple levels with only one checkpoint, which allowed the entire scene to be reloaded when the player died.
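An illustrative sketch of the lesson (hypothetical names, not the project code): Destroy() on the instantiated clones is safe at runtime, while DestroyImmediate(obj, true) is meant for editor code and, pointed at the prefab asset rather than the clone, deletes the asset itself, which is exactly what happened to my prefabs.

```csharp
using UnityEngine;

// Sketch: keep references to scene clones and destroy those, never the
// prefab asset they came from.
public class CheckpointCleanup : MonoBehaviour
{
    public GameObject prefab;
    GameObject clone;

    public void SpawnAtCheckpoint(Vector3 position)
    {
        // Keep a reference to the clone, not to the prefab asset.
        clone = (GameObject)Instantiate(prefab, position, Quaternion.identity);
    }

    public void CleanUp()
    {
        if (clone != null)
            Destroy(clone);   // destroys the scene instance only
        // Destroy(prefab) / DestroyImmediate(prefab, true) would target the
        // asset, not the instance; don't do that at runtime.
    }
}
```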

This project has helped me with my professional goals by providing an opportunity for me to get involved, albeit in a small way, with creating particle effects that could be used in this game.  The only real one that survived into the game seemed well received as we displayed the game.  The others gave me experience in trying to work with particle flows, sizes, rates of speed and experimenting with their length of life.  All of these particles were created in the old legacy system as I had great trouble understanding the Particle System method of creating them.

The 3 important things I have learned from this project are:

  1. Having good communication is better than having an ordinary team leader.
  2. I need to become much more assertive in all areas.  From design ideas to team leadership, I need to become more of a parent and make sure that work is being done, on time, and as recorded in the Gantt Chart.
  3. I want to work with particles and effects, but I also want to become a better general programmer and work further with animations – leaning more towards 3D models and the shaders to show off the work.  I really think that will help me get into this industry.