DrawClient … a C++ experience from hell and likely not the last

If this task has taught me one thing, it is that I want to be a Unity Developer at the end of this course.  For me, C++ is so unwieldy and unmanageable, even for the simple things, like a GUI.

For the last couple of weeks, we have been making a draw client that connects over a network to a draw server and creates 3 pieces of art, without the user seeing what is being created and without having any control over what is being created.

Even before my first task, I had to download SFML and include it in my Visual Studio project.  That was a saga in itself, but by following several online tutorials, I was able to complete it.  I wouldn't be able to redo it without referring to online tutorials again, though.  It is a complicated process with many instructions for modifying the properties of the project; certainly not a simple drag-and-drop affair.  That doesn't even go close to describing the confusion when I was getting errors because certain .dll files weren't where they needed to be.  I ended up copying and pasting the whole set of SFML .dll files to several points in the project before it would compile for me.

Then there were the differences between working in my version of Visual Studio and the University's version.  My version required the following call to set the destination address: send_address.sin_addr.s_addr = inet_addr(dest); whereas the Uni's version required this line of code: InetPtonA(AF_INET, dest, &send_address.sin_addr.s_addr); (inet_addr has been deprecated in newer Windows SDKs in favour of the InetPton family, which is why the two machines disagreed).
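For context, here is a minimal sketch of where those lines sit, assuming dest is a dotted-quad string like "127.0.0.1" and port is already defined (both assumptions, as the surrounding setup isn't shown above). Normally only one of the two address calls would be compiled in:

#include <winsock2.h>
#include <ws2tcpip.h>   // InetPtonA lives here

sockaddr_in send_address = {};
send_address.sin_family = AF_INET;
send_address.sin_port = htons(port);

// Older toolchain: deprecated, but still compiles on some setups
send_address.sin_addr.s_addr = inet_addr(dest);

// Newer toolchain: the replacement the Uni's machines insisted on
InetPtonA(AF_INET, dest, &send_address.sin_addr.s_addr);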

So my first task was to try and create a heatmap depicting my mouse movement.  This would be saved to a file when the program closed and would reveal the areas that my mouse visited while the program was running.  Creating a window event was easy with SFML, as was creating a mouse event that simply captured my mouse position every frame.

I created an Image and called it heatmap, sized according to the RenderWindow I created.  I then created a Color called "trace" and made it red with an alpha of .1 (roughly 25 on SFML's 0–255 alpha scale).  This would allow a stronger intensity of red to show through in the areas that my mouse frequented.
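A minimal sketch of that setup, assuming an 800×600 window and an alpha of 25 (both assumptions):

#include <SFML/Graphics.hpp>

sf::RenderWindow window(sf::VideoMode(800, 600), "DrawClient");

// Image sized to match the RenderWindow
sf::Image heatmap;
heatmap.create(window.getSize().x, window.getSize().y, sf::Color::Black);

// Red trace colour with a low alpha
sf::Color trace(255, 0, 0, 25);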

This section of code would record the making of the heatmap.

if (event.type == sf::Event::MouseMoved)
{
    // Stamp the trace colour at the current mouse position each frame
    //trace = heatmap.getPixel(event.mouseMove.x, event.mouseMove.y);
    heatmap.setPixel(event.mouseMove.x, event.mouseMove.y, trace);
}

Meanwhile, outside of the RenderWindow loop, this piece of code would save the heatmap file:

heatmap.saveToFile("heatmap.bmp");

My interpretation of the brief for the first part of this task was wildly off the mark.  I mistook the concept of a GUI that you couldn't see for a GUI that you couldn't use.  It made no sense to me at the time, and so I had a main function that drew stuff independent of any desire of the user.

When I tried to introduce intent for the user, I ran into many problems.  Creating a GUI in C++ doesn't seem to be an easy task.  I was able to design simple buttons for all the things I wanted the user to have control over – Pixel, Line, Box, Circle and Send – and I also created feedback images so that the user would know which button had been pressed.

After setting up all my textures and sprites, and setting up my classes to draw the desired sprites, I couldn't get the program to compile.  With errors everywhere and time rapidly running out for this task, I did the "programmerly" thing and used key presses and mouse clicks to create the art.

So, with 4 things to make, I attached the controls to the F1 – F4 keys.  You tap F1 to make a pixel, F2 to make a line, F3 to make a box and F4 to make a circle.  After setting the key, the mouse takes over.  For a pixel, it just looks for the left mouse button going down; as soon as it has this, it sends the pixel off to the server, which draws it on the server's screen.  All the other drawings take the mouse press for the start of the drawing and the mouse release for the end: the client grabs the mouse position when the left button is pressed and stores it, grabs the position again when the button is released and stores that, then calculates what information it needs and sends it to the server, which draws it on its screen.
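A minimal sketch of that press/release capture, assuming a hypothetical sendLineToServer() helper (the real packet code isn't shown here):

sf::Vector2i dragStart;

if (event.type == sf::Event::MouseButtonPressed &&
    event.mouseButton.button == sf::Mouse::Left)
{
    // Remember where the drawing starts
    dragStart = sf::Vector2i(event.mouseButton.x, event.mouseButton.y);
}
else if (event.type == sf::Event::MouseButtonReleased &&
         event.mouseButton.button == sf::Mouse::Left)
{
    // The release point ends the drawing; send both ends to the server
    sf::Vector2i dragEnd(event.mouseButton.x, event.mouseButton.y);
    sendLineToServer(dragStart, dragEnd);   // hypothetical helper
}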

The next part of the task was to send my mouse cursor information off to the server and receive every other connected cursor from the server.  I set up a counter that increments every frame and, when it hits a target value, sends and receives the information to and from the server.  (I just realised that I should have set it up as an int instead of a float to make it run faster.  Guess that is a C# habit coming through from setting up timers.)  The idea behind this is not to risk locking the computer up with send and receive requests.
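A sketch of that throttle, assuming a hypothetical sendAndReceiveCursors() helper and an interval of 30 frames (both assumptions):

int frameCounter = 0;
const int kSendInterval = 30;   // target value; tune to taste

// Inside the main loop, once per frame:
if (++frameCounter >= kSendInterval)
{
    frameCounter = 0;
    sendAndReceiveCursors();    // hypothetical network call
}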

The server sends the cursor information back as an array, and I capture it with the following code, which draws a small circle for each cursor, sets the fill colour to a random colour and gives it an alpha of 50 (out of 255).

// Interpret the receive buffer as the server's cursor packet
auto* d = reinterpret_cast<PacketServerCursors*>(buff);
for (int i = 0; i < d->count; ++i)
{
    // One small circle per connected cursor, in a random translucent colour
    sf::CircleShape circ(2);
    circ.setFillColor(sf::Color(rand() % 255, rand() % 255, rand() % 255, 50));
    circ.setPosition(sf::Vector2f(d->cursor[i].m_posX, d->cursor[i].m_posY));
    texture.draw(circ);
}
texture.display();

This code then displays the information to the screen:

sf::Sprite cursourInfo(texture.getTexture());
window.draw(cursourInfo);

While I am not sure what is happening with my own cursor, I am confident I am sending it, as no errors are encountered, and the other people connected to the server are sending their cursor information, which is being drawn on my screen.

While I did have some success with this task, it was not a desirable outcome for me.  I really wanted a visual GUI that looked reasonable and gave the user feedback, but again, C++ beat me back into submission; what I have is functional, while not pretty.


Optimising a Ray-tracing programme

Greg has set us several tasks this week .. I think I can see through Steve's plan here.  By getting Greg to set all the tasks, our group's ire will be aimed elsewhere and Steve can come out of Studio 3, smelling like a rose 😉

This task was aimed at getting us to think (dread the thought) and exposing us to some handy little thread-optimisation techniques.

The programme, as handed to us by Greg, created the output image in 73.163 seconds (on this laptop).  As of this moment, the output image is being created in 7.5 seconds (again, on this laptop).

Unfortunately, the notes I was making as I went along have been surrendered to the void, so I will have to wing it.  I can't remember how much time each step saved, but I can tell you about the main time saver, which cut a huge amount of time, and another that took me from the 12-second mark down to the 7.5-second mark.

A series of spheres is generated: 53 larger spheres in a spiral in the centre of the screen and 900 lying in an ordered manner on the ground.  The spiral spheres are reflective in nature and show all or most of the other spheres on the ground, depending on the relative positions.  There are also two lights in the scene: a red light off to the left and a white light off to the right (SPOILERS – an important fact for a later optimisation).

The core of the program is sending a ray through each screen pixel and seeing what it lands on and what information needs to be returned.  This creates an O(n²) operation, as it scans through every object in the scene and then scans through every object again to determine the shadow values for the lights.

Step one was to go through the code and see where coding optimisations could be made.  The first was where the ray would work out which part of the skymap it hit and then check whether it actually hit a sphere.  I changed this to an if/else branch so that it would only sample the skymap if there were no hits on a sphere.
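A minimal sketch of that reordering, using hypothetical trace(), shade() and skymapColour() helpers and a hypothetical Colour type, since the real class names aren't shown here:

HitPoint hp = trace(ray);
Colour out;

if (hp.m_hit)
{
    // A sphere was hit, so shade it; the skymap lookup is skipped entirely.
    out = shade(hp, ray);
}
else
{
    // Only on a miss do we pay for the skymap lookup.
    out = skymapColour(ray);
}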

The next step was to create a new shadow-check function.  The programme was originally using the same loop as the "trace" function, and even after it hit something, it would keep searching for a closer sphere.  With shadows, I knew that once there was any object between the ray hitpoint and the light, the point was in shadow and the function could return to the parent function straight away.  This code enabled me to do that:

HitPoint shp;

for (int i = 0; i < m_renderables.size(); ++i)
{
    if (m_renderables[i]->m_active)
    {
        // Check the sphere.
        shp.nearest(m_renderables[i]->intersect(ray));

        // Any hit at all means shadow, so return early.
        if (shp.m_hit)
            return shp;
    }
}
return shp;

I then tried to redesign the way the spheres were created.  The larger spheres were created from the ground up, which meant that for a majority of the scan, the loop would have to go through 50 or so spheres before it hit something.  I changed the creation order so that it started at the top of the stack and wound down to the ground.  I felt this created a better-ordered array for searching through.

From memory, these three changes dropped the running time to about 45 – 55 seconds.

The next change was a big deal.  I activated OpenMP in my version of Visual Studio and used #include <omp.h> in several of my classes.  I then used this directive: #pragma omp parallel for.  Finding the right place for it was easy enough: the heaviest part of the main function is where the program goes through two for loops and sends a ray through each pixel.  Every other place I had used the directive was then removed, because the work was already parallelised through the main function.
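A sketch of the placement, assuming hypothetical width/height bounds, an image buffer and a tracePixel() helper – the point is that only the outer pixel loop carries the directive:

#include <omp.h>

// Parallelise the outer pixel loop only; each thread takes whole rows,
// so no two threads ever write to the same pixel.
#pragma omp parallel for
for (int y = 0; y < height; ++y)
{
    for (int x = 0; x < width; ++x)
    {
        image[y * width + x] = tracePixel(x, y);
    }
}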

This brought my processing speed down to about 12.5 seconds.

Then came a couple of failed experiments, including changing the order in which the spheres were created (adding 2 seconds to my process time) and stuffing around with leaving every 4th pixel blank, then lerping between the left and right pixels to fill it.  The latter showed up horribly when I placed the correct output image in Photoshop, layered my new output image over it and set the blend mode to "Difference" (part of the brief is to have no differences between the correct output image and our own).

I had to find a way to pull the program even further out of the O(n²) way it was doing the shadow calculations.  That is when I hit on the idea that the first "trace" already determines the hitpoint and which renderable object it hits.  I created a new int in the HitPoint class and called it m_renderableNumber.  Then, as the function was scrolling through the renderable objects trying to find the nearest one to the ray, I could record which object was nearest and use that as a basis for minimising the number of objects the shadow check needs to test against.

Here is the code I used to get the m_renderableNumber:

HitPoint hp;

for (int i = 0; i < m_renderables.size(); ++i)
{
    if (m_renderables[i]->m_active)
    {
        // Find the nearest intersect point.
        hp.nearest(m_renderables[i]->intersect(ray));

        // If this renderable is now the nearest hit, remember its index.
        if (hp.m_renderable == m_renderables[i])
        {
            hp.m_renderableNumber = i;
        }
    }
}
return hp;

It wasn't quite as easy as I just described.  I was originally using if (hp.m_hit = m_renderables[i]) – a single = assigns rather than compares – and ended up with the number always equalling 953.  I tried various other methods of getting the renderable number and as often as not ended up with a number like -8355809.  I figured it was an error, because it was always the same number, but still couldn't find out how I was getting a rubbish number.  Google is your friend?  Nah.

So now I had the index in the renderables array that I could start my search from.

After several false starts trying to work out which light was being checked, I came up with this line of code: if (ray.end().x < hit.m_position.x).

If true, we are searching towards the red light on the left; otherwise, it must be the white light on the right.

Next came the coding nightmare that I hated writing, but it meant that, best case, the shadow check only tested 2 renderables instead of 953 and, worst case, about 100 spheres.

I set up if and if/else statements covering the different combinations of where the spheres were placed and where the light was placed, for a total of 577 lines of code.

I will try to clean this code up a bit in the meantime, but I really need to understand C++ better so that I can use different classes and functions, as I do in C#.  C++ terrifies me because I just don't understand how it works.

The above video shows my optimised ray tracing running against the control version that we were given at the start of the project.  It should be noted that the times are off due to having to use Open Broadcaster Software (OBS) to capture the footage.

Anyway, this brought my processing time down from 12.5 seconds to 7.5 seconds – a 40% cut – and the result is still an exact match in Photoshop.

Quite chuffed with the result, but not so much with the amount of code needed.

Creative goals for group project

When I "signed up" for this project, it seemed to me that my goal would be to focus on the visual elements, predominantly the shaders and particle effects.

As it turned out, Tim Volp was to be the other programmer on this project, and I know from previous projects that those are also his goals.

The visual elements were split down the middle, with Tim looking after the shaders (he had some really great ideas to follow up) while I looked after the fog particles.

The way things ended up, I was also looking after the character controllers and the camera.

My main problem with the character controllers was not knowing that the CharacterController has an isGrounded property built into the component. Careful reading of the APIs would have alerted me to that in the first place. It seems that one of my main problems is searching for solutions to what I want something to do; if I start my search with the APIs instead, I will have a better understanding of what a component can actually do, and what terms I can use for better Google searches. As it was, I was probably keeping my characters too far off the ground using a raycast, instead of the pre-existing isGrounded property.
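A minimal sketch of the built-in check, assuming a CharacterController on the same GameObject (isGrounded is a property, not a method, and reflects the controller's last Move() call):

using UnityEngine;

public class GroundedCheck : MonoBehaviour
{
    private CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // True when the bottom of the controller touched ground last move
        if (controller.isGrounded)
        {
            // Safe to allow jumping, reset vertical velocity, etc.
        }
    }
}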

The fog wasn't much of a challenge, just a matter of getting the right particles to do the trick. I did learn quite a bit about the Shuriken particle system, though. It was the first time I had used this system, having opted for the legacy particle systems previously. The way I found to get the particles to size up adequately was to create the system when the x scaling of the emitter was at .3, then recreate it at a scale of 10 and again at a scale of 30. The different attributes of the fog that could be controlled through script were then plotted against the scaling, and a formula was derived for each that would plot the curve of the fog elements.

This led to unusual looking formulae within my code that looked a bit like this:

// Quadratic fits of each fog attribute against the emitter's x scale,
// derived from samples taken at scales 0.3, 10 and 30.
fogEmmitter.startLifetime = (0.0016f * xScale * xScale) + (0.145f * xScale) + 10.49f + startLifeTimeOffset;
fogEmmitter.startSpeed    = (-0.0014f * xScale * xScale) + (0.1139f * xScale) + 0.549f + startSpeedOffset;
fogEmmitter.startSize     = (-0.0005f * xScale * xScale) + (0.0998f * xScale) + 0.514f + startSizeOffset;
fogEmmitter.maxParticles  = (int)((366.97f * xScale * xScale) + (339.25f * xScale) + 7581f + maxParticlesOffset);
fogEmmitter.emissionRate  = (-0.226f * xScale * xScale) + (92.663f * xScale) + 656.68f + emissionRateOffset;

My biggest challenge for the project was the dynamic camera. This was an unusual camera setup whereby the player was never to have control of the camera. The theme of the game was trust, and the players needed to trust that we would be doing the right thing with the camera. Let me start by saying that 5 – 6 weeks is not nearly enough time to establish a decent camera system for a "Journey"-like game. The guy who was brought in to work on the Journey cameras was at it full time for 2 – 3 years. Unfortunately, the camera wasn't up to the task, even for the gallery, but I was close. So close, but no cigar. I think I have found the problem with it.

Because there are two players affecting the movement of the camera, I was looking at different ways to drive it, mainly by following the rotational direction of a target that sits between the two players. I started using Quaternions, but the camera would swing around way too fast and jitter in certain places. I settled for using a distant target, plus trigger boxes that could swap the targets around when there was a specific purpose, but there were many complaints about the camera and its angles.

I did some more research into the camera and thought I had the solution. The tutorial area is supposed to be fun and unstructured, so the players can faff around and get used to the controls without any pressure. I believed I had achieved this and even play tested the new camera with my son, but somehow it too failed me. It was "OK" for the first part of the level, but halfway through it was pointing in the wrong direction.

I was devastated, for we were now at the gallery. Some very quick and nasty trigger boxes were set up to try and get everything finalised, but Savik and I were not very happy with the outcome.

This weekend, I believe I came up with another solution. For some weird reason, I always thought that a player's magnitude was the same as his velocity. I was experimenting with their velocity when I realised that I had been overthinking the problem (as usual). I just needed to find the average of their individual transform.forward vectors and use that for the target rotation.
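A minimal sketch of that idea, assuming hypothetical player1/player2 Transforms, a cameraTarget to rotate and a turnSpeed to tune (all assumptions):

using UnityEngine;

public class CameraTargetRotator : MonoBehaviour
{
    public Transform player1;
    public Transform player2;
    public Transform cameraTarget;
    public float turnSpeed = 2f;   // hypothetical damping factor

    void LateUpdate()
    {
        // Average the two players' facing directions
        Vector3 averageForward = (player1.forward + player2.forward).normalized;

        // Guard against the two forwards cancelling each other out
        if (averageForward.sqrMagnitude > 0.001f)
        {
            Quaternion targetRotation = Quaternion.LookRotation(averageForward);
            // Ease towards the target rotation to avoid jitter
            cameraTarget.rotation = Quaternion.Slerp(cameraTarget.rotation, targetRotation, turnSpeed * Time.deltaTime);
        }
    }
}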

I will be play testing this with Tim today (regular play testing was sadly missing from this project, but that will be rectified in future efforts), but I think I have finally done well with this iteration of the camera movement.

My struggle with a Check Point Manager

My current project needed a manager to record where the players should respawn in the event of their death.

While it was unlikely that they could actually die in the tutorial area of the game, I made sure that they would still respawn at the start of the level. When it came to the players going through a trigger to change the spawn point, I found that just using the following script wouldn't work.

using UnityEngine;
using System.Collections;

public class DontDestroyMe : MonoBehaviour
{
    // Keep this object alive across scene loads
    void Start()
    {
        DontDestroyOnLoad(gameObject);
    }

    // Update is called once per frame
    void Update()
    {
    }
}

While this instance doesn't destroy itself, reloading the scene actually creates another instance of the CheckPointManager, which seems to take priority over the current instance of the script and throws null reference exceptions when trying to find the individual check points.

My research on the internet eventually led me to this site : http://unitypatterns.com/singletons/

This led me to using this for my CheckPointManager:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class CheckPointManager : MonoBehaviour
{
    private static CheckPointManager _inst;

    public static CheckPointManager inst
    {
        get
        {
            if (_inst == null)
            {
                _inst = GameObject.FindObjectOfType<CheckPointManager>();
                // Don't destroy
                DontDestroyOnLoad(_inst.gameObject);
            }
            return _inst;
        }
    }

    public List<GameObject> checkPoints = new List<GameObject>();
    public GameObject atlCharacter;
    public GameObject ollinCharacter;

    public bool spawnPoints = true;
    public bool firstCheckpoints = false;
    public bool secondCheckpoints = false;

    void Awake()
    {
        if (_inst == null)
        {
            // If I am the first instance, make me the Singleton
            _inst = this;
            DontDestroyOnLoad(this);
        }
        else
        {
            // If a Singleton already exists and you find
            // another reference in scene, destroy it!
            if (this != _inst)
                DestroyImmediate(this.gameObject);
        }
    }
}

Through testing, I found that when the game returned to the menu, the CheckPointManager was still operating and throwing null reference exceptions.  I modified the Awake function to the following:

void Awake()
{
    if (_inst == null)
    {
        // If I am the first instance, make me the Singleton...
        _inst = this;

        // ...unless we are back at the menu (level 0), where the
        // manager has no business surviving
        if (Application.loadedLevel == 0)
            DestroyImmediate(this.gameObject);
        else
            DontDestroyOnLoad(this);
    }
    else
    {
        // If a Singleton already exists and you find
        // another reference in scene, destroy it!
        if (this != _inst)
            DestroyImmediate(this.gameObject);
    }
}

The next problem was with the actual checkpoints.  As they were being destroyed on scene load, I ended up using the first DontDestroyMe script on the empty game object that was the parent of the individual check points.

This ended up working perfectly, for this project at least, but I think it should work in any future game.

Gallery showing Post-mortem.

What a random-placement schmozzle.

Learning from the past, we actually organised a play testing session after we believed we had fixed all the bugs from the previous play test.

The 3ds Max bug that affected my laptop last time was still present, so I could not build from my computer.  For some odd reason, Tim could still see the missing assets on his laptop.  I could not explain this, and still can't.

We were satisfied until we got to the venue and did a quick speed run of the level.  That is when the camera gave up the ghost.  It worked perfectly fine in the tutorial area and then had severe problems in the later stages of the level.

We tried to quickly troubleshoot this problem and then decided, for the final stretch of the level, to include another of Savik's trigger boxes to try and keep the players aimed in the right direction.

We did another build and had to call it quits at that point.

Then Unity took over, and nothing seemed the same again.  Camera angles that were working previously now didn't work.  Fog that had spawned in the exact same place since the beginning suddenly spawned in one of two places: the place where it was designed to spawn, or somewhere that seemed to be outside the temple.  Target points for the cut scenes suddenly seemed to move of their own volition, and occasionally the camera couldn't make it back to its starting position.  The camera's check for hitting objects seemed to be hit and miss, occasionally leaving it way up in the air.  Most of these weird bugs could possibly be attributed to some error in the quick fix for the camera at the end, except for the random spawning of the fog and the sudden mobility of the camera targets for the cut scenes at the shrines.

The game seemed to be pretty well received and it was a huge improvement over the first effort put up for play testing, but in the end, it still left a bitter taste in my mouth.

The secret of success for a good camera is to receive no feedback about it: it becomes such a part of the game that it isn't noticed.  I realise that 6 weeks is hardly enough time to develop a fully evolved camera system, but I feel that this one should have been better … at the very least, it should point in the direction intended.  My only solace is that the guy brought in to work on the camera for Journey spent almost two years getting it right.

For the first play test, my responsibilities were the wind turbine trigger (which was never used in any iteration of the game), the mechanics of the headbutt trigger button, the player controllers, the fog and the camera.  The main problem I had with the player controllers was probably that I didn't realise the controllers had an isGrounded property built in; I was achieving this through a raycast, which was probably not quite grounding the player enough.  After the first play test, my main focus became the camera and, to a lesser degree, the fog mechanics.

It wasn't until after the second play test, when I was trying to explain how my camera was supposed to operate, that I realised a serious mistake in working out my height and what the camera was looking at: I wasn't applying a base y position in these calculations.

After the second play test, my main focus was on the camera and the cut scenes at the shrines.

Things I have learned from this project:

Make time to play test as soon as you have a viable core game loop happening.  That way you can identify bugs early and work to correct them sooner than we did in this project.

Set clear guidelines from the outset as to the number of levels and the arrangement of assets/puzzles within those levels.  That way, early testing of things such as cameras and character controllers can be devised and implemented before they make it into the levels.

Just because a system will work in one part of the level doesn’t mean that it will work in all parts of the level.

Did I mention that full play testing will reveal what works and where it doesn't?  So make time for the team members to play test the systems within the game.

Second Play Test Post-mortem

What a schmozzle (again).

There were so many problems with this play test that it is hard to know where to start.

For a root cause, I would probably start with the lack of private play testing beforehand.  Again, most of the bugs would have been identified before the event.

For a second problem, we had no way of building again on the morning of the play test.  The designer had incorporated .max files in the repo, which meant that no one was able to build from their laptops, and as we were using Unity 5, we couldn't use the college computers to create another build.

As a result, we were stuck with the build created the night before, which we knew had serious bugs.

The player controllers worked well, but the camera required trigger boxes that changed the settings for the camera and the target.  The problem with these boxes was that they recorded the previous target, but when boxes overlapped, the original target was overwritten and eventually the distant target was lost completely.

What I really needed for this game was a camera that could free roam, without the wild, erratic swings when the players changed direction, combined with the camera trigger boxes that would direct the players to the right area.  The ideal would be to have triggers that load up points of interest within the scene and give a weighting to them, but I doubt there would be enough time, and I wonder about the effectiveness of this method when the ideal, in the tutorial area, is for the players to experience the fun of an unorganised space where there is no pressure on them, where they can enjoy getting to know the mechanics of the game and learn what can be done with them.

The other main problem was with the shrines themselves.  I needed to take absolute control of the camera and create a cut scene so that no other actions, either from the player or from any other source, could affect the camera at this delicate stage.

The fog will also need more effects to hide its actual spawning and its transformation from spawn mode to full-scale fog.

I didn't learn anything new from this play test, except that shit can go wrong at the worst possible time.  We repeated many of the mistakes from the last play test.

First Play Test Post-mortem

What a schmozzle.

While people did have a lot of fun playing our game, it was for all the wrong reasons.  When the player controller hit the edge of a mesh collider, it could become airborne and bounce across the scene on the edge of the cloak mesh.  In conjunction with this, semi-skilful use of the jump mechanics would make the controller float across the terrain, to the point where one piece of feedback called it a NASA sim.

The "Tutorial" area of the game was buggered because, while the test elements worked on our individual test levels, they didn't work in the game proper: Tim and I didn't account for the puzzles being parented to one or more empty game objects.  We were using transform.position to move our puzzle elements, when we should have been using transform.localPosition.
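A minimal sketch of the difference, using a hypothetical PuzzleElementMover and offset purely for illustration:

using UnityEngine;

public class PuzzleElementMover : MonoBehaviour
{
    public Vector3 offset = new Vector3(0f, 1f, 0f);   // hypothetical movement

    void Raise()
    {
        // Wrong for parented objects: this sets an absolute world position,
        // ignoring wherever the parent empty happens to sit.
        // transform.position = offset;

        // Right: this positions the element relative to its parent empty.
        transform.localPosition = offset;
    }
}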

This would have been identified had we organised play testing beforehand.  There was also miscommunication about how many levels there would be in the game.  I was working under the impression that there would be only one level and that the fog would be moving, or chasing, the players through a "valley".  I found out on the Thursday or Friday night that there would be 3 different levels and that the fog would be chasing the players over open areas.

This caused a major revision of how the camera and fog would have to operate.  As a stop gap, I had the camera always facing towards the end of the level.  I then compensated by having a List of targets that the camera would move towards, and a function to receive a message that would move the distant target to a new location.  One problem with this was that the targets were set up around the players gaining a collectable to activate the shrines, but this collectable wasn't ready for the game.  The camera would move once the players were within a certain distance of the collectable, but as it was a short radius to trigger, the camera often wouldn't move on.  The feedback was that the players wanted control of the camera, which was never going to happen: part of the camera mechanics was tied to jumping, and there was the question of who should have control of the camera.  Two players battling for control of the camera would tend to break the trust mechanic that was the intention behind the design of the game.

The thing I have learned from this play test is that there needs to be a very clear understanding of the game, its design and how the levels are going to be created and laid out.  This understanding needs to be gained in the very early stages of development; that way, I would have had a clearer understanding of the level well in advance of the play test.  Any changes from that design need to be made in consultation with, and agreed to by, all the members of the team.

As has been a problem with other projects in the past … play testing is vital before trying to demonstrate our game.  This iteration was never play tested by us as a group before we tried to impose the first play test on our "public".  Many of the bugs in this game would have been easily identified and could have been corrected if we had made the opportunity to play test.  That way we, or at least I, would not have been so embarrassed by the bugs that became apparent during this test.