Tag Archives: personal reflection

Am I really ready to Graduate?

I don’t think my aims have changed since starting this course.  I wanted to marry my animation skills with programming and either create my own games/apps or use these skills to push further into a late career in the industry.

C++ nearly broke me and perhaps that is a sign of being too old to adapt between different languages.  What the last two studio subjects have taught me is that I want to be a Unity developer when I graduate.

It seems, from my reading of the job market, that the main gaps left for me to fill are an understanding of the Mecanim system and animation trees, plus experience with a wider range of version control systems (my only experience to date has been with Git).

http://positron.applytojob.com/apply/212a0d256b69767c530055704667635e4c4414022e002f1302056a0e3e08277f130003/Game-Developer?source=INDE&sid=KEozYYGIc4eCtVVgxrKPo1nXTM9XlXe5CDF

https://au.jora.com/job/Unity-Developer-d4c05512962c672f7af8241b8ce5609f?from_url=https%3A%2F%2Fau.jora.com%2FUNity-3D-Developer-jobs&sp=serp&sponsored=false&sr=1

http://www.tsumea.com/news/230715/experienced-unity-developer

http://www.tsumea.com/news/020715/unity-prototype-designer

http://forum.unity3d.com/threads/paying-job-developer-animator-to-work-on-puzzle-platformer.326355/

Next trimester, I am working on the Final Project, which means that I will be looking at animation trees over the holidays and implementing a few of my previous animations so that I can properly understand how they work and how best to leverage them.  This way I will be prepared to implement animations as soon as we return from the holiday break.

Also, next trimester, I will be working with a group of ex-QANTM students who have their own studio and are trying to get their game “Hands Off” greenlit through Steam.  I will be doing rapid prototyping of new game ideas for them, as well as improvements or new concepts for the “Hands Off” title.  They will expect me to be comfortable using Bitbucket and TortoiseHg, which I will also be looking into over the holiday break.  As I am already conversant with Git, I’m sure that moving to Bitbucket and TortoiseHg will not be a culture shock.

I think it will be a very busy lead-up to the end of the year, but I am looking forward to it.  Perhaps that is because I am finished with C++ for now, but I can’t just leave it there.  Some of the advertisers looking for Unity developers also ask for Unreal experience.  While I have used Unreal to show off modelling assets and have used the Kismet system, I doubt that this would be enough to warrant a job, so I will need to continue with my C++ training and try to become a lot more proficient in it.  Who knows, I might even begin to like it.


Postmortem on Pulse Monitoring Plug-in for Designer’s Game

I found that this was an unusual process.  To date I have been deeply immersed in most of the games that I have collaborated on.  The closest to this was my involvement with “Valour”, where the programmers went in hard at the start, set a lot of things up and then had much less contact with the designers as they advanced their ideas through the gameplay.

Even mentioning “Valour” sends shivers down my spine, and it is not a fair comparison to the final product of “Be Brave”, created by Savik Fraguella.  My involvement was even less, in the scheme of things.  I created the software to get Unity to accept an incoming pulse beat from the wearer and feed that pulse rate into a system that would emit a “Fear Factor” rating, which Savik would then be able to use to influence the content of his game.

Over the space of about 2-3 weeks I completed this task and handed it off to Savik, then waited for any problems that might emerge.  There was one situation where the program would quit and then restart as the level was changed, but this was quickly overcome by making it a persistent singleton that could start during the main menu and then remain through the levels, destroying itself when either quitting or completing the level(s).

Savik, at one stage, did borrow the equipment needed to test the game, but he either didn’t install the FTDI drivers, or there was a problem installing them.  I referred him to my blog post regarding the project for installing them.

As we got closer to Open Day, where the game would be displayed, I regularly checked with Savik whether he wanted the Arduino to test the game, but he wasn’t interested, focusing instead on the artwork of the game.  It wasn’t until the day before Open Day that I realised that there were some problems.

The pulse rate wasn’t getting through to the standalone build.  (The editor still processes the information quicker than the build does, but that is another story.)  I narrowed the information coming into Unity so that it wouldn’t clog up the build’s ability to run.  Instead of collecting everything, I changed the Arduino to send only the information I could actually use, which was the beat rate itself.
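For my own future reference, the Arduino side of that idea looks roughly like this.  It is a sketch of the approach rather than the exact code I used, so the pin, threshold and baud rate are all assumptions:

// Send only the computed BPM over serial, instead of streaming the raw signal.
const int pulsePin = A0;        // assumed analog pin for the pulse sensor
const int threshold = 550;      // assumed beat-detection threshold
unsigned long lastBeat = 0;
bool aboveThreshold = false;

void setup()
{
    Serial.begin(9600);
}

void loop()
{
    int reading = analogRead(pulsePin);

    // Rising edge of a beat.
    if (reading > threshold && !aboveThreshold)
    {
        aboveThreshold = true;
        unsigned long now = millis();
        if (lastBeat > 0)
        {
            int bpm = 60000 / (now - lastBeat);   // ms between beats -> BPM
            Serial.println(bpm);                  // the only thing Unity has to parse
        }
        lastBeat = now;
    }
    if (reading < threshold)
    {
        aboveThreshold = false;
    }

    delay(10);   // a modest sample rate keeps the serial traffic light
}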

Video showing the Pulse rate through a simple GUI and the Fear Rating being influenced by the pulse rate.  (I must apologise for spelling “Serial” wrong)

With some help from my tutor, who performed some Unity black magic, I was able to get the build to work properly.  When I find out what black magic was performed, I will amend this post with the information.

Savik showed some quite advanced scripting skills that impressed me, both in knowing that these techniques existed in the first place and in applying them in a game setting.  Using Prim’s algorithm to randomly generate a level so that the exit was not close to the entrance was among the concepts that I found interesting.  I would do well to sit with Savik and discuss how he came to implement them, and other ideas such as, apparently, using the fear state to assist in the generation of the level.

What I have learned through this project is not to adopt a “set and forget” mentality.  Had I remained closer to Savik as he progressed through the making of the game, I would have been aware of the problems with my component much earlier, and I would also have gained a better understanding of how he utilised and adapted those scripts that left me with such a positive impression.

 

Why blogging can help organise your mind and open up unexplored avenues

This is yet another blog that has changed since I started writing it.  It was to be “Unity 5 and my failed attempt to access facial tracking using OpenCVSharp.”  I will now try to show why blogging can help to organise your mind and leave you open to possible solutions.  I will still tag this blog with Unity 5, OpenCVSharp, Webcam and facial tracking, just in case someone else is having problems with getting it to work in Unity 5.  The first part will briefly explain how I got to the stage before I got facial tracking to work in Unity and then explain how this process helped me find the solution.

Firstly, if you have a webcam attached to your computer and aren’t sure whether it is working, this Windows trick will save you a heap of time.  Go to your start bar and type in “Camera”, then click on the Camera app.  If it asks you to connect a camera, return to the main screen and press Fn-F6 (on my laptop, at least) to turn the camera back on.  Now, if you open the Camera app, your ugly mug should come up on the screen and you can take a quick photo of your self-satisfied look after saving some time on Google.  Please do it, because that will be the last time you will see that look for some time.

Let me explain why this is the case.  Unity 5 is a 64 bit program and will need 64 bit versions of all the relevant DLLs. More commonly, these are 32 bit and they will not work in Unity 5. But the 64 bit versions can be found at this link: https://github.com/shimat/opencvsharp/releases

My search for a working version of facial tracking led me to this site: http://forum.unity3d.com/threads/opencvsharp-for-unity.278033/ and from there I was able to download the relevant Unity package.  It quickly became obvious that it did not include facial tracking and that, without more research, I wouldn’t get it working.

I probably spent more time than it deserved, but finally found a working script that tracked faces from this obscure site: http://ux.getuploader.com/aimino/edit/3

It worked perfectly in Unity 4.x, but importing it directly into Unity 5 gave me errors.  I figured that this would be due to the 32 bit DLLs that the original project held.  I changed the “OpenCvSharp” and “OpenCvSharp.MachineLearning” DLLs to the 64 bit versions, but ended up with an error, something along the lines of there being a problem reading from the “haarcascade_frontalface_alt.xml” file.

I gave up at this point, because there was so much other work expected from me, and this was a prototype for a game that was using a pulse monitor that seemed to be working quite well.  My lecturer convinced me that I should do a blog about this failure so that it would 1. show the work I had done on researching this problem and 2. give anyone else the option to see what I had done and perhaps save them some work on researching and getting to the same point.

Because some time has gone by, I really can’t remember exactly what steps I went through to get to this point, but the ones that I do remember are listed above.

As I was trying to recreate this, I was trying to work out what I had and hadn’t done, so I made a copy of the Unity 4.x version and re-imported it into Unity 5. I then replaced the DLLs with the 64 bit versions (as mentioned above).  Then I wondered if there were any other sneaky DLLs that might need replacing, and I did find some more.  There were five DLLs that I tried to find 64 bit versions of, but they were only available in 32 bit, and their fact sheets seemed to say that they were fine to use in a 64 bit situation.  Then I found “OpenCvSharpExtern.dll”, which I knew could be a 64 bit DLL but was still the 32 bit one.  I replaced it with the 64 bit version and tried to run the face tracking again.  Face tracking is now operational in Unity 5.

Face detection working in Unity 5

There is a slight bug, however.  It will work the first time you play a scene, but stopping it and trying to run it again will cause Unity to freeze or become unresponsive.  This is likely due to the one red error that it produces at the start of the run.

The error itself reads:

MissingFieldException: Field ‘OpenCvSharp.DisposableCvObject._ptr’ not found.
FaceDetectScript.Start () (at Assets/Scripts/FaceDetectScript.cs:43)

The error relates to this line of code found in the FaceDetect script:

CvSVM svm = new CvSVM ();

The main things I found trying to research this error are https://github.com/shimat/opencvsharp/blob/master/src/OpenCvSharp.CPlusPlus/modules/core/Ptr.cs and https://github.com/shimat/opencvsharp/blob/master/src/OpenCvSharp/Src/DisposableCvObject.cs .

They both appear to be part of shimat’s source code for creating the DLLs.  I am horrible with pointers in C++ and didn’t think that they were supported in C#, so I will try to find out a solution to this problem and will edit this post within 2 weeks.  NOTE: It will be below this with some sort of edited heading.

*****EDITED 21/07/15*****

The solution seems to be to comment out the following lines of code:

CvSVM svm = new CvSVM ();
CvTermCriteria criteria = new CvTermCriteria (CriteriaType.Epsilon, 1000, double.Epsilon);
CvSVMParams param = new CvSVMParams (CvSVM.C_SVC, CvSVM.RBF, 10.0, 8.0, 1.0, 10.0, 0.5, 0.1, null, criteria);

They are not used anywhere else in the program and seem somewhat incomplete in their use.   CvSVM is a Support Vector Machine, which is given labelled training data; the algorithm outputs an optimal hyperplane which can categorise any new examples.

CvTermCriteria sets the termination criteria for the iterative algorithm that the CvSVM uses, and CvSVMParams holds the parameters used to train the CvSVM.

Usually, “Train” and “Predict” methods are called on the SVM to get it to function and adequately predict something.

I have no idea why this code was left in the project but it is obviously incomplete and the facial tracking works without them being included.

**********

This project frustrated me in several ways.  Firstly, for so long I thought I was so close to having it working.  Secondly, I spent too long on it without making accurate notes and recording my results, which makes it harder to prove what research you have done on a subject.  Thirdly, if I had done that, I could have stumbled on the solution a lot sooner.

What I aim to do in the future, and I highly recommend to others, especially any students, is to record what is happening with your research, even if you save it as a draft.  Actively copy links and what you are looking at, or for, as you go.  In this way, it is all laid out and you might be able to see holes in your logic or thinking as you progress.  At worst, leave it for a week or two and approach it with fresh eyes.

For something completely different.

For a recent side project in Unity, I was trying to find out if certain game objects were seen by the main camera.

To sum up what I was trying to do, I wanted to have enemies able to be highlighted (using a basic outline shader) if they were on screen with the player.  This meant finding if they were on screen with the player, collecting their game objects into a list and cycling through that list to change which enemy is the current target for an attack.

My initial research led me to OnBecameVisible() and its opposite, OnBecameInvisible().  These are supposed to tell you when any part of the object, including its shadow, comes on or goes off screen and needs to be rendered.  What sounded good on paper failed to work in reality.  One drawback is that it is hard to test in the Unity editor, because the editor camera also affects the outcome (allegedly).  I say allegedly because I could not get it to work regardless.

What I ended up relying on were two GeometryUtility functions: CalculateFrustumPlanes and TestPlanesAABB.

On the script for the object I wanted to test, I used these variables:

private Plane[] planes;
public Camera cam;
private Collider coll;

The main camera is dragged onto the Camera slot in the inspector (although, with a prefab, it might need to be assigned directly in code).  In the Start() function, I used this piece of code to access the collider:

coll = GetComponent<Collider>();

In Update(), because the camera moves with the player, I needed to recalculate the planes[] array every frame, using:

planes = GeometryUtility.CalculateFrustumPlanes(cam);

The script would then announce that it was visible by calling a small bool function:

if(IsVisible()) {..//insert mad code here//..}

This was a very simple function that returned true when the collider could be “seen” by the main camera and false when it couldn’t.

bool IsVisible()
{
    return GeometryUtility.TestPlanesAABB(planes, coll.bounds);
}

This code was used for a prototype to prove that I could implement this mechanic, although after implementing it, I realised that it was rather boring for the purpose I had envisaged and it will likely be scrapped.  One of the rules of game design is that you have to be prepared to “kill” your children.  As a father, I find this a horrible metaphor, but it is true all the same.

My main concern with this current layout was whether, in the long run, the amount of processing and garbage generated would be worth knowing what was visible.  In my thinking, it couldn’t be worse than firing a raycast every frame to tell whether the player is grounded or not.

I am posting this in case I have to revisit the concept in the future (3 strokes play havoc with the short-term memory) and in case anyone else needs a working way to detect whether something is currently on screen.

C++ pseudo code, header implementation and how-to videos are the reasons why I am not liking C++

While I am sure that C++ is a great language for knuckling down into memory allocation and stack organisation, if you have no idea how to implement the language into headers and use the resulting functions correctly, it is all but useless.

That is the situation for me at the moment.  Over the last few weeks, I have been subtly expressing my rage with comments that C++ makes me want to be a Unity developer and I have finally understood the reason why.

My latest two inquiries into the realms of supposed knowledge in the C++ world have been about how to create and implement a quadtree, and how to handle collisions using Box2D contact information.

There is a wealth of information out there about problems encountered with quadtrees, if you already know how to set them up.  There is ample pseudo code detailing what is happening with them.  There are several complete pieces of code that outline the headers and .cpp files but alas, they don’t quite fit the bill for me.

Wikipedia has a great section on Quadtrees : https://en.wikipedia.org/wiki/Quadtree

It includes some pseudo code outlining how a quadtree is structured and how it is created, called and accessed.  The problem for me is that I haven’t the foggiest idea how to turn that pseudo code into headers and then into working functions.
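For future reference, here is the kind of single-header layout I think the pseudo code is pointing at.  It is my own rough translation (the names Point, AABB and QuadTree are mine), not code from the Wikipedia article or from Box2D, so it may well not be the proper way to do it:

// quadtree.h -- a minimal sketch; a real version for collision work
// would store pointers to bodies instead of plain points.
#pragma once

#include <cstddef>
#include <memory>
#include <vector>

struct Point { float x, y; };

// Axis-aligned box defined by a centre point and half-dimensions.
struct AABB
{
    Point centre;
    float halfWidth, halfHeight;

    bool contains(const Point& p) const
    {
        return p.x >= centre.x - halfWidth  && p.x <= centre.x + halfWidth &&
               p.y >= centre.y - halfHeight && p.y <= centre.y + halfHeight;
    }

    bool intersects(const AABB& o) const
    {
        return !(o.centre.x - o.halfWidth  > centre.x + halfWidth  ||
                 o.centre.x + o.halfWidth  < centre.x - halfWidth  ||
                 o.centre.y - o.halfHeight > centre.y + halfHeight ||
                 o.centre.y + o.halfHeight < centre.y - halfHeight);
    }
};

class QuadTree
{
public:
    explicit QuadTree(const AABB& boundary) : m_boundary(boundary) {}

    // Insert a point; returns false if it lies outside this node's boundary.
    bool insert(const Point& p)
    {
        if (!m_boundary.contains(p))
            return false;

        // Room left in this node and it has not been split yet.
        if (m_points.size() < kCapacity && !m_nw)
        {
            m_points.push_back(p);
            return true;
        }

        if (!m_nw)
            subdivide();

        // Exactly one child contains the point.
        return m_nw->insert(p) || m_ne->insert(p) ||
               m_sw->insert(p) || m_se->insert(p);
    }

    // Collect every stored point that falls inside the query box.
    void query(const AABB& range, std::vector<Point>& found) const
    {
        if (!m_boundary.intersects(range))
            return;

        for (const Point& p : m_points)
            if (range.contains(p))
                found.push_back(p);

        if (m_nw)
        {
            m_nw->query(range, found);
            m_ne->query(range, found);
            m_sw->query(range, found);
            m_se->query(range, found);
        }
    }

private:
    void subdivide()
    {
        const float hw = m_boundary.halfWidth  * 0.5f;
        const float hh = m_boundary.halfHeight * 0.5f;
        const Point c  = m_boundary.centre;

        m_nw.reset(new QuadTree(AABB{ { c.x - hw, c.y - hh }, hw, hh }));
        m_ne.reset(new QuadTree(AABB{ { c.x + hw, c.y - hh }, hw, hh }));
        m_sw.reset(new QuadTree(AABB{ { c.x - hw, c.y + hh }, hw, hh }));
        m_se.reset(new QuadTree(AABB{ { c.x + hw, c.y + hh }, hw, hh }));
    }

    static const std::size_t kCapacity = 4;   // points per node before splitting

    AABB m_boundary;
    std::vector<Point> m_points;
    std::unique_ptr<QuadTree> m_nw, m_ne, m_sw, m_se;
};

Usage would be something along the lines of QuadTree tree(AABB{ { 400, 300 }, 400, 300 }); then tree.insert() for each object’s position and tree.query() with a box around the area you care about.  Whether that is actually what the Box2D work needs is another question entirely.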

Box2D seems to be amazing, but again, only if you know how to use it in the first place.  While there are “tutorials”, there seems to be no clue about how to set up the header files and no solutions for the errors that arise from the inevitable mistakes in writing the code.  I know that people aren’t born with this remarkable understanding, but I am buggered if I can find out how they came across this knowledge.  It seems to be on some secret internet that I have no way to access.

Any attempt to locate tutorial videos only reveals the startling results that various programmers have achieved with whatever it is you are looking for.

My problem is that tutorials can be found for Unity that will step you through the process so that you know what you are doing and why.  This has led me to believe that, on the internet, tutorials and forums are set up for vastly different user experiences when it comes to Unity and C++.

Unity and C#/JS videos will go the extra mile to make sure that the viewer knows what is being done and why.  C++ forums are set up for experienced programmers encountering unusual problems and troubleshooting them.  C++ “how-to” videos are almost non-existent, and C++ forums do not seem to cater for inexperienced C++ programmers.  When they do try to help beginners, their language is hard to understand and their expectations of your knowledge are beyond my abilities.

Over the last 5-6 weeks, I have spent countless hours researching quadtrees and other spatial partitioning methods, Box2D collisions, header bloat, making a GUI, networking, installing libraries into Visual Studio and installing OpenMP into Visual Studio.

From those countless hours, I have achieved installing OpenMP and the libraries into Visual Studio, plus bits and pieces of code and pseudo code that I can’t implement.

The reason I am not liking C++ is because it feels like I am learning to code with heavy blinkers on my eyes and both hands tied behind my back.

If I should stumble across any user friendly C++ tutorials and sites, I will edit this post with their addresses, but I don’t expect you to hold your breath waiting for them.

DrawClient … a C++ experience from hell and likely not the last

If this task has taught me one thing, it is that I want to be a Unity developer at the end of this course.  For me, C++ is so unwieldy and unmanageable, even for the simple things, like a GUI.

For the last couple of weeks, we have been making a draw client that connects over a network to a draw server and creates 3 pieces of art, with the user neither seeing what is being created nor having any control over it.

Even before my first task, I had to download and include SFML in my Visual Studio project.  That was a saga in itself, but by following several online tutorials I was able to complete it.  I wouldn’t be able to redo it without referring to online tutorials again, though.  It is a complicated process with many instructions for modifying the properties of the project; certainly not a simple drag and drop.  That doesn’t even go close to describing the confusion when I was getting compile errors because certain .dll files weren’t where they needed to be.  I ended up copying and pasting the whole set of SFML .dll files to several points in the project before it would compile for me.

Then there were the problems between working on my version of Visual Studio and the University’s version.  My version required the following line to connect to the network:

send_address.sin_addr.s_addr = inet_addr(dest);

whereas the Uni’s version required this line instead:

InetPtonA(AF_INET, dest, &send_address.sin_addr.s_addr);

So my first task was to try and create a heatmap depicting my mouse movement.  This would be saved off to a file on the close of the program and would reveal the areas that my mouse visited while the program was running.  Creating a window was easy with SFML, as was catching the mouse-moved event that simply captured my mouse position every frame.

I created an Image and called it heatmap, sizing it according to the RenderWindow I had created.  I then created a Color called “trace” and made it red with an alpha of .1.  This would allow a stronger intensity of red to show through in the areas that my mouse frequented.
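The setup was roughly along these lines.  This is a sketch rather than the exact project code: the window size is an assumption, and since SFML colour channels run from 0 to 255, the “.1” alpha becomes roughly 25 here.

sf::RenderWindow window(sf::VideoMode(800, 600), "DrawClient");   // assumed size

sf::Image heatmap;
heatmap.create(window.getSize().x, window.getSize().y, sf::Color::Black);

sf::Color trace(255, 0, 0, 25);   // faint red used to mark visited pixels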

This section of code would record the making of the heatmap.

if (event.type == sf::Event::MouseMoved)
{
    //trace = heatmap.getPixel(event.mouseMove.x, event.mouseMove.y);
    heatmap.setPixel(event.mouseMove.x, event.mouseMove.y, trace);
}

Then, outside of the RenderWindow loop, this piece of code would create the heatmap file:

heatmap.saveToFile("heatmap.bmp");

 

My interpretation of the brief for the first part of this task was wildly off the mark.  I mistook the concept of a GUI that you couldn’t see for a GUI that you couldn’t use.  It made no sense to me at the time, and so I had a main function that drew things independent of any desire of the user.

When I tried to introduce intent for the user, I ran into many problems.  Creating a GUI in C++ doesn’t seem to be an easy task.  I was able to design simple buttons for all the things I wanted the user to have control over: Pixel, Line, Box, Circle and Send.  I also created feedback images so that the user would know which button had been pressed.

After setting up all my textures and sprites, and setting up my classes to draw the desired sprites, I couldn’t get the program to compile.  With errors everywhere and time rapidly running out for the task, I did the “programmerly” thing and used key presses and mouse clicks to create the art instead.

So, with 4 things to make, I attached control to the F1-F4 keys.  You tap F1 to make a pixel, F2 to make a line, F3 to make a box and F4 to make a circle.  Once the key is set, the mouse takes over.  For a pixel, it just waits for the left mouse button to go down; as soon as it has this, it sends the pixel off to the server, which draws it on the server’s screen.  For all the other drawings, it takes the mouse-down position as the start of the drawing and the mouse-release position as the end.  Like the pixel, it grabs and stores the mouse position when the left button is pressed, grabs and stores the position again when the button is released, then calculates what information it needs, sends it to the server, and the shape is drawn on the server’s screen.
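The press/release pattern boiled down to something like this.  It is a sketch, not the real code: PacketLine, sendToServer() and the Tool/currentTool selection are stand-ins for whatever the real packet and state code were, and only the Line tool is shown.

sf::Vector2i dragStart;

while (window.pollEvent(event))
{
    // F1-F4 choose what the next mouse drag will draw (only the Line tool shown).
    if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::F2)
        currentTool = Tool::Line;

    // Remember where the drag started...
    if (event.type == sf::Event::MouseButtonPressed &&
        event.mouseButton.button == sf::Mouse::Left)
    {
        dragStart = sf::Vector2i(event.mouseButton.x, event.mouseButton.y);
    }

    // ...and build the packet when the button is released.
    if (event.type == sf::Event::MouseButtonReleased &&
        event.mouseButton.button == sf::Mouse::Left &&
        currentTool == Tool::Line)
    {
        PacketLine line = { dragStart.x, dragStart.y, event.mouseButton.x, event.mouseButton.y };
        sendToServer(line);   // placeholder for the real send code
    }
}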

The next part of the task was to send my mouse cursor information off to the server and receive every other connected cursor from the server.   I set up a counter that increments every frame and, when it hits a target value, sends and receives the information to and from the server.  (I have just realised that I should have set it up as an int instead of a float to make it run faster; I guess that is a C# habit coming through from setting up timers.)  The idea behind this is not to risk locking the computer up with send and receive requests.
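The throttle itself is only a few lines, something along these lines, with the two network calls standing in for whatever the real send and receive code is:

int frameCounter = 0;
const int sendInterval = 10;     // assumed value: talk to the server every 10 frames

// ... once per frame, inside the main loop ...
if (++frameCounter >= sendInterval)
{
    frameCounter = 0;
    sendCursorToServer();        // placeholder for the real send call
    receiveCursorsFromServer();  // placeholder for the real receive call
}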

The server sends the cursor information back as an array, and I capture it with the following code, which draws a small circle for each cursor, sets the fill colour to a random colour and gives it a low alpha (50 out of 255):

auto* d = reinterpret_cast<PacketServerCursors*>(buff);
for (int i = 0; i < d->count; ++i)
{
    sf::CircleShape circ(2);
    circ.setFillColor(sf::Color(rand() % 255, rand() % 255, rand() % 255, 50));
    circ.setPosition(sf::Vector2f(d->cursor[i].m_posX, d->cursor[i].m_posY));
    texture.draw(circ);
}
texture.display();

This code then displays the information to the screen:

sf::Sprite cursourInfo(texture.getTexture());
window.draw(cursourInfo);

While I am not sure what is happening to my cursor on the other end, I am sure that I am sending it, as no errors are encountered, and the other people connected to the server are sending their cursor information and it is being drawn on my screen.

While I did have some success with this task, it was not a desirable outcome for me.  I really wanted to have a visual GUI that looked reasonable and gave the user feedback, but again, C++ beat me back into submission.  The result is functional, but not pretty.

 

 

Optimising a Ray-tracing programme

Greg has set us several tasks this week.  I think I can see through Steve’s plan here: by getting Greg to set all the tasks, our group ire will be aimed elsewhere and Steve can come out of Studio 3 smelling like a rose 😉

This task was aimed at getting us to think (dread the thought) and to expose us to some handy little thread-optimisation techniques.

The programme, as handed to us by Greg, created the output image in 73.163 seconds (on this laptop).  As of this moment, the output image is being created in 7.5 seconds (again, on this laptop).

Unfortunately, the notes I was making as I went along have been surrendered to the void and I will have to wing it.  I can’t remember exactly how much time each step saved, but I can tell you about the main time saver, which cut a huge amount of time, and about another change that took me from the 12-second mark down to the 7.5-second mark.

A series of spheres is generated: 53 larger spheres in a spiral in the centre of the screen and 900 lying in an ordered manner on the ground.  The spiral spheres are reflective in nature and show all or most of the other spheres on the ground, depending on the relative positions.  There are also two lights in the scene.  One is a red light off to the left and the other is a white light off to the right (SPOILERS: an important fact for a later optimisation).

The core of the program is sending a ray through each screen pixel and seeing what it lands on and what information needs to be returned.  This creates an n^2 operation, as it scans through every object in the scene and then scans through every object again to determine the shadow values of the lights.

Step one was to go through the code and see where coding optimisations could be made.  The first was where the ray would work out which part of the skymap it hit and then check whether it actually hit a sphere.  I changed this to an if/else so that it would only return the skymap colour if there were no hits on a sphere.

The next step was to create a new shadow-check function.  The programme was originally using the same loop as the “trace” function and, even once it hit something, it would keep searching for a closer sphere.  With the shadows, I knew that once there was any object between the ray hitpoint and the light, the point would be in shadow and the function could return to the parent straight away.  This code enabled me to do that:

HitPoint shp;

for (int i = 0; i < m_renderables.size(); ++i)
{
    if (m_renderables[i]->m_active)
    {
        // Check the sphere.
        shp.nearest(m_renderables[i]->intersect(ray));
        // Return as soon as there is a hit.
        if (shp.m_hit)
            return shp;
    }
}
return shp;

I then tried to redesign the way the spheres were created.  The larger spheres were created from the ground up, which meant that for a majority of the scan it would have to go through 50 or so spheres before it hit something.  I changed the creation order so that it started at the top of the stack and wound down to the ground.  I felt this created a better-ordered array to search through.

From memory, these three changes dropped the running time to about 45 – 55 seconds.

The next change was a big deal.  I activated OpenMP in my version of Visual Studio and used #include <omp.h> in several of my classes.  I then used the “#pragma omp parallel for” directive.  Finding the right point for it was easy enough: the heaviest part of the main function is where the program goes through two “for” loops and sends a ray through each pixel.  Every other place I had used the directive was then removed, because the work was already parallelised through the main function.
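In case I forget exactly where the pragma ended up, the shape of it was roughly this.  The names (width, height, castRay, the image buffer) are stand-ins for the real names in the project; each pixel writes to its own index, so the rows can safely run in parallel.

#include <omp.h>
#include <vector>

struct Colour { float r, g, b; };

// Stand-in for the real per-pixel trace/shade call.
Colour castRay(int x, int y)
{
    return Colour{ float(x), float(y), 0.0f };
}

void renderImage(int width, int height, std::vector<Colour>& image)
{
    image.resize(width * height);

    #pragma omp parallel for
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            image[y * width + x] = castRay(x, y);
        }
    }
}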

This brought my processing speed down to about 12.5 seconds.

Then came a couple of failed experiments, including changing the order in which the spheres were created (adding 2 seconds to my processing time) and stuffing around with leaving every 4th pixel blank and then lerping between the left and right pixels to fill it in.  The latter showed up horribly when I placed the proper output image in Photoshop, placed the new output image on top as a layer and set the layer mode to “difference” (part of the brief is to have no differences between the correct output image and our own).

I had to find a way to get the program even further out of the n^2 way it was doing the shadow calculations.  That is when I hit on the idea that the first “trace” already determines the hitpoint and which renderable object it hits.  I created a new int value in the HitPoint class and called it m_renderableNumber.  Then, as the function was scrolling through the renderable objects trying to find the nearest object to the ray, I could record which object was the nearest and use that as a basis for minimising the number of objects the shadow check needs to test against.

Here is the code I used to get the m_renderableNumber:

HitPoint hp;

for (int i = 0; i < m_renderables.size(); ++i)
{
    if (m_renderables[i]->m_active)
    {
        // Find the nearest intersect point.
        hp.nearest(m_renderables[i]->intersect(ray));
        if (hp.m_renderable == m_renderables[i])
        {
            hp.m_renderableNumber = i;
        }
    }
}
return hp;

It wasn’t quite as easy as I just described.  I was originally using if(hp.m_hit = m_renderables[i]) and ending up with m_renderableNumber always equalling 953.  I tried various other methods of getting the renderable number and, as often as not, ended up with a number like -8355809.  I figured it was an error, because it was always the same number, but I still couldn’t find out how I was getting a rubbish number.  Google is your friend?  Nah.

So now I had the index in the renderables array from which I could start my search.

After several false starts trying to find out how I could determine which light was being checked, I came up with this line of code: “if (ray.end().x < hit.m_position.x)”.

If true, we are searching for the red light to the left, else, it must be the white light to the right.

Next came the coding nightmare that I hated writing, but it meant that, best case, the shadow check was only testing 2 renderables instead of 953 and, worst case, about 100 spheres.

I set up if and if/else statements covering different combinations of where the spheres were placed and where the light was placed, for a total of 577 lines of code.

I will try to clean this code up a bit in the meantime, but I really need to understand C++ better so that I can use different classes and functions as I do in C#.  C++ terrifies me because I just don’t understand how it works.

The above video shows my optimised raytracer running against the control version that we were given at the start of the project.  It should be noted that the times are off due to having to use Open Broadcaster Software (OBS) to capture the footage.

Anyway, this brought my processing time down from 12.5 seconds to 7.5 seconds, almost half the time, and the output is still an exact match in Photoshop.

Quite chuffed at the result, but not so much with the amount of code needed.