Tag Archives: Career Goals

Am I really ready to Graduate?

I don’t think my aims have changed since starting this course.  I wanted to marry my animation skills with programming and either create my own games/apps or use those skills to move into the industry later in my career.

C++ nearly broke me and perhaps that is a sign of being too old to adapt between different languages.  What the last two studio subjects have taught me is that I want to be a Unity developer when I graduate.

It seems, if I am reading the job market correctly, that the main gaps left for me to fill are an understanding of the Mecanim system and animation trees, and experience with a range of repository formats (my only experience to date has been with Git).

http://positron.applytojob.com/apply/212a0d256b69767c530055704667635e4c4414022e002f1302056a0e3e08277f130003/Game-Developer?source=INDE&sid=KEozYYGIc4eCtVVgxrKPo1nXTM9XlXe5CDF

https://au.jora.com/job/Unity-Developer-d4c05512962c672f7af8241b8ce5609f?from_url=https%3A%2F%2Fau.jora.com%2FUNity-3D-Developer-jobs&sp=serp&sponsored=false&sr=1

http://www.tsumea.com/news/230715/experienced-unity-developer

http://www.tsumea.com/news/020715/unity-prototype-designer

http://forum.unity3d.com/threads/paying-job-developer-animator-to-work-on-puzzle-platformer.326355/

Next trimester I am working on the Final Project, which means I will be looking at animation trees over the holidays and implementing a few of my previous animations, so that I can readily understand how they work and how best to leverage them.  This way I will be prepared to implement animations as soon as we return from the holiday break.

Also, next trimester, I will be working with a group of ex-QANTM students who have their own studio and are trying to get their game “Hands Off” greenlit on Steam.  I will be doing rapid prototyping for them for new game ideas and for improvements or new concepts for the “Hands Off” title.  They will expect me to be knowledgeable in using Bitbucket and TortoiseHg, which I will also be looking into over the holiday break.  As I am already conversant with Git, I’m sure it will not be a culture shock to use Bitbucket and TortoiseHg.

I think it will be a very busy lead-up to the end of the year, but I am looking forward to it.  Perhaps that is because I am finished with C++, but I can’t just leave it there.  Some of the advertisers looking for Unity developers also recommend Unreal experience.  While I have used Unreal to show off modelling assets and have used the Kismet system, I doubt that this would be enough to land a job, so I will need to continue with my C++ training and try to become a lot more proficient in it.  Who knows, I might even begin to like it.

DrawClient … a C++ experience from hell and likely not the last

If this task has taught me one thing, it is that I want to be a Unity developer at the end of this course.  For me, C++ is unwieldy and unmanageable, even for the simple things, like a GUI.

For the last couple of weeks, we have been making a draw client that connects over a network to a draw server and creates three pieces of art, without the user seeing what is being created or having any control over it.

Even before my first task, I had to download SFML and include it in my Visual Studio project.  That was a saga in itself, but by following several online tutorials I was able to complete it.  I wouldn’t be able to redo it without referring to online tutorials again, though.  It is a complicated process with many instructions for modifying the properties of the project; certainly not a simple drag-and-drop process.  That doesn’t even go close to describing the confusion when I was getting errors because certain .dll files weren’t where they needed to be.  I ended up copying and pasting the whole set of SFML .dll files to several points in the project before it would build and run for me.

Then there were the problems between working on my version of Visual Studio and the University’s version.  My version required the following call to connect to the network: send_address.sin_addr.s_addr = inet_addr(dest);  whereas the Uni’s version required this line of code: InetPtonA(AF_INET, dest, &send_address.sin_addr.s_addr);

So my first task was to try to create a heatmap depicting my mouse movement.  This would be saved to a file when the program closed and would reveal the areas that my mouse visited while the program was running.  Creating a window event was easy with SFML, as was creating a mouse event that simply captured my mouse position every frame.

I created an Image and called it heatmap, sized according to the RenderWindow I had created.  I then created a Color called “trace” and made it red with an alpha of 0.1.  This would allow a stronger intensity of red to show through in the areas that my mouse frequented.

This section of code would record the making of the heatmap.

if (event.type == sf::Event::MouseMoved)
{
    //trace = heatmap.getPixel(event.mouseMove.x, event.mouseMove.y);
    heatmap.setPixel(event.mouseMove.x, event.mouseMove.y, trace);
}

Outside of the RenderWindow loop, this piece of code would create the heatmap file:

heatmap.saveToFile("heatmap.bmp");

My interpretation of the brief for the first part of this task was wildly off the mark.  I mistook the concept of a GUI that you couldn’t see for a GUI that you couldn’t use.  It made no sense to me at the time, and so I had a main function that drew things independent of any intent from the user.

When I tried to introduce intent for the user, I ran into many problems.  Creating a GUI in C++ doesn’t seem to be an easy task.  I was able to design simple buttons for all the things I wanted the user to have control over: Pixel, Line, Box, Circle and Send.  I also created feedback images so that the user would know which button had been pressed.

After setting up all my textures and sprites, and setting up my classes to draw the desired sprites, I couldn’t get the program to compile.  With errors everywhere and time rapidly running out, I did the “programmerly” thing and used key presses and mouse clicks to create the art.

So, with four things to make, I attached control to the F1–F4 keys.  You tap F1 to make a pixel, F2 to make a line, F3 to make a box and F4 to make a circle.  After setting the key, the mouse takes over.  For a pixel, it just looks for the left mouse button going down; as soon as it has this, it sends the pixel off to the server and creates it on the server’s screen.  For all the other drawings, it takes the mouse down as the start of the drawing and the mouse released as the end.  Like the pixel, it grabs the mouse position when the left mouse button is pressed and stores it, then grabs the mouse position when the button is released and stores that.  It then calculates what information it needs, sends it to the server, and the drawing appears on the server’s screen.

The next part of the task was to send my mouse cursor information to the server and receive every other connected cursor from the server.  I set up a counter that increments every frame and, when it hits its target, sends and receives the information to and from the server.  (I just realised that I should have set it up as an int instead of a float to make it run faster; I guess that is a C# habit coming through from setting up timers.)  The idea behind this is not to risk locking the computer up with send and receive requests.

The server sends the cursor information back as an array of cursor data, and I capture it with the following code, which draws a circle for each cursor, sets the fill colour to a random colour and gives it an alpha of 50.

auto* d = reinterpret_cast<PacketServerCursors*>(buff);
for (int i = 0; i < d->count; ++i)
{
    sf::CircleShape circ(2);
    circ.setFillColor(sf::Color(rand() % 255, rand() % 255, rand() % 255, 50));
    circ.setPosition(sf::Vector2f(d->cursor[i].m_posX, d->cursor[i].m_posY));
    texture.draw(circ);
}
texture.display();

This code then displays the information to the screen:

sf::Sprite cursourInfo(texture.getTexture());
window.draw(cursourInfo);

While I am not sure what is happening with my cursor, I am sure that I am sending it, as no errors are encountered, and the other people connected to the server are sending their cursor information, which is being drawn on my screen.

While I did have some success with this task, it was not a desirable outcome for me.  I really wanted to have a visual GUI that looked reasonable and gave the user feedback, but again, C++ beat me back into submission; the result is functional, while not pretty.

Creative goals for group project

When I “signed up” for this project, it seemed to me that my goal would be to focus on the visual elements, predominantly the shaders and particle effects.

As it turned out, Tim Volp was also the other programmer on this project, and I know from previous projects that these are his goals as well.

The visual elements were split down the middle, with Tim looking after the shaders (he had some really great ideas to follow up) while I looked after the fog particles.

The way things ended up, I was also looking after the character controllers and the camera.

My main problem with the character controllers was not knowing that the CharacterController has an isGrounded check built into the component. Careful reading of the APIs would have alerted me to that in the first place. It seems that one of my main problems is that I search for what I want something to do; if I start my search with the APIs instead, I will have a better understanding of what a component can actually do, and what terms I can use for better Google searches. As it was, I was probably keeping my characters too far off the ground using a raycast, instead of the pre-existing isGrounded check.
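
As a note to my future self, the kind of pattern I should have been using is sketched below (a minimal example with placeholder names and values, not the project's actual controller):

using UnityEngine;

// Minimal sketch: lean on CharacterController.isGrounded instead of a hand-rolled raycast.
// The class name, speeds and input axis are placeholder assumptions.
[RequireComponent(typeof(CharacterController))]
public class SimpleMover : MonoBehaviour
{
    public float moveSpeed = 5f;
    public float gravity = -9.81f;

    private CharacterController controller;
    private float verticalVelocity;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // isGrounded reflects the result of the previous Move() call.
        if (controller.isGrounded && verticalVelocity < 0f)
            verticalVelocity = -1f;   // small downward push keeps the controller grounded
        else
            verticalVelocity += gravity * Time.deltaTime;

        Vector3 move = transform.forward * Input.GetAxis("Vertical") * moveSpeed;
        move.y = verticalVelocity;
        controller.Move(move * Time.deltaTime);
    }
}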


The fog wasn’t much of a challenge, just getting the right particles to do the trick. I did learn quite a bit about the Shuriken Particle System, though. It was the first time I have used this system, having opted for the Legacy Particle System previously. The way I found to get the particles to size up adequately was to create the system with the x scaling of the emitter at 0.3, then recreate it at a scale of 10, and again at a scale of 30. The different attributes of the fog that could be controlled through script were then plotted against the scaling, and a formula was derived for each that would plot the curve of the fog elements.

This led to unusual looking formulae within my code that looked a bit like this:

fogEmmitter.startLifetime = (0.0016f * xScale * xScale) + (0.145f * xScale) + 10.49f + startLifeTimeOffset;
fogEmmitter.startSpeed = (-0.0014f * xScale * xScale) + (0.1139f * xScale) + 0.549f + startSpeedOffset;
fogEmmitter.startSize = (-0.0005f * xScale * xScale) + (0.0998f * xScale) + 0.514f + startSizeOffset;
fogEmmitter.maxParticles = (int)((366.97f * xScale * xScale) + (339.25f * xScale) + 7581f + maxParticlesOffset);
fogEmmitter.emissionRate = (-0.226f * xScale * xScale) + (92.663f * xScale) + 656.68f + emissionRateOffset;

My biggest challenge for the project was the dynamic camera. This was an unusual camera setup whereby the player was never to have control of the camera. The theme of the game was about trust, and the players needed to trust that we would be doing the right thing with the camera. Let me start by saying that 5 to 6 weeks is not nearly enough time to establish a decent camera system for a “Journey”-like game. The guy who was brought in to work on the Journey cameras was at it full time for 2 to 3 years. Unfortunately, the camera wasn’t up to the task, even for the gallery, but I was close. So close, but no cigar. I think I have found the problem with it.

Because there are two players affecting the movement of the camera, I was looking at different ways to drive the camera, mainly by following the rotational direction of a target that sits between the two players. I started using Quaternions, but the camera would switch around way too fast and jitter in certain places. I settled for using a distant target and trigger boxes that could swap the targets around when there was a specific purpose, but there were many complaints about the camera and its angles.

I did some more research into the camera and I thought I had the solution. The tutorial area is supposed to be fun and unstructured, so the players can faff around and get used to the controls without any pressure. I believed I had achieved this and even play-tested the new camera with my son, but somehow it too failed me. It was “OK” for the first part of the level, but halfway through it was pointing in the wrong direction.

I was devastated, for we were now at the gallery. Some very quick and nasty trigger boxes were set up to try and get everything finalised, but Savik and I were not very happy with the outcome.

This weekend, I believe that I came up with another solution. For some weird reason, I always thought that a player’s magnitude was the same as his velocity. I was experimenting with their velocity when I realised that I had been overthinking the problem (as usual). I just needed to find the average of their individual transform.forwards and use that for the target rotation.
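
In code terms, the idea is roughly the sketch below (hypothetical names; the real script has extra smoothing and target offsets):

using UnityEngine;

// Rough sketch: point the camera target along the average of the two players' facing
// directions. Field names and the turn speed are placeholder assumptions.
public class CameraTargetRotation : MonoBehaviour
{
    public Transform playerOne;
    public Transform playerTwo;
    public float turnSpeed = 2f;

    void LateUpdate()
    {
        // The average of the two forward vectors gives a shared facing direction.
        Vector3 averageForward = playerOne.forward + playerTwo.forward;

        if (averageForward.sqrMagnitude > 0.001f)
        {
            Quaternion targetRotation = Quaternion.LookRotation(averageForward.normalized);
            // Ease towards the target rotation so the camera doesn't snap or jitter.
            transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, turnSpeed * Time.deltaTime);
        }
    }
}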

I will be play-testing this with Tim today (play-testing is something that was sadly missing from this project, but it will be rectified in future efforts), and I think I have finally done well with this iteration of the camera movement.

My struggle with a Check Point Manager

My current project needed a manager to record where the players need to respawn in the event of their death.

While it was unlikely that they could actually die in the tutorial area of the game, I made sure that they would still respawn at the start of the level. When it came to the players going through a trigger to change the spawn point, I found that just using the following script wouldn’t work.

using UnityEngine;
using System.Collections;

public class DontDestroyMe : MonoBehaviour {

    // Use this for initialization
    void Start () {
        DontDestroyOnLoad (gameObject);
    }

    // Update is called once per frame
    void Update () {

    }
}

While this instance doesn’t destroy itself, reloading the scene actually creates another instance of the CheckPointManager, which seems to take priority over the current instance of the script and throws null reference exceptions when trying to find the individual check points.

My research on the internet eventually led me to this site : http://unitypatterns.com/singletons/

This led me to using this for my CheckPointManager:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class CheckPointManager : MonoBehaviour
{
    private static CheckPointManager _inst;

    public static CheckPointManager inst
    {
        get
        {
            if (_inst == null)
            {
                _inst = GameObject.FindObjectOfType<CheckPointManager>();
                //Don't destroy
                DontDestroyOnLoad(_inst.gameObject);
            }
            return _inst;
        }
    }

    public List<GameObject> checkPoints = new List<GameObject>();
    public GameObject atlCharacter;
    public GameObject ollinCharacter;

    public bool spawnPoints = true;
    public bool firstCheckpoints = false;
    public bool secondCheckpoints = false;

    void Awake()
    {
        if (_inst == null)
        {
            //If I am the first instance, make me the Singleton
            _inst = this;
            DontDestroyOnLoad(this);
        }
        else
        {
            //If a Singleton already exists and you find
            //another reference in scene, destroy it!
            if (this != _inst)
                DestroyImmediate(this.gameObject);
        }
    }
}

Through testing, I found that when the game returned to the menu, the CheckPointManager was still operating and throwing null reference exceptions.  I modified the Awake function to the following:

void Awake()
{
    if (_inst == null)
    {
        //If I am the first instance, make me the Singleton
        _inst = this;
        if (Application.loadedLevel == 0)
            DestroyImmediate(this.gameObject);
        else
            DontDestroyOnLoad(this);
    }
    else
    {
        //If a Singleton already exists and you find
        //another reference in scene, destroy it!
        if (this != _inst)
            DestroyImmediate(this.gameObject);
    }
}

The next problem was with the actual checkpoints.  As they were being destroyed, I ended up using the first “DontDestroyMe” script on the empty game object that was the parent of the individual check points.

This ended up working perfectly, for this project, at least, but I think it should work on any future game.
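
For context, the point of the singleton is that any other script can reach the one surviving manager through the static property. A hypothetical checkpoint trigger might use it like this (a sketch only, not the project's actual trigger code):

using UnityEngine;

// Hypothetical example of another script talking to the CheckPointManager singleton.
// The "Player" tag and the flag choices are assumptions for illustration.
public class CheckpointTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player"))
            return;

        // Reach the single surviving manager through its static property
        // and record that the players have passed the first checkpoints.
        CheckPointManager.inst.spawnPoints = false;
        CheckPointManager.inst.firstCheckpoints = true;
    }
}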


Why is it doing this vs Why isn’t it doing that

For several weeks, I was having a problem with my camera.  It would work perfectly on my test level, but would behave differently when I brought it into other levels.  It wasn’t until I was explaining to a team mate how the camera was working that I realised that I wasn’t using a base.y variable for the Y axis.  I had the .x and .z axes covered, but I had forgotten to do the .y.  The camera worked on my level because it was close to y = 0, but it obviously failed on levels that were built higher than y = 0.  In hindsight, the cause of this problem was screaming at me from all sides, but I couldn’t see it because I was focused on the end behaviour of the camera.
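
To illustrate the kind of mistake, here is a contrived sketch (not the actual camera script) of a follow position that copies the target's x and z but leaves y hard-coded, which only happens to work on levels built near y = 0:

using UnityEngine;

// Contrived illustration of the bug described above; names and the offset are placeholders.
public class FollowBase : MonoBehaviour
{
    public Transform target;
    public Vector3 offset = new Vector3(0f, 5f, -10f);

    void LateUpdate()
    {
        // Buggy version: Vector3 basePos = new Vector3(target.position.x, 0f, target.position.z);
        // Fixed version uses the target's y as well:
        Vector3 basePos = new Vector3(target.position.x, target.position.y, target.position.z);
        transform.position = basePos + offset;
    }
}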

I tend to approach troubleshooting from the view of the end result, and why my objects aren’t behaving like they are supposed to.

I think that I should be looking from the starting point of the problem and look at why they are doing what they are doing.  I know this doesn’t sound like much of a shift in thinking patterns, but this subtle shift has resulted in me finding the source of my problems easily and thereby implementing a solution.

For example, a team mate wrote a function to slerp-look-at an object, and I tried to use it in another script that controls the target for the camera to look at when not in free-roam mode.  I couldn’t get it working for a week.  With this new mindset, I could see what was happening: I was rotating the entire trigger box that the script was attached to.  I saw that the transform that was going into the function wasn’t what was being rotated.  It only works if the transform you are rotating belongs to the game object that you are trying to rotate.  The weird thing is, I knew this code worked in other scripts, but I still couldn’t work out why it wouldn’t work in this piece of code.
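
A sketch of the kind of helper I mean, with the gotcha spelled out (placeholder names, not my team mate's actual code):

using UnityEngine;

// The transform passed in must belong to the object you actually want to rotate;
// rotating some other transform (like the trigger box the script sits on) does
// nothing useful to the camera target.
public static class LookAtHelper
{
    public static void SlerpLookAt(Transform rotating, Vector3 targetPosition, float speed)
    {
        Vector3 direction = targetPosition - rotating.position;
        if (direction.sqrMagnitude < 0.0001f)
            return;

        Quaternion targetRotation = Quaternion.LookRotation(direction);
        rotating.rotation = Quaternion.Slerp(rotating.rotation, targetRotation, speed * Time.deltaTime);
    }
}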

Maybe someone tried to tell me this in previous Trimesters, but it really should be the first point to drive home when writing code.


When failure turns into success (because of this blog).

This blog was going to be about how I failed to make a working MaxScript in 3ds Max.  I had successfully made the script, but due to an oddity (in my opinion) in 3ds Max, I didn’t realise it at the time.  It wasn’t until I was taking screen captures that I realised that my script actually worked.

The problem I tried to overcome was that, coming from an animation background, I know you have to go in and physically change the rotation of the pivot point before you export the asset as an FBX file.  Even though you check the Y-axis-up box, it still doesn’t change the rotation of the pivot point, which can cause problems when coding the models in Unity with the Z axis pointing to the sky.  This process can be a pain if there are several items in the scene.

The first step was to research maxScripts and the MaxScript Listener (http://www.polycount.com/forum/showthread.php?t=84895 and http://knowledge.autodesk.com/support/3ds-max/getting-started/caas/CloudHelp/cloudhelp/2015/ENU/3DSMax-Tutorial/files/GUID-4FB4F8E4-0B7C-4417-B725-B0700714A4F9-htm.html?v=2015).

Just using the Listener methods to try and record my actions, I created a small script that looked a bit like this.

Code taken from the MaxScript Listener

My initial scene looked like this:

Just a standard box that has been converted to an editable poly.

After executing the script, it looked like this:

The effect of running the "Listener" script on the model.

This wasn’t the desired result.  I did try a variation of the script, whereby I went back out of Hierarchy mode and rotated the model -90 on that same axis, but after executing that script, the model was the same as it was before.  This did forewarn me about a potential problem I could be facing.

I later found a great deal of useful information at this site:  http://docs.autodesk.com/3DSMAX/16/ENU/MAXScript-Help/index.html?url=files/GUID-624D3D05-B15D-4A97-9F15-DA35CDB0DDD2.htm,topicNumber=d30e703708.  Typical of AutoDesk to have nearly all of the relevant documentation available to designers 😉

Another valuable source for me was http://www.neilblevins.com/cg_tools/soulburnscripts/soulburnscripts.htm which has a lot of scripts available to do a great many things in Max, except the thing that I wanted to do.  Blevins gave me the clue that I needed to get into the XForm modifier and rotate its gizmo.  This will, however, create a new modifier in the Modifier List, which will have to be collapsed when the model is converted to an Editable Poly again.  From my Listener script, I took a gamble that I would need to rotate the mesh in the opposite direction.

While they are older videos, there is some good content in them to help with rollouts: https://vimeo.com/album/1514565/page:1/sort:preset/format:thumbnail.

The problem for me was that, when running my script, I dragged it from the dialogue box into my scene.  I didn’t, however, close this damn dialogue box, and no matter how many times I tried to get it to work, it just wouldn’t work properly.  It was rotating the pivot point in the correct direction, but there were issues with the model’s movement and the fact that I couldn’t delete it.  Even when starting a new scene, the model still came up.  I put this down to bugs in my code.  What I didn’t know was that the code would have worked if I had closed the dialogue box.

Once this had been discovered, it was full steam ahead.  Referring to this site http://knowledge.autodesk.com/search-result/caas/CloudHelp/cloudhelp/2015/ENU/MAXScript-Help/files/GUID-6E21C768-7256-4500-AB1F-B144F492F055-htm.html, it was a simple matter to create the macro by dragging the text of the program onto the toolbar and assigning an icon to the new button.

Here are some screen grabs showing the process and result of changing the y axis.

Setting up a new box for testing.

Showing the button and the pop up text and the rollout window.

After clicking the tool button you can see the pivot axis is rotated.

Final code used to create this tool.

In summation: I could have finished this off so much earlier this weekend, if not for the open script dialogue box.  Pro tip: close dialogue boxes in Max before testing to see whether your code is working properly.

Particles …

We had been operating on the assumption that many particles would slow down the frame rate, so, just to make sure, I created a particle system to replicate the fog and tried it out.  I was able to create a 60,000-particle fog with no real detriment to the frame rate.

Base frame rate for the scene .. range between 73 and 93 fps.

Particle system running 60 000 particles with frame rate ranging between 80 and 93 fps.

I was hoping that there would be more factors editable from script in Unity’s Shuriken Particle System, but that was not the case.  There is very little that can be edited from a script.  I was hoping that I would be able to control the size of the emitter by changing the radius of the circle.  This would have allowed the fog to slowly close in toward the players’ position.

While scrolling through the “Shape” options, I came across using a mesh for the emitter.  I created a circular mesh that was deformed, but once applied, I still couldn’t make any changes to the mesh, as it was now embedded into the particle system.

What I did then was to attach the Particle System component and replicate the settings.  This gave me the opportunity to slowly rotate the particle effect so that it wasn’t a constant visual image, and to decrease the local scaling of the emitter so that it closed in on the players.
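
The rotate-and-close-in behaviour amounts to something like the sketch below (the speeds, minimum scale and component setup are placeholder assumptions):

using UnityEngine;

// Sketch: slowly revolve the fog emitter and shrink its local scale so the fog
// closes in on the players. Values here are placeholders, not the tuned settings.
public class FogCloser : MonoBehaviour
{
    public float rotateSpeed = 2f;     // degrees per second, keeps the fog from reading as a static image
    public float shrinkSpeed = 0.05f;  // how quickly the emitter closes in
    public float minimumScale = 0.3f;

    void Update()
    {
        // Slow revolution around the vertical axis.
        transform.Rotate(0f, rotateSpeed * Time.deltaTime, 0f);

        // Gradually reduce the horizontal scale so the fog tightens around the players.
        float newScale = Mathf.Max(minimumScale, transform.localScale.x - shrinkSpeed * Time.deltaTime);
        transform.localScale = new Vector3(newScale, transform.localScale.y, newScale);
    }
}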

Particles emitting from a mesh and slowly revolving and scaling in towards the player

This is my first time using the Shuriken system; however, I have used the Legacy particles in previous projects.  I set the duration of the clip to 30 seconds so that the players would not see regular, repeating patterns.  It is also set to looping, because once it is on, it is going to stay on.  For the purposes of setting the particle system up I also used pre-warm, and I am seriously considering using that setting when applying it in our game, because it looks like crap when it starts off.

The start speed is set very low, at 0.02, because it is a slow-moving fog.  The start lifetime is set at 10, and this gives the height of the effect.  I have also set a large particle size, because I want it to be seen from a distance and I wanted to block out as much of the background as I could.

To try and make the fog look like it is rolling, I set an initial rotation, and the particles keep rotating all the way through their life.

There is a slight negative gravity so that the particles slowly rise into the air, and they inherit 50% of the rotational velocity from the mesh.

I have also set the size of the particles to quickly fall to zero near the end of their life.  I believe that this also helps to give the impression of the fog rolling and seeming to rise up and roll over into the background.

This is by no means finished as far as the final fog will go but it is certainly enough for proof of concept and will suffice until the polish stage of the project.

The other thing I am considering is that I am currently using a vertical ring-shaped mesh, and I would like to change it to a thin horizontal ring-shaped mesh.  This will give me greater control to manipulate the mesh through script and to try to follow the contours of the terrain once it has been modelled.