So I’ve been working on quite a bit of stuff recently. With these sketches below I’ve been trying to develop my illustrations and concepts so I can get ideas out quickly. I’ve had to storyboard a bit in the past year, but I always seem to struggle with certain aspects, whether it’s a character in a weird pose or just items I’ve never really drawn before. So for the most part I’ve been focusing on perspective and form rather than detail: blocking in quickly instead of dwelling too long in one area, and generally trying to be more confident when drawing.
I got to do some VFX for a few shots of a documentary last year, which was fun. It was a pretty simple track/mask with some rotoscoping, but fun nonetheless. The main challenges were matching the lens blur as it changed, following camera bumps, and matching the black and white levels coming from the screens, since blacks on the screens were reading as a dark grey.
The harder shots were the close-ups, where I did the same as before, but the camera was shaking a lot more, as was the lens blur. On top of this, you can just pick up the scan lines from the pixels in the reference shots, so I had to build a filter to mirror that effect. The colour bleed didn’t help either, as I basically had to recreate each frame from other parts of the shot and track those as well. In the end it was seamless, and since it was for a documentary it had to look as real as possible even though it was meant to be a simple mask. I could have just taken a still frame and added my own camera shake and lens blur, but I knew that wouldn’t look as real and could break immersion, and for a documentary that would be a big no-no.
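The black-level match is really just a levels remap: instead of letting the replacement plate hit pure black, its range gets compressed to sit between the screen’s actual black and white points. Here’s a rough numpy sketch of the idea; the black/white point values are made-up examples, not the ones from the actual shots, and the real grade was of course done per-frame in the compositor.

```python
import numpy as np

def match_levels(plate, black_point=34.0, white_point=245.0):
    """Remap an 8-bit plate's 0-255 range into black_point-white_point,
    so pure black lands on the screen's dark-grey black instead."""
    norm = plate.astype(np.float32) / 255.0
    out = black_point + norm * (white_point - black_point)
    return np.clip(out, 0, 255).astype(np.uint8)

screen_plate = np.zeros((4, 4), dtype=np.uint8)  # pure-black input patch
print(match_levels(screen_plate)[0, 0])          # lifted up to the grey black point
```

In practice you would sample the black and white points off the tracked screen itself each frame rather than hard-coding them like this.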
Lastly, here are all the logos and type I’ve worked on over the last year.
Sugarcane Kitchen – Hand-lettered type for a kitchen opening up in Cairns very soon.
VESPER GREEN LOGO [WIP] – Still working with the local Brisbane band on their logo, but this is probably close to what the end product will look like. At this point I’ve gone with a custom font designed specifically for this logo and a hand-drawn icon.
Paper Trail was an idea I had: stick figures on pieces of paper in a classic noir-style series of shorts. It’s a project that at this point is about 65% done. The environment just needs a bit of clutter, and the storyboard is done, as is the script. The next thing I really need is voice acting, but I don’t have the money to hire talent, and the people who could do it for free don’t have the time.
Here is a little something I’ve been working on. It’s to be a winter forest environment, and this is the base. I plan on adding various other elements, such as better snow on the ground and some on the trees, as well as possibly bringing it all into Unreal so you can walk around and explore some ruins in the forest. This is just where I’ve started and what I’ve put together in an afternoon.
Here are the four tree variations I’ve made. Each has the same trunk, just slightly modified. The branches and other bits were made using a particle system, so I can generate variations to change things up if need be.
And here is the set of branches and twigs that stick out of the trunk. It needs a bit of work before I start assembling it all in Unreal, but this seems to be a fairly easy and contained project.
Got a sculpt? Now you’ll need to retopologize it if you want to use it in a game or animation. ZBrush does this internally, but you can use any standard 3D package out there to achieve the same result. In fact, after some first-hand experience, ZBrush isn’t the best way to retopologize, at least with its built-in tools.
In my endeavours I found two ways to retopologize in ZBrush. The first is probably the better way, though again, most other programs can do the same: you draw on the mesh, and where the lines overlap the program creates verts; if they form a quad or a tri, it creates a polygon. This approach is fairly fluid and can create some pretty good loops (which is what it’s all about), but some fiddling in an external package is probably still needed.
The other way is how most packages do retopology: you create verts on the mesh individually and extrude from each vert, and the program places them on the surface of the model, much like the example above. However, where ZBrush fails with this technique is that sometimes the verts will lock in place and become unable to move. The worst part is that you can’t extrude more than one vert at a time to create a face; you have to place and connect them one by one. You also can’t control loops in the mesh: you can select multiple verts at a time, but not as loops, faces, or even edges.
Below is as far as I could get with the retopology in ZBrush. I need to bring it into another program for the rest, as this took far too long and any other package could easily do it faster. I think it’s not too bad; the only problem is that it took so long to get right, and even then, as you can probably see, bits of it still look a little janky. Again, though, not a huge deal, as it’s easily fixable.
So ZBrush can retopologize, but the way I did it took a while, and I would rather do it any other way. That said, given the other technique, I think I might come back to it for future models to see how I handle it that way.
And hey may as well show off the latest version of the sculpt. Go on then.
While you’re having a look at this, why not check out my showreel over here.
Mocap is a really interesting thing, and used correctly it can save a lot of time in animation. It also helps give real movement and weight to actions, but there are a few drawbacks holding it back, at least for our group on The Mechanic. We don’t have the best or most accurate capture data, so a lot of time will go into fixing broken limbs and removing or replacing broken frames. Importing the footage is easy enough; we just need a rig to put it on, or alternatively we can use the one the file gives us. The suit and software we used was Axis Neuron. Here’s how it works.
The suit has nodes which can be placed all over the body and hands. Our suit didn’t have enough for the hands, which is a shame, but we got enough footage from the full body. Each node contains a few things: an accelerometer to record movement and a gyroscope to record rotation. They all connect to each other and then to a control pack on the back, which sends a signal over the wireless network to the software.
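The two readings in each node complement each other: the gyroscope is smooth but drifts over time, while the accelerometer is noisy but always knows which way gravity points. I don’t know what Axis Neuron actually does internally, but a common way to fuse sensors like these is a complementary filter, sketched here for a single pitch angle:

```python
import math

def fuse_pitch(gyro_rates, accel_samples, dt, alpha=0.98):
    """Blend integrated gyro rate (deg/s) with accelerometer-derived
    pitch (deg). alpha controls how much the gyro is trusted short-term."""
    pitch, history = 0.0, []
    for rate, (ax, ay, az) in zip(gyro_rates, accel_samples):
        # Pitch implied by the gravity direction in the accelerometer reading.
        accel_pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        # Trust the gyro for fast motion, the accelerometer for the long run.
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * accel_pitch
        history.append(pitch)
    return history

# A node held still at 90 degrees: gyro reads zero, gravity reads along x,
# so the estimate slowly converges toward 90 without gyro drift.
est = fuse_pitch([0.0] * 200, [(1.0, 0.0, 0.0)] * 200, dt=0.01)
```

A real suit fuses full 3D orientation per node (usually with quaternions and often a magnetometer as well), but the trust-the-gyro-short-term, trust-gravity-long-term idea is the same.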
That’s the cool thing about this suit: as long as it’s connected to the wireless network, it records everything. That means we could walk around the entire building.
Now for the practical side of things. To use the data you need to convert it, as the program records in a raw format. So you upload the files into the program and export them in a usable format like BVH. BVH can be imported into most of the major 3D packages as far as I’m aware, and once imported you can match the data to your rig or take the one from the file. Wikipedia has a list of most of the programs that support the file type here. It’s a pretty standard file format for motion capture, hence most of them support it.
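Part of why BVH travels so well is that it’s just plain text: a HIERARCHY block naming each joint and its channels, then a MOTION block with one row of numbers per frame. Here’s a minimal sketch of pulling that structure apart; the tiny two-joint file is invented for the example, not taken from our capture.

```python
# A minimal, invented two-joint BVH file for illustration.
SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0.0 5.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.0 5.0 0.0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 0.0 0.0 5.0 0.0 0.0
"""

def parse_bvh(text):
    """Collect joint names, total channel count, and the motion rows."""
    joints, channels, frames, frame_time, motion = [], 0, 0, 0.0, []
    in_motion = False
    for line in text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] in ("ROOT", "JOINT"):
            joints.append(tokens[1])
        elif tokens[0] == "CHANNELS":
            channels += int(tokens[1])      # per-joint channel count
        elif tokens[0] == "MOTION":
            in_motion = True
        elif tokens[0] == "Frames:":
            frames = int(tokens[1])
        elif tokens[0] == "Frame":          # the "Frame Time:" line
            frame_time = float(tokens[2])
        elif in_motion:
            motion.append([float(t) for t in tokens])
    return joints, channels, frames, frame_time, motion

joints, channels, frames, frame_time, motion = parse_bvh(SAMPLE_BVH)
print(joints, channels, frames)  # -> ['Hips', 'Spine'] 9 2
```

Each motion row holds one value per declared channel, in hierarchy order, which is why a 3D package can drop the data straight onto a matching rig.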
So why did this mocap break when other, more expensive setups don’t? That’s probably because the better ones use optical tracking. Why is that better? Simple: they use cameras watching tracking markers instead of inertial nodes. Each camera captures the position of every marker it can see, and each marker is usually picked up by multiple cameras, which most of the time prevents the glitching we ran into. However, it has its own problems. For one, it’s super expensive from a student’s standpoint, so there’s that. Second, you’re limited to your capture space, or rather wherever the cameras are pointing. The gear we have, as a positive, only needs the wireless network connected to your PC, so you can walk wherever you want. That wireless tech has massive potential for the future of mocap in animation and full-body VR gaming.
Ex Oblivione, a short story by H.P. Lovecraft, was the inspiration. As a team we each worked on one shot of the sequence (links to the ArtStation pages are in the description of the video), and the sequence as a whole is the main journey through the dreamland and into the afterlife. The song is Gymnopédie No. 1, but I altered it to become more distorted as the scenes play out, to convey a feeling of uneasiness: while this is a happy, calming place for the main character and the viewer, he is still killing himself, and death is an uneasy thing. Yet it’s a relief at the end. The whole time the music builds, and we go deeper and deeper into the dream realm until finally we are released.
I believe we conveyed these emotions effectively, and the end product is appealing to the eye and flows well from shot to shot. Overall our team worked well together, with minimal hiccups towards the end, though there was some tension early on which we tried to resolve as best we could. With a group of five, we were bound to run into some creative differences, but we pulled through and got over them when it came to it. The asset quality is pretty consistent, as are the camera movements, so each of the shots feels like it belongs with the others.
For this production we used Slack to communicate and Google Drive to store and share files among ourselves. As the effective project lead, I set short weekly deadlines for assets or stages of production, in sprints due before the start of class each week, so we could move on to the next thing and help each other fix what we needed to. We communicated almost daily, or at a minimum four to five times a week.
This was the first project in which we as a team made the project plan ourselves, and we adhered to it pretty well. We wanted our shots to be fairly unique in composition and colour scheme but consistent in style and movement, as that was our main goal. Overall we strived to make the viewer feel at ease and somewhat peaceful, yet uneasy, which was a hard feat to manage, but by sticking to what we planned early on we pulled it off for the most part.
The group worked pretty well together, as we helped each other when we could and supported each other’s development of each scene. During the concept phase, where we were drawing out our ideas, one member kept letting himself down with a self-defeating mentality. Positive and negative reinforcement didn’t help all that much, as he was persistent in not believing in himself. Everyone had to do things they didn’t like or weren’t good at to progress and get a better overall product. By being firm but reassuring, I did manage to get him to do what he needed to, and it turned out pretty well. Motivating different people is difficult, as there are a lot of different types of people, and this was my first time dealing with such a self-defeating attitude, but I’ve learnt from it and can handle it better in the future.
After that hurdle everything went pretty smoothly, apart from some of us struggling to get our work into Unreal, some even leaving it to the last day. This was partially my own fault, as I may not have been pushing hard enough to get people to go above and beyond, perhaps because I was burnt out from focusing that energy on one person at the time. Helping everyone I could, we all managed to get our scenes done in the end. Part of this could have come from the criticism we were receiving, or rather the lack of it. At each stage of development we knew how to present our work in a visually appealing way, which meant people either had nothing bad to say or didn’t feel it would be appropriate, because it looked like so much work had already gone into it. Because of this we received no negative feedback from peers during the gallery walks we had. In hindsight, even just a piece of paper saying “leave harsher feedback please” would have helped. I still don’t know with certainty how to get this from outsiders, but the best feedback we got was when people knew that the harsher it was, the more useful it would be.
We followed a pretty simple pipeline: idea, concept, previs, production, assembly, and post, with myself leading the charge as team lead in coming up with the idea. We all took part at each stage, reviewing our own work among ourselves along the way. We used 3ds Max to make the models, then Quixel to create the textures, and then implemented the assets in Unreal. Once each scene was assembled we set up the camera movement, and after that the post-processing.
During production we mostly worked together, ensuring that our assets looked similar and flowed from shot to shot. However, we didn’t do this at first, and we let it slip again later on.
A solution would be simply to work more closely as a team throughout the whole production. Speaking of improvement, the part of this whole process I most want to improve on is leading, as mentioned before, particularly trying to motivate others.
To get better I just need to keep doing more. This was the first project I did in Unreal 4.12, and looking back there are many features we didn’t take advantage of. Our use of online storage to share files was useful and helped with collaboration.
Overall, I did my best with what we had and the situation we were in to make what we wanted to make, and I take responsibility for what I ended up with.
Being in charge of a team is good and I like doing it, but after leading the last three projects, sometimes it’s nice to have a break or let others take bigger roles if they want, with me there to support them and the rest of the team.
Concepting: While not everyone completed concept art in the time frame we wanted, each shot felt unique but still part of a collective.
Post Processing: Each shot was graded in-engine to exaggerate the colour and tone of the shot and push the abstract feelings.
Terrain Painting: The terrain ended up looking good and fit each scene well. However, given the power of the PCs we were using, a smaller terrain would have been more practical.
Texturing: All texturing was done in Quixel to ensure everything fit the style, and it did.
Modeling: From concept to model we kept things pretty close to what we wanted, so the assets suited each other’s style, while keeping the poly count low to ensure it ran well in real time.