All Articles by Johan Malmsten

325 Articles

240317 – Messing about in Fusion

So, I was sitting at the computer. Watching clips on YouTube (as one does) and up popped a video that had a certain clip from a certain Japanese show where young teens are forced to evangelize the birth of neon. And I thought. Huh. I could probably do what I saw in that clip.

Okay, I may not be able to do the crisp character animation of Eva Unit 01. But I was not thinking about that. I was thinking about the background. The red and yellow paint flowing past at incredible speed in the background. Reminding me of when filmmakers with little regard for their own safety film close-ups of volcano eruptions with a telephoto lens.

That. I think I can replicate that, at least.

So I opened Fusion and started connecting nodes. The node tree ended up like this.

Which resulted in this:

Ok. I couldn’t resist putting it angled over the virtual camera. And as there’s no foreground animation I went ahead and made the colors more contrasty.

All the animation in it is procedural. It’s basically just a few fast noise nodes that have been put through some distortions and colorizations. The only thing making it all move is a single expression that the noise nodes are linked together with.

Point(0.0, time*(2/3))

It moves the canvas of the noise upward by two-thirds of the frame height every frame.

The eagle-eyed among you might have noticed that there are indeed two Saver nodes in that node tree. The other one is there because I tried extracting the yellow with a color keyer and put it through an XGlow node from the Reactor toolset (please, someone pry me away from XGlow nodes! I love how they look, but they take up soooo much time in my node trees! ;D)

The result reminded me of some kind of old-school space battle where streaks of lasers burn through the view. Or maybe a kind of atmospheric re-entry of a vehicle. Anyway. It just looked plain cool.

So I had to give it its own render:

I’m not sure what I’ll use these for. It was mainly just an exercise to see if I could put my money where my mouth is, so to speak: to say “I know how to do that” and be sure that I actually can.

Oh, and I’m counting these toward the weekly upload pledge that I am failing so miserably to fulfill. I may know how to make videos, but I struggle with stuff like weekly uploads.

So… anyways…

Be seeing you!

240311 – A Torture Test of XGlows

A series of Fusion Comps where I tortured my computer for hours on end with lots and lots of glows, noise nodes and whatever I could think of. Loosely strung together with an exploring shape that’s sometimes a simple triangle and sometimes a 3D Tetrahedron.

I might revisit some of it to show how the comps were built if viewers want it.

240227 – The Yearly Tradition – aka Thimmi 2024

240130 – New Channel Trailer

I was messing about in Fusion and thought I would make a new channel trailer for the tube of you’s. That is all.

240128 – A BW Test of MalmScope Principles

So, here’s one of those uploads I do that does very little to embiggen my subscriber base. Instead, it is a sequence of clips I shot recently at between 2.5 and 120 fps, converted to BW, graded, and cropped to fit the windowboxing method I am experimenting with.

Shot by me, and auditory noise produced by me with a DeepMind 12.

If, despite all logic, ye are curious what this “MalmScope” thing is all about, you can read a needlessly rambling post on this, my (in)frequently updated webpage:

https://www.jmalmsten.com/malmscope/

240121 – Content of the Week 2403

Hah! Just because I missed the 2nd week doesn’t mean I have to miss the third week deadline!

Now, here you can enjoy the audio portion of the Free Association Experiment I mentioned in the last video.

See you in the next piece of weekly content!

240119 – Content of the Week 2402

Ok, I missed the upload of week 2. To that I can only say FECK.

But… Let’s overcompensate for that with a very ill-advised decision.

CotW 2401 – the Pledge

In a vain attempt to force myself to upload more often, I vowed on New Year’s Eve that I will upload something each week. Something. Anything.

Well, I damn near failed at the first week. I had not filmed anything. So now I was at exactly the point the vow was meant for: I need to do something. Anything.

So, here’s a series of shots filmed in pure desperation. And some sounds that I made to accompany it. Some may call the sounds music. I am not sure I would go that far.

I consider this the lowest-effort baseline. I hope I will make more interesting stuff in the coming weeks.

See you next week for… something… anything.

24 Screams of Christmas

Just in case someone interested missed it: during most of December I uploaded a series of YouTube videos that could charitably be called an “Advent Calendar”. Each day there was Herr Nicht Werner, and each day he presented another scream. Some other shenanigans did ensue. But that was the main thing.

I welcome you now to watch this whole series. Each episode is just a few minutes. So you can probably get through the whole thing in a single sitting.

If this is not actually to your particular fancy… I… I do not know why you are here. 🙂

Oh, and if you wonder why it’s 24 and not 25. Well, here in Sweden, we celebrate eves, not days.

Take care now! Bye bye then!

MalmScope – Or the Constant Resolution Solution to the (nonexistent) problem of Aspect Ratio Sizes

DISCLAIMER:

The following is the text I have been using for a planned video that never seems to materialize. I have now decided to simply make a blog post on this blog that I rarely update. One of these days I might actually finish the accompanying video, and on that day I will add it to this blog post. But as of writing this preface/disclaimer, I just want the .drfx up on the blog so I can point people to it.

TL;DR: the text below references a bunch of stuff that was supposed to be in the post. But in fairness, there is only one downloadable file present, so I put it up here at the top. Read on to find out what it does, why I made it, and how it was made.

Please do enjoy the following diatribe.


Ok. Look. Look at this. This. This is a solution to a problem. But, the thing is. It is not a solution to a global life threatening problem. It is not a problem connected to the current special military operation in Ukraine (the war), it’s not the ongoing economic turmoil faced by the Chinese financial world that could spill over to the world market. It’s not even one of those solutions that is in need of a hitherto not found problem. No. It is far worse. It is a problem. A problem no one actually cares about. Except me… I think.  

Indeed. This solution even makes for a situation that is probably worse than what we have today. But it is a solution to a problem that has been bothering my mind for quite a few years now. Let me explain.

Here’s the thing. The thing. The problem. 

I love movies. I’ve watched them all my life. I have been enjoying them as they are presented. And as I matured to be a teenager, I even got some bizarre want to make some myself. And so I began to search out filmmaking theories and practices. It became slightly obsessive. And as my own collection of movies grew. I jokingly realized I was buying DVDs, and later on, Blurays, not so much based on how good they were as movies, but rather, what I could learn from studying them. And more importantly, their supplements. The extras and featurettes, the commentaries, the interactive menus and the bonus features. 

And one of the earliest things that I started to think about was… the aspect ratio. The shape of the image. And what we as viewers get to see when we pay to watch these cinematic extravaganzas. Is it what we are supposed to see? Or is it cropped? And… is it really worth cropping an image to fill the finite pixels on screen?

Yes. Full screen. Pan and scan. Anamorphic widescreen, letterboxes and pillarboxes. And over the years, I started to catalogue my findings and see repeating trends. And as IMAX started to partner up with movie studios to make thrill rides in a format mainly used for educational science projects, one of these recurring things recurred. Namely: the shape of the screen became a selling point once again. It was the widescreen debate again, just like in the fifties, and in the silent ages, and the late 90s. But now in reverse. We had gotten used to Hollywood presenting things in CinemaScope. And it was about as wide as we could possibly tolerate. There were wider options, sure, but Scope became the normal wide alternative. So now we were sold movies where the main point is the verticality they can let you see. And it is always illustrated in the same way. You have the biggest picture. And you crop it to show how much you lose by watching the film in a lesser venue. And…

And simply it did not sit right with me. Why? Because. They are using the shape to sell a size. And it is always with the assumption that the originating format is the intended format and that showing all of it is the intended thing. And that the originating shape corresponds with the biggest size that a cinema can offer. None of those things are necessarily true. 

You have seen these illustrations, I am sure. They show a full IMAX frame, and overlaid on it are markings that show how much is cropped for each way you can watch it. Usually it has full 15-perf IMAX, 5-perf vertical 70mm, 35mm anamorphic, and DCI for 2K and 4K. And yes, looking at it like that, it makes you wonder why not all movies are presented in 1.43:1 IMAX. But remember: the exact same tactic was used to sell us on widescreen releases back in the day. Only then it had a full CinemaScope image as the base, and from that you’d show the crops for 16:9 and 4:3. Look at that with the same logic and you’ll say: why not make all movies in CinemaScope and show the full width? And that’s not even mentioning productions shot on Super 35, where the negative is 1.37:1 and the result is cropped to 4:3, 16:9, 2.35:1 or whatever the filmmakers want. The Matrix and Independence Day and Terminator 2 and on and on. (Man, my references are old.) These were released with full-width Scope prints to theaters, and on home video they used more vertical space to fit 4:3 TVs without the usual compromises that a strictly widescreen negative entails when doing the conversion.

So. Going back to my solution to a problem that no one cares about. The problem is what I have touched upon here. That, when we discuss aspect ratios, that is, the shape of the image. We always assume there is this biggest shape that fits the mastering medium and we crop from that. So. If the mastering medium is 35mm cinemascope. Going to any other shape will always mean the image gets smaller. Same for 4:3 CRT tubes and 1.43:1 IMAX film. You get a smaller image than you could by filling the screen. It is all very logical. I think it is. Or?

Or does it have to be like this? 

What if we could choose the shape not based on whether it would be bigger or smaller. But. What the shot actually needs? I mean. Even in the cases where a movie storyteller wants to play with the shape. It is always done only in one dimension. The other dimension is always constant. Even when they venture out to muck about with both variables. It is always assumed that we should make each aspect ratio as big as possible for our distribution media. That we should max it out. When Wes Anderson made that film about a quirky adventure at a hotel. We got both 4:3 and 2.35:1. But they were maxed out and when they had sections in 1.85:1 it was basically full screen 16:9. This made both the 4:3 and 2.35:1 sections feel smaller than the 1.85:1 sections. I didn’t feel the width of 2.35:1, likewise I didn’t feel the height of 4:3. Because I was reminded that 1.85:1 was just as tall and wide as both of them together. 

So. Ok. Now I am going to propose part one of my solution. Which is… learn to accept window-boxing.

Yes. Window-boxing. 

For those unfamiliar with the term. It is the bastard stepchild inbred sibling of the more well known ways of presenting cropped images in film. 

Letterboxing is when you reduce the height but keep the width. It got its name from how it looked as if you were peering into a home through a letterbox. Yes, children, in them olden days. Mail came on physical paper to your home, through a hole or a box. Sometimes that hole was in your front door and it was apparently so common to peer through them that people understood what you meant by “letterboxing” an image. Kind of a creepy situation, now that I think about it. 

Pillarbox likewise keeps the height but puts black bars on the sides. It looks as if you watch things between pillars. These names are very creatively chosen, indeed.

Windowboxing is when you combine the two. You get a black frame all around. And since you normally have no excuse to do this (especially now that we have gotten away from overscan on televisions), it is looked at as a sign of a mistake, because you are essentially just wasting valuable screen area. Traditionally, even if you crop both the width and the height of the recording medium, you still usually would want to scale it up to fill out the target ratio with either pillar- or letterboxing.

These are the accepted facts. 

My proposal is to re-evaluate the third one. To make windowboxing acceptable. Make it work better than what we think we should do. 

To get back to The Grand Budapest Hotel. I am proposing that Wes Anderson should have windowboxed the 1.85:1 segments. That way. When it switches to scope. You get a wider view. And when you have the academy ratio, you get a taller image. You get the best of both worlds. 

Ok. But should the 1.85:1 bits really be the smallest ones? Confined by the height of scope and the width of academy? No, because then you are using the same old thinking I want to get away from. 1.85:1 should be just as significant on screen as the other shapes. 

So… that’s the rub… that question. If it should be windowboxed, and the purpose isn’t to make it feel smaller than the other ratios, how small should we make it? Between this and this, what is the appropriate scaling? Well, that’s the second part of my ruminations. That’s where my mathematical figures play their part.

So. I started to dabble around in various forms. Using several techniques to try to get something sensible out of this nonsensical task I wanted to complete. 

At first, I went for the naïve approach. I took a canvas, put in the widest and tallest ratios to get the extremes, and drew a straight line between the corners. I then made crop guides where each aspect ratio between the two extremes touched the guidelines. Ok, that is one result. But it did bother me. Doing it like this did not get me the actual pixel dimensions; I would always need to draw that guideline to work out the dimensions of each ratio visually. And I wondered if this even was a fair approach. Is this really a way to get an image that is of equivalent size when comparing the ratios?

As a side thing I also experimented making the guideline into a curve. Trying to mimic the intersection as if it was made with an ellipse instead of a rotated rectangle. I made a bunch of those crop guides and while it was a nice collection of rectangles it still felt like a very imprecise method to go about this. 

I am a subscriber to Matt Parker’s channel. I wanted the impartiality of science. I wanted the dimensions not to be arbitrarily chosen. I wanted the assurance of… maths! And maybe a Klein Bottle…

So, to make this problem into something solvable by maths, I needed to boil it down to variables and constants. And I needed to decide what I wanted the solution to adhere to and fulfill.

So. To start things off. I searched my feelings. I let go. I made the first decision based on logic. Since both height and width in this comparison are variable. The one thing that can be constant between the resolutions is the resulting resolution. 

X and Y in this equation are therefore unknown, but Resolution is known, because it is derived from the known X and Y of another, original set of dimensions.

So. We can now have this:

originalX * originalY = Resolution

And:

newX * newY = Resolution

In those two equations only newX and newY are unknown. And Resolution is the same in both. 

I know. It’s not exactly quantum maths. But here’s where MY brain got stumped. 

If we keep the numbers tiny, we can have an example like this. With a 4:3 ratio being converted to a 6:2 ratio with a constant resolution:

4 * 3 = 12

newX * newY = 12

It will work if newX is 6 and newY is 2, because 6*2 = 12. The numbers are tiny enough that you can guess the right result. And I made it easier on myself by saying 6:2 instead of 3:1, even though the two are mathematically equal.

But. Let’s throw it into a real world scenario. 

4:3 in a 1080p master is commonly shown as 1440 wide and 1080 high. Now let’s say you want to show a 1.9:1 ratio with the same amount of pixels as that 4:3 image.

So. Let’s populate the equation:

1 440 * 1 080 = 1 555 200

newX * newY = 1 555 200

Ok, now it gets harder to just guess what newX and newY should be to have both an aspect ratio of 1.9:1 and that 1 555 200 resolution.

Again, I was stumped. For years I couldn’t figure it out. My brute-force solution was to… just brute-force each ratio. Yes, I would simply type a height into a calculator, multiply it by the ratio, and make a spreadsheet of the results, adjusting the height of each try until the result was as close to the target resolution as possible. Until I landed at the result of:

1716 * 906 = 1 554 696

But that is such a terribly inefficient method. Again, it’s not exactly rocket science. And I do enjoy a good spreadsheet at regular intervals. I should be able to get there quicker than trial and error.
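In code, that spreadsheet method boils down to a loop like this (a hedged Python sketch of my trial-and-error, not something I actually ran at the time; it steps the height down until the pixel count first fits under the target):

```python
def brute_force_dims(target_res, ratio):
    """Step the height down until width * height first fits at or under
    the target resolution. This is the spreadsheet method, automated."""
    for y in range(2000, 0, -1):
        x = round(y * ratio)
        if x * y <= target_res:
            return x, y
    return None

# 1.9:1 with the same pixel budget as 1440x1080
print(brute_force_dims(1555200, 1.9))  # → (1718, 904)
```

(The result differs a touch from my 1716×906 above because I also preferred even heights when fudging by hand.)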

So. Years passed. I had basically given up. My daytime job had some restructuring, and I found I had an opportunity to take some classes. I took maths for entirely unrelated reasons. But this problem lurked in the background. Maybe I could tackle it one day. And shortly after, I had a literal Eureka moment. In a shower, not a bath. But I stood there, slack-jawed. Holy carp. THAT’S IT!

To put my old thinking into context: since newX and newY are written out as two variables, I had thought of it as a two-variable problem, and as such the solutions involved graphs and plotting and could give two answers where only one would be relevant. BUT!

The realisation that struck me is that newX is not independent of newY. No, newX can only be one thing for each newY. Yes, newX is completely determined by newY*newRatio. So therefore…

newX = newY*newRatio

So this

newX * newY = Resolution

Is exactly the same as:

(newY * newRatio) * newY = Resolution

Yes, I know the multiplication marks are a bit redundant with the parentheses. But I prefer to be overtly clear about these things. Nothing should be considered a given. If I can misunderstand it, I WILL. And I don’t want that.

Anyways. And since I know what newRatio is (it’s the x of an x:1 aspect ratio, which is easy to calculate by just dividing width by height: 16:9 is 16/9, which is roughly 1.78, and you just put :1 to the right of it), I have now reduced the problem to one with only one variable. As long as I can find out what newY is, I get newX for free!

So. With basic algebra I restructured it so all the knowns are on one side and the sole unknown is on the other. 

So, 

(newY * newRatio) * newY = Resolution

becomes

newY^2 * newRatio = Resolution

Which becomes 

newY^2 = Resolution / newRatio

Which finally is 

newY = sqrt(Resolution/newRatio)

And to put that to the test we take the example: 

1 440 * 1 080 = 1 555 200

Solving with 1.9:1 aspect ratio:

newY = sqrt(1 555 200 / 1.9)

Which is 

newY = 904.724…

And since

newX = newY * newRatio

We have

newX = 904.724… * 1.9 ≈ 1719

So.

newX is 1719

And 

newY is 905. 

And as such, I ran around the town streets, naked, flailing my arms about, shampoo and lukewarm water getting everywhere, laughing maniacally. It is done! It has been solved! I can now get new, accurate dimensions at any ratio while keeping the resolution constant! All in one beautiful equation! Ok, it’s two, but still! And dodging more police officers than I thought would be out at this time of day. By the time I realized where I was and what state I was in, I was already caught. I had been fitted with a nice new snug but kind of constricting jacket. And now I was transported to a fine facility where I was told I would be greeted by specialists in fields that would be beneficial to my current predicament. Top men, they said reassuringly… top… men…

A few court cases later, where crying children and angry parents on witness stands really wanted me to stay indoors for the foreseeable future, I was nevertheless let go. Deemed maybe not completely mentally sane, but at least not a danger to myself and otters… I mean others! Or maybe I meant otters.

Nonetheless. While my story here may have been rambling, and in a few cases… exaggerated… that is largely how I ended up with this formula. Using it, you can get within a couple of pixels of the dimensions needed to take one source image’s resolution and make another aspect ratio while keeping the resolution constant.
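If you’d rather have the formula as code than as algebra, here is the whole thing as a tiny Python sketch. The rounding of each dimension to the nearest even number is my own addition for codec-friendliness; the 1716×906 I quoted above came from an extra round of manual fudging:

```python
import math

def constant_res_dims(base_w, base_h, new_ratio):
    """newY = sqrt(Resolution / newRatio), newX = newY * newRatio,
    with both dimensions rounded to the nearest even number."""
    resolution = base_w * base_h
    new_y = math.sqrt(resolution / new_ratio)
    new_x = new_y * new_ratio
    return round(new_x / 2) * 2, round(new_y / 2) * 2

# 4:3 in a 1080p master (1440x1080) reshaped to 1.9:1
print(constant_res_dims(1440, 1080, 1.9))  # → (1718, 904)
```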

I put it in a google sheets doc (Disclaimer: This sheets doc is not made public yet) to make the process even more automated. 

Now, the pedants out there probably noticed that I fudged the numbers slightly. But it was only to make the pixel dimensions even (since computers hate odd numbers in general) and to get the total resolution below the source resolution instead of above it, for general neatness.

Now! In conclusion!

You may be wondering: where does this formula take us? If I did persuade you in the first bit, how do you use it? Should you bother? Are there game-breaking pitfalls when using it? What does this mean for viewers at home and in cinemas? Can we re-evaluate hardware in current setups?

First off: if the story gains nothing from playing with the shape of the screen, do not bother. Pick one shape. Keep it maximized throughout. This whole thing only really makes sense if you are intending to mix ratios and are willing to open the Pandora’s box of issues viewers will think they have when watching something mastered the way I propose. Remember the nighttime battle in Game of Thrones? Some of you are still bitter about it. I assure you, there will be viewers who will make Game of Thrones fans look meek and compliant if you mess with the eldritch horrors of windowboxing.

But if you decide to sign my waiver of responsibilities, how do I propose you use this formula? Well, here’s my suggestion:

  1. You decide what shape the shot needs to be.
  2. You shoot it in a way that ensures there are as many pixels as possible left in your budget after cropping to that ratio.
  3. You make a windowbox that suits your master frame (usually one of the broadcast or projection standards). You use the base resolution in the formula and derive the new dimensions from it.
  4. You scale and frame the source video to fit that windowbox.
  5. Go to 1 for the next shot. Rinse and repeat.
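Steps 3 and 4 boil down to a bit of arithmetic: derive the windowbox dimensions from the base resolution, then center the box in the master frame. A sketch (the function name and the rounding-to-even are my own choices):

```python
import math

def windowbox_rect(master_w, master_h, base_w, base_h, ratio):
    """Return (x, y, w, h) of a centered windowbox inside the master frame.
    base_w x base_h is the resolution the sizes are derived from,
    e.g. 1440x1080 for 4:3 inside a 1080p master."""
    res = base_w * base_h
    h = round(math.sqrt(res / ratio) / 2) * 2   # even height from the formula
    w = round(h * ratio / 2) * 2                # even width from the ratio
    return (master_w - w) // 2, (master_h - h) // 2, w, h

# a 1.9:1 windowbox, constant-resolution with 1440x1080, centered in 1920x1080
print(windowbox_rect(1920, 1080, 1440, 1080, 1.9))  # → (101, 88, 1718, 904)
```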

Yes. That is how easy it should be. But ok, I get it. Most people don’t have time to build windowboxes for each shot shape. And I mean… only a madman would spend the man-minutes needed to create a bunch of them and upload them in a big package to the internet…

Yes. 

Yes I did. I set up tons of vector shapes in layers in Krita and used the formula to get dimensions for each aspect ratio I could think of. See the link in the description to find that zip file. (Disclaimer: I never did upload them… sorry.) They are very simply built: just a black frame around the white. To make the white transparent, you can simply use an appropriate blending mode in your NLE of choice. I prefer Multiply. If you know of a better one, add it to the desert of the comments.

As you may see in the file structure, these crop guides are organized in folders according to image resolution. And under those, for most of them, there is a level of folders corresponding to the master frame of the system. To help you navigate these, I have made this spreadsheet. It shows what the dimensions are at that resolution for each and every aspect ratio.

It should also be noted that, in order for these to work as intended, you need to add them to the target timeline with no added scaling. It should be centered and pixel-for-pixel, 100% scale. In Resolve (my favourite), this can be set for the whole project in Project Settings > Image Scaling > Mismatched Resolution Files > Center Crop With No Resizing. To have it specific to the timeline in question, look in the timeline settings for the same setting. And you may even want only these windowboxes to behave this way, while the actual filmed footage resizes to fit the master frame for ease of use. In that case, you can override the timeline and project settings for single files: select the clip, open the Inspector, and under Retime and Scaling set Scaling to Crop to retain the 1:1 pixel scale of the source file.

And for those who wondered about step 2 in that list of suggested steps: that is very dependent on the camera you have access to. In some cameras, for instance, you can gain extra pixels in the vertical dimension by setting the camera to film in a 4:3 mode or similar. On my own Panasonic GH4 I can use such a 4:3 mode by going into the menu and finding a cryptically named setting that was intended to be used with anamorphic lenses. With it I can get more vertical pixels for aspect ratios narrower than 1.54:1 and maximize the resulting resolution post-crop. For example: if I intend to shoot on that camera and plan on cropping the sides to IMAX-shaped 1.43:1, I have basically two options. Either use UHD 3840×2160 recording and crop to 3088×2160 for a maximum resolution of 6 670 080, or use that same 4:3 mode to record 3328×2496 and crop the height to 3328×2328, which has a resolution of 7 747 584. Yes. You gain a whole megapixel by choosing an appropriate recording setting.
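The arithmetic behind those GH4 numbers is just “crop the longer dimension to the ratio, keep even pixel counts”. A sketch, if you want to compare your own camera’s recording modes:

```python
def cropped_resolution(w, h, target_ratio):
    """Crop a recording format to target_ratio (the x of an x:1 ratio),
    keeping as many pixels as possible; the cropped dimension is rounded
    to the nearest even number."""
    if w / h > target_ratio:                  # wider than target: crop the sides
        w = round(h * target_ratio / 2) * 2
    else:                                     # taller than target: crop top/bottom
        h = round(w / target_ratio / 2) * 2
    return w, h, w * h

print(cropped_resolution(3840, 2160, 1.43))  # UHD mode → (3088, 2160, 6670080)
print(cropped_resolution(3328, 2496, 1.43))  # 4:3 mode → (3328, 2328, 7747584)
```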

But I do digress.

Especially as I now have gotten hold of a Blackmagic Pocket Cinema Camera 6K 2nd Gen which has very different ratios and resolutions to play with. Oh what fun.

Just look into your documentation that comes with the camera to find out what settings are best for you. 

So wait. Why do I even have all these folders of slightly different resolutions in that package of windowbox crop guides? Surely there should be a method that involves even less end-user input? I mean, I can almost hear you now. You look through the files and you find that the specific shape you need is not there. You ABSOLUTELY POSITIVELY must use a shape of 1.47:1, and you need it for your timeline resolution of 1371×99999, and neither 1.43:1 nor 1.50:1 will be acceptable to your pixel-peeping eyes. Surely there cannot be a solution that lets you choose ANY source resolution and target ratio? Surely no one would spend hours of their life building something that spits out the correct shape, doesn’t need separate image files, and can be imported into the NLE to do all of this for you? Did I not say at the beginning of this whatever-length video that this is a problem no one has, and that no one should bother about?

Yes… I built a Fusion Macro where you enter the source resolution. It calculates the correct new resolution, makes a rectangle, and turns it into a windowbox for any resolution and any target ratio. It can be used both in Fusion and in Resolve’s Edit page…

And I have it here for free… link in the description. 

Feel free to ungroup it in Fusion to customize things to your heart’s content.

I edited this whole mess of a video using this macro. Did you enjoy it? I did.

Now… Whatever should I do with my life when I finally have solved this ancient problem that has not bothered filmmakers worldwide for years.

I think I will lie down in my bed. And I will sleep…

And no. 

I will not do this thing for After Effects or Premiere Pro. Or Final Cut, or any of the open-source NLEs that I have not been able to run as well as the proprietary DaVinci Resolve.

Why?

Because I can’t be bothered. It’s all in here if you want to rebuild it for other platforms. If I can do the maths, then surely most of you can as well. All I ask is that you credit me in a text note somewhere or something. My ego wants the attention.

Good bye now. I need to return to my other eternity projects that seemingly will never see endings. 

Please. Just go away. I need to sleep.

Oh for the love of…

(THE END)
…? 

MOAR KALEIDOSKOPIK!!!

No, it didn’t really sit right with me, the latest video being called “kaleidoscopic”. I mean, I liked the visuals. But it was not exactly “kaleido”, just “scopic”. I think.

So, please do enjoy this attempt where I go full-on with the kaleidoscopic visuals.

Nearly everything on the audio side was produced in a few first takes with my Behringer DeepMind 12 at various presets, recorded and layered in Audacity. There is some kind of electrical interference noise present that I need to troubleshoot. But overall I am very pleased with the result, even if the sound is a bit noisy and muddily mixed at times.

The visuals were all made in DaVinci Resolve’s Fusion tab: a single noise node filtered and mirrored multiple times, and colorized at the end by a second noise node.

Now please enjoy…

Well, I say enjoy… (looking awkwardly off-camera)

Something Kaleidoscopic This Way Cometh

Last night I had an urge to do something kaleidoscopic. No real plan beyond that. So this is a fast noise with a duplicate node giving 100 duplicates. Constantly rotating. Interacting with each other. And the usual film treatment on top.

The sound is a drone where I turned on my DeepMind 12 and found that the preset it happened to be on behaved very cool when you just held a note. So I held two low notes and pressed the hold key to keep them down virtually. Then I just recorded the output into Audacity while manipulating the various faders and the volume knob on the synthesizer during the 10+ minute runtime. Just a compressor in post to even out the volume as it drifts in and out. I was planning on adding more layers of sound, but this raw, evolving drone was just too neat-sounding to risk drowning out.

SpaceWater (Short)

Abstract forms dance in front of a field of stars. Just an abstract experiment. Presented in Black and White with stereophonic sound in select venues.

_____________________________________

Shot with #BMPCC6KG2. #BRAW 12:1, 2.7K 120fps.
Found sounds collected with #Zoom #M4 #Mictrak.
Synth sounds created with #VCVRackV2 Sounds processed with #Audiothings #Reels, Audiothings #Springs and #Softube #TapeEchoes

Edited and graded in #BlackmagicDesign #DavinciResolve and rendered in glorious #MonoChrome #BlackAndWhite

Learning Blender 3D’s Grease Pencil – Day 1 & 2

After much temptation, I have now finally started my attempts to learn Grease Pencil in Blender 3D. I have dabbled with Blender in general for a while, doing some abstract models and animations. But now is the time for me to jump in and do what I have spent most of my hobby time doing: 2D animation.

This will be an intermittent series of posts where I simply document what I am doing in Grease Pencil, following various tutorials and trying to find ways to learn this thingamajig well enough to call myself proficient in it.

Day 1 consisted of just getting the hang of the interface: how to draw simple lines, and how to make the keyframes play in the order I want. And what better way to do that than to bring out ye olde bouncing ball. When all else fails, one never can go wrong with the bouncy ball.

Day 2 is today and I went ahead doing some more bouncy balls.

But balls are fun and all, though I wanted to try out colors. So instead of a bouncy ball, here’s a blinking ducky… thing…

Ok… I realize now that exporting these as videos might not be that great of an idea, as they are very short loops. But with that ducky thingy I did find a rather nice workflow where I basically set up each color as a material. I can then hot-swap them after I’ve done the coloring of the drawings, and it automatically updates on all frames that use that material/color. I mean… this is a feature I have heard of for years, and it seems like a very nice thing to have when doing big projects. So in a sense, it’s basically just me being late to the proverbial party.

Oh, well..

I’ll see if I can get some more stuff through this thing.

Oh, and holy heck it’s been a long time since I did anything on this site.

210628 GoProHero9BlackSlowestMoTest

210503 – “Hey!” short

210406

210321 – Yet another sped up twitch stream

As the title says, it is another one of them. I need to set something up so I can make these on a more regular basis. And actually know what I am supposed to animate before I start to stream to an audience of… 1… I think that’s a bug… It’s probably zero viewers.