Category Archives: Blog

240317 – Messing about in Fusion

So, I was sitting at the computer. Watching clips on YouTube (as one does) and up popped a video with a certain clip from a certain Japanese show where young teens are forced to evangelize the birth of neon. And I thought. Huh. I could probably do what I saw in that clip.

Okay, I may not be able to do the crisp character animation of Eva Unit 01. But I was not thinking about that. I was thinking about the background. The red and yellow paint flowing past at incredible speed. Reminding me of when filmmakers with little regard for their own safety film close-ups of volcano eruptions with a telephoto lens.

That. I think I can replicate that, at least.

So I opened Fusion and started connecting nodes. And the result was this.

Which resulted in this:

Ok. I couldn’t resist putting it angled over the virtual camera. And as there’s no foreground animation I went ahead and made the colors more contrasty.

All the animation in it is procedural. It’s basically just a few fast noise nodes that have been put through some distortions and colorizations. The only thing making it all move is a single expression that the noise nodes are linked together with.

Point(0.0, time*(2/3))

It moves the canvas of the noise upwards by 2/3 of the frame height per frame.
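If you prefer to think of it in code, the expression amounts to something like this (a Python sketch of the Fusion behaviour, where `time` is just the frame number):

```python
def noise_offset(frame):
    """Sketch of the Fusion expression Point(0.0, time * (2/3)).

    Returns the noise canvas center as (x, y) in Fusion's normalized
    coordinates, where 1.0 equals the full frame height.
    """
    return (0.0, frame * 2 / 3)

# Every frame the canvas climbs another 2/3 of the frame height,
# which is what makes the paint look like it is rushing past.
print(noise_offset(3))  # -> (0.0, 2.0)
```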

The eagle-eyed among you might have noticed that there are indeed two Saver nodes in that node tree. The second is there because I tried extracting the yellow with a color keyer and put it through an XGlow node from the Reactor toolset (please, someone pry me away from XGlow nodes! I love how they look, but they take up soooo much time in my node trees! ;D)

The result reminded me of some kind of old school space battle where streaks of lasers burn through the view. Or maybe a kind of atmospheric re-entry of a vehicle. Anyway. It just looked plain cool.

So I had to give it its own render:

I’m not sure what I’ll use these for. It was mainly just an exercise to see if I can put my money where my mouth is, so to speak. So that I can say “I know how to do that” and be sure that I actually can.

Oh, and I’m counting these toward the weekly upload pledge that I am failing so miserably to fulfill. I may know how to make videos, but I struggle with stuff like weekly uploads.

So… anyways…

Be seeing you!

24 Screams of Christmas

Just in case someone interested missed it: during most of December I uploaded a series of YouTube videos that could charitably be called an “Advent Calendar”. Each day there was Herr Nicht Werner, and each day he presented another scream. Some other shenanigans did ensue. But that was the main thing.

I welcome you now to watch this whole series. Each episode is just a few minutes. So you can probably get through the whole thing in a single sitting.

If this is not actually to your particular fancy… I… I do not know why you are here. 🙂

Oh, and if you wonder why it’s 24 and not 25. Well, here in Sweden, we celebrate eves, not days.

Take care now! Bye bye then!

MalmScope – Or the Constant Resolution Solution to the (nonexistent) problem of Aspect Ratio Sizes

DISCLAIMER:

The following is the text I have been using for a planned video that never seems to materialize. I have now decided to simply make a blog post on this blog that I rarely update. One of these days I might actually finish the accompanying video. And on that day, I will add it to this blog post. But as of writing this preface/disclaimer, I just want the drfx up on the blog so I can point people to it.

TL;DR: the text below references a bunch of stuff that was supposed to be in the post. But in fairness, there is only one downloadable file present. So I put it up here at the top. Read on to find out what it does, why I made it, and how it was made.

Please do enjoy the following diatribe.


Ok. Look. Look at this. This. This is a solution to a problem. But, the thing is. It is not a solution to a global life threatening problem. It is not a problem connected to the current special military operation in Ukraine (the war), it’s not the ongoing economic turmoil faced by the Chinese financial world that could spill over to the world market. It’s not even one of those solutions that is in need of a hitherto not found problem. No. It is far worse. It is a problem. A problem no one actually cares about. Except me… I think.  

Indeed. This solution even makes for a situation that is probably worse than what we have today. But it is a solution to a problem that has been bothering my mind for quite a few years now. Let me explain.

Here’s the thing. The thing. The problem. 

I love movies. I’ve watched them all my life. I have been enjoying them as they are presented. And as I matured to be a teenager, I even got some bizarre want to make some myself. And so I began to search out filmmaking theories and practices. It became slightly obsessive. And as my own collection of movies grew. I jokingly realized I was buying DVDs, and later on, Blurays, not so much based on how good they were as movies, but rather, what I could learn from studying them. And more importantly, their supplements. The extras and featurettes, the commentaries, the interactive menus and the bonus features. 

And one of the earliest things that I started to think about was… the aspect ratio. The shape of the image. And what we as viewers get to see when we pay to watch these cinematic extravaganzas. Is it what we are supposed to see? Or is it cropped? And… is it really worth cropping an image to fill the finite pixels on screen?

Yes. Full screen. Pan and scan. Anamorphic widescreen, letterboxes and pillarboxes. And over the years, I started to catalogue my findings and see repeating trends. And as IMAX started to partner up with movie studios to make thrill rides in a format mainly used for educational science projects, one of these recurring things recurred. Namely: the shape of the screen became a selling point once again. It was the widescreen debate again, just like in the fifties, and in the silent ages, and the late 90s. But now in reverse. We had gotten used to Hollywood presenting things in Cinemascope. And it was about as wide as we could possibly tolerate. There were wider options, sure, but Scope became the normal wide alternative. So now we were sold movies where the main point is the verticality they can let you see. And it is always illustrated in the same way. You take the biggest picture. And you crop it to show how much you lose by watching a film in a lesser venue. And…

And simply it did not sit right with me. Why? Because. They are using the shape to sell a size. And it is always with the assumption that the originating format is the intended format and that showing all of it is the intended thing. And that the originating shape corresponds with the biggest size that a cinema can offer. None of those things are necessarily true. 

You have seen these illustrations, I am sure. It shows a full IMAX frame. And overlaid on it are markings that show how much is cropped for each way you can watch it. Usually it has full IMAX 15-perf, 5-perf vertical 70mm, 35mm anamorphic, and DCI 2K and 4K. And. Yes. Looking at it like that, it makes you wonder why not all movies are presented in 1.43:1 IMAX. But. Remember. The exact same tactic was used to sell us on widescreen releases back in the day. Only then, it had a full Cinemascope image as the base, and from that you’d show the crops for 16:9 and 4:3. Look at that with the same logic and you’ll say: why not make all movies in Cinemascope and show the full width? And that’s not even mentioning productions shot on Super 35, where the negative is 1.37:1 and the result is cropped to 4:3, 16:9, 2.35:1 or whatever the filmmakers want. The Matrix and Independence Day and Terminator 2 and on and on. (Man, my references are old.) These were released with full-width scope prints to theaters, and on home video they used more vertical space to fit 4:3 TVs without the usual compromises that a strictly widescreen negative entails when doing the conversion.

So. Going back to my solution to a problem that no one cares about. The problem is what I have touched upon here. That, when we discuss aspect ratios, that is, the shape of the image. We always assume there is this biggest shape that fits the mastering medium and we crop from that. So. If the mastering medium is 35mm cinemascope. Going to any other shape will always mean the image gets smaller. Same for 4:3 CRT tubes and 1.43:1 IMAX film. You get a smaller image than you could by filling the screen. It is all very logical. I think it is. Or?

Or does it have to be like this? 

What if we could choose the shape not based on whether it would be bigger or smaller. But. What the shot actually needs? I mean. Even in the cases where a movie storyteller wants to play with the shape. It is always done only in one dimension. The other dimension is always constant. Even when they venture out to muck about with both variables. It is always assumed that we should make each aspect ratio as big as possible for our distribution media. That we should max it out. When Wes Anderson made that film about a quirky adventure at a hotel. We got both 4:3 and 2.35:1. But they were maxed out and when they had sections in 1.85:1 it was basically full screen 16:9. This made both the 4:3 and 2.35:1 sections feel smaller than the 1.85:1 sections. I didn’t feel the width of 2.35:1, likewise I didn’t feel the height of 4:3. Because I was reminded that 1.85:1 was just as tall and wide as both of them together. 

So. Ok. Now I am going to propose part one of my solution. Which is… learn to accept window-boxing.

Yes. Window-boxing. 

For those unfamiliar with the term. It is the bastard stepchild inbred sibling of the more well known ways of presenting cropped images in film. 

Letterboxing is when you reduce the height but keep the width. It got its name from how it looked as if you were peering into a home through a letterbox. Yes, children, in them olden days. Mail came on physical paper to your home, through a hole or a box. Sometimes that hole was in your front door and it was apparently so common to peer through them that people understood what you meant by “letterboxing” an image. Kind of a creepy situation, now that I think about it. 

Pillarbox likewise keeps the height but puts black bars on the sides. It looks as if you watch things between pillars. These names are very creatively chosen, indeed.

Windowboxing is when you combine the two. You get a black frame all around. And since you normally have no excuse to do this (especially now that we have gotten away from overscan on televisions), it is looked at as a sign of a mistake. Because you are essentially just wasting valuable screen area. Traditionally, even if you crop both the width and the height of the recording medium, you would still usually want to scale the image up to fill the target ratio with either pillarboxing or letterboxing.

These are the accepted facts. 

My proposal is to re-evaluate the third one. To make windowboxing acceptable. Make it work better than what we think we should do. 

To get back to The Grand Budapest Hotel. I am proposing that Wes Anderson should have windowboxed the 1.85:1 segments. That way. When it switches to scope. You get a wider view. And when you have the academy ratio, you get a taller image. You get the best of both worlds. 

Ok. But should the 1.85:1 bits really be the smallest ones? Confined by the height of scope and the width of academy? No, because then you are using the same old thinking I want to get away from. 1.85:1 should be just as significant on screen as the other shapes. 

So… that’s the rub… that question. If it should be windowboxed. And the purpose isn’t to make it feel smaller than the other ratios. How small should we make it? Between this and this. What is the appropriate scaling? Well. That’s the second part of my ruminations. That’s where my mathematical figures play their part. 

So. I started to dabble around in various forms. Using several techniques to try to get something sensible out of this nonsensical task I wanted to complete. 

At first. I went for the naïve approach. I took a canvas. I put in the widest and tallest ratios to get the extremes. And I drew a straight line between the corners. I then made crop guides where each aspect ratio between the two extremes touched the guidelines. Ok. That is one result. But it did bother me. Doing it like this did not get me the actual pixel dimensions. I would always need to draw that guideline to calculate the dimensions of each ratio visually. And I wondered if this even was a fair approach. Is this really a way to get an image of equivalent size when comparing the ratios?

As a side thing I also experimented making the guideline into a curve. Trying to mimic the intersection as if it was made with an ellipse instead of a rotated rectangle. I made a bunch of those crop guides and while it was a nice collection of rectangles it still felt like a very imprecise method to go about this. 

I am a subscriber to Matt Parker’s channel. I wanted the impartiality of science. I wanted the dimensions not to be arbitrarily chosen. I wanted the assurance of… maths! And maybe a Klein Bottle…

So to make this problem into something solvable by maths, I needed to boil it down to variables and constants. And I needed to decide what I wanted the solution to adhere to and fulfill.

So. To start things off. I searched my feelings. I let go. I made the first decision based on logic. Since both height and width in this comparison are variable. The one thing that can be constant between the resolutions is the resulting resolution. 

X and Y in this equation are therefore unknown, but Resolution is known. Because it is derived from the known X and Y of another resolution.

So. We can now have this:

originalX * originalY = Resolution

And:

newX * newY = Resolution

In those two equations only newX and newY are unknown. And Resolution is the same in both. 

I know. It’s not exactly quantum maths. But here’s where MY brain got stumped. 

If we keep the numbers tiny, we can have an example like this. With a 4:3 ratio being converted to a 6:2 ratio with a constant resolution:

4 * 3 = 12

newX * newY = 12

It will work if newX is 6 and newY is 2. Because 6*2 = 12. The numbers are tiny enough that you can guess the right result. And I made it easier on myself by saying 6:2 instead of 3:1 even though the two are mathematically equal. 

But. Let’s throw it into a real world scenario. 

4:3 in a 1080p master is commonly shown as 1440 wide and 1080 high. Now let’s say you want to show a 1.9:1 ratio with the same amount of pixels as that 4:3 image.

So. Let’s populate the equation:

1 440 * 1 080 = 1 555 200

newX * newY = 1 555 200

Ok. Now it gets harder to just guess what newX and newY should be to have both an aspect ratio of 1.9:1 and that 1 555 200 resolution.

Again. I was stumped. For years I couldn’t figure it out. My brute-force solution was to… just brute force each ratio. Yes. I would simply type a vertical number into a calculator, multiply it by the ratio, and make a spreadsheet of the results. Adjusting the height of each try until the result was as close to the target resolution as possible. Until I landed at the result of:

1716 * 906 = 1 554 696

But that is such a terribly inefficient method. Again, it’s not exactly rocket science. And I do enjoy a good spreadsheet at regular intervals. I should be able to get there quicker than just trial and error.
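For the morbidly curious, that whole spreadsheet ritual boils down to a loop like this (a Python sketch of my trial-and-error, not the actual sheet):

```python
def brute_force_dimensions(target_resolution, ratio, max_height=4320):
    """Try every height, compute the ratio-matched width, and keep
    whichever pair lands closest to the target pixel count."""
    best = None
    for height in range(1, max_height + 1):
        width = round(height * ratio)
        diff = abs(width * height - target_resolution)
        if best is None or diff < best[0]:
            best = (diff, width, height)
    return best[1], best[2]

# The 4:3-in-1080p example: 1440 * 1080 = 1 555 200 pixels, re-shaped to 1.9:1
print(brute_force_dimensions(1440 * 1080, 1.9))
```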

So. Years passed. I had basically given up. My daytime job had some restructuring. I found I had an opportunity to take some classes. I took maths with an entirely unrelated reasoning. But this problem kept lurking. Maybe I could tackle it one day. And shortly after, I had a literal Eureka moment. In a shower, not a bath. But. I stood there slack-jawed. Holy carp. THAT’S IT!

To put my old thinking into context. Since newX and newY are written out as two variables, I had thought of it as a two-variable problem, and as such the solutions involved graphs and plotting and could get two answers where only one would be relevant. BUT!

The realisation that struck me is that newX is not independent of newY. No, newX can only be one thing for each newY. Yes, newX is completely determined by newY*newRatio. So therefore…

newX = newY*newRatio

So this

newX * newY = Resolution

Is exactly the same as:

(newY * newRatio) * newY = Resolution

Yes, I know the multiplication marks are a bit redundant with the parentheses. But I prefer to be overtly clear about these things. Nothing should be considered a given. If I can misunderstand it, I WILL. And I don’t want that.

Anyways. And since I know what newRatio is (it’s the x of an x:1 aspect ratio, easy to calculate by just dividing width by height; 16:9 is 16/9, which is roughly 1.78, and you just put :1 to the right of it), I have now reduced the problem to one with only one variable. As long as I can find out what newY is, I get newX for free!

So. With basic algebra I restructured it so all the knowns are on one side and the sole unknown is on the other. 

So, 

(newY * newRatio) * newY = Resolution

becomes

newY^2 * newRatio = Resolution

Which becomes 

newY^2 = Resolution / newRatio

Which finally is 

newY = sqrt(Resolution/newRatio)

And to put that to the test we take the example: 

1 440 * 1 080 = 1 555 200

Solving with 1.9:1 aspect ratio:

newY = sqrt(1 555 200 / 1.9)

Which is 

newY = 904.724…

And since

newX = newY * newRatio

We have

newX = 904.724… * 1.9

So.

newX is 1719

And 

newY is 905. 

And as such, I ran around the town streets, naked, flailing my arms about, shampoo and lukewarm water getting everywhere. Laughing maniacally. It is done! It has been solved! I can now get new accurate dimensions at any ratio while keeping the resolution constant! All in one beautiful equation! Ok, it’s two, but still! And dodging more police officers than I thought would be out at that time of day. By the time I realized where I was and what state I was in, I was already caught. I had been fitted with a nice new snug but kind of constricting jacket. And I was transported to a fine facility where I was told I would be greeted by specialists in fields beneficial to my current predicament. Top men, they said reassuringly… top… men…

A few court cases later where crying children and angry parents on witness stands really wanted me to stay indoors for the foreseeable future, I was nevertheless let go. Deemed maybe not completely mentally sane, but at least not a danger towards myself and otters… I mean others! Or maybe I meant otters. 

Nonetheless. While my story here may have been rambling, and in a few cases… exaggerated… that is largely how I ended up with this formula. Using it, you can get within a couple of pixels of the dimensions needed to take one source image’s resolution and make another aspect ratio while keeping the resolution constant.
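In code form, the whole saga collapses into a handful of lines (Python here, but any language with a square root will do):

```python
import math

def constant_resolution(resolution, new_ratio):
    """Given a pixel budget and a target aspect ratio (the x of x:1),
    return (width, height) keeping the total pixel count constant."""
    new_y = math.sqrt(resolution / new_ratio)  # newY = sqrt(Resolution / newRatio)
    new_x = new_y * new_ratio                  # newX = newY * newRatio
    return round(new_x), round(new_y)

# The worked example: a 1440x1080 (4:3) frame re-shaped to 1.9:1
print(constant_resolution(1440 * 1080, 1.9))  # -> (1719, 905)
```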

I put it in a google sheets doc (Disclaimer: This sheets doc is not made public yet) to make the process even more automated. 

Now. The pedants out there probably noticed I fudged the numbers slightly. But that was only to make the pixel dimensions even (since computers hate odd numbers in general) and to get the total resolution below the source resolution instead of above it, for general neatness.
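If you want that same fudging automated, here is a variant that floors both dimensions to even numbers (my own neatness convention, nothing standard):

```python
import math

def tidy_dimensions(resolution, new_ratio):
    """Constant-resolution dimensions, fudged for neatness: both
    numbers floored to even, total never above the source resolution."""
    even_floor = lambda v: int(v) - int(v) % 2
    new_y = math.sqrt(resolution / new_ratio)
    return even_floor(new_y * new_ratio), even_floor(new_y)

w, h = tidy_dimensions(1440 * 1080, 1.9)
print(w, h, w * h)  # -> 1718 904 1553072
```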

Now! In conclusion!

You may be wondering: where does this formula take us? If I did persuade you in the first bit, how do you use it? Should you bother? Are there game-breaking pitfalls when using it? What does this mean for viewers at home and in cinemas? Can we reevaluate hardware in current setups?

First off. If the story gains nothing from playing with the shape of the screen. Do not bother. Pick one shape. Keep it maximized throughout. This whole thing only really makes sense if you are intending to mix ratios and are willing to open the Pandora’s box of issues viewers will think they have when watching something mastered in the way I propose. Remember the nighttime battle in Game of Thrones? Some of you are still bitter about it. I assure you, there will be viewers that will make Game of Thrones fans look meek and compliant if you mess with the eldritch horrors of windowboxing.

But, if you decide to sign my waiver of responsibilities. How do I propose you use this formula? Well, here’s my suggestion:

  1. You decide what shape the shot needs to be. 
  2. You shoot it in a way that ensures there are as many pixels as possible in your budget after cropping to that ratio. 
  3. You make a windowbox that suits your master frame (usually this is one of the broadcast or projection standards). You use the base resolution in the formula and derive from it the new dimensions. 
  4. You scale and frame the source video to fit that window-box. 
  5. Go to 1 for the next shot. Rinse and repeat. 
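Steps 3 and 4 are really just one centered rectangle. Here is a sketch (hypothetical helper, not my actual macro) that returns the windowbox’s position and size inside a master frame:

```python
import math

def windowbox(master_w, master_h, base_w, base_h, shot_ratio):
    """Return (x, y, w, h) of a centered windowbox inside the master
    frame. base_w x base_h is the reference shape whose pixel count
    every other ratio should match."""
    new_y = math.sqrt(base_w * base_h / shot_ratio)
    w, h = round(new_y * shot_ratio), round(new_y)
    return (master_w - w) // 2, (master_h - h) // 2, w, h

# A 1.9:1 shot in a 1920x1080 master, matched to 1440x1080's pixel count:
print(windowbox(1920, 1080, 1440, 1080, 1.9))  # -> (100, 87, 1719, 905)
```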

Yes. That is how easy it should be. But ok. I get it. Most people don’t have time to build window-boxes for each shot-shape. And I mean… only a madman would spend the man-minutes needed to create a bunch of them and upload them in a big package to the internet…

Yes. 

Yes I did. I set up tons of vector shapes in layers in Krita and used the formula to get dimensions for each aspect ratio I could think of. See the link in the description to find that zip file. (Disclaimer: I never did upload them… sorry.) They are very simply built. Just a black frame around white. To make the white transparent you can, in your NLE of choice, simply use an appropriate blending mode. I prefer Multiply. If you know of a better one, add it to the desert of the comments.

As you may see in the file structure, these crop guides are organized in folders according to image resolution. And under those, most of them have a level of folders corresponding to the master frame of the system. To help you navigate these I have made this spreadsheet. It shows the dimensions for each and every aspect ratio at each resolution.

It should also be noted that in order for these to work as intended, you need to add them to the target timeline with no added scaling. They should be centered and pixel-for-pixel at 100% scale. In Resolve (my favourite), this can be set for the whole project in Project Settings > Image Scaling > Mismatched Resolution Files > Center Crop With No Resizing. To make it specific to the timeline in question, look for the same setting in the timeline settings. You may even want only these windowboxes to behave this way, while the actual filmed footage resizes to fit the master frame for ease of use. You can override the timeline and project settings for individual clips by selecting the clip, opening the Inspector, and finding the settings under Retime and Scaling. Set Scaling to Crop to retain the 1:1 pixel scaling of the source file.

And for those that wondered about step 2 in that list of suggested steps: that’s very dependent on the camera you have access to. You can, for instance, in some cameras, gain extra pixels in the vertical dimension by setting the camera to film in a 4:3 mode or similar. On my own Panasonic GH4 I can use that 4:3 mode by going into the menu and finding a cryptically named setting intended for anamorphic lenses. With it I can get more vertical pixels for aspect ratios narrower than 1.54:1 and maximize the resulting resolution post-crop. For example: if I intend to shoot on that camera and plan on cropping the sides to IMAX-shaped 1.43:1, I have basically two options. Either use UHD 3840×2160 recording and crop to 3088×2160 for a maximum resolution of 6 670 080 pixels. Or use that same 4:3 mode to record 3328×2496 and crop the height to 3328×2328, which has a resolution of 7 747 584 pixels. Yes. You gain a whole megapixel by choosing an appropriate recording setting.
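That mode comparison is easy to script, too. A sketch (the recording resolutions are from my GH4’s menus; check your own camera’s manual) that crops a recording mode to a target ratio, rounding the cropped side to the nearest even number:

```python
def cropped_resolution(rec_w, rec_h, target_ratio):
    """Crop a recording mode to a target ratio (the x of x:1) and
    return (width, height, total_pixels)."""
    to_even = lambda v: round(v / 2) * 2  # nearest even number
    if rec_w / rec_h > target_ratio:
        w, h = to_even(rec_h * target_ratio), rec_h  # crop the sides
    else:
        w, h = rec_w, to_even(rec_w / target_ratio)  # crop top/bottom
    return w, h, w * h

# Cropping to IMAX-shaped 1.43:1 on the GH4:
print(cropped_resolution(3840, 2160, 1.43))  # UHD -> (3088, 2160, 6670080)
print(cropped_resolution(3328, 2496, 1.43))  # 4:3 -> (3328, 2328, 7747584)
```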

But I do digress.

Especially as I now have gotten hold of a Blackmagic Pocket Cinema Camera 6K 2nd Gen which has very different ratios and resolutions to play with. Oh what fun.

Just look into your documentation that comes with the camera to find out what settings are best for you. 

So wait. Why do I even have all these folders of slightly different resolutions in that package of windowbox crop guides? Surely there should be a method that involves even less end-user input? I mean, I can almost hear you now. You look through the files and you find that the specific shape you need is not there. You ABSOLUTELY POSITIVELY must use a shape of 1.47:1, and you need it for your timeline resolution of 1371×99999, and neither 1.43:1 nor 1.50:1 will be acceptable to your pixel-peeping eyes. Surely there cannot be a solution that lets you choose ANY source resolution and target ratio? Surely! Who would spend hours of their life building something that spits out the correct shape, doesn’t need separate image files, and can be imported into the NLE to do all of this for you? Didn’t I say at the beginning of this whatever-length video that this is a problem no one has, and that no one should bother about?

Yes… I built a Fusion Macro where you enter the source resolution. It calculates the correct new resolution, makes a rectangle, and turns it into a windowbox for any resolution and any target scale. It can be used both in Fusion and on Resolve’s Edit page…

And I have it here for free… link in the description. 

Feel free to ungroup it in Fusion to customize things to your heart’s content.

I edited this whole mess of a video using this macro. Did you enjoy it? I did.

Now… whatever should I do with my life now that I have finally solved this ancient problem that has not bothered filmmakers worldwide for years?

I think I will lie down in my bed. And I will sleep…

And no. 

I will not do this thing for After Effects or Premiere Pro. Or Final Cut, or any of the open source NLEs I have not been able to run as well as the proprietary DaVinci Resolve.

Why?

Because I can’t be bothered. It’s all in here if you want to rebuild it for other platforms. If I can do the maths, then surely most of you can as well. All I ask is that you credit me in just a text note somewhere or something. My ego wants the attention. 

Good bye now. I need to return to my other eternity projects that seemingly will never see endings. 

Please. Just go away. I need to sleep.

Oh for the love of…

(THE END)
…? 

Something Kaleidoscopic This Way Cometh

Last night I had an urge to do something kaleidoscopic. No real plan beyond that. So this is a fast noise with a duplicate node giving 100 duplicates. Constantly rotating. Interacting with each other. And the usual film treatment on top.

The sound is a drone where I turned on my DeepMind 12 and found that the preset it happened to be on behaved very cool when you just held a note. So I held two low notes and pressed the hold key to keep them down virtually. And I just recorded the output into Audacity while manipulating the various faders and the volume knob on the synthesizer during the 10+ minute runtime. Just a compressor in post to even out the volume as it drifts in and out. I was planning on adding more layers of sound. But this raw evolving drone was just too neat-sounding to risk drowning out.

SpaceWater (Short)

Abstract forms dance in front of a field of stars. Just an abstract experiment. Presented in Black and White with stereophonic sound in select venues.

_____________________________________

Shot with #BMPCC6KG2. #BRAW 12:1, 2.7K 120fps.
Found sounds collected with #Zoom #M4 #Mictrak.
Synth sounds created with #VCVRackV2 Sounds processed with #Audiothings #Reels, Audiothings #Springs and #Softube #TapeEchoes

Edited and graded in #BlackmagicDesign #DavinciResolve and rendered in glorious #MonoChrome #BlackAndWhite

Learning Blender 3D’s Grease Pencil – Day 1 & 2

After much temptation I have now finally started my attempts to learn Grease Pencil in Blender 3D. I have dabbled with Blender in general for a while. Doing some abstract models and animations. But now is the time for me to jump in and do what I have spent most of my hobby time doing: 2D animation.

This will be an intermittent series of posts where I simply document what I am doing in Grease Pencil. Following various tutorials and trying to find ways to learn this thingamajig well enough to call myself proficient in it.

Day 1 consisted of just getting the hang of the interface. How to draw simple lines. How to make the keyframes play in the order I want. And what better way to do that than to bring out ye olde bouncing ball. When all else fails, one never can go wrong with the bouncy ball.

Day 2 is today and I went ahead doing some more bouncy balls.

Balls are fun and all, but I wanted to try out colors. So instead of a bouncy ball, here’s a blinking ducky… thing…

Ok… I realize now that exporting these as videos might not be that great of an idea, as they are very short loops. But with that ducky thingy I did find a rather nice workflow where I basically set up each color as a material. I can then hot-swap them after I’ve done the coloring of the drawings, and it automatically updates on all frames that use that material/color. I mean… this is a feature I have heard of for years, and it seems like a very nice thing to have when doing big projects. So in a sense, it’s basically just me being late to the proverbial party.

Oh, well..

I’ll see if I can get some more stuff through this thing.

Oh, and holy heck it’s been a long time since I did anything on this site.

210628 GoProHero9BlackSlowestMoTest

210503 – “Hey!” short

210406

210321 – Yet another sped up twitch stream

As the title says, it is another one of them. I need to set something up so I can make these on a more regular basis. And actually know what I am supposed to animate before I start streaming to an audience of… 1… I think that’s a bug… It’s probably zero viewers.

Testing out Krita – And my animation template!


Yes! Yet another reboot of the SMA-project! (rules and such)

 

Yes! You read that correctly.

In an effort to continue the streak I’ve been having with streaming (well, I’ve done the streams at least; I have yet to gain followers), I will now revive the old idea of making a feature-length movie using random chance as my sole guide.

The deck of 85 cards I started out with is still present and it’s been complemented with another one with 60 cards.

So, the rules I’m setting out for myself (subject to change if need be):

  1. The work will be live-streamed to anyone who cares to look in. (viewable on twitch.tv/jmalmsten_com )
  2. Choosing of the timecode to work on will be done with two decks of cards each streaming session:
    – Deck of 85 cards will select the minute
    – Deck of 60 cards will select the second
    Once drawn the minute card (from the deck of 85) will not be returned to the deck until the project is done!
    Before drawing the seconds card, the timeline will be checked to see whether there is any animation already done inside that minute. If there is, the corresponding cards for those seconds will be set aside until the next full draw.
    – It is then up to me as a creator to come up with what will happen during this particular point in the narrative and draw frames accordingly. This can be more than a minute worth’s of content but beware of point 3 below.
    – The method of animation is all up to the animator at the moment. Again. Beware point 3.
  3. A new timecode to work on will only be drawn from the piles when the last timecode is fully finished. Once finished it will not be adjusted in any way until project is finished in full. This includes all visuals. Soundwork may be done according to point 4
  4. The soundwork for each timecode can extend to outside the animation. But once finished, it too should not be touched.
  5. The project will be created in 2K DCI Cinemascope resolution (2048×854) 24 fps and 5.1 surround sound. The finished timecodes will be uploaded to youtube in full HD Cinemascope (1920×800) 2.0 dolby stereo downmix.
  6. THERE IS NO RULE SIX!
  7. The finished 2K version will come in two forms: a Timeline Corrected one, where the narrative flows as on the timeline of the NLE, and a second one that keeps the randomised shot order, mainly for the fun of the viewer.
  8. Two playlists will be maintained on YouTube ( youtube.com/jmalmsten ): the timeline-accurate order and the randomised chronological upload order. Gaps in the timeline will have placeholder footage.
  9. Once the project is over, a remix of the audio with voice acting and music will probably ensue before the final director's cut is released alongside the other two versions.
  10. I'll probably add or subtract something… we'll see how things go. This is a project mainly for fun anyway… (nervous laughter).
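
For fun, the draw procedure in rules 2 and 3 can be sketched in a few lines of Python. This is just a throwaway simulation of my card decks (all names are my own invention, not part of any real tooling):

```python
import random

# Minutes deck: 85 cards (minutes 0-84). Drawn minute cards are NOT
# returned until the project is done (rule 2).
minute_deck = list(range(85))
random.shuffle(minute_deck)

def draw_timecode(finished_seconds_by_minute):
    """Draw one (minute, second) timecode to animate next.

    finished_seconds_by_minute maps a minute to the set of seconds
    already animated inside it; those cards are set aside (rule 2).
    """
    minute = minute_deck.pop()  # minute card stays out of the deck
    # Exclude seconds already animated within this minute.
    available = [s for s in range(60)
                 if s not in finished_seconds_by_minute.get(minute, set())]
    second = random.choice(available)
    return minute, second

m, s = draw_timecode({})
print(f"Next timecode to animate: {m:02d}:{s:02d}")
```

Each call hands back one random, not-yet-finished spot on the timeline, which is the whole point: I never know which part of the movie I'll be animating next.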

181101 – Just a Smile! :D

Just a smiling gif for use on Twitch streams when happy stuff happens. Continue reading 181101 – Just a Smile! 😀

181023 Twitch-stream – Corgie With A JetPack

Started out kind of aimlessly… Continue reading 181023 Twitch-stream – Corgie With A JetPack

181004 – Livestream Test Twitch.com – Poop

I have just spent the last few hours animating a gif where a man poops his pants… Continue reading 181004 – Livestream Test Twitch.com – Poop

streamingtest 2 – Live

Trying out Streaming

 

An exercise in futility – Part 1

DISCLAIMER
I do not claim ownership over After Last Season, El Mariachi, Upstream Color or even A Scanner Darkly. The following article is just a blog overview of what has been on my mind lately, and it just happened to be After Last Season.

I kid ye not. They did not only scotch-tape copy paper over the top of the walls (it's speculated to be hiding child-friendly illustrations, since the scene was recorded in someone's baby room (allegedly)), but they also made sure to film a close-up of this to use as cutaways for the dialog edits.

Ok. Some may know that whenever there's a discussion about the "world's worst movie" and I'm asked which I think qualifies, I always bring up 2009's epic failure of narrative fiction called After Last Season.

Never heard of it? I do not blame you.

This film stumbled into existence that year, was distributed to half a dozen American theaters, and then neither its creators nor its distributors have wanted to touch it since. There are even rumors that the theater owners were instructed to just burn the 35mm film prints rather than send them back…

Helpful IMDB Trivia:
https://www.imdb.com/title/tt1196334/trivia

But it doesn’t end there. Oh, no.

Then the rumors about the production started to come in. That the film had a production budget of around $5 million. That most of it was basically just a scam to get away with as much money as possible. Because supposedly this wasn't a Tommy Wiseau-level passion project gone awry. No. This was supposedly a hoax all along. And when we see what the unsuspecting moviegoers saw at release… that theory doesn't sound all that far from the truth.

Just as a reminder, here is what a talented filmmaker could do with roughly a $7,000 production budget spent wisely in the early '90s:

The blank guns barely worked (most of the full-auto fire on screen is looped, because the guns would more often than not just fire once and never cycle). Action set-pieces were held together with shoestring, and Rodriguez himself even became a human guinea pig for medical science to pay the meager bills the production amassed. I mean, the guy should write a book on how it was made.

https://www.amazon.co.uk/Rebel-Without-Crew-23-Year-Old-Filmmakerwith/dp/0452271878

 

Ok, you say. Not even the same genre, so how is it comparable?

Well, here's Upstream Color from 2013, made for a staggeringly high budget of $50,000:

CGI effects. Thoughtful sci-fi. Everything.

Ok.

Now then. Look at what that $5 000 000 got us in 2009:

No. I am certainly not joking. This. This is what was shown as a theatrical feature. And watching the full thing? Even worse. Giant stretches of awkward silence. Barely audible dialog that goes nowhere. Characters brought up and forgotten. Set building that… is just kind of a warehouse when it's not an obvious home standing in for a top-tier university hospital. And copy paper. You will be fascinated by what they expect us to accept with copy paper.

Yes. That is a cardboard MRI machine in a movie allegedly produced with a $5 million production budget.

The list just goes on and on. Oh, and those CGI sequences. Those ENDLESS CGI SEQUENCES!

So why do I bring this up under the title of this blog post? Well, because a few weeks ago, after a couple of social media conversations, I started to think about this film again. And ideas started to brew.

Step 1: First Trimmings

What if… what if one could salvage something from it? Or at least glean a little about its narrative structure. I have an NLE. I have a legally bought copy of the released film on DVD (which wasn't easy to obtain).

And this is bringing me down a rabbit hole I was not expecting to go down.

I ripped the video. Threw it into Premiere Pro and started the dissection. Step one. Simply replicating the edit.

This is the most tedious part. It's just me going through the film shot by shot and putting in cuts where there already are cuts in the original. It's the groundwork, and it will help me navigate later.

I sadly have no screen-shots of this step. I forgot about documentation.

Then, step 2: going through and just trimming anything that is dead space, without making obvious edits like jump cuts and the like. This brought the runtime down from the original 1:30:28 to a pretty lean 1:08:50. I also took the liberty of speeding up any footage that need not be as long but had no obvious edit points (I'm looking at you, CGI scenes!).

Step 2: Dogme 95

That was a couple of weeks ago and I put the thing away for a while to focus on some things more pressing. But the project still lingered in my mind.

And today I took another stab at it. This time I wasn't nearly as careful. I brought out the weed-whacker and took out all the empty space, even if it resulted in jump cuts (I managed to hide a few, but it wasn't a priority). I call this step the "Dogme95 version" because of how ruthless I was, even though it technically doesn't comply with their stringent criteria. And the result was a fairly lean 00:54:25 runtime.
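
Out of curiosity, the trims above can be sanity-checked with a few lines of Python (just throwaway arithmetic on the runtimes I mentioned, nothing fancier):

```python
# Convert an h:mm:ss timecode to total seconds and compare the cuts.
def to_seconds(tc):
    h, m, s = (int(x) for x in tc.split(":"))
    return h * 3600 + m * 60 + s

original = to_seconds("1:30:28")   # released cut
pass_one = to_seconds("1:08:50")   # first careful trimming pass
pass_two = to_seconds("0:54:25")   # the ruthless "Dogme95" pass

print(f"Pass 1 removed {100 * (1 - pass_one / original):.0f}% of the runtime")
print(f"Pass 2 removed {100 * (1 - pass_two / original):.0f}% of the runtime")
```

In other words, about a quarter of the film was dead air that could be cut without a single visible edit, and the weed-whacker pass took out roughly 40% in total.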

You may also notice that I added a track above with a Cinemascope crop. Now, why would I do that? Well, as other viewers have noted, the film is kind of poorly shot, and it's surprising how many shots are helped by simply cropping the frame vertically a bit.

Was this an intended step that was abandoned in the original post process? I don't know; it just looks like it. The same with the colors. They look suspiciously flat, almost as if they shot the film in low contrast, or scanned it in low contrast, to be able to do a real color grade later? We'll probably never know. But I did go ahead and apply both the crop and a FilmConvert Pro LUT, and it does look a bit nicer to the eyes. Remember though, I am still working with what is essentially an h264 rip of a 480p DVD release that seems to have been made from a soft-focus 16mm workprint. So it still looks kind of awful. But just a little less awful makes this all a lot less of an agony.

And it's still lit with those work lights. And no amount of LUTs will make that ugliness nice.

Bask in the wonderful lighting that would make Roger Deakins proud.

Step 3: Now what?

Well, I am afraid to say that no matter what the underpants gnomes would want, the next step is not "PROFIT", no matter how far I take this silly little project. This is as unofficial as it gets. I have made no attempt to contact the makers of this film about what I'm doing, and I highly doubt they would be interested anyway. I simply do this as an exercise in video editing, because the released cut feels more like a rough early workprint than anything finished. It's a challenge I gave myself: if I was handed this raw material, could I get something out of it using the meager skills I have amassed over the years?

Maybe. Maybe not. Anyway, I can not show the results publicly anyhow.

The abyss is staring back at me through the rabbit-hole…

The one thing I can say, however, is that forcing myself to watch and listen to the dialog and the visuals has actually started to get to me a bit, I think. I'm starting to put together pieces of the puzzle I never knew were there.

But until the next update, I will just leave you guys and gals with what is so affectionately called a "clock radio", and a video about the rotoscoping techniques behind A Scanner Darkly (2006), which breeds the idea that maybe what we have here is a film that was meant to be rotoscoped, or where most of the image would be replaced in post.

But who knows?

a “clock radio”

oh…

And some crates.

Scary scary crates…

Be seeing you!