How To EASILY Film in RED RAW With The Nikon ZR

I’m about to show you how to easily film in RED RAW with the Nikon ZR. To save you some time, check out my RAW filmmaking cheat sheet that shows all of the best settings in one easy place, so you can reference it whenever you need.

And if you want help getting amazing-looking colors from your RED RAW footage, I also have a set of color presets that work fantastically with RED RAW. I’ll link those as well so you can check them out.

Why Settings Still Matter When Shooting RAW

When it comes to filming in RED RAW, you might think the settings you choose on the Nikon ZR aren’t that important. After all, you’re shooting RAW, right? You can just fix everything in post.

Unfortunately, that’s not true.

RAW video is very forgiving, and RED RAW in particular is one of the easiest RAW formats to work with. It gives you incredible control over exposure and color. But that doesn’t mean you can just switch the camera to RAW, hit record, and magically get gorgeous footage.

There are still several key settings you need to dial in to make your footage look its best, and I’m going to walk you through how to do that quickly and easily.

Enabling RED RAW on the Nikon ZR

Let’s start with the basics.

Grab your Nikon ZR, open the menu, go to the video camera icon, and set the video file type to R3D NE 12-bit in LOG3G10. This enables RED RAW recording in log, which gives you a very flat and desaturated image.

That flat image is exactly what we want for maximum flexibility in post.

Making RED RAW Easier to Work With Using View Assist

Next, we want to make filming in RED RAW as easy as possible. The Nikon ZR gives you some excellent tools to help with this.

The first thing you should enable is View Assist. This removes the super flat log preview and applies a 3D LUT so you see a more saturated and contrasty image on the camera screen.

Important reminder: this LUT is only for monitoring. Your footage is still recorded in flat log and will need to be color graded later.

Turning on View Assist also affects how some of the exposure tools behave, which is why I strongly recommend enabling it. That said, if you don’t want to use it, that’s okay. I’ll explain how exposure works without it as well.

To quickly enable View Assist, press the Assist Display button in the bottom right of the screen, then tap the Assist button in the bottom middle. Instantly, your footage becomes much easier to judge.

Choosing the Correct Base ISO

Now that the image looks better, you might think you’re ready to hit record. But before you do, we need to talk about exposure tools.

First, some ground rules.

When filming in R3D, I recommend always starting by setting your ISO to one of the camera’s two base ISOs: ISO 800 or ISO 6400.

The Nikon ZR makes this easy. Press the ISO button in the bottom left of the screen, and you can toggle between the low and high base ISOs.

If you’re filming in bright conditions, like outdoors in daylight, use ISO 800. If you’re filming in darker environments, like an evening wedding reception, use ISO 6400.

Using these base ISOs gives you the cleanest image with the least amount of noise.

You might be thinking, “Matt, why does ISO matter if I’m filming RAW and can change it later?”

Great question. The answer has everything to do with zebras.

Setting Up Zebras Correctly

Zebras are those black-and-white striped lines that appear on areas of your image that cross a set brightness threshold. When highlight zebras appear, it means parts of your image are too bright and unrecoverable.

You want zebras enabled almost all the time when filming.

To set them up properly, open the menu, go to the pencil icon, Video, G16, and select Zebra Pattern.

Set the highlight threshold to 245, which is where highlights start to clip when using View Assist.

For midtone range, set the value to 130 with a range of plus or minus 15. Shoutout to Calen Rhome from Gamut for testing the camera and figuring this out.

Once set, press OK and exit the menu.

To toggle zebras on and off, tap the Assist Display button again and then tap the Zebra Pattern button. The first tap enables midtone zebras, and the second tap enables highlight zebras.

Why Base ISO Matters for Zebras

Here’s why sticking to base ISO is so important.

If you change the ISO away from the base values, the zebra levels also change. That means the camera can no longer accurately tell you when highlights are clipping unless you manually adjust zebra values.

Nikon even provides a PDF chart showing the correct zebra values for every ISO. You can download it from Nikon or from a mirror I’ve uploaded.

With View Assist enabled and ISO set to 800 or 6400, your zebras at 245 will be accurate. If you don’t use View Assist, you’ll want to lower zebras to around 180.

This is why I recommend base ISO whenever possible. It makes exposure faster and far less confusing.

Using Zebras to Expose Correctly

Now let’s tie everything together.

Set your Nikon ZR to RED RAW, choose one of the base ISOs, and enable highlight zebras. Adjust exposure using your aperture or an ND filter until highlight zebras just begin to appear, then back off slightly until they disappear.

This ensures you retain highlight detail while keeping shadows clean and noise-free. This is how you capture the maximum dynamic range the camera can offer.

Next, switch to midtone zebras and check skin tones. You want to see zebras appear on the side of the face where light is hitting. If you don’t, your image is either underexposed or overexposed.

This is a balancing act. You want properly exposed skin tones without blowing out bright backgrounds, so you’ll often toggle between highlight and midtone zebras to find the sweet spot.

Using the Waveform Monitor for Extra Accuracy

If you want even more confidence in your exposure, Nikon gives you another excellent tool: the waveform monitor.

To enable it, go to the menu, pencil icon, Video, G19, and set Brightness Information Display to WFM. Press right on the joystick to adjust size and position.

I recommend setting the waveform size to large and transparency to 3. The small version is just too tiny to be useful.

Back out of the menu, tap the Assist Display button, and enable the brightness info icon to turn on the waveform.

The waveform shows brightness values across the entire image. Dark areas sit lower, bright areas higher, and the image is mapped horizontally across the display.

The Firmware 1.10 Waveform Upgrade

Make sure your Nikon ZR is updated to firmware 1.10 or newer. This update added a crucial feature to the waveform monitor: a red clipping line.

If any part of the waveform goes above this red line, that area is clipped and unrecoverable. Even better, this clipping line automatically adjusts based on ISO, so it’s always accurate.

When filming RED RAW, keep in mind that exposure values are compressed downward. Properly exposed skin tones should sit about ⅓ to ½ of the way above the shadow line on the waveform.

If you hit that range, your footage will look fantastic.

The Complete RED RAW Workflow

Here’s the full workflow from start to finish.

  1. Set ISO to 800 or 6400.
  2. Enable highlight zebras, adjust exposure until zebras just appear, then back off slightly.
  3. Switch to midtone zebras and confirm they appear on skin tones.
  4. Use the waveform to double check that skin tones sit about ⅓ to ½ above the shadow line.
  5. Return to highlight zebras to make sure clipping is minimal.

This is the sweet spot of the Nikon ZR and where you’ll get the highest image quality and flexibility in color grading.

Final Thoughts

And that’s how you easily film RED RAW with the Nikon ZR!

Huge shoutout to Brandon Talbot for his helpful ZR videos that helped me confirm these settings.

If you want more videos about filmmaking, make sure to subscribe. Thanks so much for reading, and have a great day.

Is The M5 MacBook Pro Better For Video Editing? (Buyer’s Guide)

Matt Johnson holding an M5 MacBook for video editing

M5 MacBook Pro Buyer’s Guide for Video Editors: Should You Buy It and What Specs Actually Make Sense?

The new 14 inch Apple MacBook Pro with the M5 chip is finally here, and if you are a video editor who has been thinking about buying a new laptop, you are probably asking yourself two very specific questions:

  1. Should you even buy this new M5 MacBook Pro at all?
  2. If you do decide to buy one, what specs actually make sense for video editing without wasting a ton of money?

In this buyer’s guide I want to help you answer both of those questions clearly and honestly, because not every new Apple release is automatically the best choice for video editors.

At the time of writing this, Apple is only offering one version of the M5 MacBook, which is the 14 inch MacBook Pro. There is no MacBook Air with M5, no 16 inch MacBook Pro, and no M5 Pro or M5 Max chips available yet. If you want an M5 laptop today, this single configuration is your only option.

Should Video Editors Buy the M5 MacBook Pro?

The short answer for most video editors is: probably not (at least not right now).

Based on everything we know so far, the M5 is a fairly incremental upgrade when it comes to video editing performance, and until we see an M5 Pro or M5 Max, it is hard to justify choosing this over discounted previous generation models.

That said, there are some real improvements with the M5 that can matter depending on how you work, so let’s walk through those before writing it off completely.

Faster Storage and Larger SSD Options

One of the most noticeable upgrades with the M5 MacBook Pro is the internal storage speed. Apple has moved to PCIe Gen 4 SSDs, which support transfer speeds up to roughly 6800 megabytes per second. That is nearly double the roughly 3500 megabytes per second we saw on the previous generation, and in some real world tests it can be even faster depending on the workload.
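To put those bandwidth numbers in perspective, here is a back-of-the-envelope estimate for copying a hypothetical 500 GB project folder, using the approximate sequential speeds quoted above (real-world copies will be slower due to filesystem overhead):

```python
# Ideal sequential copy time for a large project folder, using the
# approximate SSD speeds quoted above. Real copies will be slower.

def copy_minutes(size_gb, speed_mb_s):
    """Ideal copy time in minutes for a file of size_gb gigabytes."""
    return size_gb * 1000 / speed_mb_s / 60  # GB -> MB, then seconds -> minutes

old_speed = 3500  # previous-generation SSD, approx. MB/s
new_speed = 6800  # M5 PCIe Gen 4 SSD, approx. MB/s

print(f"Old: {copy_minutes(500, old_speed):.1f} min")  # 2.4 min
print(f"New: {copy_minutes(500, new_speed):.1f} min")  # 1.2 min
```

Saving a minute per large copy adds up quickly across a project with many card offloads.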

If you regularly copy large video files to and from your internal drive, especially if you shoot in 4K, 6K, or 8K resolutions, this faster storage can genuinely save you time over the course of a project. It is not the kind of upgrade that makes headlines, but it is absolutely something working editors will feel day to day.

Along with the speed increase, Apple has also expanded the maximum internal storage option on the 14 inch model to 4 TB, up from 2 TB on previous base models. As always, Apple’s storage pricing is extremely expensive, so this is more about what is technically possible rather than what I would actually recommend most people buy.

GPU and AI Performance Improvements

When it comes to raw performance, the CPU improvements in the M5 appear to be fairly modest, roughly around a 10 percent increase compared to the previous generation. For video editors, though, the more interesting change is the GPU, which has seen a much larger jump, with Apple claiming around a 34 percent increase in GPU performance.

This matters because modern video editing software, especially DaVinci Resolve, relies heavily on the GPU for playback, rendering, and effects processing. A stronger GPU can lead to smoother timelines and faster exports in many situations.

On top of that, Apple has added neural accelerators directly into each GPU core, and this is where things start to get more interesting for editors who use AI powered tools. Every major editing platform is adding more AI features with every update, including things like AI music editing, animated subtitles, multicam switching, voice isolation, upscaling, and denoising.

Apple specifically calls out tools like Enhance Speech in Premiere Pro, claiming it runs over four times faster than on an M1 MacBook Pro and nearly three times faster than on an M4 MacBook Pro. They also tested Topaz Video AI, where the M5 reportedly performs over seven times faster than an M1 and more than three times faster than an M4.

If your workflow leans heavily into AI based tools, these improvements can actually be meaningful.

Hardware Encoders, Decoders, and Real World Editing

Another important area for video editors is hardware encoding and decoding. These dedicated media engines are one of the biggest factors in how fast your laptop can play back and export video, especially when working with codecs like H.264 and H.265.

With the M5, these encoders and decoders do appear to be faster than on the M4 and earlier M series chips, so if you frequently export client videos in H.264 or H.265, you should see some speed improvements.

However, things change once you move into RAW workflows. Formats like RED RAW, ProRes RAW, and Blackmagic RAW still benefit from hardware acceleration, but they also rely very heavily on GPU performance. The important thing to remember here is that the M5 only has a base level GPU. It does not have the extra cores and performance you would get from an M5 Pro or M5 Max chip.

Because of that, editors who spend most of their time working with high bitrate or RAW footage may actually see better overall performance from an older MacBook Pro with a Pro or Max chip, even if it is technically a generation behind.

So, Who Should Buy the M5 MacBook Pro?

If you regularly use AI powered effects in your editing software, work in Fusion or After Effects, and want to be somewhat future proofed as these tools continue to evolve, then the M5 MacBook Pro can make sense, especially if you specifically want the newest Apple laptop available right now.

On the other hand, if you mostly edit RAW footage or want a more powerful machine overall that excels at heavier timelines, I would strongly recommend skipping the M5 for now and looking at an M4 MacBook Pro with an M4 Pro chip. With current discounts, those machines are often priced very close to a base M5 and offer significantly more sustained performance for professional video work.

The M5 MacBook Pro Specs I Would Actually Buy

If you do decide that the M5 MacBook Pro is the right choice for you, my recommendation is surprisingly simple. I would go with the base configuration, which includes the base M5 chip, 16 GB of memory, and a 512 GB SSD.

If you have a little extra budget, I would consider upgrading to the faster charger, which is a relatively small cost. And if you often edit in bright environments like coffee shops or near windows, the nano texture display is absolutely worth the upgrade in my opinion.

What I would not recommend is upgrading the memory or storage. The moment you do that, you are pushing the price to around $2000, and at that point you are much better off buying an M4 Pro MacBook or waiting for the inevitable M5 Pro and M5 Max models, which will almost certainly be significantly faster for video editing.

Should You Wait?

If you can wait, history suggests that the M5 Pro and M5 Max models will arrive sometime in early 2026, likely around the $2000 price point, and they should be a massive upgrade over the base M5. There are also rumors that Apple may introduce OLED displays in future MacBooks, possibly in 2026 or 2027, which would be a truly meaningful upgrade for video editors.

If you cannot wait, the M4 Pro MacBook Pro currently offers the best balance of price and performance for most editors.

No matter which route you go, choosing the right laptop comes down to understanding your workflow and not just buying the newest thing because it exists. Hopefully this guide helped you make a more confident decision.

If you want to dive deeper into editing techniques themselves, I also have a free “Edit Videos Like a Pro” guide that covers some of the most important lessons I learned over years of professional editing, and it applies regardless of which software or computer you use.

Thanks for reading, and I hope this helped you choose the right MacBook for your video editing workflow.

Interested in building your own PC instead? Check out my buyer’s guides.

Please note: Some of the Amazon and B&H links in this article are affiliate links and I earn a commission from them.

The Best Sony Video Format Settings (And When You Should Not Use Them)

Matt Johnson holding a Sony camera with the video format menu open

You just bought a Sony a7V, or another recent Sony camera, and now you’re staring at the video format menu wondering what on earth you’re supposed to choose.

Totally fair question!

Sony gives you a lot of options, and while that flexibility is great, it can also be confusing if you just want to know what setting will give you the best results without wasting time.

So let’s start with the short answer, and then I’ll explain why I recommend it, plus the specific situations where you should actually do something different.

Here’s The Short Answer:

For most situations, I recommend filming in XAVC HS, selecting the highest bit rate available, and making sure it says 4:2:2 10-bit if your computer can handle it.

That’s it.

If you want a quick reference you can come back to later, I’ve also put together a Sony Video Formats Cheat Sheet that shows you exactly which settings to use. It’s completely free!

Now let’s talk about why these settings make sense, and when they don’t.


A Quick Note on Testing and Hardware

Before we get into the menu breakdown, you should know that while this video and article are not sponsored by MSI, they did provide me with the RTX 5080 graphics card I used for all of the testing discussed here. Thanks to MSI for that.

I’ll also link to this card, along with other MSI GPUs I recommend for video editing, since they currently offer the fastest performance for modern codecs.

This hardware context matters a lot for what we’re about to talk about.


Understanding Sony’s Video Format Options

When you open the Sony menu and go to Shooting → Video Format, you’ll see several options, including:

  • XAVC HS
  • XAVC S
  • XAVC S-I
  • XAVC S HD

If you’re using a cinema-line camera like the FX3, FX30, or FX2, you’ll also see a DCI 4K option.

For the vast majority of your filmmaking, I recommend XAVC HS, and the reason is simple: XAVC HS uses H.265 compression, which is the most advanced and efficient codec your camera can record internally.

That means higher image quality and smaller file sizes at the same time, which is exactly what we want.


Choosing Frame Rate and Bit Rate

Once you select XAVC HS, you’ll need to back out and head into the video settings to choose your frame rate and bit rate.

You’ll typically see options like 24p, 60p, and 120p.

If instead you’re seeing 25p, 50p, or 100p, that means your camera is currently set to PAL mode, which is used in many countries outside the United States. This has to do with electrical frequency and broadcast standards, and we don’t need to go down that rabbit hole here.

What you need to know is that if you want access to frame rates like 120p, go into the yellow toolbox menu and switch your camera to NTSC.

From here, let’s assume we’re choosing 24 frames per second. Now Sony throws a bunch of other options at you (50M, 100M, 4:2:0, 4:2:2, 8-bit, 10-bit), and it suddenly feels overwhelming.

Let’s break it down quickly.


Bit Rate, Bit Depth, and Chroma Subsampling Explained

The number followed by an “M” represents the bit rate, measured in megabits per second. Higher numbers mean more data, which means higher quality video, so I recommend choosing the highest bit rate available.
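If you want a feel for what those "M" numbers mean in storage terms, the conversion is simple (this is a rough sketch that ignores audio and container overhead):

```python
# Convert a bit rate in megabits per second (the "M" number in Sony's
# menus) into approximate storage used per minute of footage.

def mb_per_minute(bitrate_mbps):
    """Approximate megabytes written per minute at a given bit rate."""
    return bitrate_mbps / 8 * 60  # 8 megabits per megabyte, 60 seconds per minute

print(mb_per_minute(200))  # 1500.0 MB, so a 200M setting uses ~1.5 GB/min
print(mb_per_minute(50))   # 375.0 MB/min at the 50M setting
```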

Next comes 8-bit vs 10-bit, and this one is easy. Always choose 10-bit.

10-bit video records dramatically more color information, enabling your camera to capture billions of colors instead of millions, which results in smoother gradients, better skin tones, and far more flexibility when color grading.
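The "millions versus billions" difference comes straight from the math of bits per channel:

```python
# Each extra bit per channel doubles the values that channel can store.
# Total colors = (2 ** bits_per_channel) for each of the R, G, B channels.

def total_colors(bits_per_channel):
    return (2 ** bits_per_channel) ** 3

print(f"8-bit:  {total_colors(8):,}")   # 16,777,216  (~16.7 million)
print(f"10-bit: {total_colors(10):,}")  # 1,073,741,824 (~1.07 billion)
```

That sixty-four-fold jump is why 10-bit gradients band so much less when you push the grade.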

The last option, and the one that causes the most confusion, is 4:2:2 vs 4:2:0, which refers to chroma subsampling, or how much color data is stored per pixel.

4:2:2 records more color information than 4:2:0, so on paper it sounds like something you’d always want to use. And in a perfect world, that would be true.
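To make the ratios concrete, here is the sample count per 4×2 reference block of pixels, which is the block the J:a:b notation describes (a = chroma sample positions in the top row of four pixels, b = in the bottom row):

```python
# Samples stored per 4x2 reference block (8 pixels).
# Every pixel always gets a luma (Y) sample; chroma sample positions
# each carry a Cb and a Cr sample.

def samples_per_block(a, b):
    luma = 8              # one Y sample per pixel
    chroma = 2 * (a + b)  # Cb + Cr at each chroma sample position
    return luma + chroma

print(samples_per_block(4, 4))  # 4:4:4 -> 24 samples (full color)
print(samples_per_block(2, 2))  # 4:2:2 -> 16 samples
print(samples_per_block(2, 0))  # 4:2:0 -> 12 samples (half the color data of 4:2:2)
```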

But here’s where your computer matters.


Why Your GPU Determines Whether You Should Use 4:2:2

This entire topic is the reason I reached out to MSI and asked for an RTX 5080 to test with.

Older computers, including Macs from before Apple’s M-series chips, older NVIDIA GPUs, and essentially all AMD GPUs at this point, do not offer hardware acceleration for H.265 video encoded in 4:2:2 color.

What that means in practice is that editing 4:2:2 footage on those systems is noticeably slower, with choppy playback, laggy scrubbing, and longer render times, because your computer has to decode everything in software.

Newer hardware changes that completely.

Apple’s M-series Macs and NVIDIA’s 5000-series GPUs support hardware encoding and decoding of 4:2:2 video, which makes editing dramatically faster.

To show you what that looks like, I tested the same DaVinci Resolve project on the same desktop computer, first using an RTX 3080 and then swapping in the RTX 5080.

With the 3080, playback wasn’t terrible, but scrubbing wasn’t smooth, and rendering a 10-minute project took 4 minutes and 6 seconds.

With the 5080 installed, playback was dramatically smoother, scrubbing felt instant, and the same render finished in 1 minute and 23 seconds, cutting the render time by roughly 66%.

That’s a massive difference.
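You can check that comparison with quick arithmetic on the two render times:

```python
# Compare the two render times quoted above.
old = 4 * 60 + 6   # RTX 3080: 4 min 6 s  -> 246 seconds
new = 1 * 60 + 23  # RTX 5080: 1 min 23 s -> 83 seconds

reduction = (old - new) / old  # fraction of render time saved
speedup = old / new            # how many times faster the render is

print(f"Time saved: {reduction:.0%}")  # 66%
print(f"Speedup: {speedup:.2f}x")      # 2.96x
```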


So Which Settings Should You Use?

If you’re editing on:

  • A Mac with an M-series chip, or
  • A Windows PC with an NVIDIA 5000-series GPU

You can safely shoot XAVC HS, highest bit rate, 4:2:2, 10-bit and enjoy both great image quality and fast editing performance.

If you’re using:

  • An AMD GPU
  • An older NVIDIA card
  • An older Intel-based Mac

Then I recommend choosing 4:2:0, 10-bit instead. The image quality is still excellent, and your editing experience will be significantly smoother.


The One Big Limitation of XAVC HS

There is one annoying limitation of XAVC HS that I was really hoping Sony would fix with the a7V, but unfortunately it’s still here.

You cannot record 4K at 30 frames per second using XAVC HS.

Yes, it’s weird, and yes, it’s been weird for years.

If you need 4K 30p, the workaround is to switch your video format to XAVC S, then select 4K 30p, 140M, 4:2:2, 10-bit.

XAVC S uses the older H.264 codec, which is less efficient than H.265, but the image quality is still very similar, so don’t stress about that.

The good news is that now that Apple and NVIDIA support hardware acceleration for 4:2:2, using XAVC S no longer comes with the editing penalties it used to.


Final Thoughts

Sony video formats can get complicated quickly, but once you understand what the camera is doing and how your computer factors into the equation, the right settings become pretty straightforward.

For most situations:

  • XAVC HS
  • Highest bit rate
  • 10-bit
  • 4:2:2 if your hardware supports it

And remember, if you want a quick reference you don’t have to think about, you can download my Sony Video Formats Cheat Sheet completely free.

If you want to learn more about cameras and filmmaking, consider subscribing, and as always, thanks for reading.

How To Build The ULTIMATE YouTube Desk For $500! (Complete Guide)

I just built the ULTIMATE YouTube desk!

It works for talking head videos, overhead product shots, horizontal content, and vertical video. Best of all, the entire setup cost right around $500.

In this post, I will walk you through exactly how I built it, the parts I used, and how you can build the same desk yourself.

Choosing the Standing Desk Frame

Every desk starts with two things: legs and a desktop.

For the legs, I wanted a standing desk frame that could support a very wide surface. My goal was a desktop at least 39 inches deep, which ruled out many budget frames.

After researching several options, I chose a VIVO standing desk frame from Amazon (FYI some of the links in this article are affiliate links to Amazon and I earn a commission from them). At the time, it retailed for around $200, but I picked it up during Prime Day for about $150. Pricing fluctuates, so be sure to confirm current pricing before purchasing.

The frame supports wide desktops and provides electronic height adjustment, which is essential for my workflow.

Why I Did Not Use a Prebuilt Desk

I have owned several standing desks over the years.

One of them is an Uplift Desk L desk that I use for video editing. It is excellent and very well built, and I have a full video dedicated to that setup.

The other desks I own are Husky work tables from Home Depot. These use a manual crank instead of electronics. They are extremely sturdy and significantly cheaper than motorized desks.

All of these desks use solid butcher block tops made from hevea or rubberwood. This material is durable, heavy, and far superior to particleboard.

The problem was depth.

Most desks are around 72 inches wide but only 24 inches deep. That works fine for typing, but it is limiting for overhead product shots, especially for vertical video.

The Kitchen Island Desktop Solution

My friend and fellow YouTuber Chris Brockhurst introduced me to a better solution.

Instead of using a standard desktop, he recommended using a kitchen island countertop. These are typically 72 inches wide and 39 inches deep, which is perfect for overhead filming.

Chris used an IKEA Karlby countertop, which is very popular. I tried one myself and ended up returning it.

While it looks great, it only has a thin layer of real wood on top. Underneath is particleboard, which does not hold up well to drilling, modifications, or long term use.

Why I Chose a Butcher Block Countertop

After more research, I purchased a butcher block kitchen island countertop from Lowe’s.

Both Lowe’s and Home Depot sell similar countertops in various wood types like oak, acacia, and rubberwood. Some are very expensive. The most affordable option is hevea or rubberwood, which I already had great experience with.

The Lowe’s countertop cost $310. Home Depot had a similar version under their house brand for about $50 less, but it was not available in store.

I strongly recommend buying in person if possible. You want to inspect the wood for dents, chips, discoloration, and excessive knots. What looks good on camera matters.

I had four slabs to choose from and picked one with minimal knots on one side, which became the tabletop surface.

One important note is that Lowe’s will not price match Home Depot on store brand countertops, even if they are nearly identical.

Also bring help. This countertop weighs about 100 pounds and is awkward to move.

Finishing the Butcher Block

Unlike IKEA countertops, this butcher block comes unfinished. That means you need to seal it.

I used Watco butcher block finishing oil in clear because lighter wood looks better on camera and helps products stand out.

You will also want staining rags or sponge applicators, rubber gloves, and sandpaper in 80 grit and 400 grit.

Sanding and Prepping the Wood

The countertop comes pre-sanded, but the edges are very sharp. You will want to sand those down.

I set the slab on sawhorses in my parents’ garage. You want a space that is relatively dust free and well ventilated because the oil smell is strong.

Using 80 grit sandpaper, round the edges and corners. Focus on the top and bottom edges. You do not need to sand the surface yet unless there is damage.

Next, lightly sand the surface with 400 grit sandpaper following the grain. This helps the oil soak in evenly.

Wipe everything down with a lightly damp cloth to remove dust.

Applying the First Coat of Oil

Stir the oil thoroughly and put on gloves.

Using the staining rag, apply oil with the grain across the top and sides. Go slowly and do not overload the wood.

Once applied, go back over the surface with a dry rag to remove excess oil. Pay close attention to drips along the edges, as these can darken if left alone.

Using a light source helps you spot bubbles or uneven areas.

After about 20 minutes, check again and wipe away any oil that has pooled. Let the first coat dry for at least six hours.

Second Coat and Final Sanding

Before applying the second coat, the surface may feel slightly sticky.

Lightly sand the top and sides with fine grit sandpaper, following the grain, until everything feels smooth.

Apply the second coat of oil the same way as the first. Wipe away excess oil and allow it to dry for another six hours.

If you find rough spots, lightly sand again. If sanding removes too much color, add another light coat of oil. The process is forgiving.

Once complete, the desktop is finished and ready to mount.

Attaching the Standing Desk Legs

Most standing desk frames are designed for one inch thick desktops. This butcher block is 1.5 inches thick.

The included screws will work, but you can buy longer ones if you want extra stability.

Place the desktop face down on a soft surface like carpet or a blanket.

Fully assemble the desk frame upside down on the slab before drilling. This makes alignment easier.

Measure carefully and mark all screw holes with a pencil. Drill pilot holes to prevent cracking, then screw the frame into place.

When flipping the desk upright, lift from the wood, not the legs. This prevents stress on the screws.

Final Thoughts on the Desk Build

I love how this desk turned out!

The matte finish reduces glare, the extra depth makes overhead filming easy, and the desk feels incredibly solid.

Thanks for reading and happy building.

iPhone 17 Pro Camera Test: Will It Overheat?!

When Apple releases a new iPhone, the big question is always the same. Is it actually better, or is it just a small refresh? With the iPhone 17 Pro and the iPhone 16 Pro, the differences are not dramatic on paper, but real-world use tells a more interesting story.

This comparison focuses on performance, heat management, battery behavior, and how these phones hold up when you actually push them.

Raw Performance Differences

Both phones are fast. There is no getting around that. Everyday tasks like messaging, browsing, and social media feel identical on the iPhone 17 Pro and the iPhone 16 Pro.

The difference shows up when you start stressing the phone. Things like recording long video clips, exporting footage, or running demanding apps back to back reveal a small but noticeable edge for the iPhone 17 Pro.

Apps load slightly quicker, and the phone feels more responsive under sustained use.

Heat and Thermal Management

Heat has been a concern with recent iPhones, especially for video shooters and creators. This is one area where the iPhone 17 Pro shows improvement.

During extended recording sessions, the 17 Pro stays cooler for longer. It still gets warm, but it takes more time to reach uncomfortable temperatures. The iPhone 16 Pro heats up faster when pushed hard, especially during 4K video recording or long camera sessions.

This matters if you use your phone professionally or rely on it for long shoots.

Battery Behavior Under Load

Battery life between the two phones is similar during light use. Texting, calls, and casual browsing do not show much difference.

Under heavy use, the iPhone 17 Pro pulls ahead. When recording video, navigating, and multitasking throughout the day, the 17 Pro drains more slowly and maintains performance better as the battery drops.

It is not a massive improvement, but it is consistent.

Camera Performance in Real Use

Image quality between the two phones is very close. Photos look sharp, colors are accurate, and Apple’s processing is still strong on both devices.

The difference comes in consistency. The iPhone 17 Pro handles challenging lighting a bit better, especially during longer video clips. Stabilization feels more reliable, and exposure changes are smoother.

For quick clips, social content, or professional backup footage, the 17 Pro is more dependable.

Everyday Experience

If you are using your phone casually, you may not notice much difference between these two models. The iPhone 16 Pro is still an excellent device and feels fast in almost every scenario.

If you regularly push your phone with video work, long recordings, or demanding apps, the iPhone 17 Pro feels more stable and better optimized.

This is a refinement upgrade, not a revolution.

Which One Should You Choose?

If you already own an iPhone 16 Pro, upgrading to the iPhone 17 Pro is not essential. The improvements are real, but they are incremental.

If you are upgrading from an older phone, or if you frequently deal with heat, battery drain, or performance slowdowns, the iPhone 17 Pro is the better long-term choice.

Final Verdict

Apple focused on polish with the iPhone 17 Pro. Better thermal management, slightly improved performance under load, and more consistent camera behavior make it the best version of this design so far.

The iPhone 16 Pro is still a great phone. The iPhone 17 Pro is simply more reliable when it matters most.

iPhone 17 Pro: A Filmmaker’s Review

The iPhone 17 Pro delivers a major jump in video quality in the three areas filmmakers care about most. Compared to the iPhone 15 Pro and 16 Pro, this is easily the best iPhone Apple has ever made for video.

In this post, I’m reviewing the iPhone 17 Pro specifically from the perspective of a working filmmaker who regularly uses an iPhone for both personal and professional video work. I’ll walk through the most important video upgrades Apple made and help you decide if it’s worth upgrading.

For transparency, this is not sponsored by Apple. I purchased this iPhone myself. This post is sponsored by my color presets, which work especially well with Apple Log footage and are linked below.

ProRes RAW Comes to iPhone

The biggest and most surprising upgrade to the iPhone 17 Pro is the addition of ProRes RAW video recording. This change affects nearly every other aspect of the camera system.

Since the iPhone 13 Pro, Apple has supported ProRes video. The iPhone 15 Pro added Apple Log. Now, with the 17 Pro and Pro Max, Apple finally supports ProRes RAW.

Until recently, ProRes RAW was mostly limited to cameras recording externally to supported monitors. That limitation largely existed because RED held patents around internal RAW recording. After Nikon acquired RED in 2024, we started seeing more devices support internal or semi-internal RAW recording, and now the iPhone joins that list.

Technically, the iPhone 17 Pro cannot record ProRes RAW internally. You must record to an external SSD. While that may sound disappointing, it makes sense. ProRes RAW files are massive.

You’re looking at over 6GB per minute of footage. Even with a 2TB iPhone, internal recording would fill up shockingly fast. External SSD recording is the only practical option here.
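To put that data rate in perspective, here's a rough back-of-the-envelope sketch (assuming the ~6GB-per-minute figure above; actual ProRes RAW bitrates vary with resolution, frame rate, and scene content):

```python
# Rough recording-capacity estimate for ProRes RAW at ~6 GB/min.
DATA_RATE_GB_PER_MIN = 6.0

def minutes_of_footage(drive_capacity_gb: float, usable_fraction: float = 0.9) -> float:
    """Approximate minutes of footage a drive can hold,
    leaving some headroom for filesystem overhead."""
    return (drive_capacity_gb * usable_fraction) / DATA_RATE_GB_PER_MIN

for size_gb in (1000, 2000, 4000):  # 1 TB, 2 TB, 4 TB SSDs
    print(f"{size_gb // 1000} TB SSD: ~{minutes_of_footage(size_gb):.0f} minutes")
```

Even a 2TB drive holds only around five hours at this rate, which is why an external SSD is the practical choice.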

ProRes RAW Quirks You Should Know About

When recording ProRes RAW, there are a few details worth keeping in mind.

First, while the Blackmagic Camera app reports 4K recording, it is actually capturing video in a 17:9 aspect ratio at 4224×2240. That’s slightly wider than standard 16:9 UHD.

Second, ProRes RAW maxes out at 4K 60fps. There is no 120fps option.

These quirks aren’t unusual. Many cameras that support ProRes RAW have specific resolution, aspect ratio, and frame rate limitations. Just plan for them when filming.
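To see how the ProRes RAW frame compares to standard UHD, here's a quick aspect-ratio check using the resolution figures quoted above:

```python
# Compare the Blackmagic Camera app's ProRes RAW frame to standard 16:9 UHD.
def aspect(width: int, height: int) -> float:
    """Aspect ratio expressed as width divided by height."""
    return width / height

prores_raw = aspect(4224, 2240)  # the "4K" ProRes RAW frame
uhd = aspect(3840, 2160)         # standard 16:9 UHD

print(f"ProRes RAW: {prores_raw:.3f}:1")  # roughly 17:9
print(f"UHD 16:9:   {uhd:.3f}:1")
```

The RAW frame works out to about 1.886:1 versus UHD's 1.778:1, so you'll either letterbox or crop slightly when delivering in 16:9.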

Apple Log 2 and Open Gate Recording

Alongside ProRes RAW, Apple announced Final Cut Camera 2.0, which introduces Open Gate recording and a new log profile called Apple Log 2.

Apple Log 2 offers a wider color space and improved flexibility when color grading. It is only available on the iPhone 17 Pro and Pro Max.

While Final Cut Camera 2.0 was not available at launch, the Blackmagic Camera app already supports both Open Gate and Apple Log 2.

Open Gate recording captures the full sensor in a 4:3 aspect ratio at 4032×3024. This is incredibly useful if you want to create both horizontal and vertical video from the same clip.

There are two downsides to Open Gate recording.

First, image stabilization is disabled. Since the phone is using the entire sensor, it no longer has extra pixels available for digital stabilization. You’ll want to use a tripod or stabilize in post.

Second, the Blackmagic Camera app does not currently allow Open Gate recording in Apple Log. You’re limited to Rec.709, which reduces grading flexibility. Hopefully, this is fixed in a future update.
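One way to think about the Open Gate workflow: from the 4032×3024 full-sensor frame, you can cut both a horizontal and a vertical delivery crop. Here's a small sketch of the largest crops available (dimensions assume the 4:3 resolution quoted above):

```python
# Largest 16:9 (horizontal) and 9:16 (vertical) crops from a 4:3 Open Gate frame.
FULL_W, FULL_H = 4032, 3024  # full-sensor Open Gate resolution

def largest_crop(full_w: int, full_h: int, ratio_w: int, ratio_h: int):
    """Largest crop with the given aspect ratio that fits inside the frame."""
    # Try using the full width first; fall back to full height if it doesn't fit.
    height = full_w * ratio_h // ratio_w
    if height <= full_h:
        return full_w, height
    return full_h * ratio_w // ratio_h, full_h

print("16:9 crop:", largest_crop(FULL_W, FULL_H, 16, 9))   # horizontal delivery
print("9:16 crop:", largest_crop(FULL_W, FULL_H, 9, 16))   # vertical delivery
```

Both crops come from the same recorded clip, which is exactly why Open Gate is so useful for creators publishing in multiple formats.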

Main Camera Image Quality

The main 48MP camera is unchanged from the iPhone 15 and 16 Pro. It uses the same sensor and lens coatings.

That means it still suffers from small but noticeable lens flares in darker environments or when filming into the sun. This remains my biggest complaint and something I hope Apple addresses in the iPhone 18.

That said, overall image quality is excellent. ProRes RAW does not magically add dynamic range, but it makes it easier to extract the maximum dynamic range the sensor already has.

This is especially useful when matching iPhone footage with cameras like the FX3, Z8, or ZR.

Why This Matters for Filmmakers

It’s wild how far iPhones have come!

Large cameras are often restricted in public spaces, concerts, museums, and venues that require media credentials. Meanwhile, the phone in your pocket can now record ProRes RAW and log video.

Apple already films many of its events on iPhone, and reports suggest more Apple TV productions are being filmed this way as well.

These phones won’t replace cinema cameras, but for run-and-gun filmmakers, they are becoming increasingly powerful tools.

Major Upgrade to the Front Camera

The selfie camera receives one of the biggest upgrades in years.

Apple finally replaced the aging 12MP sensor with a new 18MP square sensor. This allows the front camera to record both vertical and horizontal video without rotating the phone.

Holding the phone vertically while recording horizontal video feels strange at first, but it works incredibly well.

Video quality is noticeably improved. In the past, I avoided the front camera whenever possible. Now, I feel confident using it, even for professional content.

The larger sensor also improves stabilization, making handheld selfie footage look smoother while walking.

Dual Camera Recording Has Potential

The iPhone 17 Pro introduces the ability to record from the front and back cameras simultaneously.

This is useful for reaction-style videos and could be powerful for product reviews or behind-the-scenes content.

However, Apple currently bakes both angles into a single video file. Unlike Samsung's dual-recording mode, it does not save two separate clips.

A true pro mode with separate files would make this feature far more useful in editing.

Telephoto Camera Gets a Huge Upgrade

The third major video upgrade is the telephoto camera.

Apple upgraded it from 12MP to 48MP and increased the sensor size by 56 percent. This dramatically improves low-light performance, which was a major weakness on previous models.

The telephoto lens shifts from 5x (120mm equivalent) to 4x (100mm equivalent). Thanks to the higher resolution, you can digitally crop to a 200mm equivalent when needed.

Personally, I find the 4x focal length far more usable than the old 5x. It feels more versatile and easier to work with.
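A quick sanity check on that digital crop (assuming the 48MP and 100mm-equivalent figures quoted above): reaching a 200mm-equivalent field of view means cropping 2x in each dimension, which leaves a quarter of the pixels.

```python
# Effective resolution after a digital crop on the 48 MP telephoto.
BASE_MEGAPIXELS = 48.0
BASE_FOCAL_MM = 100.0  # 4x, 100mm equivalent

def cropped_megapixels(target_focal_mm: float) -> float:
    """Megapixels remaining after cropping to a longer equivalent focal length."""
    crop_factor = target_focal_mm / BASE_FOCAL_MM
    return BASE_MEGAPIXELS / (crop_factor ** 2)

print(f"200mm crop: ~{cropped_megapixels(200):.0f} MP")
```

A 2x crop still leaves roughly 12MP, which comfortably covers 4K delivery, and is why the 4x lens can plausibly stand in for the old 5x and beyond.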

Bonus Filmmaker Features

There are several smaller upgrades filmmakers will appreciate.

The new vapor chamber cooling and aluminum body help manage heat much better. In my testing, I was unable to trigger screen dimming due to overheating, even outdoors.

Screen brightness increases to 3000 nits, up from 2000 on the iPhone 16 Pro. Combined with improved anti-reflective coating, outdoor visibility is noticeably better.

The iPhone 17 Pro also adds Genlock and Timecode support. These are niche but powerful tools for multi-camera workflows, video walls, and advanced productions.

To use Genlock and Timecode, Blackmagic is releasing a Camera ProDock that adds HDMI monitoring, audio inputs, SSD support, power, and sync ports. Once it’s available, I’ll be testing it extensively.

Final Thoughts on the iPhone 17 Pro for Video

The iPhone 17 Pro delivers three major video upgrades: ProRes RAW, a dramatically improved selfie camera, and a vastly better telephoto lens.

Add in Open Gate recording, better cooling, brighter displays, and pro-level sync tools, and this is the most filmmaker-focused iPhone Apple has ever made.

Oh, and if you like the colors in the video above, it was graded with my color presets.

Thanks for reading, and I’ll see you in the next one.