
Are you ready? Rare ‘Blood Moon’ Total Lunar Eclipse Thursday night to Friday morning!

Sunday 9 March 2025

Get your sleep now because you might just be up all night later this week. If skies are clear, viewers across all of Canada and down through the United States, Central America and South America will be treated to a rare ‘Blood Moon’ Total Lunar Eclipse.

I do a lot of photography, but rarely have I ventured into night photography. I loved photographing the Total Solar Eclipse in April of 2024, but find that all too often, the night skies here in southern Ontario are either too bright or too cloudy for success.

Lunar Eclipse: March 13-14, 2025

A Lunar Eclipse is different. It happens when Earth’s shadow travels across the face of the Moon, turning it a deep orange-red colour. It is also a much slower process, taking about six hours from start to finish. That’s why I recommend getting your extra hours of sleep in now.

Here’s some background about the eclipse from Space.com. This article has some very specific timings and descriptions of what’s happening when.

Times in this image are for Eastern Daylight Time. (Image courtesy of NASA’s Scientific Visualization Studio.)

We’ll start with the timings. I pulled these times from TimeAndDate.com and did my best to confirm the times across Canada.

                     Pacific    Mtn        Sask       Central    Eastern    Atlantic   Nfld
Start of Penumbra    8:57pm     9:57pm     9:57pm     10:57pm    11:57pm    12:57am    1:27am
Start of Umbra       10:09pm    11:09pm    11:09pm    12:09am    1:09am     2:09am     2:39am
Start of Totality    11:26pm    12:26am    12:26am    1:26am     2:26am     3:26am     3:56am
End of Totality      12:31am    1:31am     1:31am     2:31am     3:31am     4:31am     5:01am
End of Umbra         1:48am     2:48am     2:48am     3:48am     4:48am     5:48am     6:18am
End of Penumbra      3:00am     4:00am     4:00am     5:00am     6:00am     7:00am     7:30am
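If your zone isn’t listed, every local time can be derived from a single UTC instant. Here’s a minimal Python sketch using the standard library’s zoneinfo module; the 06:26 UTC start of totality is taken from the table’s 2:26am Eastern entry, and the choice of one representative city per column is my own assumption:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; on Windows you may need `pip install tzdata`

# Start of totality, converted to UTC from the table's 2:26 a.m. EDT entry
totality_utc = datetime(2025, 3, 14, 6, 26, tzinfo=timezone.utc)

# One representative IANA zone per column in the table above
zones = {
    "Pacific": "America/Vancouver",
    "Mountain": "America/Edmonton",
    "Saskatchewan": "America/Regina",    # no DST, so it matches Mountain in March
    "Central": "America/Winnipeg",
    "Eastern": "America/Toronto",
    "Atlantic": "America/Halifax",
    "Newfoundland": "America/St_Johns",  # offset by an extra 30 minutes
}

for name, tz in zones.items():
    local = totality_utc.astimezone(ZoneInfo(tz))
    print(f"{name:13s} {local.strftime('%I:%M %p')}")
```

The same conversion works for any of the other five contact times once you express them in UTC.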

Next: equipment, composition, exposure and post-processing. You may want a shot showing the moon phases above a particular scene or landscape/cityscape, but you might also want a telephoto shot of the deep red of the moon at totality. If you have two cameras and two tripods, you could do both.

How to . . .

So, you want to photograph the Eclipse . . .

As I said, I am no expert in this field, so I have put together some resources to help you (Note: links below open in new tabs). But first, an overview from Gordon Laing:

A few key things to remember:

  1. Make sure you charge your phone or tablet and your camera batteries! You will likely be out for a few hours.
  2. Stay safe. This is happening overnight. Go with a friend, or at least let someone know where you are and when you expect to be back.
  3. Use a tripod. Your arms will thank you.
  4. Switch your camera to spot metering mode. The spot should be over the Moon.
  5. Bring and wear a small headlamp that can be set to Red/Night Vision. This will allow you to see without disrupting your night vision.
  6. Keep your shutter speed as close to 1/125 as possible by adjusting the ISO. At slower shutter speeds, the Moon will appear blurred; remember, you and the Moon are moving relative to each other. Even though that movement seems very, very slow, there is enough of it to demand a shutter speed as close to 1/125 as your ISO will allow.
  7. Be prepared to change your ISO as the Eclipse evolves. The Moon will grow dimmer and dimmer, yet it keeps moving, so you want to keep the shutter speed up. Remember, noise can be cleaned up in post-processing (see Raw File Optimization).
  8. Head out Wednesday evening to scout where you will be to get the shots you want. Seeing the location ahead of time and planning where the Moon will be during the eclipse will give you greater confidence for success on eclipse night.
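Points 6 and 7 above boil down to simple doubling arithmetic: each stop the Moon dims doubles the ISO needed to hold the same shutter speed and aperture. This is a rough sketch only, not metering advice. The baseline is the common ‘Looney 11’ rule of thumb (full Moon at roughly f/11, 1/125s, ISO 100), and the stop values and ISO ceiling below are illustrative assumptions; trust your camera’s meter on the night.

```python
def iso_to_hold_shutter(stops_dimmer, base_iso=100, max_iso=12800):
    """ISO needed to keep the same shutter speed and aperture as the Moon dims.

    Each stop of dimming doubles the required ISO. The baseline is an assumed
    full-Moon exposure of roughly f/11, 1/125 s, ISO 100 (the 'Looney 11'
    rule of thumb); max_iso is an arbitrary ceiling for your camera.
    """
    return min(base_iso * 2 ** stops_dimmer, max_iso)

# The partially eclipsed Moon may be a few stops dimmer; totality can be
# ten stops or more below full Moon, which quickly exceeds any usable ISO.
for stops in (0, 3, 6, 10):
    print(f"{stops:2d} stops dimmer -> ISO {iso_to_hold_shutter(stops)}")
```

Once the ISO ceiling is hit, every further stop of dimming costs a stop of shutter speed, which is why totality shots tend to be made at slower speeds despite the blur risk.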
It seems we’re in for light cloud in southern Ontario on Friday at 1am; at least, that’s what Windy is predicting.

Check the weather

I’m not sure where I’ll go for this. Much depends on how clear the sky forecast is. Here are some sites to check:

There are apps for that

An app you may find helpful is PhotoPills. It’s free to download and check out before forking over any cash. I have not made much use of it other than for planning for the Total Solar Eclipse last year. However, I see they have a very good YouTube video to help you get started. I also noticed on the PhotoPills website a number of free downloadable “Guides to Photographing . . .”, available once you’ve provided your email. They have a free 143-page Moon Photography: The Definitive Guide (2024) and, most specifically, a 108-page Lunar Eclipses 2025: The Definitive Photography Guide. To get your guide, go to PhotoPills.com > Academy > Articles, where you will see a long list of very helpful guides for a number of outdoor shooting situations.

The other app I use is The Photographer’s Ephemeris (TPE). It is available both as a native app (desktop/iOS) and as a web app. I find TPE much easier to use than PhotoPills, and photographer John Pelletier seems to agree in his 2020 comparison.

So, are you ready? The countdown is on: just four days to go! Good luck, and here’s hoping for clear skies!

Thanks for reading! If you have any questions, comments, or discussion about the upcoming Lunar Eclipse, be sure to add a comment.


This work is copyright ©2025 Terry A. McDonald
and may not be reproduced in whole or in part without the written consent of the author.

Please SHARE this with other photographers or with your camera club, and consider subscribing to receive an email notice of new blogs.

Have a look at my work by visiting www.luxBorealis.com and consider booking a presentation or workshop for your Nature or Photo Club or a personal Field & Screen workshop at Workshops.

Enter your email address to subscribe to this blog and receive notifications of new posts by email.

New Lens! 9mm/1.7

Saturday 8 March 2025

Astrophotography, here I come!

For years, I was a prime lens kind of guy. But with the high optical and build quality of the OM System M.Zuiko lenses, their zooms especially, I was thrilled to be able to collapse my kit down to three zooms covering every focal length from 8mm to 400mm (16mm to 800mm in full-frame equivalents): the 8-25mm/4 PRO, the 12-100mm/4 PRO and the 100-400mm/5-6.3 IS. I added the 60mm Macro as a specialty lens, much as I’ve just done by purchasing the 9mm/1.7 prime lens.

ƒ4 has never slowed me down. Certainly the OM System zooms are sharp wide open, something I did not experience with my Nikkor zooms. Shooting in low light, in dim cathedrals and at the edge of light out in the field, I’ve always found that the OM’s IBIS covers slow shutter speeds and, when needed, I could bump the ISO. Any noise issues are effectively eliminated with Lightroom’s Enhanced Noise Reduction (see my review of the various Raw File Optimization treatments from earlier this year). So why a faster lens?

From what I’ve read and through my earlier attempts at Astrophotography, I’ve learned ƒ4 just doesn’t cut it.

Milky Way over George Lake, Killarney Provincial Park, Ontario.
Nikon D800E w/ 18-35mm at 18mm; ƒ4 @ 15 seconds; ISO 3200.

Astrophotography Backgrounder

Last month I attended an excellent webinar sponsored by OM System (the YouTube video is available HERE). While much of it dealt with winter photography, photographer Peter Baumgarten also discussed astrophotography. As it turns out, the Milky Way begins to be visible in the southern Ontario night sky in late February and stays around right through the summer.

I have a lot of respect for Peter. He is a dynamic and very creative photographer, recognized as an Olympus Visionary/Ambassador. OM System includes his instructions for astrophotography in an online article, Astrophotography 101. I plan on having it up on my phone as I venture into this new realm of seeing. For added inspiration, I recommend visiting Peter’s website at CreativeIslandPhoto.com. You should also check out Landsby’s Guide To Stargazing & Aurora Viewing In Ontario and the Ontario Parks Blog. Ontario has a few ‘Dark Sky’ areas that will provide the best viewing conditions. The RASC has a map showing dark sky preserves across Canada and there’s also DarkSiteFinder.com (who just reminded me of the Lunar Eclipse this month).

Stars over the Rideau, Ontario
Olympus OM-1 w/ M.Zuiko 8-25mm PRO at 8mm (16mm efov); ƒ4 @ 20 seconds, ISO 400

How did I decide on the 9mm Summilux?

It was helpful that DPReview did some of the legwork for me. In a 2023 article, they compared four top ultra-wide primes for M43 that fit the astrophotography niche:

  • Laowa 7.5mm F2;
  • Meike 8mm F2.8;
  • Panasonic Leica DG Summilux 9mm F1.7 ASPH; and the
  • Samyang 10mm F2.8 ED AS NCS CS.

Although the P-L 9mm is the most expensive of the four (tied with the Laowa), it is also head-and-shoulders above the rest in image quality AND it has autofocus. ‘So?’ you ask. ‘What’s the big deal about AF for stars? Do you need AF for astrophotography? Does it even work?’ Surprise! OM camera bodies have a wonderful feature called Starry Sky AF! Have you ever tried focussing on stars? With Starry Sky AF, there is no more guesswork or peering through magnified viewfinders to nail down focus. It’s a great feature!

Additional reading from Amateur Photographer and Photography Life as well as some M43 forum discussions helped to validate my decision, so I ordered the lens.

First Impressions of the P-L 9mm/1.7

I was thrilled that Camera Canada had the lens in stock and was able to ship it at no extra charge, with next day delivery. Talk about service! I have had nothing but excellent service from Camera Canada and can highly recommend them. They are based in London, Ontario, with their two ‘bricks-and-mortar’ locations operating as Forest City Image Centre. It’s the best of all worlds: Canadian-owned small business with online convenience, great pricing, and excellent service.

However, upon opening the box and holding the lens, I must admit to feeling a little underwhelmed, even disappointed, by the feel of the lens itself. Next to my OM System M.Zuiko lenses, the Panasonic-Leica seemed, umm, in a word, cheap—not inexpensive cheap, but with a cheap feel to it. In all fairness, nothing rattles, the focus ring is smooth, and it attaches to the camera snugly—all good things. The lens is also diminutive, which I appreciate, and the polycarbonate lens body is certainly feather-light. But the lens does not exude the solid build quality, the ‘heft’ and feel, of my OM System lenses. Even the plastic used in the lens hood doesn’t feel as robust as the lens hoods of my M.Zuikos. To look at it, pick it up and feel it, the 9mm is clearly Leica in name only. But perhaps I’m not being fair; it may well be Leica-quality in optics, which is the most important thing, but that remains to be tested.

The Panasonic-Leica 9mm/1.7 ASPH on my OM-1

So why didn’t I purchase the OM System M.Zuiko equivalent? Simple. There isn’t one. And worse, it’s not on their Lens Road Map. Why? Why? Why? OM System makes superlative, industry-leading, sharp, fast primes—why not at 9mm or 10mm? Both 18mm and 20mm are such common focal lengths amongst the serious FF crowd. I loved my Nikkor 20mm/2. But M.Zuiko primes skip right from the 8mm/1.8 PRO Fisheye to the 12mm/2. Both are excellent lenses, but I didn’t want a fisheye, and 12mm is too narrow for the kind of coverage I wanted for astrophotography. OM System does offer the excellent M.Zuiko 7-14mm/2.8 PRO zoom, but it is big, it’s bulky, and it overlaps my existing and more useful 8-25mm zoom range. And, at $1550, the 7-14mm is also beyond my means.

So, the 9mm it is, and the proof, they say, is in the pudding. Bring on the clear nights! 3am alarm, here I come!

Thanks for reading! If you have any questions, comments, or discussion about M.Zuiko lenses, the OM-1 or the Panasonic-Leica 9mm/1.7, be sure to add a comment.



Navigating the AI Juggernaut—A Photographer’s Perspective

Tuesday 25 February 2025

AssistiveAI, GenAI, Internal, External—Where’s the Authenticity?

NOTE: This article was published simultaneously on Luminous-Landscape.com.

AI is not photography! Or is it?

Let’s get one thing out of the way right off the top: the argument regarding AI should not be about ‘honesty’. Photography lost that battle decades ago. It has never truly been ‘honest’, though many still perceive it as so. Take a picture of your family and you get a reasonable facsimile on screen or in a print—complete with goofy looks, hair sticking up and that spare tire you’re carrying around.

But photos have been faked forever, and it’s not just Bigfoot and the Loch Ness Monster. From Abraham Lincoln’s portrait to National Geographic’s Pyramids cover to World Press Photo award winners, hoaxes, misrepresentations and alterations have existed for almost as long as the medium itself. Have a look at The Hoax Museum Photo Archive and you’ll get the idea.

So, AI doesn’t really change things, or does it?

Maybe we need to reframe how we think of AI. To my mind,

AI is to authentic photography as ultra-processed foods are to real food.

Ultra-processing reduces costs and makes life more convenient, but a regular diet of it makes us lazy and less healthy.

AI is much the same. AI reduces costs and makes life more convenient. And, yes, it will also make photographers lazy and possibly less healthy.

Any foodie will understand the difference between real food and the ultra-processed stuff that has crept into our grocery carts. Think about all those artificial ingredients and the unnecessary fats, sugars and starches added to food to make you crave more of it. To the average consumer, ultra-processed food (UPF) is convenient and it tastes great. The fact that it leads to health problems, such as obesity and diabetes, doesn’t seem to matter to most people. AI in photography is much the same.

Assistive AI

To be fair, not all AI is a problem in photography. Various types of Assistive AI are commonplace. Most phone and camera manufacturers have incorporated Assistive AI and machine learning in some way, most often for AF-assist, but also for in-camera processing and up-scaling. Algorithms such as Adobe’s Sensei help you find photos in Lightroom without keywording. Other apps analyze photos and suggest keywords. Assistive AI also streamlines many tasks, like creating background and subject masks. And, for high-volume photographers of sports teams, events, grad photos, etc., it’s a real time-saver. But Assistive AI is not what people are concerned about. There is another aspect of AI that is raising concerns.

Generative AI

The difference with GenAI is that it creates new pixels in an image. But even GenAI comes in two flavours: internal and external. Internal GenAI analyzes the pixels within the image, then uses them to create pixels, kind of like cloning but more automated and on steroids. On the other hand, External GenAI uses AI algorithms to create pixels entirely new to the image, taking them from third-party photographs—photos that are not your own.

Male Waterbuck, Tarangire National Park, Tanzania
illustrating different uses of AI—only External GenAI introduces new pixels from third-party images.

You’ll know this from Photoshop’s Generative Fill. It uses pixels from other photographers’ work to fill in backgrounds and skies, to remove objects, and to fill in gaps and edges. You may have noticed ‘Generative Credits’ as part of your subscription. Those are to ‘pay for’ External GenAI services. Topaz Labs clearly labels and promotes their PhotoAI app as one that will generate pixels to fill in missing details. They also offer the option of using Cloud Rendering, for which they will sell you credits. And Luminar Neo boasts its GenErase, GenSwap and GenExpand. 
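The internal/external distinction can be captured in a small lookup table. This is a toy sketch: the classifications below simply restate how the tools are described above, the labels are illustrative, and vendors change how features work, so verify against current documentation (and competition rules) before relying on anything like this.

```python
# Classification of editing tools by where their generated pixels come from.
# 'internal' = new pixels computed only from the image's own pixels;
# 'external' = pixels derived from a model trained on third-party photographs.
# These labels follow the descriptions in the text and are illustrative only.
GENAI_KIND = {
    "Photoshop Generative Fill": "external",
    "Luminar Neo GenSwap": "external",
    "Lightroom Enhance (Denoise)": "internal",
    "Distraction Removal - Reflections": "internal",
    "Neural Filter: Colorize": "internal",
    "Neural Filter: Back Drop": "external",
}

def adds_third_party_pixels(tool: str) -> bool:
    """True if the tool (per the labels above) introduces pixels
    sourced from outside the photograph itself."""
    return GENAI_KIND.get(tool, "unknown") == "external"

print(adds_third_party_pixels("Photoshop Generative Fill"))
```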

With GenAI creeping into our processing, photographers need to be aware of exactly what their tools are doing and how GenAI is working, especially when selling work, copyrighting it, or entering photo contests and competitions. In most cases, especially for personal use, it’s no problem at all—full GenAI ahead. But for other uses, it’s important to know which GenAI is permitted and which is not.

Part of the confusion lies with a lack of clarity in labelling. As mentioned above, Internal GenAI assesses the pixels within an image to make adjustments. You may be familiar with Adobe’s new ‘Distraction Removal – Reflections’ option. It works entirely internally, by analyzing existing pixels using internal AI algorithms, so it is not introducing any new pixels to the image. Many Denoise and Sharpen adjustments work internally as well, for example DxO Pure Raw and Lightroom’s ‘Enhance’ feature. Adobe’s Neural Filters are another example, but only some of them use Internal GenAI, for example, ‘Colorize’, ‘JPEG Artefact Removal’, and ‘Skin Smoothing’. So while all of this is GenAI, it is not using third-party images, so they are all ‘safe’ to use.

But is it ‘yours’?

The confusion deepens: some of Adobe’s Neural Filters use External GenAI, introducing entirely new pixels to your photograph based on the works of other photographers. These include ‘Back Drop’, ‘Deep Blur’ and ‘Make-up Transfer’, to name a few.

So, the question arises, if you drop in a background taken from another photographer’s photo** is it still ‘your’ photo? After all, the new pixels aren’t your pixels—it was all done by computer algorithms based on photos you didn’t make. Copyright is based on ‘original works’ created by a ‘human author’. Is it original? Is it even human? ‘But—’ you might say, ‘it was my idea, my concept, right?’ Unfortunately, ideas can’t be copyrighted, only the physical expression of an idea.

Which one is the ‘real’, original photo?
Three of the four photos have a background replaced by Adobe Express. Of course, the hair and lighting are the usual give-aways, but this was done for FREE, taking only 20 seconds per image and with no post-processing. Imagine what could be done with high-end AI generators in the hands of people who know what they are doing.

**BTW: This reality of ‘taken from another photographer’s photo’ opens up another hornet’s nest of controversy. GenAI has to ‘learn’ and it needs a ‘pool’ of images to work with and ‘borrow’ from. Programmers have used photos from all over the web to do that, usually without the direct permission of photographers. In the wild west of fine print in user agreements, your photos are likely part of that learning (LINK). 

Look familiar?
Grand Tetons from Snake River Overlook.
This 2048x2048px image was generated by Adobe Firefly in 15 seconds, using the command “black-and-white photo of the grand tetons from snake river overlook”. From there it was cropped to a 4×5 aspect ratio.
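The square-to-4×5 crop mentioned above is simple arithmetic. Here is a short, generic sketch; the function name and rounding are my own, and the box format (left, top, right, bottom in pixels) matches what an image library such as Pillow expects for its crop call:

```python
def center_crop_box(width, height, aspect_w, aspect_h):
    """Return (left, top, right, bottom) for the largest centred crop
    with the given aspect ratio, e.g. for PIL's Image.crop()."""
    target = aspect_w / aspect_h
    if width / height > target:          # too wide: trim the sides
        new_w = round(height * target)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    new_h = round(width / target)        # too tall: trim top and bottom
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

# The Firefly square, cropped to 4x5 portrait:
print(center_crop_box(2048, 2048, 4, 5))  # -> (205, 0, 1843, 2048)
```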

If the image you produce isn’t entirely yours, nor was it generated completely by you, should you be taking the credit for it? Are you faking ownership if you put it forth as ‘your’ work? In one sense, it’s similar to ‘Made in Canada’ versus ‘Product of Canada’: ‘Made in’ only needs 51% of the content to be Canadian; ‘Product of’ requires 98%.

Recently, the US Copyright Office has tried to clarify things. Interestingly, a hundred years ago, photographs themselves had to battle to be accepted for copyright. At the time, many felt that because photographs were machine-made, they should not be. Now that we’re long over that hurdle, it seems the human touch remains a deciding factor. A human-made photograph is copyrightable; a machine-generated image, even if made using human commands, is not. The jury is still out on photographs that use GenAI to create parts of the image. Time will tell.

Incongruities

Photographer, Yosemite Valley.
Image generated by Adobe Firefly in less than 15 seconds. While far from perfect, imagine what could be done with more time and a high-end GenAI suite.

Then there is the visual impact. There has always been poor photography and there are times when GenAI doesn’t quite hit the mark. And worse, there are instances when the user thinks, “Wow! This looks amazing!”, but others can see right through the attempt at making something from nothing. Remember the HDR craze?

I notice incongruities most often with lighting and colour alterations: while the sky is a bright early-morning or late-evening orange, the gloss on the bird’s feathers lacks that warmth. Nonconformities also show up in over-sharpened, crisp, clear fur and feathers that are far more perfect than one would ever see in nature. I call them hyper-realistic. Another common AI error, and one often missed, is shadows in the wrong place, or shadows with nothing there to cast them.

User Beware

Perhaps the biggest problem with GenAI involves entering photos in contests or competitions. Photographers must read the fine print. You must be aware of which AI tools are ‘internal’ and which are sourcing pixels from elsewhere. Competition organizations, especially those dealing with nature photography, are very strict on what processes can and cannot be used. 

Case in point: the Canadian Association of Photographic Arts (CAPA) has gone to great lengths to create a detailed, 15-page document (LINK) which specifies exactly what tools can and cannot be used within a number of commonly-used editing apps. For example, with Lightroom, you may use the ‘Remove’ tool, but not with ‘Use Generative AI’ enabled. With Photoshop, you may use some Neural Filters, but not all, as pointed out above.

But then you may think—how can the judges tell? That depends on three things:

  1. how honest the photographer is;
  2. how realistically the AI is applied, and
  3. how closely the photos are scrutinized.

See this Audubon article about AI in nature photographs, and this CNET article to test your powers of discernment. Nowadays, before winning photographs are announced, each photographer must submit their original JPEG or raw image file for comparison to the submitted image. If the judges are suspicious, they can and will revoke the award, as happened with a Wildlife Photographer of the Year award winner in 2017. (LINK)

“The photographer, Marcio Cabral, denies he faked the scene and claims there is a witness who was with him on the day.” Still, his photo was checked by five independent scientists and all came to the same conclusion: a stuffed anteater from a nearby lodge and the anteater in the photo were one and the same. The award was revoked. In this case it wasn’t AI that was used, but AI can easily be used in the same way.

Authentic Photography

Moraine Lake, Banff National Park
2048x2048px image generated by Adobe Firefly in about a minute.

To me, it’s all about authenticity. If any part or pixel of a photograph has been generated using External GenAI, one needs to ask: Is it still a legitimate photograph, or should it be deemed something else, such as digital art? CAPA uses the definition: “a captured image on a light-sensitive device (e.g. film camera, digital camera, smartphone, tablet, etc…) and recorded on film or in a digital format.” (LINK) If the pixels originated from another image, e.g. by using External GenAI, then it is not permitted. Is it still a photograph? Perhaps, but not an ‘authentic photograph’.

Moraine Lake, Banff National Park
The real thing—an authentic photograph by the author, which was NOT used as a reference image for the AI image above!

A definitive definition of a photograph is difficult at best, and likely irrelevant. Historically, we’ve accepted that photographers had their prints and negatives retouched, thereby changing the original capture. Is Lightroom’s Remove tool any different? It is when a GenAI algorithm introduces new pixels from another photographer’s work.

Perhaps we need to become more declarative, as in ‘This is an authentic photograph made entirely by the photographer’ or simply, ‘No part of this image was generated using External GenAI’. This is similar to the growing pressure for clear labelling of ultra-processed food. Perhaps when GenAI is used in a photograph, it needs to be labelled as such, which is beginning to happen, e.g. in Facebook and Instagram.
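Machine-readable labelling already exists: the IPTC digital-source-type vocabulary defines a ‘trainedAlgorithmicMedia’ value that generators can embed in an image’s metadata, and platform auto-labels draw on such metadata. As a rough illustration only (a real workflow would parse XMP or C2PA manifests with a proper library rather than scanning raw bytes), a crude check might look like:

```python
def looks_ai_labelled(path):
    """Naive check for the IPTC 'trainedAlgorithmicMedia' digital-source-type
    marker in a file's embedded metadata. A byte scan is crude and can miss
    or misread labels -- it only illustrates that the label is ordinary,
    machine-readable metadata, not an invisible watermark.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"trainedAlgorithmicMedia" in data
```

Such a label travels with the file only as long as exports and screenshots preserve metadata, which is one reason clear, human-readable declarations still matter.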

This may well make room for an ‘Authentic Photography’ movement, similar to the ‘Real Food Movement’. Note: this is different from the Straight-out-of-the-Camera (SOOC) folks, who renounce editing of any kind. There needs to be space, a distinction, for those who choose not to use External GenAI in their workflow. Along those lines, Radiant Image Labs has declared that their software, Radiant Photo 2, uses only Assistive AI, not Generative AI, and they have committed themselves to authenticity in photography. There is some talk of Serif Affinity going the same route. One may think, ‘That’s economic suicide—everyone is going AI.’ Not true. Niche marketing is alive and well.

Will ‘authentic photography’ become a niche medium, one that may earn a premium? Again, time will tell. Radio was deemed dead once TV became popular, but radio is still with us, and despite the onslaught of digital music, vinyl LPs are still being pressed, and, of course, gelatin silver prints are still made along with palladium prints, cyanotypes, etc.

Image generated in Adobe Firefly in less than 30 seconds from the comfort of my own home.
Goðafoss, Iceland in winter
An authentic photograph.
How much travel photography will end up being AI-generated?
The same authentic photograph with Radiant Photo 2 applied—using Internal GenAI for improvements.

AI Photography

So, as photographers, where does AI leave us? Will we become flabby and suffer ill effects from Generative AI just as we do with ultra-processed food? Yes. And no. Generative AI is convenient, and convenience breeds laziness. Many in our society are far less healthy simply due to the convenience of ultra-processed foods combined with the inactivity spawned by TV remotes, drive-thrus, and cars. The evidence for reduced health in car-oriented societies is conclusive.

Will photographers become lazy in their pursuit and execution of photos simply because they can generate what they need with GenAI? Consider this: whatever lousy sky they get in a spur-of-the-moment shot can be replaced. Some would argue, ‘Why get up at stupid o’clock in the morning when I can adjust the lighting and colour effects of whatever photo I take to turn it into a Golden Hour beauty?’ As Luminar Neo tells us, “Twilight photos without waiting for the magic hour.”

As many point out, ‘We’ve been Photoshopping out imperfections for decades—what’s the big deal?’ ‘Besides, who would ever know?’ They have a point. The general public isn’t very discerning and much of commercial photography, especially in media and advertising, is throw-away—used once and gone. It’s the look that counts, not how you produce it. Isn’t it?

AI is here to stay. In fact it’s becoming better and better at creating photo-realistic images. Did you look at those photos from CNET? Already, you can give plain-English descriptive commands to software, such as Midjourney and Adobe Firefly, to generate images that can be further tweaked as you wish. And this is just the beginning. The time is not far off when the result will be fully photo-realistic—client- or printer-ready images at the specified output resolution. Photographers may well be out of a job simply because someone with better language skills will be doing a more efficient and effective job with AI. And, the lighting, the mood, the whole feel of the image can be changed with a few clicks. Art Directors will have all the creative freedom and control they’ve always sought.

The latest fashion, right from Paris . . .
. . . or Rome
Each photo was generated in Adobe Firefly in about 10 seconds.

It’s all down to economics

As always, the bottom line is economic. In an era of fast fashion and 24-hour news cycles, the convenience of GenAI has the potential to reduce production costs. The economics of GenAI images produced on the spot, without the time and cost of hiring photographers and models, will make GenAI photos commonplace in media and advertising, a cost-saving that simply can’t and won’t be passed up. Photographers may still be needed to photograph the item, but after that?

Imagine car ads produced by AI. Who needs the complications and costs of photo shoots that are so dependent on, for example, the weather? Time is money, and it’s far cheaper to drop your make and model into an AI image. Is it realistic? Who cares? Advertising today is about creating an image, and isn’t that what AI is all about?

Will AI photography end up like ultra-processed foods—a quick fix for the masses? Definitely, yes. It’s already built into phone cameras. Serious photographers will continue to be more discerning. However, most consumers couldn’t care less whether what they’re looking at is real or AI’d. It may sound flippant, but it’s true. And, let’s face it, for many run-of-the-mill commercial images, it wouldn’t matter one way or the other. Whatever looks good, right?

But also, no. There will always be room for those who appreciate story-telling by humans, and the art and craft of making fine, authentic photographs. Authentic photography will likely become niche, like vinyl records and gelatin silver prints, but it will still exist.

In the meantime, it comes down to, ‘photographer beware’. Choose and use your tools wisely. Use GenAI all you want, but, if you plan to submit photos for contests or competitions, or simply want to work within the limitations of authentic photography, then be careful of the apps you edit with and the tools you make use of.

Thanks for reading! If you have any questions about AI and Authentic Photography, be sure to add a comment.



OM System launches a new, more ‘urban’, creative camera body

Thursday 6 February 2025

The OM System OM-3: it’s the talk of the town today! A camera reminiscent of the Olympus OM-3 from the 1980s and ’90s, yet oh so much more capable! Reading through the tech data, I’m impressed at how OM is no longer hiding computational photography, art filters and video settings in the menus; they have put them front and centre in this cutting-edge camera. An excellent move!

I’m not going to write anything more about this beyond stating my pleasant surprise. I recommend you read Peter Baumgarten’s excellent overview. He has had it in his hands and has been shooting with it for some weeks now. He knows the OM System inside and out, so he can give a much more informed judgement of the camera. You can see his post at Creative Island Photography. He is a very creatively-minded photographer, always thinking outside the box, and an Olympus Ambassador who lives up on Manitoulin Island. The photos included in his review are worth the visit to his blog.

Raw File Optimization

Wednesday 5 February 2025

What’s the best app to demosaic, denoise and sharpen your raw files?

NOTE: This article first appeared on Luminous-Landscape.com. It is reproduced below in its entirety.

We are in the Golden Age of Photography, with sensors fine-tuned for low and high ISOs producing pro-quality images for printing and publication. I'm always pleasantly surprised—and, thinking back to my film days, a little shocked!—by the quality we can extract from raw files. When combined with software algorithms for demosaicing, denoising and sharpening, well, as I said, it's the Golden Age of Photography.

We shoot raw files to extract as much information from a scene as is technically possible. Photographers choose raw capture because they place a higher value on quality, legacy and individual vision than on 'ready-made' machine JPEGs, compressed and sharpened. Over the decades I've been working in digital, I have yet to meet an image file that didn't benefit from editing, and raw is the place to start. But are we getting the most from our raw files?

I've been using Lightroom for as long as it has existed, and Photoshop before that. Naturally, I'm curious: is Lightroom extracting all the data it can and optimizing it to provide the highest image quality possible from my sensor? I've spent a lot of time, effort and money to get that raw file, so I want to ensure I'm getting the most bang for my buck. The only way to know is through testing.

This is the first in a series of investigations examining how to extract the highest quality possible from the M43 20.1-megapixel OM-1 sensor. At 5184x3888 pixels, the sensor is ideal for virtually any use of photos, right up to two-page spreads in photo books and magazines and fine art prints as large as you need (see Finding the Sweet Spot in Photography).
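To put those sensor numbers in context, here is a quick back-of-the-envelope sketch of the megapixel count and maximum native print size. The 300ppi figure is my assumption of a typical fine-art printing resolution, not a value from the article:

```python
# Back-of-the-envelope print-size math for the OM-1's 5184x3888 sensor.
# Maximum native print size at a given resolution is simply pixels / ppi.

SENSOR_W, SENSOR_H = 5184, 3888  # OM-1 native resolution in pixels
PPI = 300                        # assumed fine-art print resolution

megapixels = SENSOR_W * SENSOR_H / 1_000_000
print_w_in = SENSOR_W / PPI
print_h_in = SENSOR_H / PPI

print(f"{megapixels:.1f}mp")                      # → 20.2mp
print(f'{print_w_in:.1f}" x {print_h_in:.1f}"')   # → 17.3" x 13.0"
```

In other words, the sensor natively supports prints of roughly 13×17" at 300ppi before any up-sizing, which is why the larger output sizes tested below are realistic targets.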

Method

I've subjected each of six images to six different treatments. Each result is output at three commonly-used sizes: (1) for web use; (2) for 4K TV use and smaller prints; and (3) for larger prints and publication. The images were selected to push the sensor while representing different styles of my photography: landscapes shot during the day and evening; travel photography; and birds and wildlife.

Photos

1. Meru Forest is a highly detailed landscape, using the full 5184x3888px frame, made in the deep shade of the cloud forest on the slopes of Mount Meru, Tanzania. I used an M.Zuiko 12-100mm ƒ4 PRO IS lens at 35mm (70mm efov), handheld at ƒ5.6 @ ⅓ sec., ISO 200. Thanks to the OM-1's excellent integrated IBIS plus the IS of the lens, the slow shutter speed has not resulted in any loss of detail from camera movement, nor is there any foliage movement.

Meru Forest, ISO 200

2. Kilimanjaro Blues is a blue-hour landscape using the full 5184x3888px frame. It was made using the same 12-100 zoom, handheld at 100mm (200mm efov), ƒ5.6 @ ⅓ sec., ISO 800. Note: the same ƒ5.6 @ ⅓ sec. exposure as above is not a typo; it just worked out that way! You might scoff at using a photo taken at ⅓ sec., but I rarely shoot landscapes in 'perfect' light, and the photo is sharp, edge to edge. This and the Meru Forest photo are also two of the few landscapes I made without using Handheld High Res mode.

Kilimanjaro Blues, ISO 800

3. Sunset Vigil, Lion, Tarangire National Park, Tanzania is a highly detailed shot made with an M.Zuiko 100-400mm ƒ5-6.3 IS at 292mm (584mm efov), handheld at ƒ8 @ 1/80, ISO 3200. The fur, whiskers, eye and teeth are tack sharp, despite it being a centre crop of 2741x3655px from a horizontal frame. ISO 3200 should put the apps to the test: smoothing out noise without losing fine detail.

Sunset Vigil, ISO 3200

4. Grey Catbird, Ontario. Feather detail has always been mission-critical for wildlife photographers. This was made with the same 100-400 at 400mm (800mm efov), ƒ8 @ 1/200, ISO 3200, and is a 3152x4203px crop from a vertical frame. According to the internet pundits, the ones who disregard M43, these two ISO 3200 images should suffer from dreadful noise and loss of detail.

Grey Catbird, ISO 3200

Additional Comparisons:

I hadn't intended to test higher ISOs, as I rarely shoot above 3200. But, given the results from the ISO 3200 photos, I felt I should give high ISO a shot, if only for a sense of completeness. I will discuss the results of these two shots separately, after discussing the other four together.

5. Junior (Immature Northern Cardinal) is an ISO 6400 file with lots of feather detail. It is one of the few shots I've made with an ISO that high, simply because I rarely need it and, being an old film guy, I was sceptical of the quality of high ISOs. This is a 2538x3384px vertical crop, made with the 100-400 at 400mm (800mm efov), ƒ8 @ 1/320.

Junior, ISO 6400

6.  Dad (male Northern Cardinal). This ’grab shot’ would not normally make the cut. Taken in the deep shade of our back yard, the Cardinal is moulting and looks plain without his crest. He is also perched awkwardly on the bird feeder stand. However, it is one of the few shots I’ve made at ISO 12800. I used the 100-400 at 400mm (800mm efov); ƒ8 @ 1/640.

Dad (male Northern Cardinal), ISO 12800
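The 'efov' (equivalent field of view) values quoted in the photo descriptions above all follow from the Micro Four Thirds 2x crop factor relative to full frame. A minimal sketch of that arithmetic:

```python
# M43 sensors have a 2x crop factor: the full-frame equivalent field of
# view (efov) is simply the lens focal length doubled.

CROP_FACTOR = 2.0  # Micro Four Thirds vs full frame

def efov(focal_length_mm: float) -> float:
    """Full-frame equivalent focal length for an M43 lens setting."""
    return focal_length_mm * CROP_FACTOR

# The focal lengths used in the six photos above:
for fl in (35, 100, 292, 400):
    print(f"{fl}mm -> {efov(fl):.0f}mm efov")
# 35mm -> 70mm, 100mm -> 200mm, 292mm -> 584mm, 400mm -> 800mm
```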

Raw Treatments

The treatments, a Lightroom baseline plus five raw-optimization apps, are all commonly available, industry-standard tools for raw file demosaicing, denoising and detail sharpening that are easily integrated into a Lightroom workflow. All processing was done on an M1 16" MacBook Pro with 16GB of RAM.

  • LrBase are raw files processed normally through Lightroom CC (v8.1). I edit only enough to breathe life back into the machine image—to re-create my experience and my vision in the field. This changes neither the original intent of the image nor its natural feel. I work with the pixels the camera captures, so there are no dropped-in backgrounds, skies or subjects. Adjustments are made as needed to Exposure, Contrast, White and Black Points, Highlights and Shadows as well as Colour and Tint. Additionally, I add Adjustment Masks to shape the light. Detail Sharpening is typically set to 60.

Each of the additional treatments took a reasonable 15 to 20 seconds per file to complete.

  • LrEnhNR is the first of the five additional raw treatments. The LrBase file was run through Lightroom’s own Enhanced Noise Reduction algorithm set to 50 or 60; 75 for high ISO files. Finding the balance of noise removal while maintaining detail is critical. Afterwards, Detail Sharpening was typically set to 40, the Lr default for my ORF raw files.

Each of the following treatments began with the original ORF raw file. I did not use ‘File > Open In’ from within Lightroom as it creates unnecessarily large TIFFs.

  • DxO: DxO Pure Raw (v4.7.0); DeepPRIME XD2s and DxO’s optical corrections were applied to the ORF. Output to DNG.
  • ON1NN: ON1 No Noise AI (v2024.5) was applied to the ORF using the No Noise module set to ‘Standard’. The Tack Sharp AI module was not used. Micro Sharpening was set to the default 100 for all except for the Meru Forest file, set to 50; anything higher than 50 was too aggressive. Masking was not used. Output to DNG.
  • TPZ: ORF raw file + Topaz PhotoAI (v3.4.3) Raw Denoise at Standard (Strong was too aggressive) and Sharpening set to 'All' for landscapes and 'Subject' for birds and wildlife. Some tweaking of the subject mask was needed; it was helpful to have that option, something only Topaz and ON1 offer. Output to DNG.
  • OMW: ORF was processed through OM Workspace (v2.3.3) with AI Noise Reduction plus the various built-in lens corrections. Output as a TIFF; OMW does not output to DNG.

Output

After treatment, each DNG (or TIFF) was added back into Lightroom. The original LrBase edits were copied and pasted to each file, with some colour and distortion correction. JPEGs were output at 80% or 100% quality in sRGB colour space with Sharpening set to Screen. The three sizes of JPEGs represent common, everyday uses of photographs by the vast majority of photographers:

  • 1500x1125px (1.7mp), at 80% quality, to represent the needs of social media platforms, blogs, forums and other web uses, plus for HD projection; e.g. at camera club presentations. Typically, social media sizes are smaller, so any differences at this larger size should be even more noticeable.
  • 3840x2880px (11mp), at 80% quality, for 4K TV presentations, as well as laptop and desktop screens; e.g. wallpaper. This is also sized for prints a little larger than 9×12”, so it’s a test of prints up to that size as well.
  • 4800x3600px (17mp), at 100% quality, is an ideal print size for photo competitions and to hang on your wall as it makes a 12×16” print at 300ppi, easily matted to 16×20” for framing or presentation. Note: each of the wildlife photographs had to be up-sized from the original to this size during the Lightroom export to JPEG. Normally, this would mean a hit on quality, but surprisingly . . . well, read on.
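The megapixel counts and print sizes quoted for the three output sizes can be verified with a small sketch (the 300ppi print resolution is the figure the article itself uses for the 12×16" print):

```python
# Megapixel counts and 300ppi print dimensions for the three JPEG
# output sizes used in the comparison.

SIZES = {
    "web/projection": (1500, 1125),
    "4K/small print": (3840, 2880),
    "framed print":   (4800, 3600),
}
PPI = 300  # print resolution used for the framed-print figure

for label, (w, h) in SIZES.items():
    mp = w * h / 1_000_000
    print(f"{label}: {mp:.1f}mp, {w / PPI:.0f}x{h / PPI:.0f} inches at {PPI}ppi")
```

Running this confirms the numbers in the list: 1.7mp, 11.1mp and 17.3mp respectively, with the largest size yielding a 16×12" print at 300ppi.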

Results

The results I report are based on real-world uses of photographs, not on pixel-peeping. At times, I zoomed in to 200% to check sharpness, but viewing photos at this magnification is completely unrealistic. As photographers, it is all too easy to make judgements and pronouncements based on pixel-peeping simply because we can, rather than because we need to, a real disservice to the overall photograph. After all, it's the emotional impact that attracts people to photographs, not the pixels. The details help create that impact, but what they look like at 200% is irrelevant. My feeling is that pixel-peeping internet bloggers are simply after clicks in the guise of truth, precision and service to photographers.

(1) Web- and projection-size (1500x1125px; 1.7mp)

This size is larger than is needed for social media. The images were viewed at 'Actual Size' on my MacBook Pro, which is 100%, though it does not fill the screen. In each case, there is very little difference between any of the six treatments. Only when images are compared directly, one with another, do very small differences begin to show. The edge goes to LrEnhNR, DxO and TPZ for very slightly sharper images that would show when viewed using HD projection (e.g. for camera club presentations), but would go unnoticed on the web. Even the LrBase image held its own against the treatments. ON1 and OMW were also excellent, but suffered from a very slight shift to green in the small deep-shadow and black areas between fur and feathers.

Bottom Line: With correct sharpening and texture, any of the six treatments would produce fine web images. If projecting, then a run through LrEnhNR, DxO or Topaz would produce slightly sharper results.

(2) Screen Resolution and up to 9×12” (3840x2880px; 11mp)

When JPEGs were viewed at 100% on-screen, as would normally be the case for these photos, the differences became more apparent. All five raw treatments produced excellent results with a light edge going to LrEnhNR and DxO for their consistency across all photos, with smooth skies, balanced mountain and cloud detail, and micro-detail in foliage, fur, and feathers.

Below is a series of 100% crops from each of the 3840x2880px files arranged for side-by-side comparison. Viewing them like this, on screen, gives a sense of what they look like as web images (#1 above). Click on each image to open it to view it at full size.

Meru Forest: Each section is a 1200x2000px crop. The full size of this file is 7200×2100 pixels. When viewed on-screen at 100%, there is very little difference between the six. This is to be expected with an ISO 200 file.
Kilimanjaro Blues: 1500x2400px crops. The full size of this file is 9000×2500 pixels. For me, the LrEnhNR produced slightly more 3D mountain detail and a more natural-looking sharpness in the foreground. DxO, ON1 and TPZ all appear slightly over-sharpened, which was mostly tamed by ensuring Detail Sharpening in Lightroom was set to ‘0’ and Texture was reduced to –30 to –50.
Sunset Vigil: Each 600x1500px crop (3600x1600px overall) provides a close-up view of the detail that can be extracted from ISO 3200 files using raw optimization. Every hair and whisker is revealed in all but the LrBase and OMW files, with no visible noise.
Catbird: Sensor noise has been eliminated and the feather detail extracted from this ISO 3200 file is exquisite. These are 600x1200px crops, totalling 3600x1300px. Who says small sensors can’t capture detail at high ISOs?

To my eyes, the stand-outs continue to be LrEnhNR, DxO, ON1 and Topaz. They are all equally sharp. Unfortunately, the ON1 and OMW files suffer from the same slight colour shift noted above.

By the way, 3840x2880px is close enough to the 3600x2700px required for a fine art 9×12" print. The results shown above clearly demonstrate that any of the five treatments (barring artefacts) would make excellent fine art prints at that size.

Not seen in the Topaz crop of Kili Blues is some colour mottling in the plain blue sky, along with diagonal banding at the near-pixel level. This showed up each of the multiple times I ran the file through. I have contacted Topaz, and they are working on it. TopazLabs claims the 'pin hole' artefacts are 'dead pixels', yet they do not appear in any other treatment.

(3) Framed Print Resolution (4800x3600px or 17.3mp)

To best approximate viewing distance for actual prints, these larger files were evaluated on-screen at 50%. Again, judging at 100% is simply not realistic as only photographers and internet pixel-peepers, not buyers, view prints this close.

Despite the larger size (17mp vs 11mp), the results of these comparisons closely mirror the results of the previous files, but as expected, the fine differences start to reveal themselves.

Once again, the LrEnhNR and DxO versions are the best, but not by much. The ON1 and Topaz versions are equally good, but each suffers, as before, from colour shift and artefacts respectively. I'd like to work out these problems, as the sharpness and three-dimensionality are excellent.

High ISO Photos

The last two of the six photos I examined were shot at very high ISOs of 6400 and 12800. I now know that my previous scepticism was unwarranted; I'm proven wrong again, as the results are phenomenal across the board. Web-sized JPEGs are virtually indistinguishable across the treatments. At 4K size, the OMW file drops out of the running. Even upsized to 4800 pixels, the detail, colour balance and exposure in the LrEnhNR, DxO, ON1 and TPZ files are amazing and very printable. The LrEnhNR appears more naturally sharp, with the others appearing more hyper-realistic, easily tamed by dialling Lightroom's Texture adjustment down to –30 to –50. This is something I will discuss further in my conclusions.

Like the photos above, click to open each file to view them at full size.

Junior: These 600x1200px crops from the 3840x2880px JPEGs display excellent detail across the board—even in the LrBase file (see note below). LrEnhNR, DxO, Topaz and ON1 are critically sharp, maintaining even the smallest details in the crest and the fine feathers along the bird's right side, though upon very close observation, each also exhibits some noise around these details.
Dad: At ISO 12800, these 600x1500px crops (file size = 3000x1900px) show excellent results. Each of the LrEnhNR, DxO, ON1 and Topaz files is very printable. Slight variations in smooth feather detail set them apart, but don't make one better than another, just different.

One surprise was the quality of the LrBase photo of Junior at ISO 6400. I printed it to 4×6 as an ArtCard and you would never know it was shot at ISO 6400. This is something we as photographers easily lose sight of when pixel-peeping becomes the norm. Most prints on a lustre baryta or matte paper will not show the noise to the same extent as is shown at 100% on screen.

Conclusions & Discussion

I would love to say that one treatment stood out head-and-shoulders above the rest and was a clear winner, but I can't. They all produce excellent results. There may be differences at the pixel level when viewed at 200%, but as I've made a point of saying, 200% is an unrealistic yardstick to use.

Lightroom’s own Enhanced Noise Reduction is excellent throughout the ISO range, providing very natural-looking sharpness and micro-contrast while maintaining smoothness of skies and excellent overall three-dimensionality. DxO Pure Raw seems like the best of the raw optimizing apps, with Topaz and ON1 being equally good, but with a couple of artefacts that need more investigation. OM Workspace was disappointing in its ability to create clean, sharp images, particularly at higher ISOs.

Natural versus Hyper-realistic ‘Look’

One thing I kept noticing with all the photos is the very natural-looking, clean sharpness and presence of the LrEnhNR files, which contrasts with what I see as the hyper-realistic sharpness and smoothness of the DxO, ON1 and Topaz versions. The initial appeal of JPEGs from them is captivating, but to me they seem too real, the birds and lion looking more like museum specimens prepared for exhibit. They look great to the untrained eye, but to someone who knows nature, they appear, perhaps, too perfect. As a former ‘film guy’, maybe I’m just more tolerant of a more natural look, and less tolerant of the plasticky smoothness associated with these treatments. It’s like they are trying to emulate something that doesn’t really exist in nature.

My concern is that this is the way photography is going due to large sensors and this pixel-peeping drive towards minute, clean detail—detail and perfection at a level which one would never see in nature without the proverbial bird in the hand. Perhaps it’s the generative AI aspects of the algorithms, creating pixel-level detail that is not normally seen. Or, maybe it’s just me. Having been involved with nature interpretation for decades, the look comes across as a bit hyper-real, perhaps in an effort to make nature look glossy and catch attention, rather than showing nature as it is. But, as I said, this is only my perception, a feeling I have.

Final Assessment

Overall, you can't go wrong with Lightroom Enhanced Noise Reduction, DxO Pure Raw, ON1 No Noise, or Topaz Photo AI Raw Denoise and Sharpening for demosaicing, denoising and sharpening raw files. If you're looking for a clear winner, you won't find one. They all perform brilliantly and, save for the Topaz and ON1 artefacts which are being looked into, any differences highlighted here are only noticeable upon direct comparison, which, in itself, is unrealistic. Differences may be more or less noticeable depending on the raw files you start with and your personal workflow.

I realize these results may differ from what internet bloggers and vloggers have found, and I did not rank one above another by unnecessarily splitting hairs—but I have no skin in the game either. I do not represent any of these companies, nor do I make a commission from links or profit from clicks. I am only reporting what I see, and for the vast majority of photographs, any one of the apps, other than OM Workspace, does a superb job of cleaning up noise while maintaining and sharpening the fine details of foliage, fur and feathers, with the noted caveat of glossy perfection.

I will continue to use Lightroom's Enhanced NR, and I am building both DxO and ON1 into my regular workflow. I will also be spending more time making prints to see which versions look better on baryta and matte papers.

Up next in this series:

How well can Topaz PhotoAI and ON1 No Noise ’rescue’ images with motion blur? — now available!


Thanks for reading! If you have any questions about OM System, the quality it produces, or the photos and observations shown above, be sure to add a comment.
