Olympus launched just a single camera last year, the consumer-focused E-PL9, as three other brands stole the show by venturing into full-frame mirrorless. Olympus now needs to prove that the Micro Four Thirds system, a relatively small-sensor format that often falls short of the resolution and image quality of larger APS-C and full-frame sensors, is still relevant in 2019. It aims to do just that with a new flagship camera, the OM-D E-M1X.
While the sensor inside it is the same one used in the older OM-D E-M1 Mark II, the new $3,000 camera incorporates several firsts for Olympus, including dual processors, an image stabilization system rated for up to 7.5 stops of shake reduction, an autofocus system designed to rival DSLRs, and a built-in vertical battery grip. What it may lack in raw image quality, it makes up for in speed and performance.
The dual processors also power another major milestone in the industry — using deep learning algorithms to help the autofocus system. This is primarily useful when shooting motorsports, where it will recognize and focus on the driver’s helmet instead of the vehicle. (It can also recognize trains and airplanes.)
So what went into making the OM-D E-M1X, and where is Olympus headed next? To find out, Digital Trends sat down with some of the people who actually make Olympus cameras: General Manager Eija Shirota, Team Leader and autofocus expert Tetsuo Kikuchi, and Senior Supervisor and deep learning expert Hisashi Yoneyama. The interview was translated by Akihito Murata, Vice President of Sales and Marketing, and the transcript below has been edited for clarity.
Digital Trends: Why did Olympus decide to put dual processors in this camera?
Eija Shirota: The starting point is reliability. We thought about how to create the ultimate reliability, and one of the answers was to equip the camera with two engines.
Was the built-in grip necessary to fit in the dual processors? Why did you decide to incorporate the grip instead of as an add-on accessory?
Eija Shirota: [With] this model, a big thing for us is ultimate reliability. [If] we have a separate grip, the connectors — once we have those kinds of parts in the middle — we can’t achieve ultimate reliability. We can make a reliable camera, but for that ultimate reliability, this type of thing should be integrated.
The other reason is that we’ve talked to many professional photographers and we’ve actually observed many professional photographers and how they use the cameras. Many of the photographers used the vertical position and would operate the camera without looking at anything.
To achieve that, we should have the exact same position of the buttons and shutter release [when shooting vertically]. If we want to achieve that, the grip needed to be integrated to keep everything in the same place.
With a previous camera, Olympus said the stabilization couldn’t get any better — but it just did. Can the stabilization get any better than 7.5 stops?
Eija Shirota: When we introduced the E-M1 Mark II, we thought that we’d done everything except for accounting for the rotation of the earth. But by eliminating all the other elements, we have managed to achieve up to 7.5 [stops].
Now, short of removing the rotation of the earth, we should not be able to go any higher — not just for Olympus, but for any other brand. So to answer your question: We managed to eliminate all those other elements besides the rotation of the earth. We are confident that this is the most powerful IS in the world.
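To put that 7.5-stop figure in perspective: each "stop" of stabilization doubles the shutter time you can use handheld, so 7.5 stops works out to roughly a 2^7.5 ≈ 181× longer exposure. A minimal sketch of that arithmetic (the function name and the 1/focal-length rule of thumb are illustrative assumptions, not an Olympus specification):

```python
def max_handheld_shutter(base_shutter_s: float, stops: float) -> float:
    """Longest usable handheld shutter time given IS rated in stops.

    base_shutter_s: shutter time that is sharp without stabilization
    (a common rule of thumb is 1/effective_focal_length seconds).
    Each stop doubles the usable exposure time.
    """
    return base_shutter_s * (2 ** stops)

# Example: a 24mm-equivalent lens, unstabilized rule of thumb 1/25 s.
print(max_handheld_shutter(1 / 25, 7.5))  # roughly 7.2 seconds
```

In other words, a shot that would normally demand a tripod can, in principle, be taken handheld over several seconds.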
What were some of the challenges to designing the E-M1X?
Eija Shirota: First of all, designing the dual engines. This was our first time doing that, and putting the two engines together was a challenge. The second thing is that it’s not about the features, but about reflecting all the requests from professionals. It took us time to introduce a new model; it has been a while since we last introduced one. But that time was used to listen to professional photographers and their requirements for a camera. The autofocus was one of those.
What other big features are in the E-M1X?
Eija Shirota: The handheld high-resolution shot is a big achievement for us. We even use hand shake to achieve it. From a technology point of view, this is a big achievement, and it will allow users to carry smaller cameras without a big sensor.
What’s new on the autofocus system on the E-M1X?
Tetsuo Kikuchi: For the movie function, we use phase-detection autofocus, with a new way of controlling the system that makes it more accurate. To be very concrete, in the past the autofocus would sometimes just go to the background. With this new autofocus system, we made sure that the focus stays locked on the subject. This is more accurate than earlier E-M1 series cameras.
What is the AF system rated for in low light?
Tetsuo Kikuchi: The sensor itself is the same, but we took a different approach to have more accurate autofocus with new algorithms. As a result, low light conditions are much more improved than the previous model.
This time, we’d like to stress the nine-point autofocus. The target for us was to have the same accuracy as a DSLR, and we believe we have achieved that with this new model. The trick is using the nine-point autofocus [mode]. Sometimes we see that if you use continuous autofocus, the focus is not always stable. Sometimes the focus is here and there; it’s a very small thing, but it is very important for professionals.
We adjusted the algorithm to make sure the autofocus is always in the center. This is a very specific adjustment that we have made.
When you use the nine-point AF with continuous autofocus, you will see a difference from previous models.
What technology made it possible to get that DSLR-like performance?
Tetsuo Kikuchi: We cannot explain the details; they are confidential. But what we can say is that this is very much a new algorithm, and its combination with the sensor priority autofocus makes this autofocus system very accurate.
How did you train the deep learning system?
Hisashi Yoneyama: It’s not done within the camera; we used a high-specification computer. We used 10,000 images per category.
For example, when talking about cars, there are different shapes of cars, like Formula 1 and NASCAR. Per type, we give the system a couple thousand images and let it learn to recognize that car. This training is done on a high-specification laptop, and the result is then transferred to the camera.
So you labeled those images by hand to recognize the different parts of the car?
Hisashi Yoneyama: Yes.
What were some of the challenges you faced developing the tracking system?
Hisashi Yoneyama: The biggest challenge was how accurately the system could detect the subject. For example, there are several different types of backgrounds and several different types of cars.
We had to make sure that the system recognizes the car and focuses on the car and the [driver’s] helmet accurately. To achieve that, we needed to give lots of images, so that was the biggest challenge.
Deep learning usually takes a lot of computer power. How did you fit everything in the camera?
Hisashi Yoneyama: You need a massive amount of data when you create the algorithms. That part is done not in the camera but separately on a PC. Once we have used that data to make the algorithm and transferred the algorithm to the camera, we don’t need the massive amount of data anymore. The camera just uses the algorithms.
Do you see using more deep learning algorithms on future cameras?
Hisashi Yoneyama: Yes, we are considering applying this technology to additional cameras. But, the current challenge is that this camera has two engines. We need big power to run this algorithm, and this can’t be achieved by all the models, so we have to consider which models will receive this technology. But the answer is yes.
Akihito Murata: I’d like to add that, to fully utilize this technology, you need a very powerful engine. Without having two engines, it’s very difficult to achieve this. That’s why some brands use some of the deep learning technologies, but currently, it’s not possible to fully utilize that data. That’s why Olympus is, at the moment, the only one to utilize deep learning technology for cars, trains, and planes.