October 27, 2022

Chronicles

Camera Hardware, Psychology, and Advancements With Nick Franco

Nick Franco is OSOM’s head of imaging and self-styled “chief of grind.” One of the company’s first employees, he’s been a part of our strategy since literally before the beginning, and his work touches more than just the photos the Solana Saga will take. During a brief moment between software sprints, I had the chance to talk to him about the work he’s done here as well as his time at Essential, covering everything from camera tuning and his love of remosaic to Easter eggs and Halo.

So, Nick, you’re OSOM’s “head of imaging” and one of the company’s first employees. That’s a curious combination, hiring someone for what sounds like a later-stage product role at the very beginning. Tell me a little about that and how you got here. 

I'm a founding member, and I’ve been working here since before OSOM was “real.” I actually had an email before I was getting paid. Laughs. I was working out of my garage for Jason, getting some of the vendors together and working out what camera we wanted. At the time, we were even considering reviving the Essential Gem concept. 

I came from Essential, where I was the camera prototype engineer. I started as an intern and worked all the way up, pulled some crazy hours, touched a lot of stuff across Android and hardware, and built a good understanding of what it takes to put a device together. I had a great mentor there — Dr. Ling (Essential’s Director of Imaging, internally known as the Queen of Image Quality) taught me a lot. The lessons we learned on PH-1 and the new things we explored on PH-2 and Gem gave me a broad understanding and skill set. 

Before we get too in the weeds with camera stuff, let’s talk about that other work you’ve done. Like you touch on, being at a startup means doing a lot of varied things, and that’s particularly true for you. Can you describe some of the other work you do here?

I own a lot of the display features and tuning, the camera (hardware all the way to the user app experience), and some of the system tweaks — dark mode, lock screen adjustments, and even the haptics. That’s the best part of working here: I touch every part of this phone, so I can point to almost anything and say, “yeah, I worked on that.” 

It’s a really cool and unique experience to be here doing this, as opposed to a larger company, where you’d need a magnifying glass to point out the work you’ve done.

Outside of work, I shoot nonconventional photography. I studied scientific imaging at RIT. Photography is a big part of what I do.

You joke a lot about being married to the grind — “chief of grind.”

I do, but I enjoy pulling crazy hours because I really love my work here. And I also play a lot of Halo.

We do play a lot of Halo.  

That’s going to get cut out, right? Laughs. But the crazy hours I pull are because I care a lot about what we’re putting out into the world. We want to make sure we’re building something we think is worthy.

What’s your favorite thing you’ve done at OSOM?

I made a fun easter egg! Laughs. Can’t talk about that yet. If we meet up, you’ll see it on my phone. Let me put it this way: It feels at home. I think it's “Essential.” 

Let’s dive in more on the cameras. Head of imaging means you’re in charge of camera tuning, right?

I touch everything from the camera architecture all the way to the final image. A lot of people don’t know it, but smartphone cameras only produce good images with a lot of tuning. You’ve got to clean up the sensor data and lens artifacts, then feed it all into the ISP. Then you adjust color, noise reduction, and frame fusion. You do so much to get that final image, but there’s a balance to everything.
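To make those tuning steps concrete, here’s a toy NumPy sketch of a raw-to-clean pipeline. This is not OSOM’s actual pipeline; every number (black level, vignette strength, noise level, frame count) is invented for illustration, and a real ISP does far more. It just shows the three steps Nick names: cleaning up sensor data, undoing a lens artifact, and fusing frames.

```python
import numpy as np

# Toy illustration of camera tuning steps (all numbers invented):
# clean up raw sensor data, correct a lens artifact, then fuse frames.
rng = np.random.default_rng(1)

def capture(h=240, w=320):
    """Simulate one noisy raw frame of a flat gray scene."""
    scene = np.full((h, w), 128.0)
    black_level = 16.0                       # sensor pedestal to subtract later
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)
    vignette = 1.0 - 0.4 * r**2              # lens shading: darker corners
    noise = rng.normal(scale=8.0, size=(h, w))
    return scene * vignette + black_level + noise, vignette

frames = [capture() for _ in range(8)]

def process(raw, vignette):
    raw = raw - 16.0          # black-level subtraction (clean up sensor data)
    return raw / vignette     # lens-shading correction (undo the artifact)

# Frame fusion: averaging N independent frames cuts random noise by ~sqrt(N).
fused = np.mean([process(raw, vig) for raw, vig in frames], axis=0)
single = process(*frames[0])
print(round(single.std(), 1), round(fused.std(), 1))  # fused frame is much flatter
```

The tradeoff Nick mentions shows up even here: averaging more frames cuts noise but costs capture time and can smear motion, which is exactly the kind of balance tuning has to strike.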

What do you think is the hardest thing about tuning a camera?

Honestly, getting it where people will like it. Every camera has something that people don’t like. Some are too blue, some are too sharp. It’s the psychology — understanding what people really want and what will work. 

How much detail do you want to preserve or lose with noise reduction? How aggressively should we compensate for shadows and highlights? How blue should the sky be, and how green do you want your grass? There are so many tradeoffs to make in so many places. It takes a lot of effort to turn that into an experience everyone can reliably enjoy.

So what’s your opinion on the explosion in sensor sizes we’ve seen recently? Do you think there’s a limit there — will they just keep getting bigger until we’re all carrying full-frame smartphones? 

I don’t think there’s necessarily a limit on sensor sizes, but I do think there’s a limit on how chunky we want our phones to be. Past a point, you can only make optics so good and so small. Processing is getting pretty good, but there’s a balance: How many hardware artifacts can you tune out on the software side? Optical issues like base sharpness and flare are really hard to get around. Other things, like aberration and distortion, we can deal with in software — we can push things pretty far and let those slide as the algorithms get better.

What is the biggest change in smartphone photography happening right now?

More unique use cases outside the standard photo and video. Things like periscope cameras, infrared, macro modes. Even just having an ultra-wide is fairly new. This changes what you can do with the phone as you jump into more unique use cases. 

We first saw a trend of manufacturers adding dual-camera systems, which brought more fusion and depth effects. Then tri-camera systems with ultra-wide cameras added a whole new dimension to mobile phone images. It was all made possible by better software and better optics! Today, with things like super-resolution and in-sensor zoom, you arguably don’t need a telephoto anymore — you can get a 2x out of your main sensor. Using less hardware for these base features opens up space for unique new things. After all, the space inside a mobile phone is super valuable to its designers.

In-sensor zoom, or “remosaic,” is one of my recent favorite camera features, and it’s just starting to become popular on many more flagships. The feature has been around for quite a while, and we were among the first working on it at Essential on the Gem, actually. With that small form factor, you could only fit one camera, so we went for an ultra-wide and used super-resolution and in-sensor zoom to get wide and telephoto.

You and I have nerded out about sensor zoom/remosaic a lot, but can you tell our readers more about how it works?

Years of marketing meant everyone thought a lot of megapixels made for a good camera. But the opposite is actually true: For a good camera, you want fewer megapixels with larger photosites (bigger areas that gather more light for each pixel on the sensor). By putting a lot of megapixels on a sensor and then binning them together, you can improve that photosensitivity by averaging, but your image gets smaller in resolution: 48 MP to 12 MP for quad binning, for example.

Because of this dense array, you can also crop in on the center and get a true 2x, retaining the same megapixel count you’d get when binned. Of course, you can do that on any sensor with a resolution crop (in other words, losing megapixels), but the remosaic process used by in-sensor zoom retains the fine detail (at the expense of some sensitivity) because the crop keeps the sensor’s full native resolution rather than the binned resolution.
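The arithmetic here can be sketched in a few lines of NumPy. This toy models a single-channel sensor scaled down from a 48 MP array (a real quad-Bayer sensor also has a color filter array and a true remosaic step, which this deliberately skips), just to show why binning and a center crop yield the same pixel count with different tradeoffs.

```python
import numpy as np

# Toy single-channel "sensor", scaled down from a 48 MP array for speed.
# All dimensions and noise figures are invented for illustration.
H, W = 600, 800
rng = np.random.default_rng(0)
sensor = 100.0 + rng.normal(scale=10.0, size=(H, W))  # flat scene + read noise

# Quad binning: average each 2x2 block of photosites.
# Resolution drops 4x (e.g. 48 MP -> 12 MP); random noise drops ~2x
# (averaging 4 samples improves SNR by sqrt(4)).
binned = sensor.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

# In-sensor 2x zoom: crop the central quarter of the full-resolution frame.
# Half the width times half the height is the same pixel count as the
# binned frame, but fine detail is kept instead of averaged away.
crop = sensor[H // 4 : 3 * H // 4, W // 4 : 3 * W // 4]

print(binned.shape, crop.shape)  # both (300, 400): the same "12 MP" count
print(round(sensor.std(), 1), round(binned.std(), 1))  # binned noise ~halved
```

The standard deviations make the tradeoff visible: the binned frame is noticeably less noisy, while the crop keeps full native detail at full native noise, which is exactly the detail-versus-sensitivity trade described above.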

What do you think is the future of smartphone photography? 

We’re leaning into processing with much more advanced algorithms. I think the future is more single-camera systems: bigger sensors, huge resolutions, fewer telephotos, wider lenses, and the ability to just crop in as needed, cleaning up images with even better multi-frame processing. It’s like the CSI shows — ENHANCE!

One last question. I know there are a lot of folks out there who say they don’t care about camera performance. I admit I used to be one of them, until I got a phone with a good camera. Then I realized it’s nice to preserve memories in a visually appealing way. Having a nice camera changed the way I use my phone! What would you tell people who feel that camera performance doesn’t matter? Make the case: Tell them why they should care.

In the past, taking photos was very difficult, or you had to be very skilled to do so. Processing film or toting around a huge camera is a chore. Over time, as more people have access to a technology, more people will use it, and they’ll find new ways to use it. The camera is more than just a photo-taking device. It’s used in accessibility, translation, and documentation. A camera is so much more than a way to take a pretty picture. It’s a tool. And a better camera is a better tool. 

And as time progresses, cameras are going to keep getting better, and so will the technologies that enhance those images. Things like Google’s Unblur and Adobe’s Recolor come to mind as ways we can take otherwise data-lacking images and make them more useful today. So you might not care now, but having the best image at the time leaves more opportunities to preserve and enhance those images down the line, if you decide you do care about photo quality later.

Thanks for taking the time to speak to me, Nick. Let’s get on some Halo.