Wednesday, May 25, 2016

The End of Moore's Road: Sensor Edition

So I've been looking for a new camera, one that can record 4K 60fps video. It doesn't exist unless you count the $6000 Canon 1DX Mark II, which is huge and weighs a ton. It's hard to go back to 1080p after seeing the rich detail 4K offers, but it's also hard to go back to 30fps after watching the smooth motion 60fps is capable of.

Right now you can get 4K or 60fps, but not both, despite the fact that a number of relatively inexpensive ~$1000 cameras can do near-native 1080p @ 240fps. The bandwidth and processing requirements are similar, but companies don't seem to see the need. Oh well. It's hard to choose between 4K and 60fps even though either would be an improvement over my current 1080p30 setup. I guess that makes me Buridan's ass.*

Anyway, one thing I discovered during my search is that image sensor quantum efficiency is around 60%.

Wow! This means there is under a stop of high ISO improvement left before nature itself places a hard limit on progress. This milestone sort of snuck up on me, even though high ISO performance is one of the most discussed aspects of digital imaging. Although this quantum efficiency level, quantitatively speaking, isn't as impressive as the technological records achieved in trying to reach absolute zero, semiconductor process sizes approaching the size of a handful of atoms, or even something like Vantablack, it's significant from a photographic standpoint.
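To put a number on that "under a stop" claim - a quick back-of-the-envelope, assuming current sensors sit at roughly 60% QE and perfection is 100%:

```python
import math

# Each stop of sensitivity is a doubling of captured light.
# At 60% quantum efficiency, even a perfect 100% QE sensor would
# only collect 1/0.6 ~= 1.67x more signal - less than one doubling.
qe = 0.60
max_gain = 1.0 / qe               # ~1.67x
stops_left = math.log2(max_gain)  # ~0.74 stops

print(f"At most {max_gain:.2f}x more light, i.e. {stops_left:.2f} stops")
```

About three quarters of a stop, and that's the absolute ceiling.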

There is probably around another stop of sensitivity available by replacing the color mosaic layer used in sensors with the 3-chip array sometimes used in video cameras. That approach adds enough cost and complexity that it will probably be a last resort, pursued only after signal processing approaches have been exhausted.

Resolution-wise, there's still plenty of room. Although diffraction-limited optics exist - something which also amazes me - most lenses do not exhibit that high performance wide open. But as manufacturing improves and exotic lens shapes - such as those used in the Nokia 808 - become feasible, a diffraction-limited f/2.8 lens can resolve nearly 400 megapixels on full frame; the current full frame megapixel champ maxes out at around 50. Looking at Sony's sensors, the IMX318 has the finest pixel pitch at a computationally convenient 1.0 micrometers, which implies a full frame scaling of 864 megapixels. If per-pixel full color accuracy is desired, 1600 "Bayer megapixels" would be required to approximate 400 full color megapixels. Given the state of semiconductor manufacturing, this is definitely within the realm of possibility. In that sense, the effects of the end of Moore's law probably lie beyond the diffraction wall.
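Those megapixel figures can be sanity-checked with rough arithmetic. This is only a sketch: I'm assuming green light at 550 nm and one pixel per diffraction cutoff period (pitch = wavelength × f-number); stricter or looser sampling criteria move the result considerably.

```python
# Back-of-the-envelope diffraction math for a 36 x 24 mm full-frame sensor.
wavelength_um = 0.55                     # green light, 550 nm (assumed)
f_number = 2.8
pitch_um = wavelength_um * f_number      # ~1.54 um cutoff period

width_um, height_um = 36_000.0, 24_000.0
mp_diffraction = (width_um / pitch_um) * (height_um / pitch_um) / 1e6
mp_at_1um = (width_um / 1.0) * (height_um / 1.0) / 1e6   # IMX318-style pitch

print(f"~{mp_diffraction:.0f} MP at the f/2.8 diffraction limit")
print(f"{mp_at_1um:.0f} MP full frame at a 1.0 um pixel pitch")
```

That lands in the same ballpark as the ~400 megapixel figure, and the 1.0 micrometer pitch scaling reproduces the 864 megapixel number exactly.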


Dynamic range capability is related to signal-to-noise ratio. Quantum efficiency has a role in improving signal quality, while improvements to read noise can boost SNR in tandem. But current technologies impose another limitation: full well capacity, which tends to be lower with smaller photosites. However, multiple photosites with smaller full well capacities are together equivalent to a single photosite of the same total area with its larger full well capacity. According to DPReview, read noise is the only reason sensors using larger photosites marginally outperform similar sensors with smaller photosites. If photon counting technology can be developed, read noise is not only essentially eliminated, but full well capacity is no longer an issue. This implies enormously higher dynamic range capability.
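A small numerical sketch of why read noise is the only differentiator here: shot noise depends only on the total light collected over the area, while each extra readout adds its own noise in quadrature. The signal and read noise values below are illustrative, not measured:

```python
import math

signal = 10_000        # photoelectrons over a fixed sensor area (illustrative)
read_noise = 3.0       # e- RMS per readout (illustrative)

# One large photosite: one readout.
snr_large = signal / math.sqrt(signal + read_noise**2)
# Four binned small photosites covering the same area: four readouts.
snr_binned = signal / math.sqrt(signal + 4 * read_noise**2)
# Photon counting: read noise essentially eliminated.
snr_ideal = signal / math.sqrt(signal)

print(f"large: {snr_large:.2f}  binned 4x: {snr_binned:.2f}  ideal: {snr_ideal:.2f}")
```

The large photosite wins, but only marginally; eliminate read noise and the distinction disappears entirely.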

All of these improvements, save the 3-chip approach, deal with a single sensor. But huge gains in image quality can be obtained by using multiple sensors for 3D imaging, or processing tricks like the multi-lens/multi-sensor approach of the Light L16 or the Leica-branded Huawei P9. So while the high ISO sensor performance race is nearly finished, there are plenty of events left.

* If Sony keeps its June release cadence, maybe the Alpha a6400 or RX100V will have 4K60. The GH5 might but I'm hoping the feature hits smartphones first.

QuickSync, a broken dream

I'm writing this because of my experiences helping people stream and record on computers using integrated graphics. In my mind, I figured QuickSync was similar to AMD's VCE or NVIDIA's NVENC in that using hardware accelerated encoding would impose only a minor performance penalty. In practice, Intel's implementation imposes a huge penalty if you are playing games on Intel's GPUs.

QuickSync is Intel's technology for encoding video. The idea is that their CPUs have a bit of circuitry dedicated to accelerating video encoding, freeing the rest of the CPU to do other things, e.g. game physics. It works, but there is a major caveat:

QuickSync depends on the processor's integrated GPU (iGPU), which means that if the iGPU is busy, QuickSync performance falls. So the feature isn't very useful for people playing somewhat demanding games on iGPUs - the area where hardware accelerated encoding would help the most. It's fine for encoding video or doing basic screencasts, but it simply shares too many resources with the rest of the integrated GPU to work effectively while playing 3D games.

Now I'm not sure how much it would have cost to implement QuickSync as fully discrete hardware; Intel probably reasoned that streaming and recording gameplay is an upper tier feature not needed by typical iGPU users, and that users who do stream and record games would probably have a discrete GPU anyway. But it is unfortunate. QuickSync has been around for several years, so it's understandable that its earliest forms, which were targeted toward video conversion, might have left gaming uses as an afterthought. But given the huge growth in streaming and gameplay recording versus realtime movie file encoding, it's surprising that QuickSync has not adapted.

QuickSync does work on iGPU systems, but performance is inversely proportional to gameplay demands, which makes the whole recording and streaming process very inconsistent. This performance relationship also exists even with a dedicated GPU if software x264 encoding is used*, though this is rarely a problem on quad core+ systems.

So my recommendation for potential streamers and people hoping to record gameplay is to always have a discrete video card, even if it's barely faster than the iGPU. This ensures that all iGPU resources are free for QuickSync: the iGPU is used for nothing but encoding, so it functions as truly dedicated hardware.

But for integrated graphics users, x264 on quad core and higher systems is typically going to be better than QuickSync for streaming, while the very low compression, i.e. low processing load, codecs used by FRAPS and DXTORY are going to be the best for recording. Of course the file sizes are relatively enormous but there ain't no such thing as a free lunch.

It would be nice to be able to set aside iGPU resources for QuickSync up front and then adjust the game's graphical settings to fit within what remains.

* This is why I prefer to set processor affinities on setups without a dedicated streaming PC: you can set aside appropriate resources for a given quality level and know that no matter what happens in game, streaming quality remains consistent. It is a kind of virtual streaming PC.
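On Linux the idea can be sketched in a few lines with `os.sched_setaffinity` (on Windows you would use Task Manager or `start /affinity` instead). The half-and-half split here is just an illustration, not a tuned recommendation:

```python
import os

# Pin the current process (imagine it's the game) to half the cores,
# leaving the other half free for the x264 encoder process.
all_cpus = sorted(os.sched_getaffinity(0))
half = max(1, len(all_cpus) // 2)
game_cpus = set(all_cpus[:half])
encoder_cpus = set(all_cpus[half:]) or game_cpus   # fallback on 1-core boxes

os.sched_setaffinity(0, game_cpus)   # the "game" can no longer starve the encoder
print(f"game on {sorted(game_cpus)}, encoder free to use {sorted(encoder_cpus)}")
```

However demanding the game gets, it can only saturate its own cores, so the encoder's share of the CPU stays constant.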

Wednesday, May 18, 2016

The 2010s Aesthetic

Where did this come from? In video terms I'm talking about the slow motion, unrealistically graded, low depth of field, jump cut, steadicam look. Time lapse with sliders, drone shots. Add affected piano, guitar, and/or indie female vocal. Is this touchy-feely stuff because of Apple? I'll bet it is. Or maybe Instagram. Is your photo uninteresting but you want others to think it's meaningful? Add some vignetting, apply a film look, and maybe make it black and white. But I can't lie, it works.

After years of this though, one wonders what the point is. There is no point; it's just noise. And this noise engulfs the only meaningful application of video and photography - to help tell a story. Not tell a story, but help. Otherwise viewers are making up meaning arbitrarily or in uselessly vague terms. Take a photo of a worn out door in India. Oooh, the stories it could tell! is the impression the photographer wants to give, but I can't help but think "A picture is not literally worth a thousand words, dummy - tell me what's going on". No one on the planet, without the proper context, would ever hear The Great Gate of Kiev and deduce that the piece had anything to do with Kiev, a gate, or pictures at an exhibition. That's why contemporary classical music is so sterile whereas soundtracks, which help tell a story, aren't. It's why a straight up gallery of "award winning" photos is inferior to the photos in a National Geographic article.

The indie videographer and photographer crowd rarely tell stories, and when they do, the story is often subsumed under fancy technique and practiced faux-earnest narration. People in the future will presumably look back with the same bemused eye we turn on kaleidoscope filter photos from the 70s. The photos they will be interested in, however, are the mundane slice-of-life sort of thing that I believe is the real draw of Bresson's or Weegee's photos.

Or maybe I've unfairly implicated the purveyors of the 2010s aesthetic in a grand Sokal-esque conspiracy and I simply don't get it.

Saturday, May 14, 2016

A closer look at flight costs

YouTube user Wendover Productions covers the costs associated with plane tickets in good detail, although there's a lot of vocal uptalk in the presentation.

Looking at things from the passenger side, I think the earlier case I made suggesting that shipping from China is mostly subsidized might be wrong. If airlines are profitable hauling 150 lb people across the country on $80 tickets, and fuel is a small fraction of actual expenses, then sub-$1/lb shipping rates for freight seem reasonable.
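The implied rate is simple enough to check - treating the passenger as self-loading freight and ignoring luggage, seats, and service overhead:

```python
# If an $80 ticket profitably moves a 150 lb passenger cross-country,
# the revenue per pound is already well under a dollar.
ticket_price = 80.0     # dollars
passenger_lb = 150.0    # pounds
price_per_lb = ticket_price / passenger_lb

print(f"${price_per_lb:.2f} per pound")
```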

Saturday, May 7, 2016

Rockefeller problems

Just thinking about my previous entry on whether things are getting better or worse for the American middle class on an absolute scale ...

One of the themes the "things are worse" crowd brings up as evidence of the absolute decline of the middle class is the shift to dual income households. A comfortable family lifestyle with a single breadwinner was typical in the 50s. The Atlantic suggests that a comfortable middle class lifestyle today requires an income of over $130,000 - definitely not typical. That's not even typical for two income households.  So in that important sense, things have gotten worse.

Technological progress hides this decline. Without the increased productivity and technological progress, there wouldn't be any argument; a situation where one person was able to provide well for a family of four that changes to where two people are even less able to provide for that family is a disaster. But if workers are providing more goods and services than ever before, then one worker should be able to provide more for a family.

Intuitively, as material abundance grows, there is less need to work and greater financial security. In other words, if we had Star Trek levels of abundance, barely anyone would need to work, whereas in times of extreme scarcity everyone is always working. Before the Industrial Age, for example, most people - children included - worked most of the day on farms.*

And yet here we are with increasing productivity but also having to work more just to maintain our living standard. It's something analogous to stagflation (what an ugly word) although it is more than an analogy since both problems share many of the same causes. Maybe I've written about them before but I'm sure I'll write about them again.

* That's one thing history books don't really cover. There might be a small section on how peasants lived, but the rest - battles, cities, kings, queens, inventors, philosophers, etc. - represents only a tiny portion of the human experience. There's only so much to be written (but much more to be said) about toiling in fields and simple family life, I guess. And in truth, the typical first worlder's life has more in common with city life and royalty than with subsistence farming.

Someone out there wondered whether it would be preferable to be a Rockefeller during their heyday or a regular American today. Professor Don Boudreaux and the modern camp point to the amazing technologies and conveniences within the reach of typical Americans that Rockefeller simply had no access to: advanced medicine, supermarkets, cheap air travel, better cars, instantaneous communication, access to incomprehensibly more information and entertainment than ever before, etc. The Rockefeller camp, like Peter Schiff, points out that Rockefeller lived in a modern enough era that travel, entertainment, and access to information were already plentiful. Having Netflix is nice, but not having to worry about financial security is even nicer.

If you are materialistic, Boudreaux is absolutely right. Materialistic has a negative connotation, but I mean a strong, as opposed to indifferent, preference for goods and services that are more varied, higher quality, and cheaper. It's this type of materialistic thinking that we have to thank for the standard of living we enjoy today. If you are more worried about status and stability, Schiff is right. Perhaps related: Boudreaux hates Trump while Schiff is more sympathetic (though not particularly supportive).

These are the best of times, these are the worst of times

An alien observing humanity over the past century would probably say that things have gotten a lot better for Earthlings as a whole. And it's hard to argue with the facts and figures from The State of Humanity or with Hans Rosling's interpretation of data for all sorts of metrics like poverty, disease, and access to basic necessities.

On the other hand, the May edition of The Atlantic has an article covering the deteriorating state of the middle class in America. However, the American middle class is, relatively speaking, the wealthiest large cohort in history, so complaints about its situation are hard to take seriously. The middle class votes itself all sorts of privileges that undermine the poor and disproportionately take from the rich.

And is it really unreasonable for the American Dream - the idea that every generation will be better off than the previous one - to end? And by better off, I don't mean only technologically better off, which is almost a given, but also better off in relative income (an awful zero-sum way to look at things, despite the popularity of angst over income inequality).*

Within living memory, the US, with maybe 5% of the world's population, produced over a quarter of the world's goods and services - largely thanks to being the only major power unscathed by World War II. As countries have rebuilt and technology has diffused, the relative position of the US has been approaching that of its population share. Given the resources of the US, it's doubtful that the American share of world GDP will ever drop to 5%, but the trend suggests that a resurgence in the relative position of the middle class is unlikely.

Given the political climate, it's clear we haven't really accepted our fate. The Atlantic is publishing an article next month about how the self-esteem movement has created a situation where happiness is only possible for the above-average. Unless you live in Lake Wobegon, this means most people will be unhappy. It's why Bernie Sanders complained about Romanians having faster internet and why Donald Trump says that "they are beating us."

This isn't unique to the US. Tall poppy syndrome, crab mentality, the Russian parable of the genie and the neighbor's wish, and numerous psychological studies showing that inequality makes people unhappy all suggest it's universal.

But internalizing and making policy out of our innate distaste for inequality is a mistake.

* Which is not to imply, via the fallacy of affirming the consequent, that inequality is desirable. The growth in inequality has been influenced by the largely ignored and even lauded crimes of inflation and cronyism.