Wednesday, February 24, 2016

Donald Trump has to destroy the Republican Party in order to save it

In 2012 I campaigned for Ron Paul: cold-calling hundreds of people, donating, volunteering, running to be the GOP precinct committee officer for my area, going to the county convention, etc.

Although the Ron Paul campaign's strategy left much to be desired, what people like me experienced at Republican Party meetings was much worse. The GOP insiders did everything they could to prevent Paul from getting the nomination: locking Paul delegates out, throwing Paul votes out, exploiting every technicality Robert's Rules of Order could muster (as well as downright cheating) ... it was ridiculous. We were the enemy as far as they were concerned. Paul was the only candidate with a plan to balance the budget without raising taxes. That sounds innocuous, but because it involved closing US bases in foreign countries, cutting budgets across the spectrum of government programs, and limiting the power of the government to spy on us ("keep us safe"), he was actually worse than most Democrats in the GOP's eyes.

Donald Trump isn't going to do any of these things; it's likely he will do just the opposite.* But he does have the resources to take on the GOP establishment and hold it hostage with the threat of running as an independent. Whether a greedy businessman truly can't be bought is questionable, but his claim that super PACs and donors won't influence him because he won't take their money is plausible. This refusal to accept money from Wall Street and lobbyists is also what makes Bernie Sanders a much more authentic and principled candidate than Hillary Clinton. Although Sanders's platform would, on paper, do greater damage to the United States than Hillary's, between a principled "villain" and a political opportunist, it's hard not to root for the former.

Who knows where Trump falls. On one hand, he's a caricature of the successful American businessman: a loudmouth New Yorker, decadent and bigoted, flying around in a gold-plated jet. It's hard to ignore that background and assume he's running out of a sense of civic duty rather than making the ultimate status play.

Even so, he's not really any worse than the other candidates, despite a remarkably consistent hysterical media narrative, running all the way from Fox News through the Huffington Post, suggesting otherwise. They, along with pundits like Nate "Trump's got a 2% chance" Silver and the party elites, refuse to accept the reality of his popular support. Even though Trump is the frontrunner with the support of over a third of Republican voters, he had no Congressional endorsements until today; now he has one or two, compared to around fifty for Marco Rubio. The situation is even worse on the Democratic side, with Hillary garnering around 99% Congressional support (a number that would make even North Korean election officials wince) despite being about even with Sanders in national polling.

"Disconnect" "Out of sync" "lost touch with reality" "in a completely different world"

I'm not sure of the best way to describe the disparity between the establishment and the rest of us, so that should cover it. Savvy politicians might bend with the wind, but the establishment is too entrenched, too invested in the narrative it created, to do that. The GOP is crumbling, with the Democratic Party soon to follow, and all Goldman Sachs's money and all the media's men can't put the party together again. And good riddance.

* In that respect, he's like every other candidate running, though Rand Paul did have a balanced-budget plan. Whether Rand would have been as uncompromising as his father in following through is questionable.

Monday, February 22, 2016

Reasons not to pre-order an Oculus Rift

Joel Hruska, one of my favorite tech commentators, wrote an article on why you shouldn't order a Vive, something of a follow-up to his earlier article on why you shouldn't pre-order an Oculus Rift. There are good reasons to avoid pre-ordering, but the ones he gave weren't among them. Here are some better ones:
  1. The early-adopter premium. The official story is that even the $600+ price of the Rift is subsidized. Maybe, but it's basically made up of smartphone components, and low-cost HMDs should be rolling out in a couple of years, just as we saw with smartphones.

    Is the bill of materials for the Rift really over $600? The Samsung Galaxy S6 has a BOM totaling $290. Throw in another $85 panel and the cost is still under $400. I don't see a way, even with custom lenses, that the other parts make up $200 (a quick back-of-the-envelope check appears after this list). Then again, the recently announced Vive is $800+, although it also comes with wireless tracking and custom controllers.
  2. Lots of tinkering required. At launch there will be many VR experiences that work out of the box; after all, years of development on the DK1 and DK2 have made it a proven platform. However, VR support in current games is going to be uneven.

    For instance, Oculus had support for Unreal Engine 3 and then dropped it, so the chance VR will come to Killing Floor 2, a game I've been playing lately, is slim. It's possible to use VR with many games, but in a mode that basically simulates a 360-degree monitor, i.e., no depth perception. It's still neat, but in a TrackIR-plus-multiple-monitors way.
  3. Steep hardware requirements. This depends on the application, of course. Minecraft for Windows 10 should be a great experience on the recommended hardware. But being able to maintain a 90fps minimum is more important, since lower framerates can lead to motion sickness.

    There is no system in the world that can run ARK: Survival Evolved at 2160x1200 @ 90fps minimum with moderate quality settings. So even though the title has nominal VR support, it will be at settings that make the game look primitive.

    2160x1200 is even somewhat of a lowball figure, as the Rift renders at a higher resolution to account for the necessary distortion correction. A Valve developer figures 378 million pixels/sec is the required rate, roughly analogous to 1920x1080 @ 182fps minimum (a sanity check on these numbers follows this list). This is absolutely doable on the Rift's recommended minimum specs in somewhat older titles but difficult with newer ones. That said, VR is a prime candidate for SLI setups, and 1920x1080 @ 91fps is much more attainable with today's graphics cards.

    The situation only gets better if we downgrade our expectations to the DK2's 75Hz refresh rate.
  4. Insufficient headset specs. Maybe 90fps is enough to eliminate motion sickness for most, but I can imagine it might not be enough for some people. This isn't new: back when CRTs were the typical monitor technology, a 60Hz refresh rate caused headaches for some. I remember feeling motion sickness when I played Wolfenstein 3D and Doom for the first time.

    Even though my first experience with the DK1 was positive, there were many things I wished it had. Resolution and field of view have plenty of room for improvement over the coming years, but even when they are high enough, a lack of haptic feedback and of true eye focus on objects at different depths will prevent full immersion.
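
As promised in item 1, here's a quick back-of-the-envelope check on the BOM claim, sketched in Python. The component figures are the ones cited above; the leftover figure is just the gap between them and the $600 price, not a real costing.

    # Back-of-the-envelope check on the Rift bill of materials (item 1).
    # Figures are the ones cited above; the remainder is just what's left
    # of the $600 price after the smartphone-parts estimate, not real data.
    galaxy_s6_bom = 290     # USD, Galaxy S6 teardown estimate
    extra_panel = 85        # USD, a second display panel
    rift_price = 600        # USD, the "$600+" launch price

    parts = galaxy_s6_bom + extra_panel
    print(f"Smartphone-parts estimate: ${parts}")                  # $375, under $400
    print(f"Left for lenses and the rest: ${rift_price - parts}")  # $225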
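
And the same sort of sanity check on the pixel-rate figures in item 3. The 378 million pixels/sec target is the Valve developer's number; the rest is arithmetic.

    # Sanity check on the VR pixel-throughput figures in item 3.
    native = 2160 * 1200 * 90              # Rift panels at 90fps
    print(f"Native panel rate: {native / 1e6:.0f} Mpix/s")    # ~233

    target = 378e6                         # Valve's figure; includes the
                                           # oversized render for distortion
    fps_1080p = target / (1920 * 1080)
    print(f"Equivalent to 1920x1080 @ {fps_1080p:.0f} fps")   # ~182

    # With one GPU per eye (SLI), each card carries half the load:
    print(f"Per GPU: 1920x1080 @ {fps_1080p / 2:.0f} fps")    # ~91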

Wednesday, February 10, 2016

Intelligence Squared - Longevity

Intelligence Squared US held a debate on whether human lifespans are long enough. The side in favor put out some philosophical points that looked promising.
  1. Human identity is tied to a narrative: a beginning, middle, and end. This cycle is what it means to be human; to eliminate death is to eliminate what it means to be human. Further refined: humans are essentially hard-wired to go through a biological life cycle of birth, growth, reproduction, and death, and life/health extension radically alters that natural cycle.
  2. Intention is an important aspect in this discussion. A moral person should operate with only good intentions and wanting to live longer for the sake of living longer is not a good intention. It's narcissism.

But they also brought out some pretty bad arguments. One of them was along the lines of "we have bigger problems to worry about, like poverty and Ebola." This is a very bad argument, because it is human ingenuity that has largely eliminated poverty and epidemics. People with more experience are better able to implement ideas across a broader range of situations than people with less. Is the world really a richer place for the loss of Norman Borlaug, Jonas Salk, Albert Einstein, and thousands of other innovators?

What insights would Goethe or Aquinas have for us today having experienced centuries of the human condition? An older, healthier population is generally going to be a wiser population.

Another bad argument was that only rich people will have access to longevity treatments. This will be true, but only for the first few years of availability. The demand for life extension, once it is feasible, is going to be unprecedented. Just as refrigerators, cars, air travel, and computers were once available only to the very wealthy, competition will eventually bring life extension to those with less money. Ironically, one of the proponents of the motion cited Moore's Law in support of his wealth-inequality argument. If anything, Moore's Law, a microcosm of the exponential productivity gains produced by market competition, has been the key driver in bringing computing power to the masses. It's why access to smartphones is common even in impoverished countries.

Similar zero-sum thinking lay behind the finite-resources argument. "Won't people living indefinitely long mean we'll run out of resources?" Despite the repeated predictive failures of Malthusians, this question can never really die, because resources are indeed finite. But available technologies can easily push the carrying capacity of the Earth well past the ten billion figure held in common wisdom.*

One point that both sides used had to do with opportunity costs. The side in favor suggested that our limited scope for choice is part of what makes us human and that life extension eliminates those opportunity costs. But that is categorically false: even if you were immortal, you could not be in all places, acting in every possible way, at the same time. The side against offered a more compelling argument: our refusal to fight aging today will eventually mean that some cohort of human society never even gets the chance to choose whether to live longer or not.

Genesis 6:3

One audience member brought up an interesting point: Genesis 6:3 has God setting a limit of 120 years on the human lifespan. It's a remarkably accurate figure. But it is not a metaphysical constant. And for many Christians, at least, the precepts of the Old Testament are not necessarily binding. Given that the Edenic state, as well as the lifespans before Genesis 6:3, had humans living for hundreds of years (nearly a thousand for Methuselah), a lifespan over 120 years is not per se proscribed.

* Breeder reactors, vertical farms, continuous production through LED lighting, desalination, increased urbanization, and higher-density development make the 10 billion figure a joke. These sorts of innovations are never factored into carrying-capacity models, which is why those models invariably fail. And all of these developments actually reduce the environmental footprint of humans, which means more areas can be left as pristine wilderness. The key is developing cheaper, cleaner, and more plentiful sources of energy.

Tuesday, February 9, 2016

New Hampshire, or, the undoing of America's Potemkin Village

Glad to see Sanders and Trump come out on top in New Hampshire. Why would Americans turn out in record numbers against establishment candidates like Bush and Clinton if things were going well? Things aren't going well. Maybe people aren't actually thinking that government statisticians are purposely lying to them, but rather that the statistics simply don't measure reality. That the Dow at 16,000 is no reflection of the overall state of the American economy.

At any rate, nice to see the establishment Wall Street backed warmongers get their teeth kicked in for once. I don't particularly like Sanders or Trump, but unlike everyone else except for maybe Kasich, they don't ooze craven opportunism. So there's that.

Friday, February 5, 2016

Life at the End of Moore's Road

The earliest reference to the end of Moore's Law I can remember was in a 1995 book from Stanford University Press. I can't remember the title, but the technologies we're hearing about now to keep it going, e.g., GaAs and photonics, were mentioned.

It was assumed, if memory serves, that these technologies would be needed well before 14nm. And yet here we are: 14nm on plain ol' silicon. The hard limits are near, and there are only a handful of tick-tock-tock cycles left before we reach them.

TSMC says it will have 7nm in 2017. Given the slowdown in fab shrinks and its difficulty even with 16nm, it is probably a few years off. But if the rumors are true that TSMC has spent $16 billion on a new fab, it'll probably happen.

$16 billion is a ton, and Intel's 5nm fab is likely to cost quite a bit more. That's a huge investment, even for Intel. And for what? Most of Intel's efforts lately have focused on improving efficiency, but, like general computing power, the battery life of portables is already sufficient.

Going beyond 5nm? Well, that's Augustine's Law XVI territory: unit costs rising exponentially until a single unit consumes the entire budget. Intel would basically have to go all in on a single fab. Enormous risk, meager reward.

At some point, the industry is going to essentially standardize, maybe around 5nm. Who knows. What I do know is that predictions of node shrinkage end times are usually wrong. But if I'm wrong, it'll be by a handful of nanometers at most.

So that will be interesting, since Intel won't enjoy the huge process-size/fab advantage it's had and will have to concentrate more on chip design. But, as it stands, there aren't a whole lot of IPC (instructions per clock) or clock speed improvements left to wring out. Or are there?

First, let's look at clock speed. The delta between stock clocks and overclocks has shrunk; e.g., Intel's 4790K hits 4.4GHz on a single core and 4.0GHz on all four cores at stock. A decent overclock on air is about 4.7GHz on all cores, which is about an 18% increase. For whatever reason, around 4.6GHz seems to have been the limit for overclocking since Sandy Bridge. Even the Core 2 Quad reached around 4.3GHz on air.

A good air cooler is actually pretty sophisticated these days: finely machined fins paired with specialized fans, and copper heatpipes with sponge-like wicking metal and refrigerant inside. They are large and fairly heavy, a far cry from the simple heatsink and fan the original Pentium required. And what an outcry there was when that was introduced!

Oddly, watercooling these days, with all-in-one setups, is far more elegant, though noisier, than the giant air coolers used by overclockers. Unfortunately, the frequency gains from watercooling are marginal. For large gains, cascade cooling is required.*

Cascade cooling is basically a series of refrigeration units. It's ungainly for sure, but the 5.5GHz clocks that HWBOT typically reports are a substantial 38% increase. The heatpipe tower configuration has essentially reached maturity, and even if Intel can scale clock speed at typical temperatures, the TDP will be beyond the reach of even the largest air coolers; that's triple- and quad-plus-radiator territory. And if clocks can't scale at typical temperatures, then it's cascade. A chill box is probably the only acceptable form factor for that.
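
For what it's worth, the headroom percentages above check out against the 4790K's 4.0GHz all-core stock clock:

    # Overclocking headroom vs. the 4790K's 4.0GHz all-core stock clock.
    stock = 4.0
    for label, clock in [("air tower cooler", 4.7), ("cascade", 5.5)]:
        gain = (clock / stock - 1) * 100
        print(f"{label}: {clock}GHz, +{gain:.0f}%")   # +18% and +38%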

The other factor in CPU hardware performance is IPC. It's hard to measure because programs use CPU resources differently. Transcoding benefits from AVX/AVX2 units, secure storage benefits from cryptographic accelerators, and 3D games work best with graphics hardware. But standard programs use none of that; they are mainly integer workloads.

So how to improve standard integer IPC?

One tried-and-true method is adding caches and making them faster and larger. The only really interesting innovation in desktop CPUs since Nehalem is the eDRAM-equipped Skylake parts, which offer up to 128MB of L4 cache. In many applications, this provides a very large speedup.
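
A crude way to see why a big L4 helps is the textbook average-memory-access-time (AMAT) calculation. The hit rates and latencies below are hypothetical round numbers chosen for illustration, not measured Skylake figures:

    # Average memory access time (AMAT) with and without an L4 cache.
    # Hit rates and latencies (ns) are hypothetical round numbers,
    # NOT measured Skylake figures.
    def amat(levels, dram_ns=60.0):
        """levels: list of (hit_rate, access_ns), fastest cache first."""
        total, reach = 0.0, 1.0        # reach = share of accesses arriving here
        for hit_rate, access_ns in levels:
            total += reach * access_ns # everyone reaching this level pays its access time
            reach *= 1.0 - hit_rate    # misses fall through to the next level
        return total + reach * dram_ns # whatever's left goes to DRAM

    l1_l2_l3 = [(0.90, 1.0), (0.60, 4.0), (0.50, 12.0)]
    with_l4 = l1_l2_l3 + [(0.80, 30.0)]  # add a large eDRAM L4
    print(f"Without L4: {amat(l1_l2_l3):.2f} ns")  # 3.08 ns
    print(f"With L4:    {amat(with_l4):.2f} ns")   # 2.72 ns

The gap widens as working sets spill out of L3, which is exactly where the 128MB eDRAM parts shine.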

Geometrically, chip stacking means that lower-latency and larger caches are theoretically possible, along with more registers, more execution units, more prefetch and branch-prediction hardware, etc., allowing for greater-than-linear speed increases. Near-geometric increases, in theory.

I'm not sure why node shrinks (Sandy Bridge to Ivy Bridge, Haswell to Broadwell) haven't reduced cache latencies; from what I understand, thinner connections are slower connections despite the reduced distances.

Of course, there are non-standard IPC gains to be had from fixed-function hardware, and with Intel's acquisition of Altera, perhaps specialized workloads can be programmed into an FPGA co-processor, sort of the way GPUs are used in specialized compute applications, except that FPGAs are far more versatile. A big downside is that VHDL is a pretty intimidating language (at least it was for me), but given the other path to greater performance, parallelizing, it's probably a wash.

Looking at the software side, there's a lot of room for optimization in both applications and compilers. For instance, Dhrystone throughput per cycle from unoptimized builds actually went down after Nehalem. With compiler optimizations enabled, however, post-Nehalem architectures show better IPC.
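
For reference, Dhrystone results are usually normalized to DMIPS (Dhrystones per second divided by 1757, the VAX 11/780's score), and dividing by clock speed gives the per-cycle figure that comparison relies on. A minimal sketch, with made-up scores:

    # Convert raw Dhrystone scores to DMIPS/MHz, the per-clock figure
    # used in the comparison above. Scores here are made up for
    # illustration, NOT real Nehalem/Skylake measurements.
    VAX_11_780 = 1757  # Dhrystones/sec of the 1-DMIPS reference machine

    def dmips_per_mhz(dhrystones_per_sec, clock_mhz):
        return dhrystones_per_sec / VAX_11_780 / clock_mhz

    print(dmips_per_mhz(20_000_000, 3200))  # older chip, unoptimized build
    print(dmips_per_mhz(19_500_000, 3400))  # newer chip, unoptimized: lower per clock
    print(dmips_per_mhz(45_000_000, 3400))  # newer chip, optimized: much higher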

And I think that the more stable the hardware, the greater the incentive to optimize the software side. When upgrade cycles are long, it's fast, efficient code that wins out.

Then there are the exotics:

Graphene, high-temperature superconductors, quantum computing, photonics, etc. I don't know what the lead time from university press release to actual production is, but it's very long. I'm not sure if there are mechanisms that systematically slow industry adoption of academic "breakthroughs," but eliminating those could help get us back on Moore's road.

* If power were near-free, generating liquid nitrogen at the household level could even be an option.