Wednesday, November 8, 2017

Kolchak: The Media Stalker

One of the more interesting, though not entirely original, commemorations of the hundredth anniversary of the Russian Revolution is Project 1917, which recounts the events in real time. Today's entry was written by Alexander Kolchak, former commander of the Tsar's Black Sea Fleet, while he was visiting the United States:
On the day of my departure, the first information was received about the Bolshevik coup, that Kerensky had fled, the government had fallen, and Petrograd was in the hands of the Soviets. Since I had often read such sensationalist stories in American newspapers before, I did not attach much importance to this, especially since it was extremely difficult to believe American newspapers.
No comment needed.

Wednesday, May 31, 2017

Avira makes me WannaCry

It's humbling when you perform due diligence and still get it wrong. For years I've used Avira as my anti-virus product since it is consistently in the top 3 or so in detection and performance rates according to various antivirus researchers.

But the free version of Avira did not detect WannaCry. Not from any lack of ability but rather because Avira wanted to differentiate their "Pro" version from the free version by limiting the ability to handle ransomware to the Pro version.

Screenshot of Avira failing to stop WannaCry

Why would ransomware ever be considered outside of antivirus' basic scope? The free version of Avira almost certainly has ransomware protection now but I have switched to BitDefender. Kaspersky's AV also has a good detection and performance rating while also having protected against WannaCry. NewEgg runs sales on Kaspersky all the time so that's probably what I'll be using later.*

I've paid for Avira before because I hate the ads, but I manage a lot of computers and there's too much competition to really pay for a dozen licenses as insurance. And not extending protection to "free" users is a major failing in my opinion so I doubt I will use Avira again.

* Uninformed people think Kaspersky is some kind of Russian backdoor but any top intelligence agency can easily break into any typical PC, antivirus or not thanks to "features" like Intel's Management Engine. WannaCry itself shows that the Russians shouldn't really be our top concern.

Friday, May 5, 2017

Windows Creators Update

The hits just keep on coming with Microsoft Windows 10's new Creators Update! Despite my having Windows 10 Professional and disabling forced updates, Microsoft decided to helpfully throw me out of my fullscreen application and remind me anyway. Nagging the user is just the sort of behavior I expect from amateur shovelware, which, to be honest, is what Windows is. I can console myself knowing that non-Americans, once again, paid more than I did.*

Not only did this new update reset my sound configuration, it re-enabled Xbox Game DVR because, I mean, maybe I didn't really mean it the first time when I went out of my way to disable it with the Anniversary Update. And what is this?

Just as with the forced updates and telemetry that remains on even when disabled, to make settings hidden or "managed by your organization" is another slap in the face of the user. It's too bad because Windows previously behaved as a background environment to control devices and run programs, i.e., an operating system. Now it feels like Apple software in that it limits user control, constantly tries to integrate with the Cloud, and tries to force a certain experience on users.**

At least it didn't change my wallpaper this time around. That and my software volume seems to have returned to a logarithmic scale (though it still mutes at 5). Good job, Microsoft.

* At current exchange rates that is $285 USD for UK buyers and $307 for EU.

** Maybe MSFT looks at AAPL's market cap enviously and figures that if they can't copy Apple's amazing hardware (Microsoft's hardware and peripherals were good, just not Titanium PowerBook/iPad/iPhone/Air good) they can get away with copying Apple's horrific software.

Saturday, April 29, 2017

A Month of Using JavaScript Whitelisting

It turns out that whitelisting Javascript on a site-by-site basis is probably more trouble than it's worth. My browsing habits take me to enough new pages that I need to enable functionality daily. Worse, when a page malfunctions and I reload with Javascript enabled, I can't quite rule out that the page is pulling Javascript from some third-party site that is still blocked.

I'm sticking with the whitelist system for psychological reasons. Time saved is probably negative but whitelisting helps avoid a negative first impression and I think negative impressions weigh more strongly than non-negative impressions. Even if I enable Javascript on a site and it turns out to have annoying ads, I am not caught by surprise. It's all a user versus webmaster control issue.

Giving the user more control over the application of pain actually lessens the pain. At least that's the lesson I took from Dan Ariely's story about his treatment for burns at a hospital; it was much more painful when the nurses just ripped his bandages off than when he was allowed to remove them at his own pace.

The same psychological mechanism informs, I think, my aversion to Windows rebooting itself automatically, to ceding control of software or media to Digital Rights Management (e.g., Steam, Netflix), and to trusting my data to the Cloud. But it's clear that we are more or less conditioned to accept less client control in IT, and that's fine, but it's important to understand that it doesn't have to be that way.

Friday, March 31, 2017

Moore's Law: An Originalist Perspective

The reports of the death of Moore's Law have been greatly exaggerated. At least that's what Intel hoped you'd believe after its March 28th industry advertorial blitz, Technology and Manufacturing Day. There are many formulations of Moore's Law but here it is as stated by Gordon Moore himself:
The number of transistors and resistors on a chip doubles every 24 months
It's a little tricky because you can always add more cores or even fluff like a GPU or fixed function accelerators to increase transistor counts even if yields degrade and programs suited for using these resources are rare. The law doesn't make an exception for these, or more pertinently, multicore designs, so let's see how the biggest chips from Intel compare over the past 20 years. I'll do GPU transistor counts as well to see how that compares. I expect that doubling transistor counts every two years is probably true, but for Nvidia rather than Intel.

An hour of googling later:

Man I hate wccftech so much. How such a garbage rumor site constantly gets top billing on Google searches is a mystery. It's almost a reverse hierarchy: rumor trash like wccftech and Motley Fool on top, data-dredged (but still useful) SEO clickbait like CPUBoss next, followed by amateur enthusiast sites like AnandTech and Tom's Hardware, then professional sites like NextPlatform, and finally expert but accessible sites like David Kanter's RealWorldTech (Kanter of Microprocessor Report at the Linley Group) buried underneath.

Anyway here are the tables.* First GPU:

Date          | GPU Name              | Transistors (millions) | % change over previous | Expected
April 1997    | Riva 128              | 3.5                    |                        | 3.5
March 1999    | Riva TNT2             | 15                     | 328.57%                | 7.00
February 2001 | GeForce 3             | 63                     | 320.00%                | 14.00
January 2003  | GeForce FX5800 Ultra  | 125                    | 98.41%                 | 28.00
June 2005     | GeForce 7800GTX       | 302                    | 141.60%                | 56.00
May 2007      | GeForce 8800 Ultra    | 681                    | 125.50%                | 112.00
January 2009  | GeForce GTX 285       | 1400                   | 105.58%                | 224.00
May 2011      | GeForce GTX 580       | 3000                   | 114.29%                | 448.00
May 2013      | GeForce GTX Titan     | 7100                   | 136.67%                | 896.00
May 2015      | GeForce GTX Titan X   | 12000                  | 69.01%                 | 1,792.00
1H 2017?      | Titan Volta           | ?                      |                        |
Wow! GPU workloads are more parallel but it's clear Nvidia is outpacing Moore's Law by a comfortable margin. Not quite Kurzweil "accelerating change" fast but Nvidia could stagnate for four years and still be ahead. I'd say good job Nvidia but they've already taken a ton of my money which is all they really ever wanted anyway.

Now CPU:

Date          | CPU Name                     | Transistors (millions) | % change over previous | Expected | Core Count
May 1997      | Pentium II Klamath           | 7.5                    |                        | 7.5      | 1
February 1999 | Pentium III Katmai           | 9.5                    | 26.67%                 | 15       | 1
April 2001    | Pentium 4                    | 42                     | 342.11%                | 30       | 1
March 2003    | Pentium M Banias             | 77                     | 83.33%                 | 60       | 1
May 2005      | Pentium D Smithfield         | 230                    | 198.70%                | 120      | 2
April 2007    | Core 2 Kentsfield MCM        | 586                    | 154.78%                | 240      | 4
March 2009    | Xeon Gainestown (W5580)      | 750                    | 27.99%                 | 480      | 4
April 2011    | Xeon Westmere (E7-8870)      | 2600                   | 246.67%                | 960      | 10
June 2013     | Xeon Ivy Bridge (E5-2697 v2) | 2890                   | 11.15%                 | 1920     | 12
May 2015      | Xeon Haswell (E7-8890 v3)    | 5600                   | 93.77%                 | 3840     | 18
1H 2017?      | Xeon Kaby                    | ?                      |                        |          |

Well there it is, Intel really has been able to keep Moore's Law on track! Granted, the samples from Westmere onward aren't consumer chips but rather pricey server parts with lots of cores, but they aren't one-off tech demos either. 
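
As a sanity check on the tables above, the implied doubling period falls out of the endpoints alone. The transistor counts and dates are taken from the tables; the arithmetic is just log-2 of the growth ratio:

```python
import math

def doubling_period_months(count_start, count_end, months_elapsed):
    """Average months per doubling implied by two transistor counts."""
    doublings = math.log2(count_end / count_start)
    return months_elapsed / doublings

# GPU: Riva 128 (April 1997, 3.5M) -> Titan X (May 2015, 12,000M), 217 months
gpu = doubling_period_months(3.5, 12000, 217)

# CPU: Pentium II (May 1997, 7.5M) -> E7-8890 v3 (May 2015, 5,600M), 216 months
cpu = doubling_period_months(7.5, 5600, 216)

print(f"Nvidia doubled every {gpu:.1f} months")  # ~18.5 -- ahead of the 24-month pace
print(f"Intel doubled every {cpu:.1f} months")   # ~22.6 -- on track
```

Which matches the eyeball conclusion: Nvidia is comfortably ahead of the 24-month pace, Intel just inside it.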

Moore's Law doesn't say anything about price or performance either which is why it technically doesn't matter that the E7-8890v3 was released at $7,200 whereas the Pentium II was $1,200 in 2017 Dollars.** Or that the 8890v3 is an 18 core chip that will be no faster than a one or two core version for most programs.

Anyway, I stand corrected. Or do I? Intel's answer to whether Moore's Law is dead was this chart:

The cost per transistor proves nothing except to show that even Intel isn't immune from imputing different intents to Moore's Law i.e., "What does a biannual doubling of transistor counts per chip mean?" 

For many years, increasing transistor counts meant a direct increase in performance which is why there was a big difference between a 286, 386, 486, etc. Those gains are gone. Intel's presentation points out how they are able to pack more transistors per area than ever before and how cost per transistor has gone way down thanks to their $10 Billion fabs - although you wouldn't realize it given their SKU pricing over the past decade.***

Intel's cost savings haven't translated to customer savings although the incidental benefits of lower power consumption and better frequencies for those lower power parts have. At the consumer and high end, the lack of real competition and price gouging isn't a big secret. Thankfully, Zen will bring sanity back to the market so Intel will probably have to cut this slide of their 60%+ Gross Margins at their next Technology and Manufacturing Day.

But the attack of Naples and Ryzen won't bring back the old days of huge IPC increases. Maybe it brings large price drops across the board where the $7,200 5.6 Billion transistor E7 gets priced closer to the $800 12 Billion transistor 1080Ti. Big whoop.

The single thread performance leader has never been the chip with huge core counts. And there are still architectural advantages that are unique to AMD, Intel, and IBM that could provide cumulative improvements to future chips using existing technology. Then there are not-yet-existing technologies that promise the world. Popular Science/IEEE Spectrum sort of "breakthroughs" that you stopped paying attention to years ago because it was beyond vaporware. Plasmaware?

Photonics, neural networks, quantum computing. Maybe those really are the future but I'd rather see effort put into proven methods of improving performance. Extreme cooling. Optimize packages, processes, and IT infrastructure for superconducting temperatures. I'm just being Cray Cray.

* I didn't factor in paper launches, custom runs, or more exotic chips like Tesla and Phi. Currently, the Xeon Phi 7290 with 72 cores holds the record as Intel's biggest chip at over 8 billion transistors. On Nvidia's side it's the 15.3 billion transistor P100. Neither is that much larger than the "mainstream" parts.

** Back then the Pentium Pro was better thanks to faster cache despite being an older design. That imprinted the importance of cache on me like a little baby duck. Caches used to sit in a separate socket, then moved onto the CPU board as a separate chip, and then onto the CPU die itself. But there's still a case to be made for caches that are physically separate yet closer than RAM, as with eDRAM. For desktops, where power consumption isn't important, it would be great to see a socketed cache, maybe on the underside of the CPU package, made of good ol' fashioned $RAM.

Change the standard for high performance computers to liquid cooling and RAM could sit even closer. We're already at the point where the highest memory frequencies are achieved on boards whose RAM channels and traces are shorter than those of comparable motherboards.

*** One thing that would be really great to see adopted is Intel's definition of transistor density to differentiate processes. While I'm not sure about the weighting of 60% to NAND and 40% to Scan Flip Flop units, it's a whole lot better than the 14nm/10nm marketing that isn't fooling anyone. Actually, I think a raw maximum of transistors per square mm, plus some sort of confidence interval for maximum frequencies achievable at various temperatures, would be best.
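
The metric itself is just a weighted sum of transistors-per-area for two reference cells. The cell areas below are made-up placeholders, not figures from any real process; they only show the arithmetic (note that transistors per square micron is numerically equal to MTr/mm²):

```python
def weighted_density(nand2_tx, nand2_area_um2, sff_tx, sff_area_um2):
    """Intel-style density metric: 60% weight on a NAND2 cell,
    40% on a scan flip-flop cell. Returns transistors per um^2,
    numerically equal to MTr/mm^2."""
    return 0.6 * (nand2_tx / nand2_area_um2) + 0.4 * (sff_tx / sff_area_um2)

# Hypothetical cells: a 4-transistor NAND2 at 0.04 um^2 and a
# 21-transistor scan flip-flop at 0.2 um^2 -- placeholder numbers.
print(weighted_density(4, 0.04, 21, 0.2))  # ~102 MTr/mm^2
```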

Wow, all these cool ideas that I have no ability to implement. I'll design the logo.

Friday, March 24, 2017

Flash down, Javascript to go ...

I didn't notice when Google set Chrome to block Flash by default. It's been years since I've played a Flash game which, I suspect, is the only use I or anyone else has had for the technology in recent memory.

The ecosystem of Flash-based games was impressive in a kind of eighties and nineties way. Back then, it was more common for programs to be authored by a single person, which made for some idiosyncratic experiences – different user interfaces, different approaches to solving the same problem, different methods for accessing and extending program functions. Windows and Flash homologated some of that but single-author programs and games still feel very different.

Apple famously blocked Flash on its iPhones citing security and performance concerns but the security issue was overblown. iOS, Windows, Linux, Android, Tor, LastPass, Antivirus – all compromised. But the performance issue was real and it was a miracle that Flash games ran at all.* Even Flash advertisements could bog down a browsing session which was a big incentive to use an adblocker.

Most website designers never really learned that pop-ups, pop-unders, autoplaying videos with sound, animated Flash ads, affiliate redirects, advertorial sections, and the like are never going to be accepted. Rather than redesign their sites to use unobtrusive locally hosted linked images the way Techpowerup does when encountering an adblock user, we're increasingly seeing this:

This behavior isn't new and I wrote a while ago about how to bypass some of these types of page elements.

My first response is to block Javascript using the little ⓘ symbol next to the URL before resorting to manually blocking page elements. Sometimes it's easiest to just hit escape or click if it's an obscure site you doubt you'll visit again but this particular method of interfering with site access is so pervasive that I've switched to globally blocking Javascript and whitelisting sites.

The main issue here is that Javascript is still required for many sites. Page formatting is usually more primitive as well for non-Javascript versions.**

At some point, I can imagine website designers requiring Javascript, at which point the next step will be to switch to Firefox and install AdNauseam, or even switch to Opera.

* I remember that most Flash games would get bogged down quickly at later levels, even on powerful hardware, as more things were happening on screen. Setting Flash to "low quality" didn't fix much either. And the games were very primitive. The equivalent DOS game would have run fluidly on a 50MHz machine but, when given the Flash treatment, would need well over ten times the computing power.

** I did notice that Google's non-javascript search includes limited chronological and verbatim filtering directly on the results page which is maybe the only advantage over the regular search.

Friday, February 3, 2017

No, the i3-7350K is not almost a i7-2600K

AnandTech ran an article today, "The Intel Core i3-7350K (60W) Review: Almost a Core i7-2600K", that asks when Intel's newest budget overclocking CPU (the dual-core i3) will match the venerable Sandy Bridge mainstream flagship (the 2600K).
If we’re only speaking performance (I’m sure Intel would rather happily speak efficiency), judging by our benchmark results, we’re almost there already. 
But there are two severe problems that render his 14-page article largely useless. The first is that most of the people who pay the premium for overclocking CPUs are going to overclock them. Stock-clock results are good data to have but largely reflect Intel's artificial product differentiation strategy.* The second is that many of the gaming benchmarks chosen weren't even CPU limited. Those are illustrative of the fact that most games are GPU limited, but there are many games out there that are CPU limited in different ways, e.g., some favor more cores (The Division), some favor higher frequencies (ArmA/DayZ), some do better with more cache, various combinations of these, etc. Even in these games, a reviewer really should include overclocked results. What makes it worse is that Ian Cutress took the time to bench IGP results.


The lack of overclocked results makes AnandTech's comparison poorly executed. The problem is akin to the one that plagues Amazon Vine reviews, namely, that the reviewer approaches a product as a product to review rather than a product to solve a problem. It's frustrating because AnandTech used to be a premier review site.

An example of a competent comparison was done by Richard Leadbetter in his article "Is it finally time to upgrade your Core i5 2500K?", which compares not only overclocked CPUs but also the effect of the higher memory speeds possible on newer platforms. Frame time graphs and a wider program suite would be welcome, but Leadbetter's analysis is far more useful.


Is the i3-7350K almost an i7-2600K? It has better single-threaded performance than the 2600K thanks to superior IPC and frequencies (both stock and OC). For most applications, that's what matters. In multithreaded applications or multitasking, it's not as clear cut.

If the choice were between a 2-core 10GHz part and a 4-core 5GHz part where "aggregate" performance is equal, the higher frequency part is absolutely superior. But in cases where you gain 100% more cores at a roughly 20% penalty in single-threaded performance, as with the old i7 versus the new i3, the picture isn't as clear. It all depends on the workload you have.
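
A back-of-the-envelope Amdahl's Law model makes the trade-off concrete. Treating the new i3 as 2 cores at relative per-core speed 1.0 and the old i7 as 4 cores at 0.8 (the rough 20% single-thread penalty above; these speeds are illustrative, not benchmark numbers), the crossover lands at about a 57% parallel fraction:

```python
def runtime(parallel_frac, cores, per_core_speed):
    """Amdahl-style runtime for one unit of work: the serial part runs on
    one core, the parallel part is split perfectly across all cores."""
    serial = 1.0 - parallel_frac
    return serial / per_core_speed + parallel_frac / (cores * per_core_speed)

# New i3: 2 cores at speed 1.0. Old i7: 4 cores at speed 0.8.
i3 = lambda p: runtime(p, 2, 1.0)
i7 = lambda p: runtime(p, 4, 0.8)

# Bisect for the parallel fraction where the two chips tie.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if i3(mid) < i7(mid):
        lo = mid  # the i3 still wins here, so the crossover lies above mid
    else:
        hi = mid
print(f"crossover at ~{hi:.1%} parallel")  # ~57.1%; beyond this the i7 wins
```

Below ~57% parallelism the higher-clocked dual core finishes first; above it, the extra cores pay off, which is exactly the "depends on the workload" point.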

In general, though, the type of user who bought the 2600K back when fewer applications were optimized for multithreading will suffer a large downgrade, not a sidegrade as Ian Cutress suggests, by going to a 7350K.

* Since the advent of Sandy Bridge in 2011, Intel CPUs have largely settled on an overclock a bit higher than 4.5GHz – TIM issues aside. Intel's default clocks were much more conservative until recently, which matters if you had a non-K processor but doesn't matter when looking at K processors such as those in AnandTech's article.

  • 2600K 3.5GHz base 3.7GHz boost
  • 3770K 3.5GHz base 3.7GHz boost
  • 4770K 3.5GHz base 3.7GHz boost
  • 4790K 4.0GHz base 4.4GHz boost
  • 6700K 4.0GHz base 4.2GHz boost
  • 7700K 4.2GHz base 4.5GHz boost

Thursday, January 19, 2017

American vernacular

Some years ago in college I had a nosebleed. Ended up in the small town ER where they shoved adrenaline up my nose and hooked me up to an IV to replenish my precious bodily fluids.

Kansas winters are no place for tropical island people such as myself. A room at 70 degrees when it's below freezing outside is easily drier than a desert. A humidifier solves this, but as anyone familiar with North American construction knows, you have to keep indoor relative humidity (RH) under 50% or else invite mold, rot, mildew, and who knows what else.

Outdoors, almost all places with a good climate have outdoor RH above 50% minimum. Most American homes don't just start rotting and molding over during the summer when temperature and humidity are high, so why is a humidifier a problem during the winter? Condensation.

Warm humid indoor air at 70F and 50% RH will condense on cold surfaces like windows; things like drywall and wood will absorb moisture. The current trend is to use complicated wall assemblies with sheathing, cavities, heavy insulation, vapor barriers, taping, etc. to ensure warm moist air doesn't contact cold surfaces.
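
The condensation threshold is just the dew point. A quick sketch using the Magnus approximation (the common a = 17.27, b = 237.7 °C constants, good to within roughly half a degree) puts the dew point of 70 °F / 50% RH air around 50 °F, so any window surface colder than that sweats:

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Dew point in Celsius via the Magnus approximation."""
    a, b = 17.27, 237.7
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

indoor_c = (70 - 32) / 1.8          # 70 F -> ~21.1 C
td_c = dew_point_c(indoor_c, 50)
td_f = td_c * 1.8 + 32
print(f"dew point: {td_c:.1f} C / {td_f:.1f} F")  # ~10.3 C / ~50.5 F
```

With subfreezing air outside, single-pane glass easily sits below 50 °F, which is why the moisture shows up on the windows first.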

It's a bit like the performance outdoor apparel industry where you are meant to wear a baselayer, insulating layer, wind resistant layer coated with a durable water repellent finish and taped seams. Theoretically it sounds optimal but then the durable water repellent finish wears off or simply gets soaked, the base layer stops wicking and all that fancy moisture and thermal management fluff you read on the card attached to the garment you bought at REI disappears.

In homes, insulation doesn't get applied properly, tape eventually stops holding, OSB delaminates, and moisture becomes a real problem as the organic material in drywall, frames, and sheathing, gets eaten away and all the plasticizers and stabilizers holding it together end up in the environment. Or in your lungs. So despite all the fine chemistry DuPont has embalmed our stick homes in, we still can't crank the humidifier to match an ideal climate.

Maybe two-by-fours and drywall made economic sense when lumber was cheap and high quality, but that's not the case anymore. Stick construction and the industry that supports it are a huge waste that reflects, as I suspect the American trade deficit also does, a generally high time preference culture.

High time preference means building roofs out of asphalt shingles that require replacing every 20 years instead of building a smaller house with a metal or slate roof that will last hundreds. My father's first impression of American homes was that "they looked good but then you realize you can punch through the sheetrock and it seems lousy". Middle-class Filipinos, like people in developing countries everywhere, use reinforced concrete blocks. The engineering and quality are no doubt worse than American concrete construction (commonly used in commercial settings) but I have little doubt as to which material will stand the test of time.

There are tons of concrete buildings still around from Roman times and plenty of stone ones around the world. Not so much with wooden buildings.* So it's not like the technology doesn't exist to build long lasting low maintenance homes that can actually accommodate 50%+ indoor RH and are impervious to mold, insects, fire, flood damage, and whatever else concrete/stone has going for it.

But when you talk to contractors, most are used to doing things a certain way. "It's just not done here." Instead they talk about high performance within the context of wood stick construction which, even at its best, is mediocre in comparison. The way the industry markets its latest housewrap or pressure treatment, though, you'd think disruption was a constant. It's an illusion.

* Maybe Shakespeare's Globe Theatre (if that didn't burn down) and there's this one Japanese temple made of wood that gets rebuilt every so often.