Free Markets, Free People
I find something really interesting. In my previous post on creating the 2 Quickscript fonts, no one asked what I'd have thought was an obvious question: "Wait. You made fonts? How the hell do you make a font?"
I find it fascinating that, especially today, when we have daily access to electronic typography, there’s so little interest in what fonts are, or how to make them. Especially when literally anyone with a computer can make their own fonts. There’s even a free, online bitmap font creation program called FontStruct. We spend our lives surrounded by typography and almost no one cares about it at all.
Which brings me to a trilogy of fantastic documentaries about design by a film-maker named Gary Hustwit: Helvetica, Objectified, and Urbanized. All three of them are enormously interesting, and one of them is about a font, Helvetica, which every single person in the Western world sees every single day of their lives. You should watch all three of them.
Also you should go read my latest auto review at Medium: Doctor Hoon: 2013 Mini John Cooper Works GP. And you should "recommend" it after reading, to make my Medium stats shoot up really high.
Today seems like a perfect day to tell you about some new technology I’ve been involved with for software development. Here’s a ninety second video with a high level description, made by the video training company that I did a course with last year:
I know some of our regular commenters, particularly looker, will be interested in this technology.
What if people could easily function with much less sleep?
Jon M at Sociological Speculation asked that question after observing that “new drugs such as Modafinil appear to vastly reduce the need for sleep without significant side effects (at least so far).” At extremes, as Jon M noted in a follow-up post, modafinil allows a reduction to 2.5 hours a night, but “the more common experiences seem to be people who reduce their sleep by a few hours habitually and people who use the drugs to stay up for extended periods once in a while without suffering the drastic cognitive declines insomnia normally entails.” In fact, alertness is not the only reported cognitive benefit of the drug.
The US brand of modafinil, Provigil, did over $1.1 billion in US sales last year, but for the moment let’s dispense with the question of whether modafinil is everything it’s cracked up to be. We’re speculating about the consequences of cheaply reducing or even eliminating the need for sleep for the masses.
If I can add to what’s already been said by several fine bloggers – Garett Jones at EconLog on the likely effect on wages, then Matt Yglesias at Slate sounding somewhat dour about the prospect, and Megan McArdle at the Daily Beast having fun with the speculation – the bottom line is that widely reducing the need for sleep would be a revolutionary good, as artificial light was.
For a sense of scale, there are about 252 million Americans age 15+, and on average they’re each awake about 5,585 hours a year. Giving them each two extra hours a night for a year would be equivalent to adding the activity of 33 million people, without having to shelter, clothe, and feed 33 million more people.
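The back-of-the-envelope math is easy to check. A quick sketch, using the population and waking-hours figures cited above:

```python
# Rough check of the "33 million people" equivalence claimed above.
population = 252_000_000        # Americans age 15+
waking_hours_per_year = 5_585   # average waking hours per person per year
extra_hours_per_night = 2

# Total extra waking hours gained across the population in a year
extra_hours = population * extra_hours_per_night * 365

# Expressed as "equivalent people": how many average people's annual
# waking time those extra hours represent
equivalent_people = extra_hours / waking_hours_per_year

print(round(equivalent_people / 1_000_000, 1))  # ≈ 32.9 million
```

Which rounds to the 33 million quoted, with no extra mouths to feed.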
Whatever objections critics have, sleeping less will be popular to the extent that people think the costs are low. For all the billions of dollars spent trying to add years to their older lives, obviously people would spend more to add life to their younger years. Who ever said, “If only I’d had less time!”?
Consider that the average employed parent usually sleeps 7.6 hours each workday. He spends 8.8 of his remaining hours on work and related activities, 1.2 hours caring for others, and 2.5 hours on leisure and sports.
If he spends more time working productively (i.e. serving others), that’s good for both him and society. The time and effort invested in birthing, educating, and sorting people for jobs is tremendous, so getting more out of people who are already born, educated, and sorted is just multiplying the return on sunk costs.
That’s a godsend for any society undergoing a demographic transition after the typical fall in birthrates, because aside from hoping for faster productivity growth, the specific ways to address having fewer workers per retiree – higher taxes, lower benefits, more immigration, or somehow spurring more people to invest in babies for decades – are unpleasant or difficult or both.
And if he uses extra hours to pursue happiness in other ways, that’s generally fine too. A lot of people may simply get more out of their cable subscription. Others will finally have time for building and maintaining their families, reading, exercising, or learning a skill.
Yes, once a substantial number of people are enhancing their performance, others will likely have to follow suit if they want to compete. But then, that’s also true of artificial light and many other technologies. If people naturally slept only four hours a night and felt rested and alert, who would support a law forcing everyone to sleep twice as long, cutting a fifth of their waking hours so that everyone would slow down to the speed that some people prefer to live their lives?
I don’t think most people have such a strong presumption in favor of sleep. We like feeling rested, or dreaming, but not sleeping as such; a substantial minority of Americans sleep less than advised despite the known costs, and so reveal their preference for waking life over oblivion.
It’s come to the point where it’s obvious that the terrorists have won. Why? Because they have provided government the excuse to intrude more and more into our lives, and government is more than willing to use it. If this doesn’t bother you, you’re not paying attention:
Top U.S. intelligence officials gathered in the White House Situation Room in March to debate a controversial proposal. Counterterrorism officials wanted to create a government dragnet, sweeping up millions of records about U.S. citizens—even people suspected of no crime.
Not everyone was on board. “This is a sea change in the way that the government interacts with the general public,” Mary Ellen Callahan, chief privacy officer of the Department of Homeland Security, argued in the meeting, according to people familiar with the discussions.
A week later, the attorney general signed the changes into effect.
Of course the Attorney General signed the changes into effect. He’s as big a criminal as the rest of them.
What does this do? Well here, take a look:
The rules now allow the little-known National Counterterrorism Center to examine the government files of U.S. citizens for possible criminal behavior, even if there is no reason to suspect them. That is a departure from past practice, which barred the agency from storing information about ordinary Americans unless a person was a terror suspect or related to an investigation.
Now, NCTC can copy entire government databases—flight records, casino-employee lists, the names of Americans hosting foreign-exchange students and many others. The agency has new authority to keep data about innocent U.S. citizens for up to five years, and to analyze it for suspicious patterns of behavior. Previously, both were prohibited.
Your activities are now presumed to be “suspicious”, one assumes, just by existing and doing the things you’ve always done. Host a foreign exchange student? Go under surveillance. Fly anywhere the government arbitrarily decides is tied to terrorism (or not)? It’s surveillance for you (can the “no-fly” list be far behind?). Work in a casino? Go onto a surveillance list.
And all of this by unaccountable bureaucrats who have unilaterally decided that your 4th Amendment rights mean zip. In fact, they claim that the 4th doesn’t apply here.
Congress specifically sought to prevent government agents from rifling through government files indiscriminately when it passed the Federal Privacy Act in 1974. The act prohibits government agencies from sharing data with each other for purposes that aren’t “compatible” with the reason the data were originally collected.
But the Federal Privacy Act allows agencies to exempt themselves from many requirements by placing notices in the Federal Register, the government’s daily publication of proposed rules. In practice, these privacy-act notices are rarely contested by government watchdogs or members of the public. “All you have to do is publish a notice in the Federal Register and you can do whatever you want,” says Robert Gellman, a privacy consultant who advises agencies on how to comply with the Privacy Act.
As a result, the National Counterterrorism Center program’s opponents within the administration—led by Ms. Callahan of Homeland Security—couldn’t argue that the program would violate the law. Instead, they were left to question whether the rules were good policy.
Under the new rules issued in March, the National Counterterrorism Center, known as NCTC, can obtain almost any database the government collects that it says is “reasonably believed” to contain “terrorism information.” The list could potentially include almost any government database, from financial forms submitted by people seeking federally backed mortgages to the health records of people who sought treatment at Veterans Administration hospitals.
So they just exempted themselves without any outcry, without any accountability, without any review. They just published a notice that they were “exempt” from following the law of the land or worrying about 4th Amendment rights.
Here’s the absolutely hilarious “promise” made by these criminals:
Counterterrorism officials say they will be circumspect with the data. “The guidelines provide rigorous oversight to protect the information that we have, for authorized and narrow purposes,” said Alexander Joel, Civil Liberties Protection Officer for the Office of the Director of National Intelligence, the parent agency for the National Counterterrorism Center.
What a load of crap. If you believe that you’ll believe anything government says. Human nature says they’ll push this to whatever limit they can manage until someone calls their hand.
And, as if that’s all not bad enough:
The changes also allow databases of U.S. civilian information to be given to foreign governments for analysis of their own. In effect, U.S. and foreign governments would be using the information to look for clues that people might commit future crimes.
So now our government is free to provide foreign governments with information about you, whether you like it or not.
This isn’t a new idea – here’s a little flashback from a time when people actually raised hell about stuff like this:
“If terrorist organizations are going to plan and execute attacks against the United States, their people must engage in transactions and they will leave signatures,” the program’s promoter, Admiral John Poindexter, said at the time. “We must be able to pick this signal out of the noise.”
Adm. Poindexter’s plans drew fire from across the political spectrum over the privacy implications of sorting through every single document available about U.S. citizens. Conservative columnist William Safire called the plan a “supersnoop’s dream.” Liberal columnist Molly Ivins suggested it could be akin to fascism. Congress eventually defunded the program.
Do you remember this? Do you remember how much hell was raised about this idea? However now, yeah, not such a big deal:
The National Counterterrorism Center’s ideas faced no similar public resistance. For one thing, the debate happened behind closed doors. In addition, unlike the Pentagon, the NCTC was created in 2004 specifically to use data to connect the dots in the fight against terrorism.
What a surprise.
I’m sorry, I see no reason for an unaccountable Matthew Olsen or his NCTC to know anything about me or have the ability to put a file together about me, keep that information for five years and, on his decision and his decision only, provide the information on me to foreign governments at his whim.
I remember the time the left went bonkers about the “Privacy Act”. Here’s something real to go bonkers on and what sound do we hear from the left (and the right, for that matter)?
I ran across an article in Forbes by Mark Gibbs, a proponent of stricter gun control, in which he thinks, given a certain technology, that gun control in reality may be dead.
That technology? 3D printers. They’ve come a long way, and some of them are able to work in metals. That, apparently, led to an experiment:
So, can you print a gun? Yep, you can and that’s exactly what somebody with the alias “HaveBlue” did.
The receiver is, in effect, the framework of a gun and holds the barrel and all of the other parts in place. It’s also the part of the gun that is technically, according to US law, the actual gun and carries the serial number.
When the weapon was assembled with the printed receiver HaveBlue reported he fired 200 rounds and it operated perfectly.
Whether or not this actually happened really isn’t the point. At some point, there’s no doubt it will. There are all sorts of other things to consider when building a gun receiver (none of which Gibbs goes into), but on a meta level what Gibbs is describing is much like what happened to the news industry when self-publishing (i.e. the birth of the new media), along with the internet, became a reality. The monopoly control of the flow of news enjoyed by the traditional media exploded into nothingness. It has never been able to regain that control and, in fact, has seen it slip even more.
Do 3D printers present the same sort of evolution, and the same threat to government control? Given the obvious possibility, can government exert the same sort of control over the population that it can over gun manufacturers? And these 3D printers work in ceramic, too. Certainly ceramic pistols aren’t unheard of. Obviously these printers are going to continue to get better, bigger, and work with more materials.
That brings us to Gibbs’s inevitable conclusion:
What’s particularly worrisome is that the capability to print metal and ceramic parts will appear in low end printers in the next few years making it feasible to print an entire gun and that will be when gun control becomes a totally different problem.
So what are government’s choices, given its desire to control the manufacture and possession of certain weapons?
Well, given the way it has been going for years, I’d say it isn’t about to give up control. So?
Will there be legislation designed to limit freedom of printing? The old NRA bumper sticker “If guns are outlawed, only outlaws will have guns” will have to be changed to “If guns are outlawed, outlaws will have 3D printers.”
Something to think about. I think we know the answer, but certainly an intriguing thought piece. Registered printers? Black market printers? “Illegal printers” smuggled in to make cheap guns?
The possibilities boggle the mind. But I pretty much agree with Gibbs – given the evolution of this technology, gun control, for all practical purposes, would appear to be dying and on the way to dying.
Most intuitively know you can’t borrow your way out of debt, so it seems like a silly question on its face. But the theory is that government spending creates a stimulative effect that gets the economy going and pays back the deficit spending in increased tax revenues. $14 trillion of debt argues strongly that the second part of that equation has never worked.
The current administration and any number of economists still believe that’s the answer to the debt crisis now and argue that deficit spending will indeed get us out of the economic doldrums we’re in. William Gross at PIMCO tells you why that’s not going to work:
Structural growth problems in developed economies cannot be solved by a magic penny or a magic trillion dollar bill, for that matter. If (1) globalization is precluding the hiring of domestic labor due to cheaper alternatives in developing countries, then rock-bottom yields can do little to change the minds of corporate decision makers. If (2) technological innovation is destroying retail book and record stores, as well as theaters and retail shopping centers nationwide due to online retailers, then what do low cap rates matter to Macy’s or Walmart in terms of future store expansion? If (3) U.S. and Euroland boomers are beginning to retire or at least plan more seriously for retirement, why will lower interest rates cause them to spend more? As a matter of fact, savers will have to save more just to replicate their expected retirement income from bank CDs or Treasuries that used to yield 5% and now offer something close to nothing.
My original question – “Can you solve a debt crisis by creating more debt?” – must continue to be answered in the negative, because that debt – low yielding as it is – is not creating growth. Instead, we are seeing: minimal job creation, historically low investment, consumption turning into savings and GDP growth at less than New Normal levels.
Not good news, but certainly the reality of the situation. Deficit spending has been the panacea that has been attempted by government whenever there has been an economic downturn. Some will argue it has been effective in the past and some will argue otherwise. But if you read through the 3 points Gross makes, even if you are a believer in deficit spending in times of economic downturn, you have to realize that there are other reasons – important reasons – that argue such intervention will be both expensive and basically useless.
We are in the middle of a global economy resetting itself. Technology is one of the major drivers, and its expansion is tearing apart traditional institutions in favor of new ones that unfortunately don’t depend as heavily on workers.
Much of the public assumes we’ll return to the Old Normal. But one has to wonder, as Gross points out, whether we’re not going to stay at the New Normal for quite some time as economies adjust. And while it will be a short term negative, the Boomer retirements will actually end up being a good thing in the upcoming decades as there will be fewer workers competing for fewer jobs.
But what should be clear to all is that, without serious adjustments and changes, the welfare state as we know it today is over. Economies can’t support it anymore. That’s what you see going on in Europe today – its death throes. And it isn’t a pretty picture.
So? So increased government spending isn’t the answer. And the answer to Gross’s question, as he says, is “no”.
The next question is how do we get that across to the administration (and party) which seems to remain convinced that spending like a drunken sailor on shore leave in Hong Kong is the key to turning the economy around and to electoral salvation?
I’m coincidentally the same age as Steve Jobs and Bill Gates. I’ve seen and worked in the industry they created – what we first called "micro-computers" and later "personal computers" or PCs.
Even that term is falling out of favor. "Laptop" is probably heard more often now, with "tablet" and "slate" moving in.
I’m wondering, though, if "slate" will actually stick. Just as "kleenex" is the word most of us use for a small tissue to wipe your nose (no matter how Kimberly-Clark feels about it), I wonder if we’ll someday be talking about "ipads" from Amazon and Samsung. That would merely be continuing the trend where "ipod" is becoming the generic term for an MP3 player.
This is one example of the power of Steve Jobs to set the agenda in the last ten years. There are plenty more.
The changing signs on Music Row in Nashville are another testament to his ability to turn an existing order upside down. The iPod changed the music industry beyond recognition, and here in Nashville we had a front-row seat to watch the changes.
The area of most interest to me, though, is in software. I’ve focused more on user interface design over the years than any other area. I’ve watched Apple drive a trend that is powerful and desirable in our industry: moving from just making something possible with technology to making it easy.
For decades, it was enough for a software program to make something possible that was not possible before. DOS-based software was never particularly easy to use. The underlying technology to make it easy just wasn’t there.
Jobs and Wozniak pioneered that era, but Bill Gates ruled it. He reduced IBM to irrelevance, along with Novell, Lotus, and WordPerfect, all major league software companies at one time.
To some extent, Bill understood the importance of making things easy; Excel was about ten times easier to use than Lotus 1-2-3. But he never really innovated much in making things easy. His forte was seeing good ideas produced by others, copying them, and making products based on them affordable and practical. Windows was never the equal of the Mac until (arguably) Windows 7, but it ran on cheaper machines, and Bill made it friendly to businesses, which were the biggest buyers of PCs until somewhere in the 1990s.
Steve Jobs and his crew were Bill’s best idea source. I sometimes thought that they served as the unofficial research arm of Microsoft for user interface design throughout the eighties and nineties. Apple sputtered through that period, producing hits (iMac) and misses (Newton). At one point, Bill Gates even stepped in with a capital infusion that saved Apple from likely irrelevance or even bankruptcy. I suppose he didn’t want to see his free research lab disappear.
During that era, Steve Jobs kept pushing the boundaries. The very first Mac was a pain to use, because it was too slow to do what he imagined, and had a screen that we would laugh at today. But it made some new things possible, such as real graphic editing. Though a PC was my main machine in the mid-1980s, I would put up with the Mac’s flaws to do my graphics work. The salesmen at our company said that our diagrams of the system we were proposing often clinched the sale.
I believe Jobs had a vision during that period of what personal technology could be like, but the nuts and bolts were not quite there. Nevertheless, he always insisted on "user first" thinking.
Jobs understood something that is still misunderstood by almost all companies in technology. You can’t innovate by asking your users to tell you what to do.
The typical technology company convenes focus groups and does market research, and then says "Ah, what buyers want is X, Y, and Z. OK, you lab guys, go create it for the lowest possible cost."
Steve Jobs understood that consumers and users of technology don’t know how to design technology products any more than movie goers know how to write screenplays. To create innovative and delightful user experiences, it is necessary to get inside the mind of the user and understand them so well that you know what they will like even before they do.
This is hard. It’s so hard that only two companies in my lifetime have been any good at it at all: Apple and Sony. And these companies have dramatically different batting averages, with Apple up in Ted Williams territory while Sony languishes around the Mendoza line.
Finally, about ten years ago, the underlying technology started matching up with Jobs’ vision. The result was the iPod.
There were plenty of MP3 players that pre-dated the iPod. I had one, from Creative. It had about enough storage for three albums, and required me to organize files and folders on it to store my music.
Steve Jobs saw the small, low power hard disks coming on line and realized they could be the foundation of a new, reimagined device. First, it would store hundreds of albums or thousands of songs – a typical person’s entire music collection. It would use software designed earlier to manage music – iTunes.
The big departure was the approach to user experience. The iPod was so simple to use that someone could pick it up and figure it out in about two minutes.
This was done by purposely leaving out features that were arguably useful. While the other MP3 makers were designing and marketing on checklists of features, the iPod stripped things down to the basics. And kicked the others to the curb.
Jobs realized before others that it was time to stop working on "possible" and start emphasizing "easy". When technology is new and rapidly evolving, something new is possible with each passing year, and giving buyers new features is enough to sell products. But when technology reaches a certain point, and the feature lists get long enough, all products have the essential features. The differentiation then becomes based on something very simple: what people like.
This is particularly true as technology starts appealing to a broad market. If you try to satisfy everyone in a broad market by including all the features anyone in a broad spectrum wants, you’ll end up with an unusable mess.
At some point in the evolution of technology for a given space, people just assume that the features they really need will be in all the devices they see. They start choosing based on emotion. That is, they seek what feels elegant and fluid to them, something they really want to be a part of their daily life.
This is where genuine design, based on universal design principles that go back decades or centuries, starts adding value. For example, Hick’s Law says that the time required to choose an option goes up as the number of options increases. Simply put, users get frustrated trying to find the feature they want from a long list of features in a menu, or trying to find the button they want on a remote control that has fifty-eleven buttons.
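Hick’s Law is usually written as T = a + b·log₂(n + 1), where n is the number of equally likely options and a and b are constants that depend on the task. A quick sketch (the coefficient values here are purely illustrative, not empirically fitted) shows the shape of the curve: every doubling of the menu adds roughly the same increment of decision time, so a fifty-item menu is noticeably slower to navigate than a handful of choices:

```python
import math

def hick_decision_time(n_options, a=0.2, b=0.15):
    """Estimated decision time in seconds for n equally likely options.

    Implements Hick's Law, T = a + b * log2(n + 1). The constants a and b
    are illustrative placeholders, not measured values.
    """
    return a + b * math.log2(n_options + 1)

# Decision time grows logarithmically with the number of options
for n in (4, 8, 16, 50):
    print(n, round(hick_decision_time(n), 2))
```

The logarithm is cold comfort in practice: users still scan the list visually, and menus of unfamiliar items tend to degrade toward linear search, which is why trimming features pays off even more than the formula suggests.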
There is an entire body of knowledge in this space, and the first major computer/software company to emphasize designers who knew and understood this body was Apple. The culture at Apple values people who know how to get inside the mind of a user and then create a new way of interacting with technology that the user will love.
Jobs created and drove that culture. He went from turning the music business upside down with the iPod to turning the phone industry upside down with the iPhone, and now Apple is remaking their original territory, the personal computer, with the iPad.
I’ve discussed before in the comments here that I don’t like the iPad. It’s slow and limited for my purposes, many of the web sites I use are not compatible with it, and I don’t like iTunes.
But it’s not designed for me. That’s a key lesson that designers grow to appreciate. Each design has a target audience, which must not be too broad. The true test of a good designer is whether they can design something for someone who is not like them.
I put my iPad in the hands of my 76 year old mother, and she immediately took to it. I showed her a few basic touch gestures, and she could immediately do the only things she uses a computer for – browsing and email. For her, it was easy, and as a veteran of the made-to-do-anything-and-everything Windows (I got her a computer for email and such six years ago), she really appreciated that.
The culture created by Jobs can do things that Microsoft, for all its money and brains, is not very good at. Microsoft people are smart. I work with many of them, so I’ve seen it firsthand. But almost all of them have a tendency that is all too common in the human race. They can only see the world through their own eyes, and are not very good at seeing it through the eyes of someone with a radically different background or different abilities.
When Microsoft teams start designing a new product or version, most of the times I’ve been involved, the process started with a list of proposed features. In other words, their process starts with what they want to make possible for the user.
Unlike Apple, the culture at Microsoft places little or no value on making things easy. This isn’t surprising, because Microsoft’s success over a span of decades has not been dependent on innovation in making things easy. It’s been in making things possible and affordable. They copied the "make things easy" part from someone else, usually Apple.
But even Microsoft has seen the direction for the industry laid out by Jobs and Apple, and realized that things have sped up. Copying isn’t good enough any more. Jobs perfected the process of laying waste to entire segments with an innovative new entry, and as the iPhone showed, it can happen in a single year.
Those at Microsoft are starting down the path of worrying more about user experience. They may not like it much, but they realize it’s now a matter of necessity.
First, they created the Xbox – an entirely new product in a separate division that successfully challenged established players in a world where user experience trumps everything else. Then, shamed by the abysmal Windows Mobile products they had produced in the phone space, they created a pretty decent product there in the Windows Phone.
Their steps are halting and tentative, but at least they are toddling down that path now. I hope they learn how to walk and run on that path, but given the effort it will take to turn their culture around, that will take a while.
I don’t know that they would have ever gone down that route if Jobs and Apple had not pushed them down it. I’ve chafed for most of my career at the apathy and ignorance in the Microsoft community around user experience. I’ve always believed that our systems and devices exist for users, not for our own aggrandizement. As such, we owe them the best experience we can give them.
I was never a major Apple customer. Apple was never a cost-effective choice for the business and data oriented software I’ve created.
But that doesn’t mean that I don’t appreciate what Steve Jobs did for our industry. I absolutely do. I wish he could have been around for another decade or two, continuing to show the world that "possible" isn’t good enough, and push the rest of the industry into respecting our users and making things easy.
There are certainly more comprehensive tributes, but this is my favorite so far. From Steven Horwitz:
Unlike many, I am not an Apple-phile. I honestly don’t get the emotional relationship people have with their products. HOWEVER… there is absolutely no doubt that Steve Jobs is a symbol of all that is right with markets and capitalism. This is a man who became very, very rich by making many people’s lives (including my own) very much better. He was a master at creating value and persuading people that they wanted things they didn’t know they wanted. He should be part of the pantheon of human heroes.
Unlike the political and military heroes of war we too often celebrate, Jobs is a hero of peace. He made his money through persuasion not at the point of a gun, and through mutual benefit not oppression and exploitation. Those of us who really desire a peaceful society should not celebrate those who were victorious in war, but those who created value through peaceful, voluntary, mutually beneficial exchange – exchanges that happen billions of times every single day. And we should do it no matter whether what was exchanged was electronic bits of magic, food for us to eat, or financial instruments that improve the movement of capital. They all create value and improve our lives, and all of their benefits are deserved.
Thanks for everything Steve and thanks for making the world a better place one peaceful, cooperative exchange at a time.
Microsoft’s //build/ conference is on, where they are rolling out plans for a pretty dramatic shift in Windows for the next generation.
I’m in sunny Anaheim at the conference, with no time to pen a long post. If you’ve got ten minutes to waste listening to me ramble, and you care about the Microsoft side of the tech industry, you can watch this video which was posted a couple of hours ago. Actually, it might be better to watch some other videos in the series that feature Microsoft executives with a lot more interesting and detailed things to say, but, hey, if you make fun of them in the comments here, they’ll never see it. Whereas you can point out that the camera angle makes me look like I have some kind of weird arthritis, and I just have to take it.
If you don’t care about software development, or do care but are apathetic or hostile to Microsoft, my apologies. Please return to our usual program of economic and political doom.