Free Markets, Free People

Technology


Scheming away your money … and privacy

That’s primarily what politicians seem to want to do, despite protestations to the contrary by some.  They’re always looking for a new “revenue stream”.  And since taxpayers are the only folks who actually pay taxes, they’re constantly dreaming up new ways to “access” your wallet.

Such as:

Rep. Earl Blumenauer (D-Ore.) on Tuesday reintroduced legislation that would require the government to study the most practical ways of taxing drivers based on how far they drive, in order to help fund federal highway programs.

Blumenauer’s bill, H.R. 3638, would set up a Road Usage Fee Pilot Program, which he said would study mileage-based fee systems. He cast his bill as a long-term solution for funding highway programs, and proposed it along with a shorter-term plan to nearly double the gas tax, from 18.4 cents to 33.4 cents per gallon.

“As we extend the gas tax, we must also think about how to replace it with something more sustainable,” Blumenauer said Tuesday. “The best candidate would be the vehicle mile traveled fee being explored by pilot projects in Oregon and implemented there on a voluntary basis next year.”

Because, you know, taxpayers paid for the highways, taxpayers have funded the maintenance of the highways, and now they should pay for the privilege of driving on them as well. So, many single moms, who can barely afford gas for the car, will likely be paying by the mile to go to work (as with most of these stupid schemes, the ones who can afford it least will get hit the hardest).

Brilliant!  Aw, what the heck, they can take public transportation, huh?

And what about privacy? What business is it of government how far you drive? One assumes they’ll be able to verify your mileage somehow, no? Do you really think it will be up to you to voluntarily keep records and report to the government? Of course not. So somehow the system has to be able to track you and tally mileage.

Yup:

In 2011, the Congressional Budget Office (CBO) released a study that explored how a VMT system might work. That report suggested devices could be fitted onto cars that log how far they have traveled, and these devices could be electronically read at gas stations to generate tax bills for drivers.

That’s what I want … a government tracking device on my car.

Where’s Orwell when you need him?

~McQ


Welcome to the future

I’ve spent the last 20 years developing software, managing software development, and doing software systems analysis full-time. I’ve been programming since I was 16. The first time I went to college, my major was computer science.  Since then, I’ve seen one major revolution in the computer industry, which essentially blew up everything that came before it.

That first revolution was the advent of the PC. I started college in 1982, when there were very, very few PCs. Our computer lab had a couple of Apple IIs, Osbornes and TRS-80s. We played with them. They were cute. But we did our business on a Digital DEC 1133 mainframe with 50 dumb terminals and an old IBM 64 that only took punch card input.  Programming was done by an elite group of specialists who logged into the mainframe and wrote in plain text, or who pushed up to a massive IBM card punch machine and typed in their programs one line at a time, punching a card for each line of code.

The punch cards were the worst. At least, on the DEC, you could compile your program while writing it. With the punch cards, you’d stack them up all in order—being very careful to write the card number on the back in magic marker, in case you dropped your card stack and had to re-sort the cards—and turn them in at the computing center.  Then you’d go back the next day to learn that you’d forgotten to type a comma in card 200, and you had to re-type the card. You’d turn your stack in for another overnight wait, only to learn you’d missed a comma in card 201.

It was worse than being stabbed.

The PC killed all that. It killed the idea of programmers being this small cadre of elite specialists. And once PCs could talk to each other via a network, they killed the idea that software development had to take a long time, with shadowy specialists toiling away on one large machine in the bowels of the building.  By 1995, self-taught amateurs were programming database applications to handle their small business inventory in FoxPro.  Corporations employed hordes of programmers to build complicated database applications.

In fact, for all the snide attitudes they get, the people at Microsoft pretty much single-handedly created the worldwide software development community we have today. The Visual Basic for Applications programming interface for Microsoft Office, Visual Basic, and, eventually, the .NET Framework allowed millions of people to learn programming and create applications. Practically every corporation with more than 100 people has its own programming team, building custom desktop software for the organization.  Millions of people are employed as freelance software developers. Yes, there are other programming technologies out there, but none of them—none—have had the impact on democratizing software development that Microsoft’s has had.

There are still some mainframe computers, of course. IBM makes 90% of them. It’s a small, mostly irrelevant market, except for governments, universities, and very large corporations. Every computer used to be a mainframe. Now, almost none of them are.

We’ve gotten used to this software development landscape. Even the advent of the Internet—while revolutionary in many ways—has not been particularly revolutionary in terms of software development. Until now, the changes to software development caused by the internet have been evolutionary—building on existing technologies. In fact, in some ways, the internet has been devolutionary. Thirty years ago, workers logged in to dumb terminals with no processing power, using a mainframe to do all the work.  Similarly, using the internet until now has mainly meant opening up a dumb web browser like Firefox or Internet Explorer to talk to a web server where all the magic happens. The browser has really been just a display device. The web server takes a request from a browser, retrieves information, gets database data, formats it, and sends it back in simple text so the browser can display it.
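To make that concrete, here’s a minimal sketch of the traditional model, written in Node simply to keep one language throughout this post (the route, data, and markup are hypothetical). Everything interesting, retrieval and formatting alike, happens on the server, and the browser merely displays the finished HTML:

```javascript
// A hypothetical sketch of the "dumb browser" model: the server retrieves the
// data, formats it as HTML, and the browser just displays what it receives.
var http = require('http');

http.createServer(function (req, res) {
  // In a real application this would come from a database query.
  var orders = [{ id: 1, total: 19.95 }, { id: 2, total: 42.00 }];

  // All of the work, retrieval and formatting alike, happens here on the server.
  var rows = orders.map(function (o) {
    return '<tr><td>' + o.id + '</td><td>$' + o.total.toFixed(2) + '</td></tr>';
  }).join('');

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<html><body><table>' + rows + '</table></body></html>');
}).listen(3000);
```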

Until now.

In the past year or so, a number of technologies have hit the market that make the browser the center of processing instead of the web server.  Now, instead of running programming code on the server to do all the hard processing work, the browser can run JavaScript code and do all the processing locally. This has long been theoretically possible, but it was…difficult. Now, an entirely new kind of programming is possible, with JavaScript tools like Node.JS and Google’s AngularJS, and with database technologies like MongoDB that depart from the traditional relational database. (Though maybe not MongoDB itself, for a variety of reasons.)
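For a flavor of the shift, here is a minimal browser-side sketch in plain JavaScript. The endpoint and element IDs are hypothetical, and frameworks like AngularJS wrap this pattern in far more structure, but the idea is the same: the server hands back raw JSON, and the browser does the totaling and rendering locally.

```javascript
// A hypothetical sketch of browser-side processing: the server is now just a
// data source, and the arithmetic and formatting run locally in the browser.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/orders'); // assumed endpoint returning JSON like [{"id":1,"total":19.95}, ...]
xhr.onload = function () {
  var orders = JSON.parse(xhr.responseText);

  // Work that used to live on the web server now happens on the client.
  var total = orders.reduce(function (sum, o) { return sum + o.total; }, 0);

  document.getElementById('order-count').textContent = orders.length + ' orders';
  document.getElementById('order-total').textContent = '$' + total.toFixed(2);
};
xhr.send();
```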

It’s still in its infancy. All of the smooth developer productivity tools we’ve become used to aren’t there yet. It’s still a bit of a pain to program, and it takes longer than we’ve been used to. Indeed, it’s very much like programming was back in the 1990s, when the developer had to code everything.

For instance, in Microsoft’s .NET development environment today, developers have become used to just dragging a text box or check box onto a form and having the development environment write hundreds of lines of code for them. In a Windows application, a data-driven form that connects to a SQL Server database can be created almost entirely through a drag-and-drop interface, with the development environment writing thousands of lines of code behind the scenes. The developer only has to write about 15 lines of code to finish up the form so it works.

We don’t have that yet with the current generation of JavaScript tools. We have to type a lot more code to do fairly simple things. So, for someone like me, who’s lived in the .NET world since 2001, it’s a bit of a pain. But it’s a thousand times better than it was just two years ago. Five years from now, the tools for rapid application development for browser-based applications will be everywhere.

This is the second revolution in computing that will change everything we’ve become used to. Right now, a software application generally has to be installed on your computer. A software application designed for Windows won’t install on a Mac or Linux machine. Well, that’s all going away. Browser-based software is platform independent, which is to say, it doesn’t matter what your computer’s operating system is. Do you need an office suite? Google already has a browser-based one online, and there’s probably someone, somewhere in the world, working on a desktop-based version right now.  Need to access your database to crunch some data? No need for Access or FileMaker Pro. We can do that in the browser, too. In fact, we’re pretty much at the point where there is no commonly-done task that can’t be done in the browser.

We can now make almost—almost—any software application you think of, and anyone, anywhere in the world, can run it, no matter what computer they’ve got. This is the second revolution in computing that I’ve seen in my lifetime, and it’s going to change everything about how applications are developed.  Software that is dependent on the operating system is essentially as dead as the mainframe. Microsoft’s advantage in building a software development community? That’s dead now. Desktop applications? Dead. In 5 years there’ll be nothing that you can’t do in a browser. On a phone. I think that means the days of Windows having a natural monopoly on corporate computing are now dead, too.

There’ll still be desktop systems, of course. And, as long as your system can act as a web server—and pretty much any desktop or laptop can—you’ll still have software that you run on the desktop.  After all, you may want to work locally instead of on a big server run by someone like Google or Microsoft, who’ll be doing God knows what with any data you store on their servers. But your choice of computer or operating system will not be driven by whether the software you want to use is available for it. In fact, in 10 years, when you think of a desktop, you may just be thinking of a keyboard and display monitor that you plug your phone into to work more conveniently. Assuming, that is, you don’t just talk to it to get things done.

If you’re working in software development and aren’t embracing the coming wave of platform-independent, browser-based programming, you’re not doing yourself any favors. It may take another 10 years or so, but the technology you’re working on right now is dying. For someone like me, who’s invested decades in Windows applications development, it’s a bit sad to see all that accumulated knowledge and experience passing away. It’s not easy to move into an entirely new development technology and go through all of its growing pains. But I don’t see any choice.

Thirty years ago, everything I learned in college about how to work with computers got tossed out the window.  All of those hours struggling to write RPG programs on IBM punch cards, learning about mainframes…all of it was utterly useless within a few years when the PC came out. Now it’s happening again.

I remember how, back in the 90s, all the old mainframe Unix guys were grumpy about having to kiss their Unix machines good-bye. In 10 years, I’m not going to be one of those old Unix guys.


The obvious question

I find something really interesting. In my previous post on creating the 2 Quickscript fonts, no one asked what I’d have thought was an obvious question, which is, "Wait. You made fonts? How the hell do you make a font?"

I find it fascinating that, especially today, when we have daily access to electronic typography, there’s so little interest in what fonts are, or how to make them. Especially when literally anyone with a computer can make their own fonts. There’s even a free, online bitmap font creation program called FontStruct. We spend our lives surrounded by typography, and almost no one cares about it at all.

Which brings me to a trilogy of fantastic documentaries about design by a film-maker named Gary Hustwit: Helvetica, Objectified, and Urbanized. All three of them are enormously interesting, and one of them is about a font, Helvetica, which every single person in the Western world sees every single day of their lives. You should watch all three of them.

Also you should go read my latest auto review at Medium: Doctor Hoon: 2013 Mini John Cooper Works GP. And you should "recommend" it after reading, to make my Medium stats shoot up really high.

~
Dale Franks
Google+ Profile
Twitter Feed


For our readers in software development–a new technology called the agilo-modulizer

Today seems like a perfect day to tell you about some new technology I’ve been involved with for software development. Here’s a ninety-second video with a high-level description, made by the video training company that I did a course with last year:

 

I know some of our regular commenters, particularly looker, will be interested in this technology.


So lift those heavy eyelids

What if people could easily function with much less sleep?

Jon M at Sociological Speculation asked that question after observing that “new drugs such as Modafinil appear to vastly reduce the need for sleep without significant side effects (at least so far).” At extremes, as Jon M noted in a follow-up post, modafinil allows a reduction to 2.5 hours a night, but “the more common experiences seem to be people who reduce their sleep by a few hours habitually and people who use the drugs to stay up for extended periods once in a while without suffering the drastic cognitive declines insomnia normally entails.”  In fact, alertness is not the only reported cognitive benefit of the drug.

The US brand of modafinil, Provigil, did over $1.1 billion in US sales last year, but for the moment let’s dispense with the question of whether modafinil is everything it’s cracked up to be.  We’re speculating about the consequences of cheaply reducing or even eliminating the need for sleep for the masses.

If I can add to what’s already been said by several fine bloggers – Garett Jones at EconLog on the likely effect on wages, then Matt Yglesias at Slate sounding somewhat dour about the prospect, and Megan McArdle at the Daily Beast having fun with the speculation – the bottom line is that widely reducing the need for sleep would be a revolutionary good, as artificial light was.

For a sense of scale, there are about 252 million Americans age 15+, and on average they’re each awake about 5,585 hours a year.  Giving them each two extra hours a night for a year would be equivalent to adding the activity of 33 million people, without having to shelter, clothe, and feed 33 million more people.
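As a rough back-of-the-envelope check of that equivalence, using the same averages (two extra hours a night, 365 nights a year, 5,585 waking hours per person per year):

$$
252{,}000{,}000 \times (2 \times 365) \approx 1.84 \times 10^{11} \text{ extra waking hours}, \qquad
\frac{1.84 \times 10^{11}}{5{,}585} \approx 33 \text{ million person-years of wakefulness}.
$$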

Whatever objections critics have, sleeping less will be popular to the extent that people think the costs are low.  For all the billions of dollars people spend trying to add years to their older lives, they would obviously spend even more to add life to their younger years.  Who ever said, “If only I’d had less time!”?

Consider that the average employed parent usually sleeps 7.6 hours each workday.  He spends 8.8 of his remaining hours on work and related activities, 1.2 hours caring for others, and 2.5 hours on leisure and sports.

If he spends more time working productively (i.e. serving others), that’s good for both him and society.  The time and effort invested in birthing, educating, and sorting people for jobs is tremendous, so getting more out of people who are already born, educated, and sorted is just multiplying the return on sunk costs.

That’s a godsend for any society undergoing a demographic transition after the typical fall in birthrates, because aside from hoping for faster productivity growth, the specific ways to address having fewer workers per retiree – higher taxes, lower benefits, more immigration, or somehow spurring more people to invest in babies for decades – are unpleasant or difficult or both.

And if he uses extra hours to pursue happiness in other ways, that’s generally fine too.  A lot of people may simply get more out of their cable subscription. Others will finally have time for building and maintaining their families, reading, exercising, or learning a skill.

Yes, once a substantial number of people are enhancing their performance, others will likely have to follow suit if they want to compete.  But then, that’s also true of artificial light and many other technologies.  If people naturally slept only four hours a night and felt rested and alert, who would support a law forcing everyone to sleep twice as long, cutting a fifth of their waking hours so that everyone would slow down to the speed that some people prefer to live their lives?

I don’t think most people have such a strong presumption in favor of sleep.  We like feeling rested, or dreaming, but not sleeping as such; a substantial minority of Americans sleep less than advised despite the known costs, and so reveal their preference for waking life over oblivion.


US terrorism agency free to use government databases of US citizens

It is getting to the point where it is obvious that the terrorists have won.  Why?  Because they have provided government the excuse to intrude more and more into our lives, and government is more than willing to use it.  If this doesn’t bother you, you’re not paying attention:

Top U.S. intelligence officials gathered in the White House Situation Room in March to debate a controversial proposal. Counterterrorism officials wanted to create a government dragnet, sweeping up millions of records about U.S. citizens—even people suspected of no crime.

Not everyone was on board. “This is a sea change in the way that the government interacts with the general public,” Mary Ellen Callahan, chief privacy officer of the Department of Homeland Security, argued in the meeting, according to people familiar with the discussions.

A week later, the attorney general signed the changes into effect.

Of course the Attorney General signed the changes into effect.  He’s as big a criminal as the rest of them.

What does this do?  Well here, take a look:

The rules now allow the little-known National Counterterrorism Center to examine the government files of U.S. citizens for possible criminal behavior, even if there is no reason to suspect them. That is a departure from past practice, which barred the agency from storing information about ordinary Americans unless a person was a terror suspect or related to an investigation.

Now, NCTC can copy entire government databases—flight records, casino-employee lists, the names of Americans hosting foreign-exchange students and many others. The agency has new authority to keep data about innocent U.S. citizens for up to five years, and to analyze it for suspicious patterns of behavior. Previously, both were prohibited.

Your activities are now presumed to be “suspicious”, one assumes, just by existing and doing the things you’ve always done.  Host a foreign exchange student?  Go under surveillance.  Fly anywhere the government arbitrarily decides is tied to terrorists (or not)?  It’s surveillance for you (can the “no-fly” list be far behind?).  Work in a casino?  Go onto a surveillance list.

And all of this by unaccountable bureaucrats who have unilaterally decided that your 4th Amendment rights mean zip.  In fact, they claim that the 4th doesn’t apply here.

Congress specifically sought to prevent government agents from rifling through government files indiscriminately when it passed the Federal Privacy Act in 1974. The act prohibits government agencies from sharing data with each other for purposes that aren’t “compatible” with the reason the data were originally collected.

But:

But the Federal Privacy Act allows agencies to exempt themselves from many requirements by placing notices in the Federal Register, the government’s daily publication of proposed rules. In practice, these privacy-act notices are rarely contested by government watchdogs or members of the public. “All you have to do is publish a notice in the Federal Register and you can do whatever you want,” says Robert Gellman, a privacy consultant who advises agencies on how to comply with the Privacy Act.

As a result, the National Counterterrorism Center program’s opponents within the administration—led by Ms. Callahan of Homeland Security—couldn’t argue that the program would violate the law. Instead, they were left to question whether the rules were good policy.

Under the new rules issued in March, the National Counterterrorism Center, known as NCTC, can obtain almost any database the government collects that it says is “reasonably believed” to contain “terrorism information.” The list could potentially include almost any government database, from financial forms submitted by people seeking federally backed mortgages to the health records of people who sought treatment at Veterans Administration hospitals.

So they just exempted themselves without any outcry, without any accountability, without any review.  They just published a notice saying they were “exempt” from following the law of the land or worrying about 4th Amendment rights.

Here’s the absolutely hilarious “promise” made by these criminals:

Counterterrorism officials say they will be circumspect with the data. “The guidelines provide rigorous oversight to protect the information that we have, for authorized and narrow purposes,” said Alexander Joel, Civil Liberties Protection Officer for the Office of the Director of National Intelligence, the parent agency for the National Counterterrorism Center.

What a load of crap.  If you believe that you’ll believe anything government says.  Human nature says they’ll push this to whatever limit they can manage until someone calls their hand.

And, as if that’s all not bad enough:

The changes also allow databases of U.S. civilian information to be given to foreign governments for analysis of their own. In effect, U.S. and foreign governments would be using the information to look for clues that people might commit future crimes.

So now our government is free to provide foreign governments with information about you, whether you like it or not.

This isn’t a new idea – here’s a little flashback from a time when people actually raised hell about stuff like this:

“If terrorist organizations are going to plan and execute attacks against the United States, their people must engage in transactions and they will leave signatures,” the program’s promoter, Admiral John Poindexter, said at the time. “We must be able to pick this signal out of the noise.”

Adm. Poindexter’s plans drew fire from across the political spectrum over the privacy implications of sorting through every single document available about U.S. citizens. Conservative columnist William Safire called the plan a “supersnoop’s dream.” Liberal columnist Molly Ivins suggested it could be akin to fascism. Congress eventually defunded the program.

Do you remember this? Do you remember how much hell was raised about this idea?  Now, though, yeah, not such a big deal:

The National Counterterrorism Center’s ideas faced no similar public resistance. For one thing, the debate happened behind closed doors. In addition, unlike the Pentagon, the NCTC was created in 2004 specifically to use data to connect the dots in the fight against terrorism.

What a surprise.

I’m sorry, I see no reason for an unaccountable Matthew Olsen or his NCTC to know anything about me or have the ability to put a file together about me, keep that information for five years and, on his decision and his decision only, provide the information on me to foreign governments at his whim.

I remember when the left went bonkers about the “Patriot Act”.  Here’s something real to go bonkers over, and what sound do we hear from the left (and the right, for that matter)?

Freakin’ crickets.

~McQ


The end of gun control?

I ran across an article in Forbes by Mark Gibbs, a proponent of stricter gun control, in which he argues that, given a certain technology, gun control may in reality be dead.

That technology?  3D printers.  They’ve come a long way, and some of them are able to work in metals.  That, apparently, led to an experiment:

So, can you print a gun? Yep, you can and that’s exactly what somebody with the alias “HaveBlue” did.

To be accurate, HaveBlue didn’t print an entire gun, he printed a “receiver” for an AR-15 (better known as the military’s M16) at a cost of about $30 worth of materials.

The receiver is, in effect, the framework of a gun and holds the barrel and all of the other parts in place. It’s also the part of the gun that is technically, according to US law, the actual gun and carries the serial number.

When the weapon was assembled with the printed receiver HaveBlue reported he fired 200 rounds and it operated perfectly.

Whether or not this actually happened really isn’t the point.  At some point there is no doubt it will.  There are all sorts of other things to consider when building a gun receiver (none of which Gibbs goes into), but on a meta level what Gibbs is describing is much like what happened to the news industry when self-publishing (i.e. the birth of the new media), along with the internet, became a reality.   The monopoly control of the flow of news enjoyed by the traditional media exploded into nothingness.  It has never been able to regain that control and, in fact, has seen it slip even more.

Do 3D printers present the same sort of evolution as well as a threat to government control?  Given the obvious possibility, can government exert the same sort of control among the population that it can on gun manufacturers?  And these 3D printers work in ceramic too.  Certainly ceramic pistols aren’t unheard of.    Obviously these printers are going to continue to get better, bigger and work with more materials. 

That brings us to Gibbs’ inevitable conclusion:

What’s particularly worrisome is that the capability to print metal and ceramic parts will appear in low end printers in the next few years making it feasible to print an entire gun and that will be when gun control becomes a totally different problem.

So what are government’s choices, given its desire to control the manufacture and possession of certain weapons?

Well, given the way it has been going for years, I’d say it isn’t about to give up control.  So?

Will there be legislation designed to limit freedom of printing? The old NRA bumper sticker “If guns are outlawed, only outlaws will have guns” will have to be changed to “If guns are outlawed, outlaws will have 3D printers.”

Something to think about.  I think we know the answer, but certainly an intriguing thought piece.  Registered printers?   Black market printers?  “Illegal printers” smuggled in to make cheap guns?

The possibilities boggle the mind.  But I pretty much agree with Gibbs – given the evolution of this technology, gun control, for all practical purposes, would appear to be dying, or at least well on its way.

~McQ

Twitter: @McQandO


Can you solve the debt crisis by creating more debt?

Most intuitively know you can’t borrow your way out of debt, so it seems like a silly question on its face.  But the theory is that government spending creates a stimulative effect that gets the economy going and pays back the deficit spending in increased tax revenues.  $14 trillion of debt argues strongly that the second part of that equation has never worked.

The current administration and any number of economists still believe that’s the answer to the debt crisis now and argue that deficit spending will indeed get us out of the economic doldrums we’re in.  William Gross at PIMCO tells you why that’s not going to work:

Structural growth problems in developed economies cannot be solved by a magic penny or a magic trillion dollar bill, for that matter. If (1) globalization is precluding the hiring of domestic labor due to cheaper alternatives in developing countries, then rock-bottom yields can do little to change the minds of corporate decision makers. If (2) technological innovation is destroying retail book and record stores, as well as theaters and retail shopping centers nationwide due to online retailers, then what do low cap rates matter to Macy’s or Walmart in terms of future store expansion? If (3) U.S. and Euroland boomers are beginning to retire or at least plan more seriously for retirement, why will lower interest rates cause them to spend more? As a matter of fact, savers will have to save more just to replicate their expected retirement income from bank CDs or Treasuries that used to yield 5% and now offer something close to nothing.

My original question – “Can you solve a debt crisis by creating more debt?” – must continue to be answered in the negative, because that debt – low yielding as it is – is not creating growth. Instead, we are seeing: minimal job creation, historically low investment, consumption turning into savings and GDP growth at less than New Normal levels.

Not good news, but certainly the reality of the situation.  Deficit spending has been the panacea that has been attempted by government whenever there has been an economic downturn.  Some will argue it has been effective in the past and some will argue otherwise.   But if you read through the 3 points Gross makes, even if you are a believer in deficit spending in times of economic downturn, you have to realize that there are other reasons – important reasons – that argue such intervention will be both expensive and basically useless.

We are in the middle of a global economy resetting itself.  Technology is one of the major drivers, and its expansion is tearing apart traditional institutions in favor of new ones that unfortunately don’t depend as heavily on workers.

Much of the public assumes we’ll return to the Old Normal.  But one has to wonder, as Gross points out, whether we’re not going to stay at the New Normal for quite some time as economies adjust.   And while it will be a short term negative, the Boomer retirements will actually end up being a good thing in the upcoming decades as there will be fewer workers competing for fewer jobs.

But what should be clear to all is that, without serious adjustments and changes, the welfare state as we know it today is over.  Economies can’t support it anymore.   That’s what you see going on in Europe today – its death throes.   And it isn’t a pretty picture.

So?  So increased government spending isn’t the answer.  And the answer to Gross’s question, as he says, is “no”. 

The next question is how do we get that across to the administration (and party) which seems to remain convinced that spending like a drunken sailor on shore leave in Hong Kong is the key to turning the economy around and to electoral salvation?

~McQ

Twitter: @McQandO


Moving technology from making things possible to making them easy

I’m coincidentally the same age as Steve Jobs and Bill Gates. I’ve seen and worked in the industry they created – what we first called "micro-computers" and later "personal computers" or PCs.

Even that term is falling out of favor. "Laptop" is probably heard more often now, with "tablet" and "slate" moving in.

I’m wondering, though, if "slate" will actually stick. Just as "kleenex" is the word most of us use for a small tissue to wipe your nose (no matter how Kimberly-Clark feels about it), I wonder if we’ll someday be talking about "ipads" from Amazon and Samsung. That would merely be continuing the trend where "ipod" is becoming the generic term for an MP3 player.

This is one example of the power of Steve Jobs to set the agenda in the last ten years. There are plenty more.

The changing signs on Music Row in Nashville are another testament to his ability to turn an existing order upside down. The iPod changed the music industry beyond recognition, and here in Nashville we had a front-row seat to watch the changes.

The area of most interest to me, though, is in software. I’ve focused more on user interface design over the years than any other area. I’ve watched Apple drive a trend that is powerful and desirable in our industry: moving from just making something possible with technology to making it easy.

For decades, it was enough for a software program to make something possible that was not possible before. DOS-based software was never particularly easy to use. The underlying technology to make it easy just wasn’t there.

Jobs and Wozniak pioneered that era, but Bill Gates ruled it. He reduced IBM to irrelevance, along with Novell, Lotus, and WordPerfect, all major league software companies at one time.

To some extent, Bill understood the importance of making things easy; Excel was about ten times easier to use than Lotus 1-2-3. But he never really innovated much in making things easy. His forte was seeing good ideas produced by others, then copying those ideas and making products based on them affordable and practical. Windows was never the equal of the Mac until (arguably) Windows 7, but it ran on cheaper machines and Bill made it friendly to businesses, which were the biggest buyers of PCs until somewhere in the 1990s.

Steve Jobs and his crew were Bill’s best idea source. I sometimes thought that they served as the unofficial research arm of Microsoft for user interface design throughout the eighties and nineties. Apple sputtered through that period, producing hits (iMac) and misses (Newton). At one point, Bill Gates even stepped in with a capital infusion that saved Apple from likely irrelevance or even bankruptcy. I suppose he didn’t want to see his free research lab disappear.

During that era, Steve Jobs kept pushing the boundaries. The very first Mac was a pain to use, because it was too slow to do what he imagined, and had a screen that we would laugh at today. But it made some new things possible, such as real graphic editing. Though a PC was my main machine in the mid-1980s, I would put up with the Mac’s flaws to do my graphics work. The salesmen at our company often said that our diagrams of the system we were proposing clinched the sale.

I believe Jobs had a vision during that period of what personal technology could be like, but the nuts and bolts were not quite there. Nevertheless, he always insisted on "user first" thinking.

Jobs understood something that is still misunderstood by almost all companies in technology. You can’t innovate by asking your users to tell you what to do.

The typical technology company convenes focus groups and does market research, and then says "Ah, what buyers want is X, Y, and Z. OK, you lab guys, go create it for the lowest possible cost."

Steve Jobs understood that consumers and users of technology don’t know how to design technology products any more than moviegoers know how to write screenplays. To create innovative and delightful user experiences, it is necessary to get inside the mind of the user and understand them so well that you know what they will like even before they do.

This is hard. It’s so hard that only two companies in my lifetime have been any good at it at all: Apple and Sony. And these companies have dramatically different batting averages, with Apple up in Ted Williams territory while Sony languishes around the Mendoza line.

Finally, about ten years ago, the underlying technology started matching up with Jobs’ vision. The result was the iPod.

There were plenty of MP3 players that pre-dated the iPod. I had one, from Creative. It had about enough storage for three albums, and required me to organize files and folders on it to store my music.

Steve Jobs saw the small, low-power hard disks coming online and realized they could be the foundation of a new, reimagined device. First, it would store hundreds of albums or thousands of songs – a typical person’s entire music collection. It would use software designed earlier to manage music – iTunes.

The big departure was the approach to user experience. The iPod was so simple to use that someone could pick it up and figure it out in about two minutes.

This was done by purposely leaving out features that were arguably useful. While the other MP3 makers were designing and marketing on checklists of features, the iPod stripped things down to the basics. And kicked the others to the curb.

Jobs realized before others that it was time to stop working on "possible" and start emphasizing "easy". When technology is new and rapidly evolving, something new is possible with each passing year, and giving buyers new features is enough to sell products. But when technology reaches a certain point, and the feature lists get long enough, all products have the essential features. The differentiation then becomes based on something very simple: what people like.

This is particularly true as technology starts appealing to a broad market. If you try to satisfy everyone in a broad market by including every feature anyone in that spectrum wants, you’ll end up with an unusable mess.

At some point in the evolution of technology for a given space, people just assume that the features they really need will be in all the devices they see. They start choosing based on emotion. That is, they seek what feels elegant and fluid to them, something they really want to be a part of their daily life.

This is where genuine design, based on universal design principles that go back decades or centuries, starts adding value. For example, Hick’s Law says that the time required to choose an option goes up as the number of options increases. Simply put, users get frustrated trying to find the feature they want from a long list of features in a menu, or trying to find the button they want on a remote control that has fifty-eleven buttons.
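For the curious, one common formulation of Hick’s Law makes that relationship explicit: decision time grows with the logarithm of the number of choices. Here is a small illustrative sketch; the constants are made up for illustration, not measured values.

```javascript
// Hick's Law, in one common formulation: T = a + b * log2(n + 1)
// where a is a base reaction time, b is the cost per "bit" of decision,
// and n is the number of equally likely options. The constants passed in
// below are purely illustrative, not empirical measurements.
function choiceTimeSeconds(numOptions, a, b) {
  return a + b * (Math.log(numOptions + 1) / Math.LN2);
}

console.log(choiceTimeSeconds(4, 0.2, 0.15).toFixed(2));  // a short menu
console.log(choiceTimeSeconds(50, 0.2, 0.15).toFixed(2)); // a fifty-button remote
```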

There is an entire body of knowledge in this space, and the first major computer/software company to emphasize designers who knew and understood this body was Apple. The culture at Apple values people who know how to get inside the mind of a user and then create a new way of interacting with technology that the user will love.

Jobs created and drove that culture. He went from turning the music business upside down with the iPod to turning the phone industry upside down with the iPhone, and now Apple is remaking their original territory, the personal computer, with the iPad.

I’ve discussed before in the comments here that I don’t like the iPad. It’s slow and limited for my purposes, many of the web sites I use are not compatible with it, and I don’t like iTunes.

But it’s not designed for me. That’s a key lesson that designers grow to appreciate. Each design has a target audience, which must not be too broad. The true test of a good designer is whether they can design something for someone who is not like them. 

I put my iPad in the hands of my 76-year-old mother, and she immediately took to it. I showed her a few basic touch gestures, and she could immediately do the only things she uses a computer for – browsing and email. For her, it was easy, and as a veteran of the made-to-do-anything-and-everything Windows (I got her a computer for email and such six years ago), she really appreciated that.

The culture created by Jobs can do things that Microsoft, for all its money and brains, is not very good at. Microsoft people are smart. I work with many of them, so I’ve seen it firsthand. But almost all of them have a tendency that is all too common in the human race. They can only see the world through their own eyes, and are not very good at seeing it through the eyes of someone with a radically different background or different abilities.

When Microsoft teams start designing a new product or version, most of the times I’ve been involved, the process started with a list of proposed features. In other words, their process starts with what they want to make possible for the user.

Unlike Apple, the culture at Microsoft places little or no value on making things easy. This isn’t surprising, because Microsoft’s success over a span of decades has not been dependent on innovation in making things easy. It’s been in making things possible and affordable. They copied the "make things easy" part from someone else, usually Apple.

But even Microsoft has seen the direction for the industry laid out by Jobs and Apple, and realized that things have sped up. Copying isn’t good enough any more. Jobs perfected the process of laying waste to entire segments with an innovative new entry, and as the iPhone showed, it can happen in a single year.

Those at Microsoft are starting down the path of worrying more about user experience. They may not like it much, but they realize it’s now a matter of necessity. 

First, they created the Xbox – an entirely new product in a separate division that successfully challenged established players in a world where user experience trumps everything else. Then, shamed by the abysmal Windows Mobile products they had produced in the phone space, they created a pretty decent product there with Windows Phone.

Their steps are halting and tentative, but at least they are toddling down that path now. I hope they learn how to walk and run on that path, but given the effort it will take to turn their culture around, that will take a while.
 
I don’t know that they would have ever gone down that route if Jobs and Apple had not pushed them down it. I’ve chafed for most of my career at the apathy and ignorance in the Microsoft community around user experience. I’ve always believed that our systems and devices exist for users, not for our own aggrandizement. As such, we owe them the best experience we can give them.

I was never a major Apple customer. Apple was never a cost-effective choice for the business and data oriented software I’ve created.

But that doesn’t mean that I don’t appreciate what Steve Jobs did for our industry. I absolutely do. I wish he could have been around for another decade or two, continuing to show the world that "possible" isn’t good enough, and push the rest of the industry into respecting our users and making things easy.
