Free Markets, Free People

The power of the market, proven again …

We were told that while oil prices were high, shale oil could be produced at enough of a profit to justify drilling, but that it would be too expensive to continue if prices dropped.

But efficiency and technical innovation have overcome that bit of conventional wisdom as Shale Energy Insider reports:

US shale companies have increased the number of rigs in the field for the first time in nearly seven months when oil prices were trading around $70 per barrel, compared to under $60 per barrel in the current market.

The number of rigs rose in almost every main shale basin across the US according to data gathered by Baker Hughes.

Industry experts have suggested that as a result of last year’s price crash, shale exploration firms have cut their break even costs by anything up to $20 per barrel.

“As much as anything else, the rise this week is a testament to break-evens coming down just over the course of this year,” said James Williams, president of energy consultancy WTRG Economics.

“Shale is a lot more resilient than we thought it was, and it means we’re going to be able to keep producing shale oil at a lower cost than we thought we could.”

Adding rigs is the primary way to gauge whether or not it is economically profitable for energy companies to drill for and pump the oil. According to one analyst, the companies have been able to streamline their operations to the point that their breakeven costs have dropped by about $20 a barrel. That’s huge:

A Bloomberg analyst suggested that the cost of drilling services has fallen between 20% and 50%, with break-even prices in parts of the Permian and Eagle Ford below $40 per barrel.

And what does it mean overall?

Director of upstream research for Wood Mackenzie, Scott Mitchell forecast that producers could add up to 100 oil rigs by the end of the year.

Drilling rigs and fracking require a quite specific technical workforce, and there were a lot of layoffs as a result of the drop in activity.

“We may find the supply of people becomes short very quickly if activity ramps up, leading to price increases again,” he predicted.

That’s right … jobs and less expensive gas.  Of course, most if not all of the shale oil drilling has taken place on non-federal land, and the market has been able to function without a great deal of governmental interference.  It is providing both employment and a very important commodity at lower prices.  Additionally, as the industry lowers its breakeven point, it buffers us against supply drops when the price of oil falls and other sources stop producing.  With the lower breakeven point, shale producers will keep pumping past the point where they would previously have quit, because doing so is still profitable for them.  That helps ensure lower prices at the pump will be more common and more stable.

The market … a wonder we need to allow to work without interference much more often than we do.

~McQ

Scheming away your money … and privacy

That’s primarily what politicians seem to want to do despite protestations to the contrary by some.  They’re always looking for a new “revenue stream”.  And since tax payers are the only folks who actually pay taxes, they’re constantly dreaming up new ways to “access” your wallet.

Such as:

Rep. Earl Blumenauer (D-Ore.) on Tuesday reintroduced legislation that would require the government to study the most practical ways of taxing drivers based on how far they drive, in order to help fund federal highway programs.

Blumenauer’s bill, H.R. 3638, would set up a Road Usage Fee Pilot Program, which he said would study mileage-based fee systems. He cast his bill as a long-term solution for funding highway programs, and proposed it along with a shorter-term plan to nearly double the gas tax, from 18.4 cents to 33.4 cents per gallon.

“As we extend the gas tax, we must also think about how to replace it with something more sustainable,” Blumenauer said Tuesday. “The best candidate would be the vehicle mile traveled fee being explored by pilot projects in Oregon and implemented there on a voluntary basis next year.”

Because, you know, taxpayers paid for the highways, taxpayers have funded the maintenance of the highways, and now they should pay for the privilege of driving on them as well. So many single moms, who can barely afford gas for the car, will likely be paying by the mile to go to work (as with most of these stupid schemes, the ones who can afford it least will get hit the hardest by it).

Brilliant!  Aw, what the heck, they can take public transportation, huh?

And what about privacy? What business is it of government how far you drive? One assumes they’ll be able to verify your mileage somehow, no? Do you really think it will be up to you to voluntarily keep records and report to the government? Of course not. So somehow the system has to be able to track you and tally your mileage.

Yup:

In 2011, the Congressional Budget Office (CBO) released a study that explored how a VMT system might work. That report suggested devices could be fitted onto cars that log how far they have traveled, and these devices could be electronically read at gas stations to generate tax bills for drivers.

That’s what I want … a government tracking device on my car.

Where’s Orwell when you need him?

~McQ

Welcome to the future

I’ve spent the last 20 years developing software, managing software development, and doing software systems analysis full-time. I’ve been programming since I was 16. The first time I went to college, my major was computer science.  Since then, I’ve seen one major revolution in the computer industry, which essentially blew up everything that came before it.

That first revolution was the advent of the PC. I started college in 1982, when there were very, very few PCs. Our computer lab had a couple of Apple IIs, Osbornes and TRS-80s. We played with them. They were cute. But we did our business on a Digital DEC 1133 mainframe with 50 dumb terminals and an old IBM 64 that only took punch card input.  Programming was done by an elite group of specialists who logged into the mainframe and wrote in plain text, or who sat down at a massive IBM card punch machine and typed in their programs one line at a time, punching a card for each line of code.

The punch cards were the worst. At least on the DEC, you could compile your program while writing it. With the punch cards, you’d stack them up all in order—being very careful to write the card number on the back in magic marker, in case you dropped your card stack and had to re-sort the cards—and turn them in at the computing center.  Then you’d go back the next day to learn that you’d forgotten to type a comma in card 200, and you had to re-type the card. You’d turn your stack in for another overnight wait, only to learn you’d missed a comma in card 201.

It was worse than being stabbed.

The PC killed all that. It killed the idea of programmers being this small cadre of elite specialists. And once PCs could talk to each other via a network, they killed the idea that software development had to take a long time, with shadowy specialists toiling away on one large machine in the bowels of the building.  By 1995, self-taught amateurs were programming database applications to handle their small business inventory in FoxPro.  Corporations employed hordes of programmers to build complicated database applications.

In fact, for all the snide attitudes they get, the people at Microsoft pretty much single-handedly created the worldwide software development community we have today. The Visual Basic for Applications programming interface for Microsoft Office, Visual Basic, and, eventually, the .NET Framework allowed millions of people to learn programming and create applications. Practically every corporation with more than 100 people has its own programming team, building custom desktop software for the organization.  Millions of people are employed as freelance software developers. Yes, there are other programming technologies out there, but none of them—none—have had the impact on democratizing software development that Microsoft’s has had.

There are still some mainframe computers, of course. IBM makes 90% of them. It’s a small, mostly irrelevant market, except for governments, universities, and very large corporations. Every computer used to be a mainframe. Now, almost none of them are.

We’ve gotten used to this software development landscape. Even the advent of the Internet—while revolutionary in many ways—has not been particularly revolutionary in terms of software development. Until now, the changes to software development caused by the internet have been evolutionary—building on existing technologies. In fact, in some ways, the internet has been devolutionary. Thirty years ago, workers logged in to dumb terminals with no processing power, using a mainframe to do all the work.  Similarly, using the internet until now has mainly meant opening up a dumb web browser like Firefox or Internet Explorer to talk to a web server where all the magic happens. The browser has really been just a display device. The web server takes a request from a browser, retrieves information, gets database data, formats it, and sends it back in simple text so the browser can display it.

Until now.

In the past year or so, a number of technologies have hit the market that make the browser the center of processing instead of the web server.  Now, instead of running programming code on the server to do all the hard processing work, the browser can run JavaScript code and do all the processing locally. This has long been theoretically possible, but it was…difficult. Now an entirely new kind of programming is possible, with JavaScript tools like Node.js and Google’s AngularJS, and with database technologies like MongoDB that depart from the traditional relational database. (Though maybe not MongoDB itself, for a variety of reasons.)
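To make that concrete, here is a rough sketch of what I mean by the browser doing the processing. Everything in it (the /api/orders endpoint, the field names, the summary element) is invented for illustration; the point is that the server just hands over raw JSON and the browser does the filtering, totaling, and display itself:

    // A sketch of client-side processing: the server returns raw JSON and the
    // browser, not the server, filters, totals, and formats it for display.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/orders'); // hypothetical endpoint returning raw JSON
    xhr.onload = function () {
      var orders = JSON.parse(xhr.responseText);

      // All of the "hard" processing work now happens here, in the browser.
      var open = orders.filter(function (o) { return o.status === 'open'; });
      var total = open.reduce(function (sum, o) { return sum + o.amount; }, 0);

      document.getElementById('summary').textContent =
        open.length + ' open orders, $' + total.toFixed(2) + ' outstanding';
    };
    xhr.send();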

It’s still in its infancy. All of the smooth developer productivity tools we’ve become used to aren’t there yet. It’s still a bit of a pain to program, and it takes longer than we’ve been used to. Indeed, it’s very much like programming was back in the 1990s, when the developer had to code everything.

For instance, in Microsoft’s .NET development environment today, developers have become used to just dragging a text box or check box onto a form and having the software development environment write hundreds of lines of code for them. In a Windows application, a data-driven form that connects to a SQL Server database can be created almost entirely through a drag and drop interface, where the development environment writes thousands of lines of code behind the scenes. The developer has to actually write about 15 lines of code to finish up the form so it works.

We don’t have that yet with the current generation of JavaScript tools. We have to type a lot more code to do fairly simple things. So, for someone like me, who’s lived in the .NET world since 2001, it’s a bit of a pain. But it’s a thousand times better than it was just two years ago. Five years from now, the tools for rapid application development for browser-based applications will be everywhere.
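For a taste of the difference, here is roughly what the hand-written wiring behind a simple data-driven form looks like with this generation of JavaScript tools, in the AngularJS style. The module name, controller, and /api/customers endpoint are all made up for the example, not taken from any real application:

    // A sketch of the plumbing you now type yourself: load a list of records,
    // let the user pick and edit one, and save the changes back to the server.
    angular.module('crmApp', [])
      .controller('CustomerCtrl', function ($scope, $http) {
        $scope.customers = [];
        $scope.selected = null;

        // Load the list (the kind of thing the .NET designer wired up for you).
        $http.get('/api/customers').then(function (response) {
          $scope.customers = response.data;
        });

        // Save the edited record back to the server.
        $scope.save = function () {
          $http.put('/api/customers/' + $scope.selected.id, $scope.selected);
        };
      });

None of that is hard, but multiply it across every form in a corporate application and you can see why the rapid development tools matter.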

This is the second revolution in computing that will change everything we’ve become used to. Right now, a software application generally has to be installed on your computer. A software application designed for Windows won’t install on a Mac or Linux machine. Well, that’s all going away. Browser-based software is platform independent, which is to say, it doesn’t matter what your computer’s operating system is. Do you need an office suite? Google already has a browser-based one online, and there’s probably someone, somewhere in the world, working on a desktop-based version right now.  Need to access your database to crunch some data? No need for Access or FileMaker Pro. We can do that in the browser, too. In fact, we’re pretty much at the point where there is no commonly-done task that can’t be done in the browser.

We can now make almost—almost—any software application you can think of, and anyone, anywhere in the world, can run it, no matter what computer they’ve got. This is the second revolution in computing that I’ve seen in my lifetime, and it’s going to change everything about how applications are developed.  Software that is dependent on the operating system is essentially as dead as the mainframe. Microsoft’s advantage in building a software development community? That’s dead now. Desktop applications? Dead. In 5 years there’ll be nothing that you can’t do in a browser. On a phone. I think that means the days of Windows having a natural monopoly on corporate computing are over, too.

There’ll still be desktop systems, of course. And, as long as your system can act as a web server—and pretty much any desktop or laptop can—you’ll still have software that you run on the desktop.  After all, you may want to work locally instead of on a big server run by someone like Google or Microsoft, who’ll be doing God knows what with any data you store on their servers. But your choice of computer or operating system will not be driven by whether the software you want to use is available for it. In fact, in 10 years, when you think of a desktop, you may just be thinking of a keyboard and display that you plug your phone into to work more conveniently. Assuming you don’t just talk to it to get things done.
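And if you’re wondering what “acting as a web server” involves, it isn’t much. Here is a minimal sketch in Node.js (the file name and port are arbitrary) that serves a local, browser-based app from your own machine, so your data never has to leave it:

    // A few lines of Node.js turn any desktop or laptop into a web server
    // hosting a local, browser-based application.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      // Serve a single local page; a real app would also route API requests
      // and serve its scripts and stylesheets.
      fs.readFile('index.html', function (err, page) {
        if (err) {
          res.writeHead(500);
          res.end('Could not load index.html');
          return;
        }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(page);
      });
    }).listen(8080, function () {
      console.log('Local app running at http://localhost:8080/');
    });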

If you’re working in software development and aren’t embracing the coming wave of platform-independent, browser-based programming, you’re not doing yourself any favors. It may take another 10 years or so, but the technology you’re working on right now is dying. For someone like me, who’s invested decades in Windows applications development, it’s a bit sad to see all that accumulated knowledge and experience passing away. It’s not easy to move into an entirely new development technology and go through all of its growing pains. But I don’t see any choice.

Thirty years ago, everything I learned in college about how to work with computers got tossed out the window.  All of those hours struggling to write RPG programs on IBM punch cards, learning about mainframes…all of it was utterly useless within a few years when the PC came out. Now it’s happening again.

I remember how, back in the 90s, all the old mainframe Unix guys were grumpy about having to kiss their Unix machines good-bye. In 10 years, I’m not going to be one of those old Unix guys.

The obvious question

I find something really interesting. In my previous post on creating the two Quickscript fonts, no one asked what I’d have thought was an obvious question, which is, "Wait. You made fonts? How the hell do you make a font?"

I find it fascinating that, especially today, when we have daily access to electronic typography, there’s so little interest in what fonts are, or how to make them. Especially when literally anyone with a computer can make their own fonts. There’s even a free, online bitmap font creation program called FontStruct. We spend our lives surrounded by typography and almost no one cares about it at all.

Which brings me to a trilogy of fantastic documentaries about design by a film-maker named Gary Hustwit: Helvetica, Objectified, and Urbanized. All three of them are enormously interesting, and one of them is about a font, Helvetica, which every single person in the Western world sees every single day of their lives. You should watch all three of them.

Also you should go read my latest auto review at Medium: Doctor Hoon: 2013 Mini John Cooper Works GP. And you should "recommend" it after reading, to make my Medium stats shoot up really high.

~Dale Franks
Google+ Profile
Twitter Feed

For our readers in software development–a new technology called the agilo-modulizer

Today seems like a perfect day to tell you about some new technology I’ve been involved with for software development. Here’s a ninety-second video with a high-level description, made by the video training company that I did a course with last year:

 

I know some of our regular commenters, particularly looker, will be interested in this technology.

So lift those heavy eyelids

What if people could easily function with much less sleep?

Jon M at Sociological Speculation asked that question after observing that “new drugs such as Modafinil appear to vastly reduce the need for sleep without significant side effects (at least so far).” At extremes, as Jon M noted in a follow-up post, modafinil allows a reduction to 2.5 hours a night, but “the more common experiences seem to be people who reduce their sleep by a few hours habitually and people who use the drugs to stay up for extended periods once in a while without suffering the drastic cognitive declines insomnia normally entails.”  In fact, alertness is not the only reported cognitive benefit of the drug.

The US brand of modafinil, Provigil, did over $1.1 billion in US sales last year, but for the moment let’s dispense with the question of whether modafinil is everything it’s cracked up to be.  We’re speculating about the consequences of cheaply reducing or even eliminating the need for sleep for the masses.

If I can add to what’s already been said by several fine bloggers – Garett Jones at EconLog on the likely effect on wages, then Matt Yglesias at Slate sounding somewhat dour about the prospect, and Megan McArdle at the Daily Beast having fun with the speculation – the bottom line is that widely reducing the need for sleep would be a revolutionary good, as artificial light was.

For a sense of scale, there are about 252 million Americans age 15+, and on average they’re each awake about 5,585 hours a year.  Giving them each two extra hours a night for a year would be equivalent to adding the activity of 33 million people, without having to shelter, clothe, and feed 33 million more people.
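For anyone who wants to check that arithmetic, it’s a back-of-the-envelope calculation using the rounded figures above:

    // Rough check: two extra waking hours a night across 252 million people,
    // expressed as "people-equivalents" of waking activity.
    var people = 252e6;             // Americans age 15 and older
    var wakingHoursPerYear = 5585;  // average waking hours per person per year
    var extraHoursPerYear = 2 * 365;

    var equivalentPeople = people * extraHoursPerYear / wakingHoursPerYear;
    console.log(Math.round(equivalentPeople / 1e6) + ' million people-equivalents');
    // prints: 33 million people-equivalents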

Whatever objections critics have, sleeping less will be popular to the extent that people think the costs are low.  For all the billions of dollars spent trying to add years to their older lives, obviously people would spend more to add life to their younger years.  Who ever said, “If only I’d had less time!”?

Consider that the average employed parent usually sleeps 7.6 hours each workday.  He spends 8.8 of his remaining hours on work and related activities, 1.2 hours caring for others, and 2.5 hours on leisure and sports.

If he spends more time working productively (i.e. serving others), that’s good for both him and society.  The time and effort invested in birthing, educating, and sorting people for jobs is tremendous, so getting more out of people who are already born, educated, and sorted is just multiplying the return on sunk costs.

That’s a godsend for any society undergoing a demographic transition after the typical fall in birthrates, because aside from hoping for faster productivity growth, the specific ways to address having fewer workers per retiree – higher taxes, lower benefits, more immigration, or somehow spurring more people to invest in babies for decades – are unpleasant or difficult or both.

And if he uses extra hours to pursue happiness in other ways, that’s generally fine too.  A lot of people may simply get more out of their cable subscription. Others will finally have time for building and maintaining their families, reading, exercising, or learning a skill.

Yes, once a substantial number of people are enhancing their performance, others will likely have to follow suit if they want to compete.  But then, that’s also true of artificial light and many other technologies.  If people naturally slept only four hours a night and felt rested and alert, who would support a law forcing everyone to sleep twice as long, cutting a fifth of their waking hours so that everyone would slow down to the speed that some people prefer to live their lives?

I don’t think most people have such a strong presumption in favor of sleep.  We like feeling rested, or dreaming, but not sleeping as such; a substantial minority of Americans sleep less than advised despite the known costs, and so reveal their preference for waking life over oblivion.

US terrorism agency free to use government databases of US citizens

It is getting to the point where it is obvious that the terrorists have won.  Why?  Because they have provided government the excuse to intrude more and more into our lives, and government is more than willing to use it.  If this doesn’t bother you, you’re not paying attention:

Top U.S. intelligence officials gathered in the White House Situation Room in March to debate a controversial proposal. Counterterrorism officials wanted to create a government dragnet, sweeping up millions of records about U.S. citizens—even people suspected of no crime.

Not everyone was on board. “This is a sea change in the way that the government interacts with the general public,” Mary Ellen Callahan, chief privacy officer of the Department of Homeland Security, argued in the meeting, according to people familiar with the discussions.

A week later, the attorney general signed the changes into effect.

Of course the Attorney General signed the changes into effect.  He’s as big a criminal as the rest of them.

What does this do?  Well here, take a look:

The rules now allow the little-known National Counterterrorism Center to examine the government files of U.S. citizens for possible criminal behavior, even if there is no reason to suspect them. That is a departure from past practice, which barred the agency from storing information about ordinary Americans unless a person was a terror suspect or related to an investigation.

Now, NCTC can copy entire government databases—flight records, casino-employee lists, the names of Americans hosting foreign-exchange students and many others. The agency has new authority to keep data about innocent U.S. citizens for up to five years, and to analyze it for suspicious patterns of behavior. Previously, both were prohibited.

Your activities are now presumed to be “suspicious”, one assumes, just by existing and doing the things you’ve always done.  Host a foreign exchange student?  Go under surveillance.  Fly anywhere the government arbitrarily decides is tied to terrorism (or not)?  It’s surveillance for you (can the “no-fly” list be far behind?).  Work in a casino?  Go onto a surveillance list.

And all of this by unaccountable bureaucrats who have unilaterally decided that your 4th Amendment rights mean zip.  In fact, they claim that the 4th doesn’t apply here.

Congress specifically sought to prevent government agents from rifling through government files indiscriminately when it passed the Federal Privacy Act in 1974. The act prohibits government agencies from sharing data with each other for purposes that aren’t “compatible” with the reason the data were originally collected.

But:

But the Federal Privacy Act allows agencies to exempt themselves from many requirements by placing notices in the Federal Register, the government’s daily publication of proposed rules. In practice, these privacy-act notices are rarely contested by government watchdogs or members of the public. “All you have to do is publish a notice in the Federal Register and you can do whatever you want,” says Robert Gellman, a privacy consultant who advises agencies on how to comply with the Privacy Act.

As a result, the National Counterterrorism Center program’s opponents within the administration—led by Ms. Callahan of Homeland Security—couldn’t argue that the program would violate the law. Instead, they were left to question whether the rules were good policy.

Under the new rules issued in March, the National Counterterrorism Center, known as NCTC, can obtain almost any database the government collects that it says is “reasonably believed” to contain “terrorism information.” The list could potentially include almost any government database, from financial forms submitted by people seeking federally backed mortgages to the health records of people who sought treatment at Veterans Administration hospitals.

So they just exempted themselves without any outcry, without any accountability, without any review.  They just published a notice that they were “exempt” from following the law of the land or worrying about 4th Amendment rights.

Here’s the absolutely hilarious “promise” made by these criminals:

Counterterrorism officials say they will be circumspect with the data. “The guidelines provide rigorous oversight to protect the information that we have, for authorized and narrow purposes,” said Alexander Joel, Civil Liberties Protection Officer for the Office of the Director of National Intelligence, the parent agency for the National Counterterrorism Center.

What a load of crap.  If you believe that you’ll believe anything government says.  Human nature says they’ll push this to whatever limit they can manage until someone calls their hand.

And, as if that’s all not bad enough:

The changes also allow databases of U.S. civilian information to be given to foreign governments for analysis of their own. In effect, U.S. and foreign governments would be using the information to look for clues that people might commit future crimes.

So now our government is free to provide foreign governments with information about you, whether you like it or not.

This isn’t a new idea – here’s a little flashback from a time when people actually raised hell about stuff like this:

“If terrorist organizations are going to plan and execute attacks against the United States, their people must engage in transactions and they will leave signatures,” the program’s promoter, Admiral John Poindexter, said at the time. “We must be able to pick this signal out of the noise.”

Adm. Poindexter’s plans drew fire from across the political spectrum over the privacy implications of sorting through every single document available about U.S. citizens. Conservative columnist William Safire called the plan a “supersnoop’s dream.” Liberal columnist Molly Ivins suggested it could be akin to fascism. Congress eventually defunded the program.

Do you remember this? Do you remember how much hell was raised about this idea?  Now, though, yeah, not such a big deal:

The National Counterterrorism Center’s ideas faced no similar public resistance. For one thing, the debate happened behind closed doors. In addition, unlike the Pentagon, the NCTC was created in 2004 specifically to use data to connect the dots in the fight against terrorism.

What a surprise.

I’m sorry, I see no reason for an unaccountable Matthew Olsen or his NCTC to know anything about me or have the ability to put a file together about me, keep that information for five years and, on his decision and his decision only, provide the information on me to foreign governments at his whim.

I remember the time the left went bonkers about the “Privacy Act”.  Here’s something real to go bonkers on and what sound do we hear from the left (and the right, for that matter)?

Freakin’ crickets.

~McQ

The end of gun control?

I ran across an article in Forbes by Mark Gibbs, a proponent of stricter gun control, in which he argues that, given a certain technology, gun control may in reality be dead.

That technology?  3D printers.  They’ve come a long way, and some of them are able to work in metals.  That, apparently, led to an experiment:

So, can you print a gun? Yep, you can and that’s exactly what somebody with the alias “HaveBlue” did.

To be accurate, HaveBlue didn’t print an entire gun, he printed a “receiver” for an AR-15 (better known as the military’s M16) at a cost of about $30 worth of materials.

The receiver is, in effect, the framework of a gun and holds the barrel and all of the other parts in place. It’s also the part of the gun that is technically, according to US law, the actual gun and carries the serial number.

When the weapon was assembled with the printed receiver HaveBlue reported he fired 200 rounds and it operated perfectly.

Whether or not this actually happened really isn’t the point.  At some point there is no doubt it will.  There are all sorts of other things to consider when building a gun receiver (none of which Gibbs goes into), but on a meta level what Gibbs is describing is much like what happened to the news industry when self-publishing (i.e. the birth of the new media), along with the internet, became a reality.   The monopoly control of the flow of news enjoyed by the traditional media exploded into nothingness.  It has never been able to regain that control and, in fact, has seen it slip even more.

Do 3D printers present the same sort of evolution, as well as a threat to government control?  Given the obvious possibility, can government exert the same sort of control over the population that it can over gun manufacturers?  And these 3D printers work in ceramic too.  Certainly ceramic pistols aren’t unheard of.  Obviously these printers are going to continue to get better, bigger, and able to work with more materials.

That brings us to Gibbs’s inevitable conclusion:

What’s particularly worrisome is that the capability to print metal and ceramic parts will appear in low end printers in the next few years making it feasible to print an entire gun and that will be when gun control becomes a totally different problem.

So what are government’s choices, given its desire to control the manufacture and possession of certain weapons?

Well, given the way it has been going for years, I’d say it isn’t about to give up control.  So?

Will there be legislation designed to limit freedom of printing? The old NRA bumper sticker “If guns are outlawed, only outlaws will have guns” will have to be changed to “If guns are outlawed, outlaws will have 3D printers.”

Something to think about.  I think we know the answer, but it’s certainly an intriguing thought piece.  Registered printers?  Black market printers?  “Illegal printers” smuggled in to make cheap guns?

The possibilities boggle the mind.  But I pretty much agree with Gibbs – given the evolution of this technology, gun control, for all practical purposes, would appear to be on its way to dying.

~McQ

Twitter: @McQandO

Can you solve the debt crisis by creating more debt?

Most people intuitively know you can’t borrow your way out of debt, so it seems like a silly question on its face.  But the theory is that government spending creates a stimulative effect that gets the economy going and pays back the deficit spending in increased tax revenues.  $14 trillion of debt argues strongly that the second part of that equation has never worked.

The current administration and any number of economists still believe that’s the answer to the debt crisis now and argue that deficit spending will indeed get us out of the economic doldrums we’re in.  William Gross at PIMCO tells you why that’s not going to work:

Structural growth problems in developed economies cannot be solved by a magic penny or a magic trillion dollar bill, for that matter. If (1) globalization is precluding the hiring of domestic labor due to cheaper alternatives in developing countries, then rock-bottom yields can do little to change the minds of corporate decision makers. If (2) technological innovation is destroying retail book and record stores, as well as theaters and retail shopping centers nationwide due to online retailers, then what do low cap rates matter to Macy’s or Walmart in terms of future store expansion? If (3) U.S. and Euroland boomers are beginning to retire or at least plan more seriously for retirement, why will lower interest rates cause them to spend more? As a matter of fact, savers will have to save more just to replicate their expected retirement income from bank CDs or Treasuries that used to yield 5% and now offer something close to nothing.

My original question – “Can you solve a debt crisis by creating more debt?” – must continue to be answered in the negative, because that debt – low yielding as it is – is not creating growth. Instead, we are seeing: minimal job creation, historically low investment, consumption turning into savings and GDP growth at less than New Normal levels.

Not good news, but certainly the reality of the situation.  Deficit spending has been the panacea attempted by government whenever there has been an economic downturn.  Some will argue it has been effective in the past and some will argue otherwise.   But if you read through the three points Gross makes, even if you are a believer in deficit spending in times of economic downturn, you have to realize that there are other reasons – important reasons – that argue such intervention will be both expensive and basically useless.

We are in the middle of a global economy resetting itself.  Technology is one of the major drivers, and its expansion is tearing apart traditional institutions in favor of new ones that unfortunately don’t depend as heavily on workers.

Much of the public assumes we’ll return to the Old Normal.  But one has to wonder, as Gross points out, whether we’re not going to stay at the New Normal for quite some time as economies adjust.   And while it will be a short term negative, the Boomer retirements will actually end up being a good thing in the upcoming decades as there will be fewer workers competing for fewer jobs.

But what should be clear to all is that, without serious adjustments and changes, the welfare state as we know it today is over.  Economies can’t support it anymore.   That’s what you see going on in Europe today – its death throes.   And it isn’t a pretty picture.

So?  So increased government spending isn’t the answer.  And the answer to Gross’s question, as he says, is “no”. 

The next question is how do we get that across to the administration (and party) which seems to remain convinced that spending like a drunken sailor on shore leave in Hong Kong is the key to turning the economy around and to electoral salvation?

~McQ

Twitter: @McQandO