Giants like General Motors Know the Advantages of Outsourcing. Here’s Why You Should, Too.


General Motors Has Not Made a Car in 70 Years

And I respect GM for that. It would be foolish of GM to ignore the advantages of outsourcing and try to do the whole job itself. Let me explain.

GM does what is strategic and outsources the rest – as it should. The company designs cars. It details the specifications for its powertrains, brakes, lights, electrical systems, tires, and so on. But – and this is a big but – it doesn’t make any of them. It buys its tires from the big tire manufacturers. It buys its batteries from the big battery companies. It buys its brakes from the best-of-breed brake manufacturers. It doesn’t even try to compete with companies that are superb at making the components of a car. Instead, it buys from them.

To Get the Advantages of Outsourcing you Must Understand Core vs. Context

GM understands the difference between core and context. Core functions are those that make a genuine difference in the marketplace; context is everything else – the things that need to be done well but don’t directly determine marketplace success.

Office buildings are context.  Accounting is context.  Designing windshields is core but manufacturing them is context.

Successful companies intuitively recognize the difference between core and context. They focus their energies on what really makes a difference and they outsource the rest.

Marketing Strategy is Core; Marketing Operations are Context

Only management can decide which markets the company wants to pursue and how to pursue them. Only management can set product and service pricing. Only management can develop the company’s marketing strategy. Marketing strategy is core.

But the company is not going to be recognized in the market for its excellence in installing and operating marketing tools. No company will generate great profits because it did a good job scrubbing its email lists. The company’s stock price will not budge because it wrote a good piece of marketing collateral that an agency could have written.

A prudent company will focus its energies on marketing strategy and outsource its marketing operations. The reason is simple: companies that specialize in marketing operations need to be expert at it. They need to know how to write compelling copy, lay out a persuasive website or brochure, and deliver the corporate message cost-effectively through a wide range of channels. They need to understand the full range of digital sales and marketing tools and know exactly where each fits in a campaign. Some of these tools are best for small companies and others are best for large companies. Marketing operations experts know which is which.

Most companies, particularly small and medium-sized ones, can’t afford to develop this expertise – and they shouldn’t. Just as those companies outsource their legal work to legal firms, outsource their accounting work to accounting firms, and outsource their janitorial work to janitorial companies, they should also outsource the implementation of their marketing strategies to boutique firms that live and breathe this sort of work.

It just makes sense. Companies should do only what they do best and what makes a difference in the marketplace. The rest should be outsourced.

Is the Patient an Afterthought in Healthcare in America?


A close examination of healthcare in America leads to the inevitable conclusion that patients are one of the least important players in the healthcare system. I’m the first to admit that this claim is both counterintuitive and provocative, but hear me out.  The evidence could not be clearer. This is particularly ironic because the healthcare field is staffed with professionals who were attracted to the field specifically to provide patient care. The problem does not lie with the people in the system; the problem lies in the system itself.


Healthcare in America is a Private Sector Function

Unlike every other developed country in the world, the United States treats healthcare as a profit-making operation. This is true for profit-making institutions as well as for non-profit and not-for-profit healthcare organizations. Rather than talk about profit, these hospitals talk about a “surplus” that is required to see the hospital through lean times, fund the purchase of new equipment, or grow the institution. This is the first in a series of blogs that will provide ample evidence of this remarkable claim. Stay tuned.

Turning a profit is baked into the very DNA of the American culture. It is part of what it means to be an American. Healthcare is no exception.

But unlike every other business in the country, healthcare shows remarkably little focus on the holistic welfare of patients. Every other business in the country – and most throughout the world – has a customer service center tasked with handling customer problems as they arise. Customer satisfaction is paramount. At the end of every call I make to a customer service department, the agent asks, “Is there anything else I can help you with?” That question rarely comes up in healthcare!

Let me give a few examples of the extent to which healthcare in America is a profit-making industry rather than a service to the community.


The Arbitrary Nature of “Master Charge” Lists

Hospitals develop what they call “master charge” lists. These are the prices they propose to charge patients with various admitting diagnoses. In fact, these lists are just starting points for negotiations with insurance companies. During the negotiations, the insurers win deep discounts from these lists, and their negotiators are seen as heroes for winning them. But the negotiation is highly misleading because the “master charge” lists are created solely for the purpose of negotiating with the insurance companies. Hospitals and health clinics don’t have solid data about what it really costs to treat medical conditions because they don’t have cost accounting systems capable of developing those costs. They make these lists up out of thin air.

Medicare and Medicaid don’t pay according to these lists.  They ignore them.  The federal government pays according to its own payment schedule.  Hospitals have the choice of charging the government in line with those government payment schedules or not taking Medicare and Medicaid patients. Most hospitals are willing to work with the government payment schedules.

The “master charge” lists vary considerably from one institution to another. This is true for institutions of comparable quality and in the same geography. Further, unlike restaurant menus, these price lists are rarely shown in advance. This makes comparative shopping impossible!

But even if the “master charge” lists were available, it wouldn’t make much difference in most cases. When a patient is screaming in pain and terrified of imminent death, her relatives are unlikely to show the same due diligence in selecting a healthcare provider that they would show, for example, in buying a new car.


With Healthcare in America, those Who Can Afford the Least Are Charged the Most

The only patients who get hit with the “master charge” prices are poor people who can’t afford to buy insurance in the first place. Family members may take an ailing relative to the hospital in a moment of desperation and sign whatever pieces of paper are put before them. They may not realize they’ve signed legally binding financial commitments with no upper limit.

When the bill comes, it can be in the five figures for something as simple as a paper cut. Anything halfway serious is liable to run to six figures. And the hospitals and clinics are serious about collecting on their bills. They retain a cadre of well-paid debt-collecting lawyers who are first-rate at what they do.

First, they take the sponsor’s savings accounts. Then they go after her retirement funds. Those are easy to pick up. Then they take her home. Even bankruptcy offers limited relief: by the time a sponsor files, the collectors have typically seized most of what she owned, and the process destroys whatever financial standing she has left. In practice, there is no escaping healthcare bills.

This aggressive bill collecting effort is a clear sign that the welfare of the healthcare institution, not the patient, is what is at stake.  I am not trying to argue that people should not pay their bills. But having a different set of rules for collecting healthcare debts than for collecting all other debts tells me there is a double standard.

This odd situation doesn’t mean that hospital administrators are acting malevolently. It means they are acting in a way that our laws and customs endorse. Those administrators have a fiduciary responsibility to their boards of directors to collect the money owed to their institutions. They would be negligent if they did not try to collect every account as vigorously as possible.


Fee-for-Service is NOT Geared to Good Patient Care

For the last hundred years or so, general practitioners and specialists have charged on a fee-for-service basis. That means exactly what it says: doctors provide services and bill someone (the patient, an insurance company, or the government) for the service provided. There is no requirement that the service actually improve the patient’s medical condition. None whatsoever. Often hospitals or clinicians carry out tests not because they contribute to their patients’ well-being, but because they protect the medical community in the event of a lawsuit.

Typically, patients approach GPs with a complaint of some type. The GPs refer the patients for a series of tests that they believe will contribute to the patients’ recovery. Often, they also refer their patients to specialists. The specialists may order even more (and often more expensive) tests than the GP did.

In the end, it really doesn’t matter whether the patients improve or not – although there is a universal hope that the tests and procedures will lead to improvements. But, regardless of the outcome, the laboratories, medical practitioners, and hospitals all charge – and collect – for the work they did, not the results they deliver.

In no other industry will professionals, executives, mechanics, or salesmen get paid for their activities without respect to the achievement of their end goals. Healthcare is unique in this respect.

To put it more bluntly, the welfare of the patients is simply not a key factor in the operation and economics of the healthcare system. I believe that every individual in the system acts in good faith in contributing to the welfare of their patients within the protocols of their professions, their institutions, and the law. Each professional likely plays her own part as well as possible, but the system rarely assigns any one individual to look after the welfare of the patient in a holistic sense. This is a sign of a problem with the system of healthcare in America – not the administrators or medical staff.


We Have an “Illnesscare” System, NOT a Healthcare System

If we are completely honest, we need to acknowledge that, with the exception of public health (a marginal component of the overall system), our healthcare system is not primarily concerned with promoting health. There’s no money in it. The real money is in treating patients after they get sick, suffer from cancer, sink into a preventable chronic disease, or break a bone. Saving lives and working minor miracles is heroic. “Illnesscare” galvanizes everyone who witnesses it.

But promoting health by recommending better diets, exercise programs, or a cleaner environment doesn’t carry the same wow factor. It is routine and undramatic. Yet it sits at the very heart of healthcare and is as far removed from “illnesscare” as one can imagine.


Stay Tuned for More Revelations about How Healthcare in America Works

My claim in the first paragraph that patients are the least important part of the system of healthcare in America needs a lot more justification than I’ve given here. I urge you to read the entire series of upcoming blogs about how the healthcare system works (or doesn’t work).  You will learn that we have one of the most expensive and least effective systems in the world. You will learn that our government agencies mandated to protect our health often do exactly the opposite. (And these dynamics started long before Trump came on the scene.) You will learn that Americans are among the least healthy demographic on the planet – and that this poor health is driven by policies that are known to be counterproductive. It is not driven by callous healthcare staff.

Further, what you will read in this healthcare series is NOT a conspiracy theory or a secret. Far from it. In fact, everything I’ll talk about is well known and published in articles and books that anyone can read if they choose to. But, given the pressures of everyday life, people just don’t have the time, energy, and motivation to learn about the greatest threats to their health.




Are 30-Day Readmissions Rates a Reliable Indicator for Poor Healthcare Delivery?  (Part 2 of 2)


A look at poor healthcare delivery through hospital readmission rates.

Obamacare (Patient Protection and Affordable Care Act) has provisions that require the Centers for Medicare and Medicaid Services (CMS) to financially penalize hospitals that have unacceptably high 30-day readmission rates for Medicare and Medicaid patients. Institutions with high 30-day readmission rates in just a handful of situations will suffer financial penalties for ALL Medicare and Medicaid charges during the following fiscal year – not just the handful that are monitored.  Specifically, the CMS tracks 30-day readmission rates for[1]:

  • Heart failure
  • Heart attack
  • Pneumonia
  • Chronic lung problems (emphysema and bronchitis)
  • Elective knee and hip replacements

The penalties can be as much as 3% of all Medicare and Medicaid charges for the coming fiscal year. In most organizations, this can easily amount to millions of dollars. In larger institutions, it can amount to tens of millions of dollars.
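To get a feel for the scale, here is a quick back-of-the-envelope sketch. The revenue figure is purely hypothetical, chosen only to illustrate how a 3% cap translates into dollars:

```python
# Back-of-the-envelope sketch of the readmission penalty's scale.
# The revenue figure is hypothetical, not drawn from any real hospital.
annual_medicare_charges = 200_000_000  # assumed $200M/yr in Medicare/Medicaid charges
max_penalty_rate = 0.03                # CMS penalty cap of 3%

max_penalty = annual_medicare_charges * max_penalty_rate
print(f"Maximum annual penalty: ${max_penalty:,.0f}")  # prints: Maximum annual penalty: $6,000,000
```

Even a mid-sized hospital with nine-figure Medicare revenue can lose seven figures in a single year, which is why administrators take this one metric so seriously.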

The rationale behind this policy is that high 30-day readmission rates are a reliable sign of poor healthcare delivery. The idea is that if the hospitals had done a good job in the first place, patients wouldn’t need to come back so soon.

Part 2 of this blog argues that 30-day readmission rates are not a good metric for assessing the quality of care.  I fully recognize that my position is incompatible with the current wisdom, but I’ll give several reasons to support this position.  I suspect there are many others who feel the same way but haven’t argued their position.

The most striking issue that occurs to me is that 25% of all hospitals will automatically be classified as “losers” regardless of the reasons for their high readmission rates. This automatic and simple-minded categorization is grossly unfair.


Hospitals Are Only One Component in a Complex Healthcare Web

Hospitals are highly visible nodes in a complex web of healthcare delivery. Other components include general practitioners, medical and surgical specialists, independent laboratories, the social welfare system, and family support, among others. Unfortunately, it is not unusual for the elderly to have no family support. Everyone knows healthcare is a highly fragmented and fragile system. Failure in any component of this web can lead to readmissions. Nevertheless, Medicare and Medicaid (and perhaps society at large) hold hospitals solely accountable for readmissions.

Given the complexity of the healthcare web, it is highly unfair to single out hospitals as culprits when many of the factors affecting readmissions are beyond the control of hospitals.

Readmissions are a function of hospital care and discharge planning.  That is true.  But it is not the full story.  Another factor that impacts readmissions is the severity of the illnesses treated; those with severe illnesses are more likely to be readmitted. Hospitals can lower their readmission rates by declining to treat patients with severe illnesses.  I know that this is gaming the system, but it makes the metrics look good.

In some communities, elderly patients are discharged into the care of loving, stable, supportive families. In other communities, elderly patients go back to a bleak room in solitude. When they need help – even a ride to see their GP – there is no one to turn to. In other cases, elderly patients may live with their children.  But their children often have jobs and lives of their own. Although they are available to give help sometimes, they are simply not available to help all the time.

At discharge, hospitals routinely advise patients to schedule follow up appointments with their GPs. Patients promise to do so – but often don’t. In some cases, they don’t have GPs to call.  In other cases, they simply forget to make the appointments.  Sometimes they try to schedule an appointment but cannot get one for a month or more. Then there are the patients who simply don’t have access to transportation to get to their appointments.

Discharge staff generally give extensive instructions to patients about their medications, diet, exercise, etc. But it is not unusual for patients to fail to understand these instructions. Or they understand but they don’t have the money to buy the medications. Or they have the money for their medicines but they forget to take them.

There are any number of points of failure and many of them are beyond the hospital control – but hospitals take the hit for readmissions.


Race and Minority Status Are Correlated with Readmission Rates[2]

Blacks and Hispanics have higher rates of readmission to hospitals than whites. Many of these readmissions are avoidable. This means that hospitals serving Black and Hispanic populations are doomed to look bad on their readmission stats. There is no justice in this.

Why are race and ethnic background so important in determining readmissions? For one thing, the research shows these patients are less likely to schedule follow-up visits with their GPs or ongoing caregivers. They are also less likely to have GPs at all and, therefore, more likely to rely on their local hospitals. Many new immigrants lack the English proficiency to understand their discharge instructions or to read the written materials their hospitals give them. Many have less experience than whites in taking the initiative to look after their own health; they often take the position that whatever happens to them is beyond their control. Some don’t trust Western medicine and discount what they are told.

These demographics suffer more anxiety and depression than whites. These mental health issues contribute to the likelihood of readmissions.

These demographics often have co-morbidities. In other words, they often have several problems at the same time.  If patients don’t bring their other problems to the attention of hospital staff – or if hospital staff fail to stumble across them – those problems can pop up after discharge and trigger other, but unrelated readmissions.

The factors listed here are not unsubstantiated biases; they come from solid research funded by the Centers for Medicare and Medicaid Services and conducted by the Disparities Solutions Center at the Mongan Institute for Health Policy, Massachusetts General Hospital. Yet, even with this solid research, well known in the healthcare community, hospitals serving these disadvantaged populations are held responsible for readmission rates beyond their control.


Readmission Rates Vary by Geography and No One Knows Why[3]

In Part 1 of this blog, I showed a map of readmission rates across the country. There are two interesting points about that map. The first is that it remains essentially unchanged year after year. This means the geography-based dynamics are consistent over time.


The other interesting point is that the underlying health profile across these geographic regions is essentially the same.  In other words, the factors that drive readmission rates are not tied to differences in the health of the general population on a regional basis.  There are other drivers, but those drivers are not well understood.


We Think We Know the Answers; Not Sure We Do

The experts are in general agreement about how to reduce readmission rates. Surprisingly, very few of the hospitals that adopt the recommended practices actually see reductions in readmission rates! This is counterintuitive.

The four generally recognized ways to reduce readmissions are:

  • Improved discharge management with follow-up
  • Patient coaching
  • Disease/health management
  • Telehealth services

Unfortunately, the evidence shows that these common-sense techniques do NOT generally lead to lower readmissions. The research is consistent on this finding in community hospitals as well as teaching and research hospitals. A CMS study of changes in readmission rates from 2008 to 2010 found that reductions in readmission rates are slow and inconsistent.


Do You Like to Play Whack-A-Mole?

As a boy, I remember going to county fairs in August and playing Whack-A-Mole. Some of you may know the game. It has a board with about a dozen holes cut into it. “Moles” pop out of the holes at random; I never knew when or where the next one would appear. My job was to hit the mole on the head with a mallet. I often missed.

In some respects, taking steps to reduce 30-day readmission rates reminds me of playing Whack-A-Mole – although it shouldn’t. It seems that even though we know what we should do to reduce readmission rates, doing the “right thing” rarely leads to the desired outcome. To the extent this is true, it suggests that we don’t understand the underlying problem or that we don’t know how to address the problem.


Here Are the Best Ways to Reduce Readmission Rates

The best way to reduce readmission rates is to only accept patients who are not very sick in the first place. These folks can be patched up fairly quickly and put back on the street with a much lower chance of being readmitted.

Another technique is to reduce the overall intensity of healthcare delivery.  One would think that intensive levels of healthcare would lead to healthier populations. That, in turn, would lead to lower rates of readmission. Not true.

A third technique is to change the regional practices of hospital site care.  In some areas, patients are more likely to go to a hospital for initial care rather than a local clinic or a GP. In those cases, readmission rates are higher. If we could discourage patients from using hospitals as their primary source of healthcare, we could reduce readmission rates.

We also need to change the financial incentives. A hospital given the choice between leaving a bed empty and losing the revenue, or readmitting a patient and increasing its readmission counts, will rarely pass up the opportunity to earn a dollar today.

Experience also shows that taking steps to reduce readmissions in only one area (e.g., better discharge planning) has little impact. But if steps are taken in a number of mutually reinforcing areas, the hospital will see better results.


So, What Does It All Mean?

So, what’s the “take away” from all this?  Well, the first thing that occurs to me is that this is a very complex problem that we don’t seem to understand well in spite of the focus it has received.

Second, we should not hold hospitals accountable for outcomes they cannot control.  We need system-wide changes, not simply improved hospital procedures.

Third, even teaching and research hospitals – where we presumably find the best-of-the-best in healthcare – have not shown significant improvements in spite of their efforts.

Fourth, readmission rates vary geographically but change very little over time for any given geography.  That means there are forces at play we have not yet identified.

Fifth, racial and ethnic minorities have higher rates of hospital readmissions. These demographics have lower levels of trust in the “system,” take less personal responsibility for their health, have lower levels of health literacy, and suffer from higher rates of mental illness.

Sixth, 30 days is an arbitrary time frame.  It’s even possible that hospitals that focus on reducing 30-day readmissions will create unexpected negative consequences in other parts of the delivery system – although no research has substantiated this fear.




[1] A Guide to Medicare’s Readmissions Penalties and Data

[2] Guide to Preventing Readmissions Among Racially and Ethnically Diverse Medicare Beneficiaries

[3] The Revolving Door: A Report on U.S. Hospital Readmissions


Are 30-Day Readmissions Rates a Reliable Indicator for Poor Healthcare Delivery? (Part 1 of 2)


A look at healthcare delivery quality through hospital readmission rates.

Obamacare (Patient Protection and Affordable Care Act) has provisions that require the Centers for Medicare and Medicaid Services (CMS) to financially penalize hospitals and clinics that have unacceptably high 30-day readmission rates for Medicare and Medicaid patients. Institutions with high 30-day readmission rates in just a handful of situations will suffer financial penalties for ALL Medicare and Medicaid charges during the following fiscal year – not just the handful that are monitored.  Specifically, the CMS tracks 30-day readmission rates for[1]:


  • Heart failure
  • Heart attack
  • Pneumonia
  • Chronic lung problems (emphysema and bronchitis)
  • Elective knee and hip replacements


The penalties can be as much as 3% of all Medicare and Medicaid charges for the coming fiscal year. In most organizations, this can easily amount to millions of dollars. In larger institutions, it can amount to tens of millions of dollars.

The rationale behind this policy is that high 30-day readmission rates are a reliable sign of poor healthcare delivery. The idea is that if the hospitals had done a good job in the first place, patients wouldn’t need to come back so soon.

Ironically, I would say that this claim is both true and false. There are good reasons to treat 30-day readmission rates as a reliable surrogate for poor healthcare delivery.  But there are equally good reasons to treat this arbitrary metric as completely misleading.  We will explore both sides of this argument. Part 1 of this blog will argue that 30-day readmission rates are a reliable guide to the overall quality of the healthcare provided.  Part 2 of this blog will argue just the opposite: 30-day readmission rates are a bogus measure of the healthcare provided.


Medicare Readmissions Cost $17 Billion a Year

The most compelling argument in favor of using the 30-day readmission rates as a metric of quality comes directly from the Centers for Medicare and Medicaid Services (CMS). CMS claims that of the total $26 billion it pays annually for readmissions, $17 billion of that figure is for avoidable readmissions[2]. One in five elderly patients returns within 30 days of discharge. These are staggering numbers and, if true, are a strong indictment of the healthcare industry.

Further, this is the figure only for Medicare and Medicaid readmissions – a minority of all hospital admissions. Since there is no organization charged with tracking the costs of readmissions for those with private health insurance or no insurance at all, we will never know the full extent of avoidable readmissions for all patients.


Poor Communications at Discharge Is a Primary Driver of Readmissions

High readmission rates have been tracked to poor communications between hospitals and their discharged patients. Patients are often discharged with little explanation about the medications they are to take or the pain they will experience.  Post discharge pain is particularly severe for patients with hip and knee replacements.[3] Patients who expect the pain, know that it is normal, and know how to manage it are far less liable to return to the hospital than those who suffer pain and believe something has gone wrong.

There are other examples of poor communications that lead to rapid readmissions. Some patients who are admitted for chronic obstructive pulmonary disease have their condition treated and are discharged promptly. But the hospital personnel fail to tell some of those patients to stop smoking! They continue to smoke and return to the hospital promptly. Better communications at discharge about the need to stop smoking would make these readmissions unnecessary.

One patient suffered from type 2 diabetes for 14 years. She showed up at the hospital because her blood sugar was out of control. She got patched up and was back on the street again – but with no idea how to administer her insulin or manage her diet. Wham! She was back in the hospital again. This time the nurses and dietician showed her how to handle her insulin and how to change her diet. This was the first she had heard of these things in 14 years.  Strange but true.

Some research[4] indicates that 30-day readmissions could be reduced by 5% simply by improving communications with the patient prior to and at discharge while following a defined process of care protocol.  This is a cheap solution to an expensive problem.

If the solution is so obvious, why hasn’t it been widely adopted? It really boils down to the way our healthcare system is organized. Each participant in the system does the job he or she was trained to do. If the system doesn’t focus on clear, thorough communication at discharge, it won’t happen. But that is changing: now that CMS tracks readmission rates, applies financial penalties regularly, and research is uncovering the underlying causes, the system is adjusting. Again, we need to point the finger at hospital protocols, not at individual practitioners.


Poor Follow Up is a Big Problem, Too

Half of all Medicare patients do not see their general practitioner or a specialist during the first two weeks after discharge. We have no numbers for non-Medicare/Medicaid patients, but it is reasonable to assume the story is similar.

This lack of follow up leaves patients who suffer problems – real or imagined – little recourse but to return to the hospital where they received their most recent care.  Most of them don’t know what else to do.


“Evidence-Based Medicine” May Be Another Culprit

Medical and nursing training focuses on the technical aspects of healthcare – the “evidence-based” picture of what works and what doesn’t. Since there have been few (perhaps no) studies of the importance of patient–clinician interactions, patient communication hasn’t attracted the attention it deserves as a factor in long-term healthcare.

But even if there have been no studies to validate the importance of those communications, common sense should have done the trick.  In any case, the culture is likely to change. Hospital staff will pay more attention to discharge communications in the future.


Race and Ethnic Background Are Major Factors in Readmissions

Race and ethnic background are important factors in determining readmissions. Blacks and Hispanics have higher rates of avoidable readmissions than whites.[5] There is a multitude of reasons for this:

  • Less likely to see a primary care provider or specialist
  • Less likely to have a primary care provider they visit regularly
  • Limited proficiency in English, leading to poor follow-up (less likely to take the medicines prescribed, less likely to understand the discharge instructions, etc.)
  • Poorer health literacy and, as a result, less likely to take personal responsibility for their health
  • Cultural beliefs and customs
  • Less likely to have adequate food, transportation, and social support to follow medical regimens
  • More likely to suffer anxiety, depression, and poor mental health
  • More likely to suffer from a host of medical problems that lead to readmission

Collectively, this means that it is costlier and more time consuming to deal with these patients. When hospital readmission rates were not measured, there was no financial incentive for hospitals to make special efforts to deal with these demographic groups. But now that these statistics are measured and reported publicly and there are financial penalties, we are likely to see hospitals take the steps necessary to minimize readmissions with this demographic.

This does not suggest that hospital administrators were negligent in the past. Rather, it suggests that they were responding to public evaluation and financial metrics that made sense at that time. Once we change the system, we change behaviors.


What Gets Measured, Gets Done

This is an old management bromide that applies directly to hospital readmissions. Until CMS started focusing on hospital readmissions, the issue simply escaped notice. Since it was never measured, it was never addressed. It was only when healthcare administrators found their institutions being evaluated and financially penalized on this metric that they focused on it. That is normal.

Measuring 30-day readmissions and penalizing the worst performing 25% brought a focus to healthcare quality that has been missing for the last three millennia.

The fee-for-service payment model that has been used in this country since day one has never brought the quality of healthcare into focus. We have always automatically assumed that all clinicians showed superb judgment and did all that could be done. This uncritical attitude never held anyone in the healthcare field accountable for actual results.

Now, here’s the important point: by shining a spotlight on high readmission rates and putting penalties in place for poor performers, the federal government believes it can change behaviors. The rise of Accountable Care Organizations to address this issue is unlikely to have occurred without this sort of impetus. Further, there is evidence (The Revolving Door) that this new-found attention is, in fact, changing some behaviors at the community level. In other words, by measuring readmission rates, hospitals find that they can improve their performance on this metric.
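As a back-of-the-envelope illustration (not CMS’s actual methodology, which risk-adjusts for patient mix and excludes planned readmissions), the raw 30-day readmission rate behind this metric can be sketched in a few lines of Python; the data layout here is hypothetical:

```python
from datetime import date

def readmission_rate(stays):
    """Fraction of index discharges followed by a readmission within 30 days.

    `stays` is a list of (discharge_date, readmit_date_or_None) tuples --
    a simplified stand-in for real claims data.
    """
    if not stays:
        return 0.0
    readmits = sum(
        1 for discharged, readmitted in stays
        if readmitted is not None and (readmitted - discharged).days <= 30
    )
    return readmits / len(stays)

stays = [
    (date(2017, 1, 1), date(2017, 1, 20)),   # readmitted on day 19 -> counts
    (date(2017, 1, 5), None),                # no readmission
    (date(2017, 2, 1), date(2017, 3, 20)),   # day 47 -> outside the window
    (date(2017, 2, 10), date(2017, 2, 15)),  # day 5 -> counts
]
print(readmission_rate(stays))  # 0.5
```

A hospital’s rate on this simple measure, computed identically across all hospitals, is the kind of uniform yardstick the penalty program depends on.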


Readmissions Are Determined by Where Patients Live

If patient demographics and healthcare delivery systems were homogeneous across the country, we would expect to find the same rate of readmissions uniformly everywhere. That is not the case. Rather, we see a lot of “lumpiness.” In other words, the rates of readmission to hospitals are determined to a surprising degree by where patients live.

The map below shows the intensity of readmission rates within hospital referral regions.

Although it would be convenient to tie these widely ranging readmission rates solely to quality of medical care, that would be a mistake.  There are other forces at play:

  • Patient health status
  • Discharge planning
  • Care coordination with primary care physicians and other community based resources
  • Quality and availability of ambulatory care services

Further, some places treat their hospitals as a routine site of care. In other words, it is normal for those in some areas to go to the hospital rather than doctors’ offices or community clinics.

Percent of patients readmitted within 30 days following medical discharge among hospital referral regions (2009)


Here is something else I find interesting. If you look at the readmission rates for any one of the factors listed above, you’ll find that the readmission rates for the other factors are nearly the same for hospitals in the same geographic region. This correlation suggests that some dynamic is at play that is independent of the illnesses and chronic conditions in the region.

In other words, the patient is not at the hub of the healthcare system.


So, What Does It All Mean?

It requires some judgment to stand back, look at this disparate information, and draw conclusions.  In fact, different people are likely to draw different conclusions.

Nevertheless, I think it’s reasonable to say that 30-day readmission rates can be used, at a minimum, as a rough measure of quality of care. The rise of Accountable Care Organizations (which we will discuss later) and the fact that hospitals have been able to shift their positions significantly on the readmissions scale suggest that improvements are possible if we develop the right metrics, measure all hospitals by the same yardstick, and provide rewards accordingly.


Read Part 2 Here


[1] A Guide to Medicare’s Readmissions Penalties and Data,

[2] The Revolving Door: A Report on U.S. Hospital Readmissions,


[3] Reducing Readmission Rates with Superior Pain Management, by Bobbie Gerhart, owner, BGerhart & Associates, LLC; former president, Miami Valley Hospital, Dayton, Ohio


[4] What Has the Biggest Impact on Hospital Readmission Rates, by Claire Senot and Aravind Chandrasekaran


[5] Guide to Preventing Readmissions among Racially and Ethnically Diverse Medicare Beneficiaries, Prepared by: The Disparities Solutions Center, Mongan Institute for Health Policy, Massachusetts General Hospital, Boston, MA

The Top Six Big Data Challenges in Education

education challenges

Top Big Data Challenges

The path to the successful application of Big Data in educational institutions faces at least six major challenges or roadblocks that will have to be addressed one at a time:

Integration across institutional boundaries – K-12 schools are generally organized around academic disciplines. Universities are organized as separate schools, faculties, and departments. Each of these units operates somewhat independently of the others and shares real estate as a matter of convenience. Integrating data across these organizational boundaries is going to be a major challenge. No organizational unit is going to surrender any part of its power base easily. Data is power.

Self-service analytics and data visualization – It is going to be a piece of cake to give planners and decision makers the technology-based tools they need to do their own analytics and visualize the results of their studies graphically. It is going to be a genuine challenge to create a culture that requires them to do their own studies using those tools. An even greater challenge will be to create a climate in which they inform their decision making with the results of their own studies, because they are so accustomed to making decisions intuitively.

Privacy – There is a great deal of concern – perhaps even excessive concern – about the privacy of the information collected about each student and her family. The concern is that this data could fall into the wrong hands or be abused by those who have been given responsibility for safeguarding the information. To some extent, this is a technological and management issue. However, the fundamental issue is fear that the technical and management safeguards either won’t work or will be abused. Lisa Shaw, a parent in the New York City public school system said, “It’s really invasive. There’s no amount of monetary funds that could replace personal information that could be used to hurt or harm our children in the future.”

Correlation vs. cause and effect – Purists in rational argument want to see arguments that clearly spell out cause-and-effect relationships before blessing them as a basis for decision making. The fact that two factors are highly correlated does not satisfy this demand. Nevertheless, real-world experience in other areas of Big Data has shown that high correlations are sufficient by themselves to make decisions that are lucrative or achieve the objectives the players have in mind. In other words, practitioners have realized significant benefits based on correlation alone, without being able to explain the underlying mechanics.

Money – Nearly all educational institutions are strapped for money. When they make decisions to invest in the hardware, software, staff, and training to exploit Big Data, they are making decisions not to hire another professor, equip a student lab, or expand an existing building. That can be a tough call.

Numbers game – Some argue – perhaps rightfully so – that Big Data reduces interactions with students to a numbers game. Recommendations and assessments are based entirely on analytics. This means that compassion, personal bonding, and an understanding of the unique circumstances of each student get lost in the mix. Others argue that Big Data is an assist to the human process. In any event, this is unquestionably a stumbling block.

Privacy vs. Evidence Based Research

There is a great deal of concern about student privacy, as we mentioned above, and it is one of the top Big Data challenges that must be resolved. One of the key reasons for this concern is the process of growing up itself. It’s not unusual for students to participate in activist organizations in their youth that they reject later in life. Or they drank too much at university but sobered up once they had the responsibilities of jobs and families. Or a teacher may have given a student a negative evaluation that should not have survived the student’s graduation or departure from the school. In the past, we simply forgot these things. Life moves on, and we don’t give a great deal of attention to what happened 25 years ago. But permanent records that can be pulled up and viewed decades later may cast shadows on job candidates that are completely unwarranted at that time. In other words, we lose the ability to forget.

There is an even greater threat, though. Although there is general agreement about the value of predictive analytics, no one pretends that the predictions are inevitable. Nevertheless, a computer-generated prediction can take on the aura of truth. A prediction that a student is not suitable for a particular line of work may prevent hiring managers from hiring her for a position she is perfectly well suited to handle. These predictions can severely limit her opportunities in life forever.

One way of dealing with this is to pass legislation that limits access to student information, protects the identity of individuals, and yet still makes it available to those conducting legitimate educational research. Unfortunately, this ideal is better served in rhetoric than in reality.

Consider stripping student records of any identifying information and releasing them, along with the records of other students in the same cohort, for general access for educational research. Yes, the school has taken all the required and appropriate steps to protect the students’ identities. But, no, it doesn’t work. That’s because Big Data practitioners generally access large data sets from a wide variety of sources, and some of those sources (e.g., Facebook) make no attempt to protect the individual’s identity. Those secondary sources carry enough unique identifying characteristics to be accurately correlated with the de-identified school records and re-identify them. The best-laid plans of mice and men…
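The re-identification risk described above can be made concrete with a toy sketch (every name, field, and record here is invented): “de-identified” school records still carry quasi-identifiers – ZIP code, birth year, gender – that can be joined against an external dataset where names are attached.

```python
# Hypothetical "de-identified" school records: names removed, but
# quasi-identifiers remain.
deidentified_records = [
    {"zip": "10001", "birth_year": 1999, "gender": "F", "gpa": 3.8},
    {"zip": "10002", "birth_year": 1998, "gender": "M", "gpa": 2.1},
]

# Hypothetical public profiles (think of a social network) with names attached.
public_profiles = [
    {"name": "Jane Roe", "zip": "10001", "birth_year": 1999, "gender": "F"},
    {"name": "John Doe", "zip": "10002", "birth_year": 1998, "gender": "M"},
]

KEYS = ("zip", "birth_year", "gender")

def reidentify(records, profiles):
    """Link 'anonymous' records back to names via shared quasi-identifiers."""
    index = {tuple(p[k] for k in KEYS): p["name"] for p in profiles}
    matches = {}
    for r in records:
        key = tuple(r[k] for k in KEYS)
        if key in index:
            matches[index[key]] = r
    return matches

matches = reidentify(deidentified_records, public_profiles)
print(matches["Jane Roe"]["gpa"])  # 3.8 -- the "protected" record, re-identified
```

With real data, the join keys are noisier and the match is probabilistic, but the principle is the same: the more auxiliary sources available, the fewer attributes it takes to single someone out.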

There is no shortage of legislation in the US to protect student information. The most relevant legislation includes:

  • The Family Educational Rights and Privacy Act of 1974 (FERPA). This act prohibits the unauthorized disclosure of educational records. FERPA applies to any school receiving federal funds and levies financial penalties for non-compliance.
  • The Protection of Pupil Rights Amendment (PPRA) of 1978. This act regulates the administration of surveys soliciting specific categories of information. It imposes certain requirements regarding the collection and use of student information for marketing purposes.
  • The Children’s Online Privacy Protection Act of 1998 (COPPA). This act applies specifically to online service providers that have direct or actual knowledge of users under 13 and collect information online.

Unfortunately, this legislation is outdated and of limited use today. For example, it applies to schools but not to third-party companies operating under contract to the schools. It was enacted before the era of Big Data and doesn’t address the issues that current technology raises. Further, the acts don’t include a private “right of action,” which means individuals have no way to enforce the law in court.

In light of this, there are ongoing legislative attempts to deal with the need to protect the privacy of student information. Up until September 2015, 46 states introduced 162 laws dealing with student privacy; 28 of those pieces of legislation have been enacted in 15 states. There have been ongoing initiatives at the federal level as well. Relevant pieces of federal legislation that have been introduced include:

  • Student Digital Privacy and Parental Rights Act (SDPPRA)
  • Protecting Student Privacy Act (PSPA)
  • Student Privacy Protection Act (SPPA)

These acts are primarily concerned with protecting student data that schools pass along to third-party, private-sector companies for processing. Although these companies have generally built their own data protection policies and procedures that already meet the requirements of this legislation, there is still considerable fear that they will use the data for nefarious purposes such as tailoring marketing messages to particular students – something clearly outside the scope of providing education or conducting educationally related research.

The US is not alone in its concern. The European Union has developed regulations that apply throughout the EU, in contrast to the fragmented American approach. To be fair to the Americans, however, the Constitution leaves education to the states, not the federal government.

The EU 1995 Directive 95/46/EC is the most important EU legal instrument regarding personal data protection of individuals. Rather than discourage the use of third parties storing and processing student information, the EU prefers to regulate it. The EU recognizes that private sector companies provide a valuable service.

The Directive gives parents the option of opting out of data-sharing arrangements for their children. However, doing so would likely jeopardize the educational opportunities their children would otherwise enjoy. In other words, while parents have the right to opt out, doing so would be imprudent in practice.

After considerable discussion and consultation, the EU Parliament approved the General Data Protection Regulation (GDPR or Regulation). This Regulation is set to go into effect in May 2018. It pays particular attention to requiring schools to communicate “in a concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child.”

Unfortunately, this is problematic. Big Data and Machine Learning develop algorithms that are quite opaque. Even the professionals who operate Big Data systems don’t know the inner workings of the algorithms their systems develop; they often don’t even know which pieces of input are pivotal to the systems’ outputs and recommendations. In this context, it is understandable that the general public sees EdTech companies as a threat to students’ autonomy, liberty, freedom of thought, equality, and opportunity.

On the other hand, when you visit these EdTech websites, it certainly appears that they are driven by a sense of enlightenment. Their websites clearly suggest that they have the best interests of the students and their client schools in mind. Aside from the opaque nature of Big Data and Machine Learning algorithms, it is not clear – to this author at least – that EdTech companies deserve the skepticism they attract. Quite possibly the nub of the issue is not the stated objectives and current operations of these companies, but rather the unforeseen uses the data might be put to in the future. Unpredictable uses of the data could lead to unintended consequences.

In both Europe and the US, when we look at the furor about the importance of the privacy of student information, it often boils down to pedagogical issues.

Here is the nub of the conundrum in a nutshell. There is clearly a potential benefit to conducting educational research using student information. There is good reason to believe that tracking students over the course of their academic years – and perhaps even into their working careers – would allow scholars to identify early indicators of eventual success or failure. However, if restrictions on student identification or on the length of time data can be stored prohibit scholars from conducting that research, it simply won’t happen. That would be a loss both to individual students, who could benefit from counseling informed by reliable research, and to society at large.

How Is the Future of Big Data in Education Likely to Unfold?

Here are the trends to look for – in no particular order. These trends will inform schools’ policy development, strategic planning, tactical operations, and resource allocation, and help overcome the Big Data challenges in Education.

Focus student recruitment – Historically, colleges and universities have had student recruitment programs that were fairly broad in terms of geography and demographics. This led to a large number of student applications for admission. Unfortunately, many of the students the institutions accepted did not enroll in those schools. Colleges are now using Big Data to find the geographic areas and demographics where their promotional efforts generate not only large numbers of high-caliber applicants, but applicants who, if accepted into the college, will actually enroll.

Student retention and graduation – Universities need to do more than attract high-caliber students. They need to attract students who will stay in school and graduate. Big Data coupled with Machine Learning can help identify those students. In parallel with student recruitment, schools will increasingly use Big Data to identify at-risk students the moment they show signs of falling behind. This will enable the schools to assist those students, help ensure their success, retain them, and increase the chances they will graduate.

Construction planning and facility upgrades – Educational institutions at all levels have more demands to add or expand their buildings and upgrade their facilities than their budgets will permit. They need to establish priorities. Big Data will help planners sort through the data to identify those areas that are likely to be in highest demand and provide the greatest benefit to the students and the institutions.

Data centralization – At the moment, nearly all data in educational institutions is held in organizational silos. That means that each department or organizational unit collects, stores, and manages the data it needs for its own purposes. That is a natural result of the need for each function to get its work done. However, it is counterproductive if we wish to apply Big Data. In the future, we can expect these siloed data stores to be integrated physically or linked virtually. Physical integration means the data is moved to a central repository and managed by a central function – like the IT department. Virtual integration means the functional data stores remain where they are, but the IT department has read access to each of them. Quite likely, we will see both options in practice for the foreseeable future.

Data-based decision making and planning – Although Education has enjoyed the benefit of quantitative studies for centuries, the practice of education is generally driven more by the philosophical views of educators than by data- or evidence-based studies. In fact, this approach has been enshrined in our commitment to academic freedom at the university level and has trickled down, to some extent, to public and private K-12 schools. Big Data will enable a data-rich culture that will inform policy development and operational planning to an extent we’ve never seen in the past.

Greater use of predictive analytics – Machine Learning applied to Big Data will become increasingly successful at predicting students’ future success based on their past performance. Schools of all stripes will rely more and more on these predictive analytics. This is likely to lead to two types of outcomes. On the one hand, schools will allocate more resources to the students most likely to succeed and, as a result, graduate more high-performing students who will deliver significant benefits to their communities and the world. On the other hand, predictive analytics may restrict the academic opportunities of failing students or those who show little promise – like Albert Einstein. Predictive analytics will also help institutions develop counter-intuitive insights that challenge long-cherished assumptions and lead to better student and institutional results.
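A hedged sketch of how such an at-risk prediction might work in its simplest form – a hand-weighted logistic score rather than a trained model, with entirely hypothetical features, weights, and threshold:

```python
import math

# Hypothetical feature weights; a real system would learn these from
# historical student records rather than hard-coding them.
WEIGHTS = {"gpa": -1.2, "absences": 0.15, "late_assignments": 0.3}
BIAS = 1.5

def at_risk_probability(student):
    """Squash a weighted sum of features through a logistic function to [0, 1]."""
    score = BIAS + sum(WEIGHTS[f] * student[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-score))

student = {"gpa": 2.0, "absences": 12, "late_assignments": 5}
prob = at_risk_probability(student)
print(round(prob, 2))  # roughly 0.92 with these invented weights
if prob > 0.7:  # hypothetical intervention threshold
    print("flag for advising outreach")
```

A production model would be trained on labeled outcomes (graduated vs. dropped out) and validated for fairness across demographic groups – precisely the concern raised about restricting opportunities.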

Local adoption of analytics tools – Older readers will remember when word processing was handled by a pool of typists. Over time, it migrated from the pool to executives’ assistants and, eventually, to the desks of the executives themselves. Once it reached the desks of executives and other knowledge workers, word processing shifted from a mechanical function to a creative one: knowledge workers crafted their messages as they took form on their screens. The same will be true of predictive analytics. We are going to see the hands-on management of predictive analytics studies migrate from Big Data specialists to the desktops (and laptops) of executives who need to think through, propose, and defend policy statements, strategic plans, and operational or tactical initiatives.

User experience – Educators often don’t know a student is having a problem until they see the student failing (or just barely passing) quizzes and tests. Even then, they don’t know why any given student is falling behind. Big Data will help by recognizing students’ problems as they occur and offering tutorials that address them immediately – not days or weeks later, when it may be too late to affect the students’ learning trajectories.

Real-time quiz evaluations and corrective action – As computers and tablets become ever more pervasive in classrooms, schools at all levels will be better able to collect digital breadcrumbs about how students perform on quizzes and determine what corrective action is required. This will eventually become the norm. Steven Ross, a professor at the Center for Research and Reform in Education at Johns Hopkins University, agrees: “Most of us in research and education policy think that for today’s and tomorrow’s generation of kids, it’s probably the only way.”

Privacy, privacy, privacy – The privacy of student and family data will continue to be a hot issue. Over time, however, the benefits of sharing identified student data will outweigh the concerns of the general public. Sharing this data among qualified research professionals will become more socially acceptable not only as technological safeguards are put into place, but as those safeguards come to be accepted as adequate. In practice, society will also discover that the student data it thought was secure is not. Witness the data breach at Equifax that spilled confidential data on 143 million people. Remember the data breaches at Target and Home Depot? Tens of millions of people who trusted those companies with their credit card information were affected.

Learning Analytics and Educational Data Mining – We are seeing a new professional discipline emerge. The professionals in this field will have both the professional and technical skills to sort through the masses of unstructured educational data being collected on a wholesale basis, know what questions to ask, and then drill through the data to find useful, defensible insights that make a genuine difference in the field of Education. The demand for these specialists is likely to outstrip the supply for many years to come.

Games – We are likely to see far more games introduced into the educational curriculum than ever before. Games have not only proven instrumental in the learning process, they also lend themselves to data acquisition for immediate or later analysis.

Flipped classrooms – The Khan Academy has reversed the historical process of delivering course material during class time and assigning homework to be done out of class. In its flipped classrooms, students watch streaming videos at their leisure outside class. Class time is dedicated to providing students a forum where they can work through their problem sets and ask for – and get – help as they need it. The flipped classroom is going to become far more widespread because today’s technologies enable it – and it just makes a lot of sense.

Adaptation on steroids – Adaptation is nothing new; it’s been going on for thousands of years. The idea is that course material, explanations, problem sets, or tutoring are tailored to the individual needs of the student. But when we put that adaptation on steroids, we see a shift in “kind” – something that was not present before. Today we can monitor every move students make, not just count the right and wrong answers they give on a quiz. By analyzing facial expressions, delays in responding, and a myriad of other variables, we can tailor-make and deliver a tutorial specifically suited to a student’s learning problem at the moment the problem occurs.

Institutional evaluation – Schools have always presumed to grade their students. Until relatively recently, it was presumptuous for students to grade their teachers or their schools. Now it is becoming common practice. In fact, Big Data will play an ever-growing role in assessing the performance of individual instructors. More importantly, Big Data will rank-order universities, colleges, and high schools on a wide range of variables that can be supported by empirical evidence. True, some of that evaluation will be based on “sentiment” – but much of it will be based on hard analytics that would previously have been too time consuming or too expensive to collect and analyze in a holistic manner.

The Jury Is Still Out

In spite of all the investment, the excitement, and the promise of Big Data in Education, we still don’t have enough experience to make categorical claims about its value. We are still struggling with the top Big Data challenges we face.

In an article in The Washington Post last year, Sahlberg and Hasak claimed that the promised benefits of Big Data have not been delivered. As a visiting professor at the Harvard Graduate School of Education, Sahlberg is an authority we should listen to. He claims that our preoccupation with test results reveals nothing about the emotions and relationships that are pivotal in the learning process. Our commitment to judging teachers by their students’ test scores has the effect of steering top-performing teachers away from low-performing schools – exactly where they are most needed. There are extensive efforts to evaluate both teachers and students. However, according to Sahlberg, this has NOT led to any improvement in teaching in the US.

The most that Big Data can offer is an indication of a high correlation between one factor and another. It cannot establish cause and effect. In fact, cause-and-effect arguments are difficult to make – and yet they are instrumental in building compelling cases. Having said that, it is revealing that finding high correlations in other fields – even without a demonstrated cause-and-effect relationship – has proven quite beneficial.
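The distinction is easy to see in code. The sketch below computes a Pearson correlation coefficient for two invented series; the coefficient comes out near 1.0, yet nothing in the calculation says which variable (if either) drives the other:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-school figures: spending per student ($k) and graduation rate (%)
spending = [8.2, 9.1, 10.4, 11.0, 12.3]
grad_rate = [71, 74, 79, 81, 86]

r = pearson(spending, grad_rate)
print(round(r, 3))  # near 1.0: strongly correlated, but the number alone
                    # says nothing about what causes what
```

A third factor (say, neighborhood income) could be driving both series; the correlation would look exactly the same.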

Digitally Transforming the Healthcare Industry

digital healthcare

Big Data Has Changed the Practice of Healthcare Forever – and the Change is Just Beginning. Healthcare organizations – old and new – are investing heavily in Big Data applications.

Big Data projects process data measured in petabytes to deliver significant healthcare benefits. Only a small proportion of that data comes from traditional databases with well-structured data. Instead, almost all of it comes from sources that are messy, inconsistent, and never intended for a computer to use – unstructured patient records. Accessing this unstructured data and making sense of it gives healthcare professionals and leaders insights they would never have otherwise, insights that directly affect the way healthcare is delivered on a patient-by-patient basis.

I’ll give you four real-world examples of benefits the healthcare industry has already realized. We’ll take a quick look at Apixio, Fitbit, the Centers for Disease Control, and IBM’s Watson Health.


Medical research has traditionally been conducted on randomized trials of small populations. No one tried to conduct massive healthcare research using all the data on all patients because the work would have been overwhelming. Limiting the size of the data sets made the research manageable, though working with small sample sizes creates methodological flaws of its own. This is not to criticize those studies but to recognize that their outcomes were limited by what was feasible at the time they were conducted.

Apixio set out to change all that. Apixio developed mechanisms for conducting healthcare research based on studies of actual patient healthcare records. Its mechanisms leverage both Big Data and machine learning. Further, they work with ALL the patient healthcare records a facility has to offer – not just a randomized subset. As new patients are treated, Apixio collects data about the symptoms, diagnoses, treatment plans, and actual outcomes. By integrating these new cases into the mix, the company can quickly determine what works and what doesn’t. The difference between measuring the effectiveness of treatment programs through limited clinical research studies and measuring it through reviews of ALL patients can be dramatic.

Only about 20% of patient healthcare records reside in well-ordered databases; the other 80% is messy, unstructured data – the GP’s notes, consultants’ notes, and forms prepared for Medicare reimbursement purposes. Working with unstructured data used to be problematic. Institutions had to hire and train “coders” who would read free-form materials (handwritten notes, typed notes, etc.) and capture their meaning in a form suitable for computer processing. Apixio dealt with this issue quite differently: it used computer-based algorithms to scan and interpret the data. The company found that its computer-assisted techniques enable coders to process two to three more patient records per hour. Further, the coded data created this way can be as much as 20% more accurate than the manual-only approach.
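Apixio’s actual algorithms are proprietary, but the general idea of scanning free-text notes for codable diagnoses can be illustrated with a toy pattern matcher. The patterns and codes below are simplified stand-ins, not real coding rules:

```python
import re

# Toy mapping from text patterns to diagnosis codes. Real computer-assisted
# coding uses NLP and machine learning; this hypothetical list only
# illustrates the input/output shape.
PATTERNS = {
    r"\btype 2 diabetes\b": "E11",
    r"\bcongestive heart failure\b|\bCHF\b": "I50",
    r"\bhypertension\b": "I10",
}

def suggest_codes(note):
    """Return sorted candidate diagnosis codes found in a free-text note."""
    found = set()
    for pattern, code in PATTERNS.items():
        if re.search(pattern, note, flags=re.IGNORECASE):
            found.add(code)
    return sorted(found)

note = "Pt with long-standing hypertension and type 2 diabetes, denies CHF."
print(suggest_codes(note))  # ['E11', 'I10', 'I50']
```

Even this crude version shows why the approach assists rather than replaces coders: the note above mentions CHF only to deny it, a nuance a human reviewer would catch and the matcher would not.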

This computer-assisted approach also finds gaps in the documentation. In one nine-month period, Apixio reviewed 25,000 patient records and found 5,000 records that either did not record a disease or didn’t label it correctly. Correcting the data can only improve diagnoses and treatment programs.

Apixio does far more than produce studies that physicians can use to inform their treatment plans. It takes the next step. It reviews the healthcare records of each patient and develops personalized treatment plans based on a combination of the data it has collected for that patient and the results of its analyses of practice-based clinical data. This enables physicians to order only the tests that are useful and avoid expensive but worthless procedures.

This pays off handsomely for insurance companies that cover patients enrolled in Medicare Advantage plans. Under these plans, Medicare pays a “capitated payment” – a fixed payment per patient based on expected healthcare costs. By tailoring diagnostic tests and treatment programs to the individual, the insurer can reduce its costs dramatically, and those savings drop directly to the bottom line.
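The capitation arithmetic is simple. With hypothetical, illustrative numbers (not actual Medicare rates), the effect of tailored test ordering looks like this:

```python
# Hypothetical, illustrative numbers only — not actual Medicare rates.
capitated_payment = 10_000   # fixed annual payment per enrolled patient
baseline_cost     = 9_500    # cost with one-size-fits-all test ordering
tailored_cost     = 8_800    # cost after dropping low-value tests

margin_before = capitated_payment - baseline_cost   # 500
margin_after  = capitated_payment - tailored_cost   # 1,200

# Because the payment is fixed, every dollar of avoided cost is profit.
print(margin_after - margin_before)  # 700 more margin per patient per year
```

This is why the savings “drop directly to the bottom line”: the revenue side is fixed by the capitated rate, so cost avoidance is pure margin.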

It’s not just the insurance companies that benefit, though. Patients benefit as well: they are not required to undergo inconvenient or painful procedures that would provide no benefit.


Fitbit

Fitbit is the leader in wearable devices that track fitness metrics, although Apple is hot on its heels with the Apple Watch. Fitbit sold 11 million devices between its founding in 2007 and March 2014. These devices track fitness metrics such as activity, exercise, sleep, and calorie intake, and the data collected daily can be synchronized with a cumulative database that allows users to track their progress over time.

The driving principle here is that people can improve their health and fitness if they can measure their activity and diet, and their outcomes, over time. In other words, people need to be informed in order to make better fitness decisions. Fitbit provides users with progress reports presented in a preformatted dashboard that tracks body fat percentage, body mass index (BMI), and weight, among other metrics.

Patients can share their data with their physicians to give them an ongoing record of key healthcare parameters. This means that doctors are not forced to rely solely on the results of tests they order on an infrequent basis. To be fair, however, not all physicians treat the data their patients collect on their own as being as credible as data collected in a clinical setting.

Insurance companies are prepared to adjust their premiums based on the extent to which their policyholders look after themselves as measured by Fitbit. This means that policyholders are required to share their Fitbit or Apple Watch data with the company. John Hancock already offers discounts to those who wear Fitbit devices and the trend is likely to spread to other insurance companies.

The fastest-growing sub-market for Fitbit is employers, who can provide their employees with Fitbit devices (with the employees’ permission) to monitor their health and activity levels.

The CDC and NIH

The Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) are leaders in applying Big Data to identify epidemics, track their spread, and – in some cases – project how they are likely to spread.

The CDC tracks the spread of public health threats, including epidemics, through analyses of social media such as Facebook posts.

The NIH launched a project in 2012 it calls Big Data to Knowledge or BD2K. This project encourages initiatives to improve healthcare innovation by applying data analytics. The NIH website says, “Overall, the focus of the BD2K program is to support the research and development of innovative and transforming approaches and tools to maximize and accelerate the integration of Big Data and data science into biomedical research.”

A couple of years ago the CDC used Big Data to track the likely spread of the Ebola virus. It used BigMosaic, a Big Data analytics program that the CDC coupled with HealthMap, a database that maps census data and migration patterns. HealthMap shows where immigrants from various countries are likely to live – right down to the county or even the community level. When the CDC identifies countries with a public health problem – like the Ebola virus – it can link that census data on the distribution of expat communities with airline schedules to determine how the disease is likely to spread in the US – or even in other countries. This allows the CDC to track the spread of disease in near real time and, in some cases, even project how diseases are likely to spread.

These Big Data applications merge weather patterns, climate data, and even the distribution of poultry and swine, and present the data in a graphic form that makes it easier for epidemiologists to visualize how diseases are spreading geographically. The benefit, of course, is that the CDC and the World Health Organization can deploy their scarce resources to the areas where they can do the most good. They can do that because Big Data provides the tools to chart the spread of diseases by international travelers.

The CDC now uses Big Data linked with social media to forecast the spread of communicable diseases. Historically, the CDC tracked the reported spread of diseases after the fact; forecasting how diseases will spread is a new ball game. The CDC ran competitions for research groups to develop Big Data models that accurately forecast the spread of diseases, and received proposals for 28 systems. The two most successful were both submitted by Carnegie Mellon’s Delphi research group. These models are not predetermined but, instead, leverage machine learning to develop tailored models that forecast the spread of each specific disease.

The model is by no means perfect. The CDC gave the Carnegie Mellon model a score of .451, where 1.000 would be a perfect model; the average score for all 28 models was .430. That means the model the CDC will use is the best available and much better than nothing, but it still has considerable room for improvement.

The Delphi group is studying the spread of dengue fever, and has plans to study the spread of HIV, Ebola, and Zika.

IBM and Watson Health

IBM is particularly proud of Watson, its artificial intelligence system on steroids. Although Watson has produced some stunning results, such as beating the two best Jeopardy! contestants at their own game, our interest today is in healthcare.

Watson is machine learning at its finest. In the healthcare field, its managers feed it an ongoing stream of peer-reviewed research papers from medical journals along with pharmaceutical data. Given that Big Data knowledge base, Watson applies what it has learned to individual patient records to suggest the most effective treatment programs for cancer patients. Watson’s suggestions are personalized to each patient.

Watson’s handlers don’t program the software to deliver predetermined outcomes. Instead, they apply Big Data algorithms to enable Watson to learn for itself based on the research it reviews as well as the diagnoses, treatment programs, and observed outcomes for individual patients.

IBM is partnering with Apple, Johnson & Johnson, and Medtronic to build and deploy a cloud-based service that provides personalized guidance to hospitals, insurers, physicians, researchers, and even individual patients. This offering is based on Watson – IBM’s remarkably successful system that integrates Big Data with machine learning to enable personalized healthcare on a massive scale.

Until now, IBM has used Watson in leading-edge medical centers including the University of Texas MD Anderson Cancer Center, the Cleveland Clinic, and Memorial Sloan Kettering Cancer Center in New York. Given its successes to date, IBM is now ready to take the system mainstream.

How Medical Mobile Apps are Transforming Healthcare


Medical mobile apps are transforming the healthcare industry, promising to improve the quality of healthcare while lowering costs.

In 2017, global medical healthcare apps were a $26 billion industry with a global average CAGR of 32.5%. The United States currently has the largest market for mobile medical apps, but the Asia-Pacific region is showing the fastest growth in the world, with an estimated average CAGR of 70.8%. By 2022, the worldwide mobile medical app market is anticipated to reach $102.43 billion.
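As a quick sanity check of these figures, compounding $26 billion at a 32.5% CAGR over the five years from 2017 to 2022 lands in the same ballpark as the $102.43 billion forecast:

```python
# Back-of-the-envelope check of the article's figures:
# $26B in 2017 growing at a 32.5% CAGR for five years (2017 -> 2022).
start_value = 26.0      # $ billions, 2017
cagr = 0.325
years = 5

projected = start_value * (1 + cagr) ** years
print(round(projected, 1))  # 106.2 — roughly consistent with the $102.43B forecast
```

The small discrepancy simply reflects that the 32.5% figure is an average; analysts’ per-year growth assumptions vary.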

As of 2017, mobile healthcare apps have been downloaded over 3.2 billion times – this marks a 25% increase since 2015. In the United States alone, there are over 500 million smartphone users with mobile health-related apps. The greatest growth in mobile medical apps has been in the management of chronic care – particularly diabetes, obesity, high blood pressure, cancer, and cardiac illnesses.

As the prevalence of chronic illnesses increases worldwide, so does the number of medical apps created to help manage them. Nearly half of all Americans, around 133 million individuals, currently live with a chronic illness. Per the Centers for Disease Control and Prevention, seven of the top ten causes of death in the US are now due either directly or partially to chronic illness.

Chronic illness is on the rise globally as well. According to the World Health Organization, as of 2017 over 79% of all deaths related to chronic illness occur in developing countries, and this rate is anticipated to continue to climb. Heart disease and other cardiovascular illnesses will continue to be the major cause of mortality around the globe. Asia, in particular, is experiencing the greatest rise in cardiac disease and death due to heart-related complications.

The widespread availability of tablets and smartphones in healthcare is helping spur the use of mobile healthcare apps by patients and providers alike. According to referralmd, over 80% of physicians in 2017 used their smartphones at the point of care – whether for patient services or for administrative reasons. This wide access to and use of smartphones by providers and patients has been the primary driver behind the increasing availability of mobile healthcare apps year over year.

How can mobile apps help? What kind of mobile apps do patients want? And which kind do physicians need?

The healthcare industry is filled with opportunities for digitally savvy companies and mobile app developers.

Download and read the full article here.

Is Your Company Complacent?

Over the past 20 years, my main focus has been turning around lackluster sales and marketing organizations. I have done so over 50 times, and one of the most common phrases I hear from clients is: “It is what it is!”

I have found that when clients use that statement they are saying one or more of the following:

  • They have no control over the current situation and don’t see how things will change.
  • Their situation is too difficult to change and they don’t want to spend the time, energy or resources required to make improvements.
  • The organization is very political and “rocking the boat” is not an option.

In short, the company or team has become complacent. Complacency leads to stagnation. That’s when I’m brought in, to figure out what is happening and to get the team back on track.

Complacency is like a cancer: it can hit fast and without warning. Morale tends to tank and numbers and goals are missed.

Here is what I recommend to cure your company of the complacency disease:

  1. Use a survey to ask your customers and your team how the company is performing in the following areas:
    – Sales
    – Marketing
    – Customer Service
    – Innovation
  2. Compare the results of the two surveys to find issues and gaps.
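A minimal sketch of the comparison step (the survey areas come from the list above; the 1-10 scores are hypothetical):

```python
# Hypothetical 1-10 scores from the two surveys described above.
customer_scores = {"Sales": 6.1, "Marketing": 5.2,
                   "Customer Service": 4.0, "Innovation": 5.5}
internal_scores = {"Sales": 7.8, "Marketing": 7.5,
                   "Customer Service": 7.9, "Innovation": 6.0}

# A large positive gap (the team rates itself far above customers) is a
# classic marker of complacency; tackle the biggest gaps first.
gaps = {area: round(internal_scores[area] - customer_scores[area], 1)
        for area in customer_scores}
for area, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: gap {gap}")
```

Sorting by the size of the gap gives you the priority order for the action plan described below.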

Be fearless.  Use the survey to get to the heart of your company’s complacency and lackluster results.  Take the responses seriously.  Make a plan for addressing the challenges, focusing on the most impactful issues first.

When I am tasked with a turn-around situation, I survey every group and review every process that I believe to be important, and I also get customers’ perspectives. The information gives me a way to diagnose the problem and to find a cure.

I recommend that these surveys be done quarterly. They are a great feedback mechanism and will provide valuable insights as you turn your company from a complacent organization into a high-performance organization.

3 Questions to Ask when Interviewing Potential Hires

In this job market, when companies have open Teleprospecting positions (or any open positions, for that matter) they tend to receive a large number of resumes. The position is open for a reason, and any delay in filling it may delay sales funnel growth. That is why companies feel compelled to get the hiring done fast. In an effort to fill the position quickly, some resume screening occurs and viable candidates are brought in for in-person interviews. Several key people are asked to interview each candidate to review their skills and to see if they would be a fit for the company, and these interviews all happen within the same day. I call this the “Interview Shuffle.”

During the interview shuffle, Teleprospecting candidates meet with several people from marketing and sales.  In some cases, candidates will have a final interview with the CMO, CSO and even the CEO.  Depending on the number of people required to do the interviews, the actual time each member of the interview team spends per candidate may be only 30-45 minutes, at the most.

These are the 3 reasons why I advise my clients against using this method to interview candidates:

1. Who is this person?

In 30-45 minutes it is difficult to determine whether the candidate is a good fit for the position or the company. A series of questions is asked, and people who are good at interviewing may wow each interviewer. When a decision is based on this first series of interviews, I have seen the hiring manager regret the decision within the first 3-6 months because the candidate isn’t performing as well as was hoped. Instead, bring the candidate in at least 2-3 times. Ask the candidate a few of the same questions that were asked during the first interview. Are their answers generally the same, or are they significantly different? Plan the questions that the interview team will ask each candidate, and make sure the bulk of them are “situational” questions – questions that require specifics about how the candidate performs their job. For example:

Question: Give me an example of a typical day at your current job.

Good answer: “I get into the office around 7 am and take a quick look at my activities that I have set up for the day. If there is a company that I need to research, I will take a moment and Google the company before I make the call.”

Bad answer: “Well, in this job, I guess most people start early after they get their coffee and then they start making calls.”

Situational interviewing takes time.  It is the best interview format to uncover if the candidate knows their stuff.  This interview process might take 90 minutes or more for each interview team member to complete.  The time will be well spent because the information uncovered will help you to make a more informed decision.

2. What is the commitment level?

When a candidate is interviewed by the entire interview team on the same day, there is no way to tell how committed the candidate is to the position or the company. Asking candidates to return for additional interviews gives the hiring manager an opportunity to verify continued interest – another reason the candidate should come back to the office 2-3 times. It also gives you more than one day to determine whether they are the best candidate for the job. Are they professional and appropriately dressed each time? Do they exhibit a continued level of enthusiasm? Do they ask additional and interesting questions about your company with each subsequent interview? If not, the candidate may not be right for the job or your company.

3. What does the interview team think?

There is a Chinese proverb that says, “Don’t be over self-confident with your first impressions of people.” Many people are very good at interviewing; sometimes they are great on the job and sometimes they aren’t. I do believe that managers should trust their gut if they don’t feel good about a candidate. However, a hiring manager should give themselves some time to see if their first “good” impression sticks – and this can’t be done in one day. Bring the candidate back, and circle back to the interview team to see if they too continue to feel good about the candidate.

The Interview Shuffle enables companies to get their open positions filled quickly.  From my experience, however, this hiring method costs companies time, wastes resources and inhibits a company from meeting objectives.  Too often the wrong candidates are selected and the position is open again within a few months.

I recommend that the hiring manager conduct the first interview, taking time to ask situational questions to determine fit. Candidates who pass this first interview should be asked to meet with the interview team and the hiring manager a few more times over a period of 8-10 days. This will give everyone time to get a good sense of the person. If you rush the hiring process you might regret your choice later on; take a few weeks to get a sense of the person so that you will not have to repeat the interview process again in a few months.

Shrinking Idea to Revenue Cycle Time

We define Idea-to-Revenue, or Concept-to-Revenue (C2R), Cycle Time as the time it takes from the moment an idea is born within a company to when the resulting product is out in the market, fully supported and ready to be sold.

C2R cycle times can be unnecessarily long, partly due to a lack of best practices and partly due to limitations in organizational structure. Concepts never become products for one reason or another; products take a long time to get out of Development; or it takes a long time to put together the necessary marketing and selling tools and campaigns. While each of these problems can cause significant delays, most often all three are present, making the matter a real challenge.

In this article, we will start by discussing the anatomy of C2R Cycle Time. Then we will see what makes C2R cycle times longer than they should be, and finally what companies can do to cut their cycle times in half or more.

Anatomy of C2R Cycle Time

Products have three levels of readiness: the Development Ready phase, in which an idea becomes first a project and then a product; the Operations Ready phase, in which the product becomes fully documented, supported, trainable, and usable; and finally, the Market Ready phase, in which the product is launched, marketed, and sold.

Development Ready

We define a product as anything that is made pretty much the same way from customer to customer and not tailor-made for each customer. A BMW 750, Microsoft Word, a business checking account at a specific bank, a business-class seat on Delta Air Lines, a Big Mac, and a 30-year fixed mortgage loan from a specific lender are all products, since there is very little variation between what one customer receives and another, or between what the same customer receives this time and the next time he/she purchases the same thing.

When Product Development talks about how long it took to develop a product, it is typically looking at the time it took from concept to Development Ready.

Typically, concepts go through the following steps before they become working products:

  1. Kicking around an idea – typically the originator(s) of the idea or concept kick it around with others to see if anyone can punch a hole in it. At this point, the concept is not even on paper; it lives mostly in the heads of those discussing it.
  2. Validating key underlying assumptions – once the idea seems to be solid, the next step is to validate some underlying assumptions, especially on the market side. Is there a sufficient size of potential customers for this if it ever becomes a product?
  3. Defining the Requirements – this is likely the first time any document is produced, and the goal is to specify and document what the product is supposed to do. For the most part, this document specifies “what” needs to be done without going into any discussion of “how” it will be done.
  4. Proof of Concept – the purpose here is to find out what the big hurdles are and whether they can be overcome within a reasonable time frame.
  5. Design – the proof of concept phase provides highly useful insights into how the product should and should not be designed. Armed with such insights, the task here is to design products that are easy to make, sell, support, learn, and use.
  6. Production – the final step of the Product Development phase is to make the product according to the design specifications.

Operations Ready

Most customers will not buy a product that is not ready to be “used” within their own context. For example, consumers will not buy an electric car until they are sure that charging stations are conveniently located, that there is a place where they can get the car serviced, that their insurance company will insure it, and so on.

Operations Ready means that the product is ready to be fully supported by the company so that the average end user can fully utilize the product. It includes all of the supporting or peripheral products, training, installation, configuration, and more.

In order for this to happen, the service side of the company has to be done with its part, not just Product Development. Documentation has to be prepared, and trainers have to be trained so they in turn can train others on the product. The Customer Service and Support team has to be trained and equipped with the tools it needs to support customers, and so on.

The Support or Field Organization that is actually responsible for the customer’s end result should be brought in very early in the Product Development phase – typically by step #2 (Validating Assumptions), but no later than step #4 (Proof of Concept). If that is the case, not only is the product design better (easy to support and learn), but the support organization will also be ready to support the new product at the time the product development team finishes, or shortly after.

Market Ready

Products are Market Ready when everything necessary to market and sell the product is ready. This includes the preparation of campaigns to drive awareness of the product; content to educate the market on the product; warm leads to pass on to the Sales Organization; and the training, tools, and resources the Sales Organization needs to sell the product.

Ideally, the company will bring Marketing and Sales into product development early – both to receive valuable feedback and to get Marketing and Sales ready to rapidly launch the product into the intended market segment.

The ideal time to do this is at Step #2 of the Product Development phase when validating underlying assumptions. Both Marketing and Sales have valuable feedback and insights to bring to the product development phase.

Why C2R Cycle Times are Long

Having described the three levels of readiness that make up the C2R cycle, we will now examine why most companies experience unnecessarily long C2R cycle times.

Typically, companies conduct their product development, operational readiness, and market readiness serially, in 1-2-3 format. For example, if it takes twenty (20) months to develop the product, four (4) months to make the necessary preparations for the support organization, and another two (2) months to get Marketing and Sales ready, the total C2R cycle time is at least twenty-six (26) months. In actuality, the C2R cycle time is even longer, since it is unlikely that the company will have perfect timing of the handoff from one phase to the next.

What most often happens is that the head of Product Development announces that product “P” is now ready and that the company should move fast to bring it to market. At this point, it is highly unlikely that anyone outside Product Development knows much about the product.

In order for the various departments to do their jobs – supporting this product as well as marketing and selling it – they must know what problem it solves, how it solves it, and why it is the best alternative. That information typically comes last, after the product is developed. Ideally, there should be significant overlap in when each team starts and finishes its part, so they are all learning and sharing feedback to improve the overall product. Not only does this overlap significantly cut C2R cycle times, it also improves the quality of the product thanks to strong feedback from various sources early in the development process, where it matters.
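The arithmetic behind overlapping phases can be sketched in a few lines (the phase durations come from the 26-month example above; the overlapped start offsets are hypothetical, chosen only to show the effect):

```python
def completion(phases):
    """Overall cycle time when each phase = (start_month, duration_months)."""
    return max(start + duration for start, duration in phases)

# Serial hand-offs, as in the example above: 20 + 4 + 2 = 26 months.
serial = [(0, 20), (20, 4), (24, 2)]

# Overlapped: support prep starts at month 12, marketing/sales at month 16
# (the start offsets are hypothetical; the point is the overlap).
overlapped = [(0, 20), (12, 4), (16, 2)]

print(completion(serial))      # 26
print(completion(overlapped))  # 20
```

With enough overlap, the shorter phases finish inside the development window, and the C2R cycle time collapses to little more than the longest phase.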

Strategies for Shrinking C2R Cycle Time

So, how do we shrink the total Idea-to-Revenue Cycle time so we can begin realizing revenue as early as possible? And how can we, at the same time, make a better product?

Below, we have outlined a number of strategies that companies can implement to improve their C2R cycle times. The more of these strategies companies implement, the shorter their C2R cycle times will be.

A. Defining & Managing C2R Process

The first thing companies can do is implement a more reliable, formalized process for reviewing and managing the concept-to-product pipeline. The company should have a system and process in place through which ideas can be submitted from any corner of the company. These ideas are reviewed and, if found to be viable, move to the next step until either rejected or completed. The entire process should be as automated as possible to remove unnecessary delays.

1-Submit New Concept for review

Any employee who has a great idea should be able to submit it under a specific “reason for doing,” such as increasing revenue, lowering costs, or improving customer experience.

This should typically be no more than one page long and should briefly describe the problem to be solved, how this concept would solve that problem, who would benefit, and what the expected impact would be to the company and potential customers.

2-Go/No Go

The originator of the idea then presents his/her idea in 10 minutes or less to the Concept Review Committee. This should be done on designated days, say the last Friday of each month. The review committee should ideally include the CEO so that the idea is either passed or killed quickly.

This matters because a lot of ideas go nowhere simply for lack of a decision to advance them to the next level or kill them.

3-Form the Concept Development Team

If the Review Committee gives its thumbs-up for the concept to move forward, the next step is to form the Concept Development Team. The Team should include: the concept originator; one or two top sales reps; one or two top engineers; one or two marketers; a product manager; and anyone else who wants to join the Team for this concept.

4-Brainstorm market potential

The Team will then meet for a 3-4 hour session and hash out the concept in order to arrive at answers to the following questions, among others:

  1. What is the compelling need that this product concept would address?
  2. Who is the targeted end user for this product concept?
  3. What would be the “before” and “after” scenario of using this product?
  4. What would make the “after” so much more compelling than the “before”?
  5. What will need to happen to make this work?

Once the session is over, a member of the marketing team should prepare a short product marketing plan for the product concept based on the brainstorming session’s outcome. The product manager should do the same and build a requirements document for the product.

5-Validate Assumptions

The next step is to use the product marketing and product requirements documents as background for research to validate key assumptions. At this point, what the company wants to find out is:

  1. How is the problem that this product concept is expected to solve currently addressed?
  2. What is the cost of the current solution?
  3. To make the ROI compelling to the buying prospect, what should the ideal price be?
  4. What will it cost the company to produce the product?
  5. How many units will the company have to sell in order to break even?
  6. Is there a sufficient user base to significantly exceed the breakeven point?

These are the basic questions that need to be answered before moving forward. These questions could have been asked before the brainstorming and documenting efforts, but now the company can answer the questions more confidently.

6-Go/No Go

The answers to the above questions are incorporated back into the Product Marketing Plan and are submitted to the Concept Review Committee for Final Go/No Go Decision. At this point the business decision has been made and what is left is primarily departmental level decisions.

7-Design the Product

The Product Design team now has everything it needs to get started. It knows both the business and user requirements as objectives for the design: total and unit costs; timelines; user requirements; and manufacturing and support requirements. This stage corresponds to steps #4 (Proof of Concept) and #5 (Design) of the concept-to-product steps discussed above.

The Design Team can now build a proof of concept and flesh it out into one or more prototypes to show to the Concept Team formed at the Brainstorming stage and gather its feedback.

All along, each department is fully informed of the product progress, and the documentation is becoming more and more complete as the product concept matures into reality.

8-Make the Product

By the time the product comes out of the Product Development group, the documentation for it is complete as well. All departments including Training, Support, Marketing, and Sales are fully ready to market, sell, and support the product.

B. Coherent Strategy: The Focus–Leverage Dichotomy

When companies face difficulties in executing their strategies, it is almost always the result of a lack of coherence in the strategy. A coherent strategy is one that is both logical and consistent, and this can only be achieved when the two sides of the strategy coin – leverage and focus – are in full balance and alignment.

Leverage and Focus are opposites.

Internally, Leverage tries to maximize return on the company’s resources and assets by “generalizing” the use of the asset or resource. Focus tries to do the opposite and specialize the use of the resource.

Externally, Leverage tries to broaden the base for a given product by selling it to as large an audience as it can, while Focus does the opposite by finding a tight niche where the product becomes the only viable alternative.

Each has its advantages and disadvantages—its upsides and downsides. And getting this right is the real work of strategy development.

In the particular context of C2R cycle time acceleration, how the company groups its assets and resources to create both Leverage and Focus at the appropriate level will make a huge difference in cutting C2R time. As we have outlined above, each department has a focus but can be leveraged across products and markets. Additionally, each product group is focused on a particular product, while best practices are leveraged across the entire department and made available to different product groups.

This is a deep topic on its own and we will have more to say on this in later articles.

C. Redesigning the Company

We have described the Concept-to-Revenue process above. However, it is difficult to implement unless the organizational structure allows teams to form easily around new product concepts.

While it makes sense to group similar activities (Support, Marketing, Design, etc.) together to facilitate the sharing of professional skills and best practices, it is equally important to accept that a second layer sits across all departments, forming around a given product or market segment.

Which grouping to choose—whether to group by product or by market—is a decision made by examining both internal and external constraints and opportunities. This is a significant topic in and of itself and we will reserve it for another article.

D. Using Product Platforms

The concept of a Product Platform is easy to understand but difficult to implement: not because it is complex, but because it requires a patient and diligent approach.

At its essence, it is about factoring out the common denominator and making it available to all products so that they automatically benefit from it. Once a company makes more than one product, it can identify the elements the products share, factor them out into a platform, and then:

  • Make the platform more general-purpose so other products can use it
  • Make it simpler and cheaper to produce
  • Make it more robust and reliable

From then on, all products that use this platform “inherit” its reliability, simplicity, low cost, and any other attributes.

Furthermore, the company can dedicate a team of designers and developers to the platform itself, making it increasingly capable, simpler, more reliable, and so on.
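The "inherit" analogy maps quite literally onto code. As a minimal, hypothetical sketch (the class and attribute names here are purely illustrative, not drawn from any real product line), factoring common attributes into a shared platform class lets every product pick them up automatically, and a single platform-level improvement propagates to all products at once:

```python
class Platform:
    """Common denominator factored out of the product line."""
    # Attributes hardened once, at the platform level.
    reliability = "high"
    unit_cost = 100

class Sedan(Platform):
    body = "sedan"          # only what is different is designed anew

class Hybrid(Platform):
    drivetrain = "hybrid"   # only the new technology is new work

# Every product "inherits" the platform's attributes automatically.
assert Sedan.unit_cost == 100 and Hybrid.reliability == "high"

# The dedicated platform team improves the platform in one place...
Platform.unit_cost = 90

# ...and all products benefit without any product-level change.
assert Sedan.unit_cost == 90 and Hybrid.unit_cost == 90
```

The design choice mirrors the text: products define only their differences, while shared attributes live in exactly one place, so a platform improvement never has to be repeated per product.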

In addition to lowering the cost and complexity of new products, a product platform dramatically reduces the Idea-to-Revenue Cycle time of new products, since only what is different or new needs to be designed and built. Everything else is already there.

For example, when Toyota launched the Prius, its hybrid car, it designed and built only the new technologies and parts it needed; the rest was carried over from the Camry platform, significantly reducing the C2R cycle time for the Prius. It also helps that Toyota is one of the best practitioners of product platforms, using them not only within the Toyota brand but also across brands, with Lexus.

Product platforms are another deep topic on which volumes of books and articles have been written, yet few companies seem to take full advantage of them.

What We Can Do to Help

So far, we have outlined some very powerful and effective principles for shrinking Idea-to-Revenue cycle times, some of which you may already be implementing. When companies fail to take advantage of these principles, it is primarily for lack of time or expertise.

SOMAmetrics has the resources and expertise to help you assess where you are and where you need to be, work with you to develop a viable plan for bridging the gaps, and help you execute that plan to reduce your Idea-to-Revenue cycle time.