Five Factors Affecting Revenue Growth (Factor 2): Sales Process is Buyer Process

Factor 2: Sales Process is Buyer Process

Research by McKinsey & Company, Bain & Company, and the sales training firm RAIN Group all points to the same conclusion: Buyers now prefer to work with Sellers who align their process to the Buyer’s process. A Seller that does not comply is one that ends up complaining about unreturned phone calls and emails, and about rising marketing and sales costs.

The New Buying Process

The studies show that Buyers prefer to conduct their own research and then decide which Sellers get invited to work with them to further refine a possible solution.

The buying process starts internally, typically when some pain becomes no longer acceptable, driving a new initiative to address it. The head of the business or functional unit (the business driver) who is responsible for the resolution of the issue now heads this new initiative. She typically assigns someone on her team to conduct preliminary research and report to her with findings and recommendations.

This is the beginning of the “Buying” process. At this point, no seller is aware that the buying process has started.

The team member assigned to this task now begins the research by entering keywords in her preferred search engine. She then reviews the search results and begins tagging the promising ones.

Later, she will go deeper into each result to determine which will make her final cut. At no point has she called any company—this is all digital content review.

A few days later, she returns to her boss to report and make her recommendations. The business driver then decides which vendors they will review—in other words, who makes the short list. She then tells her researcher to contact the short-listed vendors and schedule meetings with their representatives.

The New Selling Process


Since the buying process starts with research, the first thing a Seller must do is make sure its website has deep, relevant content that addresses the issues its market typically faces.

If the Seller has a focused market as described in Factor 1 above, then not only can it stay abreast with changes in its chosen market, but it can actually be ahead of them with thought leadership. The Seller can anticipate trajectories in regulations, changes in norms, shortages of key supplies, etc.

Because it focuses specifically on a single market and because its content has depth, the Seller’s content will surface among the many sources examined by the Buyer’s researcher. The Seller’s chances of making the short list are high, and it will likely be invited to present.

Lead Generation

In addition to having highly search-optimized content that drives inbound leads, if the Seller also runs outbound lead-generation campaigns, it is virtually guaranteed to make the short list of vendors that get invited. Its emails are likely to be opened because their message is directly relevant and its subject matter is continually refreshed. Its voicemails hit the mark and are likely to generate call-backs.


When invited to meet the business driver, the Seller must recognize that this is a collaborative event and should invite the buyer to fully participate in defining the problem as well as the solution. Read about Factor 3: Sales and Marketing.

This is exactly what Buyers today are looking for, since their needs are complex and call for customized solutions rather than ready-made ones. They want Sellers who are willing to work toward a solution that functions perfectly for them.

The Problem with the Old Selling Process

Let’s compare the new selling process with the old. The old selling process consists of “blasting” a huge list with irrelevant emails and “dialing for dollars” in hopes that someone picks up. If, through sheer persistence, the sales rep gets an appointment, the chances it will get canceled are high.

And if the rep actually gets the meeting, the rep typically blows it by forcing a process the Buyer does not find useful: first I am going to tell you about me. Then I am going to ask you about you. Then I will show you my product. Then I will send you my proposal…

The old seller-driven, seller-biased way no longer works. Sellers must understand that Buyers are looking for committed partners.

Five Factors Affecting Revenue Growth (Factor 1): Market Focus

Factor 1: Market Focus

Of all Five Factors, this one probably drives high growth more than any other.

Geoffrey Moore defines a B2B market segment as the intersection of industry, role in that industry, and geography— for example, hospital administrators in the US. Evidence suggests that the tighter the definition of a market segment, the greater the performance of a Seller in that segment.

Therefore, if all other factors are equal, a company that sells to “Hospital Administrators in the US” will see a higher growth rate and greater profitability than one that sells to “Health Care Professionals in the US”. To extend the example, a company that sells to “Hospital Administrators in California” will see the fastest growth.

What usually happens, however, is that Sellers keep widening their definition of the market they serve, thinking that they will get more business by doing so. The reality is often the opposite. To demonstrate this principle, we will need to analyze three different markets.

Let’s assume that XYZ Corp is a $100 million provider of software products for the healthcare industry.

The question is where it would realize faster growth and profitability: Hospital Administrators in California, Hospital Administrators in the US, or Health Care Professionals in the US.


Segment                                     Estimated number   Key competitors   Key issues
A. Hospital Administrators in California                       4 – 6             Managing value-based reimbursement
B. Hospital Administrators in the US                                             Managing value-based reimbursement
C. Health Care Professionals in the US      128,000            200 – 300         Regulations in healthcare

The prospect of selling to 128,000 Health Care Professionals in the US instead of only 400 Hospital Administrators may seem more appealing. However, what really matters is the perspective of the Buyer: does XYZ offer the best value?

Imagine how XYZ Corp will have to demonstrate such evidence of value to 128,000 Health Care Professionals in the US.

First, it has to reach them in some way. You can imagine what it takes to reach such a widely diverse audience. Does it attend 211 conferences? Does it run TV or print ads? Does it try to buy email lists? What would be the subject line? What would its compelling message be for a wide assortment of professionals including Doctors, Nurses, Therapists, and Hospital Administrators? How would it organize its sales force: by geography or by profession?

Whichever road it takes, XYZ Corp’s choices remain the same—either go shallow and wide, or spend an enormous amount of money to build the necessary expertise in each of these professions.

When faced with that choice, most companies choose the wide and shallow route rather than scaling back to go narrow and deep in a vertical strategy. Unfortunately, companies that go shallow and wide are always beaten by those that go narrow and deep, which is why their cost of sales and marketing rises faster than their revenues.

In reality, there is a third choice, one that is actually better than either of the above: go narrow and deep in only one segment at a time.

For example, if XYZ Corp decides to go narrow and deep with a focus on Hospital Administrators in California, it will face a totally different scenario. It can now direct its product, messaging, and services to just that market. It only has to compete with 4 to 6 other providers, and if it chooses to attend conferences, it only needs to attend the top 3. Both are doable tasks.

Finally, its sales reps only have to work with Hospital Administrators in CA, so it is perfectly feasible to have sales reps who are experts on the issues that their customers face.

In which segment would you say that XYZ Corp has a better chance of closing more deals faster and at better prices: segment A, B, or C?

It is worth repeating that Focus is the most important of all the Five Factors Affecting Revenue Growth.

Read about the second factor driving revenue growth here.

The New Realities of B2B Sales (Part 1)

The 21st Century Reality

A fundamental shift we see in Business-to-Business (B2B) is that prospecting and sales results are becoming more difficult, time-consuming, and expensive to produce. Lead conversion rates are lower, sales cycles are longer, and closing ratios are not what they used to be.

The success of acquiring and retaining customers relies on delivering exactly what customers want and in the manner they want it. For modern B2B companies, this means developing highly personalized and specific products and messaging to address individual business needs. The idea of marketing and selling the way customers want to buy is not something that most B2B companies truly get, let alone do well. Most tend to have a standardized method that fits their internal needs, which they try to force on their buyers—who consequently shut them out.

The realities of B2B sales and marketing have changed, and we have outlined two of these changes below.

In coming articles, we discuss other changes and recommendations for better aligning with Buyers to increase the number of quality leads that close faster and at a higher rate.


Reality 1: The Millennial as the New Buyer

A 2016 American Marketing Association survey discovered that 73% of all millennial workers are in some way involved in the purchasing decisions of their companies. Furthermore, the number of Millennials in charge of B2B purchasing power increases every year. In 2016, 34% of sole buying power was in the hands of employees under age 35.

With Millennial control comes the digitization of the buying process. As consumers, Millennials conduct 80% or more of their transactions online—preferably on mobile devices. They do not see why they can’t do the same at work. Millennial buyers are also more likely to research and review companies and their products rather than wait for sellers to approach them. This process allows them to feel more confident in finding sellers whose services and products fit their specific needs.

In addition, Millennials incorporate social media into their businesses and use it as a purchasing resource. The collaborative aspect of social media helps companies form teams that review and analyze potential products before making a purchase decision. As recently as ten years ago, B2B sales transpired in a linear fashion from sellers to buyers. Today, sales often begin with buyers checking the reviews and advice of their companies’ networks before considering a purchase. Purchasing is now strongly influenced by buyers operating in a web-like system.


Reality 2: Strategic Procurement is Not Purchasing

The terms “procurement” and “purchasing” are often wrongly used interchangeably. Procurement is the process of selecting vendors, which involves vetting, establishing purchasing terms, and negotiating contracts on behalf of prospective clients. It refers to the broad range of processes that lead up to the purchase of goods and services.

Procurement is a strategic procedure, while purchasing is transactional. When developing strategic procurement, businesses integrate and align the purchasing needs of their various lines of business and departments with their overall company-wide objectives. Doing so controls not only costs but also the quality and reliability of the input products and services needed. Through strategic procurement procedures, B2B companies can continuously re-evaluate and improve their purchasing actions to optimize their resources and generate efficiency.

B2B sellers that fully appreciate this difference and align their marketing and selling strategies with the procurement strategies of their buyers will find more success.

Read about how the globalization of competition and digital transformations of B2C companies are changing the landscape for B2B companies in “The New Realities of B2B Sales (Part 2)”.

Giants like General Motors Know the Advantages of Outsourcing. Here’s Why You Should Too.


General Motors Has Not Made a Car in 70 Years

And I respect GM for that. It would be foolish of GM to ignore the advantages of outsourcing and try to do the whole job itself. Let me explain.

GM does what is strategic and outsources the rest – as it should.  The company designs cars. It details the specifications for its powertrains, brakes, lights, electrical systems, tires, etc. but – and this is a big but – it doesn’t make any of them. It buys its tires from the big tire manufacturers. It buys its batteries from the big battery companies. It buys its brakes from the best-of-breed brake manufacturers. But it doesn’t even try to compete with companies that are superb at making the components of a car.  Instead, it buys from them.

To Get the Advantages of Outsourcing you Must Understand Core vs. Context

GM understands the difference between core and context. Core functions are those that make a genuine difference in the marketplace; context is everything else. Context functions need to be done well, but they don’t deal directly with marketplace success.

Office buildings are context.  Accounting is context.  Designing windshields is core but manufacturing them is context.

Successful companies intuitively recognize the difference between core and context. They focus their energies on what really makes a difference and they outsource the rest.

Marketing Strategy is Core; Marketing Operations are Context

Only management can decide which markets it wants to pursue and how it wants to pursue them.  Only management can determine its product and service pricing. Only management can develop the company’s marketing strategy. Its marketing strategy is core.

But the company is not going to be recognized in the market for its excellence in installing and operating marketing systems. No company will generate great profits because it did a good job scrubbing its email lists. The company’s stock price will not budge because it wrote a good piece of marketing collateral that an agency could have written.

A prudent company will focus its energies on marketing strategy and outsource its marketing operations. The reason is simple. One of the advantages of outsourcing marketing operations is that companies which focus on marketing operations need to be expert at it. They need to know how to write compelling copy, lay out a persuasive website or brochure, and deliver the corporate message cost-effectively through a wide range of channels. They need to understand the full range of digital sales and marketing software tools and know exactly where each fits in a campaign. Some of these tools are best for small companies and others for large companies. Marketing operations experts know which is which.

Most companies, particularly small and medium-sized ones, can’t afford to develop this expertise – and they shouldn’t. Just as those companies outsource their legal work to legal firms, outsource their accounting work to accounting firms, and outsource their janitorial work to janitorial companies, they should also outsource the implementation of their marketing strategies to boutique firms that live and breathe this sort of work.

It just makes sense. Companies should only do what they do best and what makes a difference in the marketplace. The rest should be outsourced.

Is the Patient an Afterthought in Healthcare in America?


A close examination of healthcare in America leads to the inevitable conclusion that patients are one of the least important players in the healthcare system. I’m the first to admit that this claim is both counterintuitive and provocative, but hear me out.  The evidence could not be clearer. This is particularly ironic because the healthcare field is staffed with professionals who were attracted to the field specifically to provide patient care. The problem does not lie with the people in the system; the problem lies in the system itself.


Healthcare in America is a Private Sector Function

Unlike every other developed country in the world, America treats healthcare as a profit-making operation. This is true for profit-making institutions as well as for non-profit and not-for-profit healthcare organizations. Rather than talk about profit, these hospitals talk about a “surplus” that is required to see the hospital through lean times, fund the purchase of new equipment, or grow the institution. This is the first in a series of blogs that will provide ample evidence of this remarkable claim. Stay tuned.

Turning a profit is baked into the very DNA of the American culture. It is part of what it means to be an American. Healthcare is no exception.

But unlike every other business in the country, in healthcare there is remarkably little focus on the holistic welfare of patients. Every other business in the country – and most throughout the world – has a customer service center tasked with handling customer problems as they arise. Customer satisfaction is paramount. At the end of every call I make to customer service departments, the agents always ask, “Is there anything else I can help you with?” That question rarely comes up in healthcare!

Let me give a few examples of the extent to which healthcare in America is a profit-making industry rather than a service to the community.


The Arbitrary Nature of “Master Charge” Lists

Hospitals develop what they call “master charge” lists. These are the prices they propose to charge patients admitted with various diagnoses. In fact, these lists are just starting points for negotiations with insurance companies. During the negotiations, the insurance companies will negotiate deep discounts from these lists, and the negotiators will be seen as heroes because they were able to win those discounts. But the negotiation is highly misleading because the “master charge” lists are created only for the purpose of negotiating with the insurance companies. Hospitals and health clinics don’t have any solid data about what it really costs to treat medical conditions because they don’t have cost accounting systems that would allow them to develop those costs. They make these lists up out of thin air.

Medicare and Medicaid don’t pay according to these lists.  They ignore them.  The federal government pays according to its own payment schedule.  Hospitals have the choice of charging the government in line with those government payment schedules or not taking Medicare and Medicaid patients. Most hospitals are willing to work with the government payment schedules.

The “master charge” lists vary considerably from one institution to another. This is true for institutions of comparable quality and in the same geography. Further, unlike restaurant menus, these price lists are rarely shown in advance. This makes comparative shopping impossible!

But even if the “master charge” lists were available, it wouldn’t make much difference in most cases. When a loved one is screaming in pain and terrified of imminent death, her relatives are unlikely to show the same due diligence in selecting a healthcare provider that they would show, for example, in buying a new car.


With Healthcare in America, those Who Can Afford the Least Are Charged the Most

The only patients who get hit with the “master charge” prices are poor people who can’t afford to buy insurance in the first place. Relatives may take an ailing family member to the hospital in a moment of desperation and sign whatever pieces of paper are put before them. They may not realize they’ve signed legally binding financial commitments with no upper limit.

When the bill comes it could be in the five figures for something as simple as a paper cut. Anything halfway serious is liable to be in the six digits.  And the hospitals and clinics are serious about collecting on their bills. They retain a cadre of well-paid debt-collecting lawyers who are first-rate at what they do.

First, they take the sponsor’s savings accounts (the sponsor being the relative who signed the admission papers). Then they go after her retirement funds. Those are easy to pick up. Then they take her home. A sponsor who tries to declare bankruptcy discovers that healthcare bills – like education loans – cannot be discharged in bankruptcy. That means that no matter how little money she may have or how little she may earn, she can’t escape healthcare bills through bankruptcy. It’s not even worth thinking about.

This aggressive bill collecting effort is a clear sign that the welfare of the healthcare institution, not the patient, is what is at stake.  I am not trying to argue that people should not pay their bills. But having a different set of rules for collecting healthcare debts than for collecting all other debts tells me there is a double standard.

This odd situation doesn’t mean that hospital administrators are acting in a malevolent way. It means they are acting in a way that our laws and customs endorse. Those administrators have a fiduciary responsibility to their Boards of Directors to collect all the money owed to them. They would be negligent if they did not try to collect every account as vigorously as possible.


Fee-for-Service is NOT Geared to Good Patient Care

For the last hundred years or so, general practitioners and specialists have charged on a fee-for-service basis. That means exactly what it says: doctors provide services and bill someone (e.g., the patient, an insurance company, the government) for the service provided. There is no requirement that the service actually improve the patient’s medical condition. None whatsoever. Often hospitals or clinicians carry out tests not because they contribute to their patients’ well-being, but because they protect the medical community in the event of a lawsuit.

Typically, patients approach GPs with a complaint of some type. The GPs refer the patients for a series of tests that they believe will contribute to the patients’ recovery. Often, they also refer their patients to specialists. The specialists may order still more (and often more expensive) tests than the GPs did.

In the end, it really doesn’t matter whether the patients improve or not – although there is a universal hope that the tests and procedures will lead to improvements. But, regardless of the outcome, the laboratories, medical practitioners, and hospitals all charge – and collect – for the work they did, not the results they deliver.

In no other industry will professionals, executives, mechanics, or salesmen get paid for their activities without respect to the achievement of their end goals. Healthcare is unique in this respect.

To put it more bluntly, the welfare of the patients is simply not a key factor in the operation and economics of the healthcare system. I believe that every individual in the system acts in good faith in contributing to the welfare of their patients within the protocols of their professions, their institutions, and the law. Each professional likely plays her own part as well as possible, but the system rarely assigns any one individual to look after the welfare of the patient in a holistic sense. This is a sign of a problem with the system of healthcare in America – not the administrators or medical staff.


We Have an “Illnesscare” System, NOT a Healthcare System

If we are completely honest, we need to acknowledge that, with the exception of public health (which is a marginal component of the overall healthcare system), our healthcare is not primarily concerned with promoting health.  There’s no money in it. The real money is in treating patients after they get sick, suffer from cancer, sink into a preventable chronic disease, or break a bone. That’s where the big money is.  Saving lives and working minor miracles is heroic. “Illnesscare” galvanizes everyone who witnesses it.

But promoting health by recommending improvements to diets, exercise programs, or cleaning up the environment really doesn’t carry the same WOW factor. It is routine and undramatic. But that is truly right at the heart of healthcare and as far removed from “illnesscare” as one can imagine.


Stay Tuned for More Revelations about How Healthcare in America Works

My claim in the first paragraph that patients are the least important part of the system of healthcare in America needs a lot more justification than I’ve given here. I urge you to read the entire series of upcoming blogs about how the healthcare system works (or doesn’t work).  You will learn that we have one of the most expensive and least effective systems in the world. You will learn that our government agencies mandated to protect our health often do exactly the opposite. (And these dynamics started long before Trump came on the scene.) You will learn that Americans are among the least healthy demographic on the planet – and that this poor health is driven by policies that are known to be counterproductive. It is not driven by callous healthcare staff.

Further, what you will read in this healthcare series is NOT a conspiracy theory or a secret. Far from it. In fact, everything I’ll talk about is well known and published in articles and books that anyone can read if they choose to. But, given the pressures of everyday life, people just don’t have the time, energy, and motivation to learn about the greatest threats to their health.



Read about hospital readmission rates as a sign of poor healthcare delivery here: Part 1, Part 2

Are 30-Day Readmissions Rates a Reliable Indicator for Poor Healthcare Delivery?  (Part 2 of 2)


A look at poor healthcare delivery through hospital readmission rates.

Obamacare (Patient Protection and Affordable Care Act) has provisions that require the Centers for Medicare and Medicaid Services (CMS) to financially penalize hospitals that have unacceptably high 30-day readmission rates for Medicare and Medicaid patients. Institutions with high 30-day readmission rates in just a handful of situations will suffer financial penalties for ALL Medicare and Medicaid charges during the following fiscal year – not just the handful that are monitored.  Specifically, the CMS tracks 30-day readmission rates for[1]:

  • Heart failure
  • Heart attack
  • Pneumonia
  • Chronic lung problems (emphysema and bronchitis)
  • Elective knee and hip replacements

The penalties can be as much as 3% of all Medicare and Medicaid charges for the coming fiscal year. In most organizations, this can easily amount to millions of dollars. In larger institutions, it can amount to tens of millions of dollars.
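To put the scale of that penalty in perspective, the arithmetic can be sketched in a few lines of Python. The $80 million charge figure below is a made-up example; only the 3% cap comes from the policy described above.

```python
# Hypothetical illustration of the CMS readmission-penalty cap described above.
# The hospital's annual charge figure is invented; only the 3% cap is from the text.

PENALTY_CAP = 0.03  # maximum penalty: 3% of the year's Medicare/Medicaid charges

def max_penalty(annual_charges: float) -> float:
    """Worst-case penalty for a hospital with the given annual charges."""
    return annual_charges * PENALTY_CAP

# A hospital billing $80 million a year could lose up to $2.4 million.
print(f"${max_penalty(80_000_000):,.0f}")  # prints $2,400,000
```

Note that the cap applies to the whole year’s Medicare and Medicaid charges, which is why a penalty triggered by a handful of monitored conditions scales with the institution’s entire government-payer volume.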

The rationale behind this policy is that high 30-day readmission rates are a reliable sign of poor healthcare delivery. The idea is that if the hospitals had done a good job in the first place, patients wouldn’t need to come back so soon.

Part 2 of this blog argues that 30-day readmission rates are not a good metric for assessing the quality of care.  I fully recognize that my position is incompatible with the current wisdom, but I’ll give several reasons to support this position.  I suspect there are many others who feel the same way but haven’t argued their position.

The most striking issue that occurs to me is that 25% of all hospitals will automatically be classified as “losers” regardless of the reasons for their high readmission rates. This automatic and simple-minded categorization is grossly unfair.


Hospitals Are Only One Component in a Complex Healthcare Web

Hospitals are highly visible nodes in a complex web of healthcare delivery.  Other components include general practitioners, medical and surgical specialists, independent laboratories, the social welfare system, and family support among others. Unfortunately, it is not unusual for the elderly to have no family support. Everyone knows healthcare is a highly fragmented and fragile system. Failure in any component of this web can lead to readmissions. Nevertheless, Medicare and Medicaid (and perhaps society at large) hold hospitals solely accountable for readmissions.

Given the complexity of the healthcare web, it is highly unfair to single out hospitals as culprits when many of the factors affecting readmissions are beyond the control of hospitals.

Readmissions are a function of hospital care and discharge planning.  That is true.  But it is not the full story.  Another factor that impacts readmissions is the severity of the illnesses treated; those with severe illnesses are more likely to be readmitted. Hospitals can lower their readmission rates by declining to treat patients with severe illnesses.  I know that this is gaming the system, but it makes the metrics look good.

In some communities, elderly patients are discharged into the care of loving, stable, supportive families. In other communities, elderly patients go back to a bleak room in solitude. When they need help – even a ride to see their GP – there is no one to turn to. In other cases, elderly patients may live with their children.  But their children often have jobs and lives of their own. Although they are available to give help sometimes, they are simply not available to help all the time.

At discharge, hospitals routinely advise patients to schedule follow up appointments with their GPs. Patients promise to do so – but often don’t. In some cases, they don’t have GPs to call.  In other cases, they simply forget to make the appointments.  Sometimes they try to schedule an appointment but cannot get one for a month or more. Then there are the patients who simply don’t have access to transportation to get to their appointments.

Discharge staff generally give extensive instructions to patients about their medications, diet, exercise, etc. But it is not unusual for patients to fail to understand these instructions. Or they understand but they don’t have the money to buy the medications. Or they have the money for their medicines but they forget to take them.

There are any number of points of failure and many of them are beyond the hospital control – but hospitals take the hit for readmissions.


Race and Minority Status Are Correlated with Readmission Rates[2]

Blacks and Hispanics have higher rates of readmission to hospitals than whites. Many of these readmissions are avoidable. This means that hospitals serving Black and Hispanic populations are doomed to look bad on their readmission stats. There is no justice in this.

Why are race and ethnic background so important in determining readmissions? For one thing, the research shows these patients are less likely to schedule follow-up visits with their GPs or ongoing caregivers. They are also less likely to have GPs at all and, therefore, more likely to rely on their local hospitals. Many new immigrants don’t have adequate proficiency in English to understand their discharge instructions or to read the written materials their hospitals give them. They are also less likely than whites to take the initiative in looking after their own health; they often take the position that whatever happens to them is beyond their control. Some don’t trust Western medicine and discount what they are told.

These demographics suffer more anxiety and depression than whites. These mental health issues contribute to the likelihood of readmissions.

These demographics often have co-morbidities. In other words, they often have several problems at the same time.  If patients don’t bring their other problems to the attention of hospital staff – or if hospital staff fail to stumble across them – those problems can pop up after discharge and trigger other, but unrelated readmissions.

The factors listed here are not unsubstantiated biases but the findings of solid research funded by the Centers for Medicare and Medicaid Services and conducted by The Disparities Solutions Center, Mongan Institute for Health Policy, Massachusetts General Hospital. Yet, even with this solid research, well known in the healthcare community, hospitals serving these disadvantaged populations are held responsible for readmission rates beyond their control.


Readmission Rates Vary by Geography and No One Knows Why[3]

In Part 1 of this blog, I showed a map of the readmission rates across the country. There are two interesting points about that map. The first is that it remains unchanged year after year. This means that the geography-based dynamics are consistent year over year.


The other interesting point is that the underlying health profile across these geographic regions is essentially the same.  In other words, the factors that drive readmission rates are not tied to differences in the health of the general population on a regional basis.  There are other drivers, but those drivers are not well understood.


We Think We Know the Answers; Not Sure We Do

The experts are in general agreement about how to reduce readmission rates. Surprisingly, very few of the hospitals that adopt the recommended practices actually see reductions in readmission rates! This is counterintuitive.

The four generally recognized ways to reduce readmissions are:

  • Improve discharge management with follow-up
  • Patient coaching
  • Disease/health management
  • Telehealth services

Unfortunately, the evidence shows that these common-sense techniques do NOT generally lead to lower readmissions. The research is consistent on this finding in community hospitals as well as in teaching and research hospitals. A CMS study of changes in readmission rates from 2008 to 2010 found that reductions in readmission rates are slow and inconsistent.


Do You Like to Play Whack-A-Mole?

As a boy, I remember going to the country fairs in August and playing Whack-A-Mole. Some of you may know the game. The game has a board with about a dozen holes cut into it. “Moles” would pop out of the holes at random times; I never knew when and where the next one would pop up. My job was to hit the mole on the head with a mallet. I often missed.

In some respects, taking steps to reduce 30-day readmission rates reminds me of playing Whack-A-Mole – although it shouldn’t. It seems that even though we know what we should do to reduce readmission rates, doing the “right thing” rarely leads to the desired outcome. To the extent this is true, it suggests that we don’t understand the underlying problem or that we don’t know how to address the problem.


Here Are the Best Ways to Reduce Readmission Rates

The best way to reduce readmission rates is to only accept patients who are not very sick in the first place. These folks can be patched up fairly quickly and put back on the street with a much lower chance of being readmitted.

Another technique is to reduce the overall intensity of healthcare delivery.  One would think that intensive levels of healthcare would lead to healthier populations. That, in turn, would lead to lower rates of readmission. Not true.

A third technique is to change the regional practices of hospital site care.  In some areas, patients are more likely to go to a hospital for initial care rather than a local clinic or a GP. In those cases, readmission rates are higher. If we could discourage patients from using hospitals as their primary source of healthcare, we could reduce readmission rates.

We also need to change the financial incentives. Hospitals that are given the choice between leaving a bed empty and losing the revenue or readmitting a patient and increasing their readmission counts will rarely pass up the opportunity to earn a dollar today.

Experience also shows that taking steps to reduce readmissions in only one area (e.g., better discharge planning) has little impact. But if steps are taken in a number of mutually reinforcing areas, the hospital will see better results.


So, What Does It All Mean?

So, what’s the “takeaway” from all this? Well, the first thing that occurs to me is that this is a very complex problem that we don’t seem to understand well in spite of the focus it has received.

Second, we should not hold hospitals accountable for outcomes they cannot control.  We need system-wide changes, not simply improved hospital procedures.

Third, even teaching and research hospitals – where we presumably find the best-of-the-best in healthcare – have not shown significant improvements in spite of their efforts.

Fourth, readmission rates vary geographically but change very little over time for any given geography.  That means there are forces at play we have not yet identified.

Fifth, racial and ethnic minorities have higher rates of hospital readmissions. These demographics have lower levels of trust in the “system,” take less personal responsibility for their health, have lower levels of health literacy, and suffer from higher rates of mental illness.

Sixth, 30 days is an arbitrary time frame.  It’s even possible that hospitals that focus on reducing 30-day readmissions will create unexpected negative consequences in other parts of the delivery system – although no research has substantiated this fear.


Read Part 1 HERE


[1] A Guide to Medicare’s Readmissions Penalties and Data.

[2] Guide to Preventing Readmissions Among Racially and Ethnically Diverse Medicare Beneficiaries.

[3] The Revolving Door: A Report on U.S. Hospital Readmissions.


Are 30-Day Readmissions Rates a Reliable Indicator for Poor Healthcare Delivery? (Part 1 of 2)


A look at healthcare delivery quality through hospital readmission rates.

Obamacare (the Patient Protection and Affordable Care Act) has provisions that require the Centers for Medicare and Medicaid Services (CMS) to financially penalize hospitals and clinics that have unacceptably high readmission rates for Medicare and Medicaid patients within 30 days. Institutions with high 30-day readmission rates in just a handful of situations will suffer financial penalties on ALL Medicare and Medicaid charges during the following fiscal year – not just the handful that are monitored. Specifically, the CMS tracks 30-day readmission rates for[1]:


  • Heart failure
  • Heart attack
  • Pneumonia
  • Chronic lung problems (emphysema and bronchitis)
  • Elective knee and hip replacements


The penalties can be as much as 3% of all Medicare and Medicaid charges for the coming fiscal year. In most organizations, this can easily amount to millions of dollars. In larger institutions, it can amount to tens of millions of dollars.
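To make the scale concrete, here is a minimal sketch of the arithmetic. The hospital's annual CMS billings and the applied rate are invented figures for illustration; only the 3% cap comes from the policy described above.

```python
def readmission_penalty(annual_cms_charges: float, penalty_rate: float) -> float:
    """Penalty applied to ALL CMS charges for the year, capped at the 3% maximum."""
    MAX_RATE = 0.03  # statutory cap described above
    return annual_cms_charges * min(penalty_rate, MAX_RATE)

# A hypothetical mid-size hospital billing $150M to CMS, hit with the maximum rate:
print(f"${readmission_penalty(150_000_000, 0.03):,.0f}")  # $4,500,000
```

Even at a fraction of the cap, the penalty dwarfs the cost of the handful of monitored readmissions that triggered it, which is the point of the policy.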

The rationale behind this policy is that high 30-day readmission rates are a reliable sign of poor healthcare delivery. The idea is that if the hospitals had done a good job in the first place, patients wouldn’t need to come back so soon.

Ironically, I would say that this claim is both true and false. There are good reasons to treat 30-day readmission rates as a reliable surrogate for poor healthcare delivery.  But there are equally good reasons to treat this arbitrary metric as completely misleading.  We will explore both sides of this argument. Part 1 of this blog will argue that 30-day readmission rates are a reliable guide to the overall quality of the healthcare provided.  Part 2 of this blog will argue just the opposite: 30-day readmission rates are a bogus measure of the healthcare provided.


Medicare Readmissions Cost $17 Billion a Year

The most compelling argument in favor of using the 30-day readmission rates as a metric of quality comes directly from the Centers for Medicare and Medicaid Services (CMS). CMS claims that of the total $26 billion it pays annually for readmissions, $17 billion of that figure is for avoidable readmissions[2]. One in five elderly patients returns within 30 days of discharge. These are staggering numbers and, if true, are a strong indictment of the healthcare industry.

Further, this is the figure only for Medicare and Medicaid readmissions – a minority of all hospital admissions. Since there is no organization charged with tracking the costs of readmissions for those with private health insurance or no insurance at all, we will never know the full extent of avoidable readmissions for all patients.


Poor Communications at Discharge Is a Primary Driver of Readmissions

High readmission rates have been traced to poor communications between hospitals and their discharged patients. Patients are often discharged with little explanation about the medications they are to take or the pain they will experience. Post-discharge pain is particularly severe for patients with hip and knee replacements.[3] Patients who expect the pain, know that it is normal, and know how to manage it are far less likely to return to the hospital than those who suffer pain and believe something has gone wrong.

There are other examples of poor communications that lead to rapid readmissions. Some patients who are admitted for chronic obstructive pulmonary disease have their condition treated and are discharged promptly. But the hospital personnel fail to tell some of those patients to stop smoking! They continue to smoke and return to the hospital promptly. Better communications at discharge about the need to stop smoking would make these readmissions unnecessary.

One patient suffered from type 2 diabetes for 14 years. She showed up at the hospital because her blood sugar was out of control. She got patched up and was back on the street again – but with no idea how to administer her insulin or manage her diet. Wham! She was back in the hospital again. This time the nurses and dietician showed her how to handle her insulin and how to change her diet. This was the first time she had heard of these things in 14 years. Strange but true.

Some research[4] indicates that 30-day readmissions could be reduced by 5% simply by improving communications with the patient prior to and at discharge while following a defined process of care protocol.  This is a cheap solution to an expensive problem.

If the solution is so obvious, why hasn’t it been widely adopted? Well, it really boils down to the way our healthcare system is organized. Each participant in the system does the job he or she was trained to do. If the system doesn’t focus on clear, thorough communications at discharge, it won’t happen. But that is changing. Now that CMS is tracking readmission rates, financial penalties are applied regularly, and research is uncovering the underlying reasons, the system is responding. Again, we need to point the finger at the hospital protocols, not the individual practitioners.


Poor Follow Up is a Big Problem, Too

Half of Medicare patients do not see their general practitioners or a specialist during the first two weeks after their discharge. We have no numbers for non-Medicare/Medicaid patients, but it is reasonable to assume that the story is similar.

This lack of follow up leaves patients who suffer problems – real or imagined – little recourse but to return to the hospital where they received their most recent care.  Most of them don’t know what else to do.

“Evidence Based Medicine” May Be Another Culprit

Medical and nursing training focuses on the technical aspects of healthcare – on the “evidence-based” aspects of what works and what doesn’t. Since there have been few (perhaps no) studies of the importance of patient–clinician interactions, patient communication hasn’t attracted the attention it deserves as an important factor in long-term healthcare.

But even if there have been no studies to validate the importance of those communications, common sense should have done the trick.  In any case, the culture is likely to change. Hospital staff will pay more attention to discharge communications in the future.


Race and Ethnic Background Are Major Factors in Readmissions

Race and ethnic background are important factors in determining readmissions. Blacks and Hispanics have higher rates of avoidable readmissions than whites.[5] There is a multitude of reasons for this:

  • Less likely to see a primary care provider or specialist
  • Less likely to have a primary care provider they visit regularly
  • Limited proficiency in English leads to poor follow up (less likely to take the medicines prescribed, less likely to understand the discharge instructions, etc.)
  • Poorer health literacy and, as a result, less likely to take personal responsibility for their health
  • Cultural beliefs and customs
  • Less likely to have adequate food, transportation, and social support to follow medical regimens
  • More likely to suffer anxiety, depression, and poor mental health
  • More likely to suffer from a host of medical problems that lead to readmission

Collectively, this means that it is costlier and more time consuming to deal with these patients. When hospital readmission rates were not measured, there was no financial incentive for hospitals to make special efforts to deal with these demographic groups. But now that these statistics are measured and reported publicly and there are financial penalties, we are likely to see hospitals take the steps necessary to minimize readmissions with this demographic.

This does not suggest that hospital administrators were negligent in the past. Rather, it suggests that they were responding to public evaluation and financial metrics that made sense at that time. Once we change the system, we change behaviors.


What Gets Measured, Gets Done

This is an old management bromide that applies directly to hospital readmissions. Until the CMS started focusing on hospital readmissions, the issue simply escaped notice. Since it was never measured, it was never addressed. It was only when healthcare administrators found that their institutions were evaluated and financially penalized on this metric that they focused on it. That is normal.

Measuring 30-day readmissions and penalizing the worst-performing 25% brought a focus to healthcare quality that has long been missing.

The fee-for-service payment model that has been used in this country since day one has never brought the quality of healthcare to light. We have always assumed that all clinicians showed superb judgment and did all that could be done. This uncritical attitude never held anyone in the healthcare field accountable for actual results.

Now, here’s the important point: By pointing a spotlight on high readmission rates and putting penalties in place for poor performers, the federal government believes it can change behaviors. The rise of Accountable Care Organizations to address this issue is unlikely to have occurred without this sort of impetus. Further, there is evidence (The Revolving Door) that this new-found attention is, in fact, changing some behaviors at the community level. In other words, by measuring readmission rates, hospitals find that they can improve their performance on this metric.


Readmissions Are Determined by Where Patients Live

If patient demographics and healthcare delivery systems were homogeneous across the country, we would expect to find the same rate of readmissions uniformly everywhere. That is not the case. Rather, we see a lot of “lumpiness.” In other words, the rates of readmission to hospitals are determined to a surprising degree by where patients live.

The map below shows the intensity of readmission rates within hospital referral regions.

Although it would be convenient to tie these widely ranging readmission rates solely to quality of medical care, that would be a mistake.  There are other forces at play:

  • Patient health status
  • Discharge planning
  • Care coordination with primary care physicians and other community based resources
  • Quality and availability of ambulatory care services

Further, some places treat their hospitals as a routine site of care. In other words, it is normal for those in some areas to go to the hospital rather than doctors’ offices or community clinics.

Percent of patients readmitted within 30 days following medical discharge among hospital referral regions (2009)


Here is something else I find interesting. If you look at the readmission rates for any one of the five conditions CMS tracks (listed near the top of this post), you’ll find that the readmission rates for the other four conditions are nearly the same for hospitals in the same geographic region. This correlation suggests that there is some dynamic at play that is independent of the illnesses and chronic conditions in the region.

In other words, the patient is not at the hub of the healthcare system.


So, What Does It All Mean?

It requires some judgment to stand back, look at this disparate information, and draw conclusions.  In fact, different people are likely to draw different conclusions.

Nevertheless, I think it’s reasonable to say that 30-day readmission rates can be used, at a minimum, as a rough measure of quality of care. The rise of Accountable Care Organizations (which we will discuss later) and the fact that hospitals have been able to shift their position significantly on the readmissions scale suggests that improvements are possible if we develop the right metrics, measure all hospitals by the same yardstick, and provide rewards accordingly.


Read Part 2 Here


[1] A Guide to Medicare’s Readmissions Penalties and Data.

[2] The Revolving Door: A Report on U.S. Hospital Readmissions.

[3] Reducing Readmission Rates with Superior Pain Management, by Bobbie Gerhart, owner, BGerhart & Associates, LLC; former president, Miami Valley Hospital, Dayton, Ohio.

[4] What Has the Biggest Impact on Hospital Readmission Rates, by Claire Senot and Aravind Chandrasekaran.

[5] Guide to Preventing Readmissions Among Racially and Ethnically Diverse Medicare Beneficiaries, prepared by The Disparities Solutions Center, Mongan Institute for Health Policy, Massachusetts General Hospital, Boston, MA.

The Top Six Big Data Challenges in Education



The path to the successful application of Big Data in educational institutions will face at least six major challenges or roadblocks that will have to be addressed one at a time:

Integration across institutional boundaries – K-12 schools are generally organized around academic disciplines. Universities are organized as separate schools, faculties, and departments. Each of these units operates somewhat independently of the others and shares real estate as a matter of convenience. Integrating data across these organizational boundaries is going to be a major challenge. No organizational unit is going to surrender any part of its power base easily. Data is power.

Self-service analytics and data visualization – It is going to be a piece of cake to give planners and decision makers the technology-based tools they need to do their own analytics and visualize the results of their studies graphically. It is going to be a genuine challenge to create a culture that requires them to do their own studies using those tools. An even greater challenge will be to create a climate in which they inform their decision making with the results of their own studies, because they are so accustomed to making decisions intuitively.

Privacy – There is a great deal of concern – perhaps even excessive concern – about the privacy of the information collected about each student and her family. The concern is that this data could fall into the wrong hands or be abused by those who have been given responsibility for safeguarding the information. To some extent, this is a technological and management issue. However, the fundamental issue is fear that the technical and management safeguards either won’t work or will be abused. Lisa Shaw, a parent in the New York City public school system said, “It’s really invasive. There’s no amount of monetary funds that could replace personal information that could be used to hurt or harm our children in the future.”

Correlation vs. cause and effect – Purists in rational argument want to see arguments that clearly spell out cause-and-effect relationships before blessing them as a basis for decision making. The fact that two factors may be highly correlated does not satisfy this demand for cause and effect. Nevertheless, real-world experience in other areas of Big Data has shown that high correlations are sufficient by themselves to make decisions that are either lucrative or achieve the objectives the players have in mind. This means they have been able to realize significant benefits based on correlation without being able to explain the underlying mechanics.

Money – Nearly all educational institutions are strapped for money. When they decide to invest in the hardware, software, staff, and training to exploit Big Data, they are deciding not to hire another professor, equip a student lab, or expand an existing building. That can be a tough call.

Numbers game – Some argue – perhaps rightfully so – that Big Data reduces interactions with students to a numbers game. Recommendations and assessments are based entirely on analytics. This means that compassion, personal bonding, and an understanding of the unique circumstances of every student get lost in the mix. Others argue that Big Data is an assist to the human process. In any event, this is unquestionably a stumbling block.

Privacy vs. Evidence Based Research

There is a great deal of concern about student privacy, as we mentioned above, and it is one of the top Big Data challenges that must be resolved. One of the key reasons for this concern is the process of growing up itself. It’s not unusual for students to participate in activist organizations in their youth that they reject later in life. Or they drank too much in university but sobered up once they had the responsibilities of jobs and families. Or a teacher may have given a student a negative evaluation that should not have survived his graduation or departure from the school. In the past, we simply forgot these things. Life moves on, and we don’t give a great deal of attention to what happened 25 years ago. But permanent records that can be pulled up and viewed decades later may cast shadows on job candidates that are completely unwarranted at that time. In other words, we lose the ability to forget.

There is an even greater threat, though. Although there is general agreement about the value of predictive analytics, no one pretends that the predictions are inevitable. Nevertheless, a computer-generated prediction can take on the aura of truth. A prediction that a student is not suitable for a particular line of work may prevent hiring managers from hiring her for a position she is perfectly well suited to handle. These predictions can severely limit her opportunities in life forever.

One way of dealing with this is to pass legislation that limits access to student information, protects the identity of individuals, and yet still makes it available to those conducting legitimate educational research. Unfortunately, this ideal is better served in rhetoric than in reality.

Consider stripping student records of any identifying information and releasing them, along with the records of other students in the same cohort, for general access for educational research. Yes, the school has taken all the required and appropriate steps to protect the students’ identities. But, no, it doesn’t work. That’s because Big Data practitioners generally access large data sets from a wide variety of sources. Some of those other sources (e.g., Facebook) make no attempt to protect an individual’s identity. Those secondary sources contain enough unique identifying characteristics that can be accurately correlated with the de-identified school records to re-identify them. The best laid plans of mice and men…
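The mechanics of this kind of linkage attack can be sketched in a few lines. Everything below is fabricated toy data, and the field names are assumptions rather than any real schema; the point is only that a combination of ordinary fields (ZIP code, birth date, gender) can be unique enough to recover an identity.

```python
# De-identified school records: names removed, but quasi-identifiers remain.
deidentified_school_records = [
    {"zip": "02139", "birth_date": "2001-05-14", "gender": "F", "gpa": 3.8},
    {"zip": "02139", "birth_date": "2002-11-02", "gender": "M", "gpa": 2.1},
]

# A secondary public source (e.g., a social profile) that includes names.
public_profiles = [
    {"name": "Alice Smith", "zip": "02139", "birth_date": "2001-05-14", "gender": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "gender")

def reidentify(records, profiles):
    """Link 'anonymous' records to names when the quasi-identifier combo is unique."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        hits = [p for p in profiles
                if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(hits) == 1:  # a unique match means the identity is recovered
            matches.append((hits[0]["name"], rec["gpa"]))
    return matches

print(reidentify(deidentified_school_records, public_profiles))
# [('Alice Smith', 3.8)]
```

No names ever appeared in the school release, yet the GPA is now attached to a person. This is why removing obvious identifiers, by itself, is not anonymization.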

There is no shortage of legislation in the US to protect student information. The most relevant legislation includes:

  • The Family Educational Rights and Privacy Act of 1974 (FERPA). This act prohibits the unauthorized disclosure of educational records. FERPA applies to any school receiving federal funds and levies financial penalties for non-compliance.
  • The Protection of Pupil Rights Amendment (PPRA) of 1978. This act regulates the administration of surveys soliciting specific categories of information. It imposes certain requirements regarding the collection and use of student information for marketing purposes.
  • The Children’s Online Privacy Protection Act of 1998 (COPPA). This act applies specifically to online service providers that have direct or actual knowledge of users under 13 and collect information online.

Unfortunately, this legislation is outdated and of limited use today. For example, it applies to schools but not to third-party companies operating under contract to the schools. This legislation was enacted before the era of Big Data and doesn’t address the issues that current technology raises. Further, the acts don’t include a “right of action.” This means there is no way for individuals to enforce the law.

In light of this, there are ongoing legislative attempts to deal with the need to protect the privacy of student information. Up until September 2015, 46 states had introduced 162 bills dealing with student privacy; 28 of those pieces of legislation have been enacted in 15 states. There have been ongoing initiatives at the federal level as well. Relevant pieces of federal legislation that have been introduced include:

  • Student Digital Privacy and Parental Rights Act (SDPPRA)
  • Protecting Student Privacy Act (PSPA)
  • Student Privacy Protection Act (SPPA)

These acts are primarily concerned with protecting student data that schools pass along to third-party, private-sector companies for processing. Even though these companies have generally built their own data protection policies and procedures that already meet the requirements of this legislation, there is still considerable fear that the companies will use the data for nefarious purposes such as tailoring marketing messages to particular students – something that is clearly outside the scope of providing education or conducting educationally related research.

The US is not alone in its concern. The European Union has developed regulations that apply throughout the EU. This is in contrast to the fragmented American approach. To be fair to the Americans, however, the Constitution specifically provides that education is a state concern, not a federal one.

The EU 1995 Directive 95/46/EC is the most important EU legal instrument regarding personal data protection of individuals. Rather than discourage the use of third parties storing and processing student information, the EU prefers to regulate it. The EU recognizes that private sector companies provide a valuable service.

The Directive gives parents the option of opting out of data sharing arrangements for their children. However, doing so would likely jeopardize the educational opportunities their children would otherwise enjoy. In other words, while parents have the right to opt out, it would be imprudent in practice to do so.

After considerable discussion and consultation, the EU Parliament approved the General Data Protection Regulation (GDPR or Regulation). This Regulation is set to go into effect in May 2018. It pays particular attention to requiring schools to communicate “in a concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child.”

Unfortunately, this is problematic. Big Data and Machine Learning develop algorithms that are quite opaque. Even the professionals who operate Big Data systems don’t know the inner workings of the algorithms their systems develop. Interestingly, they don’t even know which pieces of input are pivotal to the outputs and recommendations of those systems. In this context, it is reasonable that the general public sees EdTech companies as a threat to students’ autonomy, liberty, freedom of thought, equality, and opportunity.

On the other hand, when you visit these EdTech websites, it certainly appears that they are driven by a sense of enlightenment. Their websites clearly suggest that they have the best interests of the students and their client schools in mind. Aside from the opaque nature of Big Data and Machine Learning algorithms, it is not clear – to this author at least – that EdTech companies deserve to be treated as skeptically as they are. It’s quite possible that the nub of the issue is not the stated objectives and current operations of these companies, but rather the unforeseen uses this data might be put to in the future. In other words, the way the data might be used in the future is unpredictable, and those unpredictable uses could lead to unintended consequences.

In both Europe and the US, when we look at the furor about the importance of the privacy of student information, it often boils down to pedagogical issues.

Here is the conundrum in a nutshell. There is clearly a potential benefit to conducting educational research using student information. There is good reason to believe that tracking students over the course of their academic years – and perhaps even into their working careers – would allow scholars to identify early indicators of eventual success or failure. However, if scholars are prevented from conducting that research by restrictions on student identification or on the length of time data can be stored, then that sort of research cannot be conducted. This could conceivably mean a loss of value both to individual students, who could benefit from counseling informed by reliable research, and to society at large.

How Is the Future of Big Data in Education Likely to Unfold?

Here are the trends to look for – in no particular order. These trends will inform schools’ policy development, strategic planning, tactical operations, and resource allocation, and help overcome the Big Data challenges in education.

Focus student recruitment – Historically, colleges and universities have had student recruitment programs that were fairly broad in terms of geography and demographics. This led to a large number of student applications for admission. Unfortunately, many of the students the institutions accepted did not enroll in those schools. Colleges are now using Big Data to find those geographic areas and demographics where their promotional efforts not only generate large numbers of high-caliber applicants, but also applicants who, if accepted into the college, will actually enroll.

Student retention and graduation – Universities need to do more than attract high-caliber students. They need to attract students who will stay in school and graduate. Big Data coupled with Machine Learning can help identify those students. In parallel with student recruitment, schools will increasingly use Big Data to identify at-risk students at the moment they show signs of falling behind. This will enable the schools to assist those students, help ensure their success, retain them in school, and increase the chances they will graduate.

Construction planning and facility upgrades – Educational institutions at all levels have more demands to add or expand their buildings and upgrade their facilities than their budgets will permit. They need to establish priorities. Big Data will help planners sort through the data to identify those areas that are likely to be in highest demand and provide the greatest benefit to the students and the institutions.

Data centralization – At the moment, nearly all data in educational institutions is held in organizational silos. That means that each department or organizational unit collects, stores, and manages the data it needs for its own purposes. That is a natural result of the need for each function to get its work done. However, it is counterproductive if we wish to apply Big Data. In the future, we can expect these siloed data stores to be integrated or linked virtually. Integration means that the data will be moved to a central repository and managed by a central function – like the IT department. Virtual integration means that the data will remain in the functional units where it is at the moment, but the IT department will have read access to each of these repositories. Quite likely, we will see both options in practice for the foreseeable future.
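The contrast between the two options can be sketched in a few lines. The silo contents and field names below are invented for illustration: physical integration copies everything into one store, while virtual integration leaves the data in place and merges it on read.

```python
# Two hypothetical departmental silos, each owning its own records.
registrar = {"s1": {"gpa": 3.2}}
housing   = {"s1": {"dorm": "West"}}

# Option 1: physical integration -- copy everything into a central repository.
central = {}
for silo in (registrar, housing):
    for student_id, fields in silo.items():
        central.setdefault(student_id, {}).update(fields)

# Option 2: virtual integration -- a read-only view that merges on demand,
# leaving each silo under its department's control.
def virtual_record(student_id):
    merged = {}
    for silo in (registrar, housing):
        merged.update(silo.get(student_id, {}))
    return merged

# Both approaches yield the same unified view of a student:
assert central["s1"] == virtual_record("s1") == {"gpa": 3.2, "dorm": "West"}
```

The trade-off is the usual one: the central copy is faster to query but must be kept in sync, while the virtual view is always current but depends on every silo being reachable.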

Data-based decision making and planning – Although Education has enjoyed the benefit of quantitative studies for centuries, the practice of education is generally driven by the philosophical views of educators more than by data or evidence-based studies. In fact, this approach has been enshrined in our commitment to academic freedom at the university level and has trickled down, to some extent, to public and private K-12 schools. Big Data will enable a data-rich culture that will inform policy development and operational planning to an extent we’ve never seen in the past.

Greater use of predictive analytics – Machine Learning applied to Big Data will become increasingly successful at predicting students’ future success based on their past performance. Schools of all stripes will rely on these predictive analytics more and more in the future. This is likely to lead to two types of outcomes. On the one hand, schools will allocate more resources to those students most likely to succeed and, as a result, graduate more high-performing students who will deliver significant benefits to their communities and the world. On the other hand, predictive analytics will restrict the academic opportunities of failing students or those who show little promise – like Albert Einstein. Predictive analytics will also help institutions develop counter-intuitive insights that will challenge long-cherished values and lead to better student and institutional results.

Local adoption of analytics tools – Older readers will remember the days when word processing was handled by a pool of word processing typists. Over time, word processing migrated from the pool to executives’ assistants and, eventually, to the desks of the executives themselves. Once word processing reached the desks of the executives and other knowledge workers, word processing shifted from being a mechanical function to being a creative one. Knowledge workers crafted their messages as they took form on their screens. The same will be true of predictive analytics. We are going to see the hands-on management of predictive analytics studies migrate from Big Data specialists to the desktops (and laptops) of executives who need to think through, propose, and defend policy statements, strategic plans, and operational or tactical initiatives.

User experience – Educators often don’t know a student is having a problem until they see the student failing (or just barely passing) quizzes and tests. But even when they recognize the problem, they don’t know the reasons any given student is falling behind. Big Data will help by recognizing students’ problems as they occur and offering tutorials that address them immediately – not days or weeks later, when it may be too late to affect the students’ learning trajectories.

Real-time quiz evaluations and corrective action – As computers and tablets become ever more pervasive in classrooms, schools at all levels will be better able to collect digital breadcrumbs about how students perform on quizzes and determine what corrective action is required. This is going to eventually become the norm. Steven Ross, a professor at the Center for Research and Reform in Education at Johns Hopkins University, agrees. He said, “Most of us in research and education policy think that for today’s and tomorrow’s generation of kids, it’s probably the only way.”

Privacy, privacy, privacy – The privacy of student and family data will continue to be a hot issue. Over time, however, the benefits of sharing data that includes student identifiers will outweigh the concerns of the general public. Sharing this data among qualified research professionals will become more socially acceptable not only as technological safeguards are put into place, but as they are accepted as being appropriate. In practice, society will discover that the student data they thought was secure is not. Witness the data breach at Equifax that spilled confidential data about 143 million people. Do you remember the data breaches at Target and Home Depot? Again, tens of millions of people who trusted these companies with their credit card information were affected.

Learning Analytics and Educational Data Mining – We are seeing a new professional discipline emerge. The professionals in this field will have both the professional and technical skills to sort through the masses of unstructured educational data being collected on a wholesale basis, know what questions to ask, and then drill through the data to find useful, defensible insights that make a genuine difference in the field of Education. The demand for these specialists is likely to outstrip the supply for many years to come.

Games – We are likely to see far more games introduced into the educational curriculum than we’ve ever seen before. Games are not only proven to be instrumental in the learning process, they also lend themselves to data acquisition for immediate or later analyses.

Flipped classrooms – The Khan Academy has reversed the historical process of delivering course material during class time and assigning homework to be handled out of class. In its flipped classrooms, students watch streaming videos at their leisure out of class. Class time is dedicated to providing students a forum where they can work through their problem sets and ask for – and get – help as they need it. The flipped classroom is going to become far more widespread because our technologies today enable it – and it just makes a lot of sense.

Adaptation on steroids – Adaptation is nothing new. It’s been going on for thousands of years. The idea is that course material, explanations, problem sets, or tutoring is tailored to the individual needs of the student. But when we put that adaptation on steroids, we see a shift in “kind.” In other words, we see something that was not present before. Today we can monitor every move students make, not just count the right and wrong answers they give to a quiz question. By analyzing facial expressions, delays in responding, and a myriad of other variables, we can tailor-make and deliver a tutorial specifically suited to a student’s learning problem at the moment the problem occurs.

Institutional evaluation – Schools have always presumed to grade their students. Until relatively recently, it was presumptuous for students to grade their teachers or their schools. Now it is becoming common practice. In fact, Big Data will play an ever-growing role in assessing the performance of individual instructors. More importantly, Big Data will rank-order universities, colleges, and high schools on a wide range of variables that can be supported through empirical evidence. True, some of that evaluation will be based on “sentiment” – but much of it will be based on hard analytics that would have been too time-consuming or too expensive to collect and analyze in a holistic manner.

The Jury Is Still Out

In spite of all the investment, the excitement, and the promise of Big Data in Education, we still don’t have enough experience to make categorical claims about its value. We are still struggling with the top Big Data challenges we face.

In an article in The Washington Post last year, Sahlberg and Hasak claimed that the promised benefits of Big Data have not been delivered. As a visiting professor at the Harvard Graduate School of Education, Sahlberg is an authority we should listen to. He claims that our preoccupation with test results reveals nothing about the emotions and relationships that are pivotal in the learning process. Our commitment to judging teachers by their students’ test scores has the effect of steering top-performing teachers away from low-performing schools – exactly where they are most needed. There are extensive efforts to evaluate both teachers and students. However, according to Sahlberg, this has NOT led to any improvement in teaching in the US.

The most that Big Data can offer is an indication of a high correlation between one factor and another. It cannot tell us about cause and effect. In fact, cause-and-effect arguments are difficult for people to make – and yet they are instrumental in building compelling arguments. Having said that, it is revealing to recognize that finding high correlations in other fields – even without a demonstrated cause-and-effect relationship – has proven to be quite beneficial.
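To make the correlation point concrete, here is a small sketch that computes a Pearson correlation coefficient from scratch. The data is invented for illustration: even a coefficient near 1.0 only establishes a strong association, not that one variable causes the other.

```python
# Pearson correlation computed from first principles, on made-up data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient: covariance scaled by both std deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. hours of tutoring per week vs. quiz scores for six fictional students
tutoring = [0, 1, 2, 3, 4, 5]
quiz = [52, 58, 61, 70, 74, 81]

r = pearson(tutoring, quiz)
print(round(r, 3))  # ~0.994: a strong association, but not proof of cause
```

The coefficient alone cannot distinguish “tutoring improves scores” from “motivated students seek more tutoring” – exactly the limitation the paragraph above describes.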

Digitally Transforming the Healthcare Industry


Big Data Has Changed the Practice of Healthcare Forever – and the Change is Just Beginning. Healthcare organizations – old and new – are investing heavily in Big Data applications.

Big Data projects process data measured in petabytes to deliver significant healthcare benefits. Only a small proportion of that data comes from traditional databases with well-structured data. Instead, almost all of it comes from sources that are messy, inconsistent, and never intended for a computer to use – unstructured patient records. Accessing this unstructured data and making sense of it gives healthcare professionals and leaders insights they would never have otherwise. These insights directly affect the way healthcare is delivered on a patient-by-patient basis.

I’ll give you four real-world examples the healthcare industry has already realized. We’ll take a quick look at Apixio, Fitbit, the Centers for Disease Control, and IBM’s Watson Health.


Apixio

Medical research has always been conducted on randomized trials of small populations. No one tried to conduct massive healthcare research using all the data on all patients because the work would have been overwhelming. Limiting the size of the data sets researchers used made their research manageable. Working with small sample sizes, however, creates methodological flaws of its own. This is not to criticize those studies, but to recognize that the limitations of their outcomes reflect what was feasible at the time they were conducted.

Apixio set out to change all that. It developed mechanisms for conducting healthcare research based on studies of actual patient healthcare records, leveraging both Big Data and machine learning. Further, it works with ALL the patient healthcare records a facility has to offer – not just a randomized subset. As new patients are treated, Apixio collects data about the symptoms, diagnoses, treatment plans, and actual outcomes. By integrating these new cases into the mix, the company can quickly determine what works and what doesn’t. The difference between treatment insights based on limited clinical research studies and those based on analyses of the outcomes of ALL patients can be dramatic.

Only about 20% of patient healthcare records reside in well-ordered databases. The other 80% is messy, unstructured data – the GP’s notes, consultants’ notes, and forms prepared for Medicare reimbursement purposes. Working with unstructured data used to be problematic. Institutions had to hire and train “coders” who would read free-form materials (handwritten notes, typed notes, etc.) and capture their meaning in a form suitable for computer processing. Apixio dealt with this issue quite differently. It used computer-based algorithms to scan and interpret this data. The company found that its computer-assisted techniques enable coders to process two to three times more patient records per hour. Further, the coded data created this way can be as much as 20% more accurate than the manual-only approach.

This computer-assisted approach also finds gaps in the documentation. In one nine-month period, Apixio reviewed 25,000 patient records and found 5,000 records that either did not record a disease or didn’t label it correctly. Correcting the data can only improve diagnoses and treatment programs.

Apixio does far more than produce studies that physicians can use to inform their treatment plans. It takes the next step. It reviews the healthcare records of each patient and develops personalized treatment plans based on a combination of the data it has collected for that patient and the results of its analyses of practice-based clinical data. This enables physicians to only order the tests that are useful and avoid expensive but worthless procedures.

This pays off handsomely for insurance companies that treat patients enrolled in Medicare Advantage Plans. Under these plans, Medicare pays a “capitated payment” – a fixed payment to treat each patient, based on that patient’s expected healthcare costs. By tailoring the diagnostic tests and treatment programs to the individual, the company is able to reduce its costs dramatically. Those savings drop directly to the bottom line.
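The capitation arithmetic is worth spelling out. Because the payment is fixed, margin is simply payment minus actual cost, so every dollar of avoided low-value testing is a dollar of margin. A back-of-the-envelope sketch with hypothetical dollar figures:

```python
# Capitated-payment economics: payment is fixed, so avoided cost = added margin.
# All dollar figures are hypothetical illustrations.

def plan_margin(capitated_payment, actual_cost):
    """Margin per member: fixed payment minus what care actually cost."""
    return capitated_payment - actual_cost

payment = 10_000       # fixed annual payment for one Medicare Advantage member
cost_before = 9_400    # annual cost including low-value tests and procedures
cost_after = 8_700     # annual cost after tailoring tests to the individual

saved = plan_margin(payment, cost_after) - plan_margin(payment, cost_before)
print(saved)  # 700: the avoided cost drops directly to the bottom line
```

Under fee-for-service billing the same $700 of avoided procedures would instead reduce revenue, which is why capitation aligns the insurer’s incentives with eliminating worthless tests.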

It’s not just the insurance companies that benefit, though. Patients benefit as well. Patients are not required to undergo inconvenient or painful procedures that would provide no benefit.


Fitbit

Fitbit is the leader in the sale of wearable devices that track fitness metrics, although Apple is hot on its heels with its Apple Watch. Fitbit sold 11 million devices between its founding in 2007 and March 2014. These devices track fitness metrics such as activity, exercise, sleep, and calorie intake. The data collected daily can be synchronized with a cumulative database that allows users to track their progress over time.

The driving principle here is that people can improve their health and fitness if they can measure their activity, diet, and outcomes over time. In other words, people need to be informed in order to make better fitness decisions. Fitbit provides users with progress reports presented in a preformatted dashboard. This dashboard tracks body fat percentage, body mass index (BMI), and weight, among other metrics.
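Of the dashboard metrics mentioned, BMI is the simplest to derive from tracked data: weight in kilograms divided by the square of height in meters. A quick sketch of the calculation:

```python
# Body mass index: weight (kg) divided by height (m) squared.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# e.g. a 70 kg person who is 1.75 m tall
print(round(bmi(70, 1.75), 1))  # 22.9, inside the commonly cited 18.5-24.9 "normal" band
```

A dashboard recomputes this continuously as synced weight data changes, which is what turns a one-off clinic measurement into a trend a user (or physician) can act on.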

Patients can share their data with their physicians to give them an on-going record of their key healthcare parameters. This means that doctors are not forced to rely on the results of tests they order on an infrequent basis. To be fair, however, not all physicians treat the data their patients collect on their own as being as credible as data collected in a clinical setting.

Insurance companies are prepared to adjust their premiums based on the extent to which their policyholders look after themselves as measured by Fitbit. This means that policyholders are required to share their Fitbit or Apple Watch data with the company. John Hancock already offers discounts to those who wear Fitbit devices and the trend is likely to spread to other insurance companies.

The fastest-growing sub-market for Fitbit is employers. Employers can provide their employees with Fitbit devices (with their permission) to monitor their health and activity levels.

The CDC and NIH

The Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) are leaders in applying Big Data to identifying epidemics, tracking their spread, and – in some cases – projecting how they are likely to spread.

The CDC tracks the spread of public health threats, including epidemics, through analyses of social media such as Facebook posts.

The NIH launched a project in 2012 it calls Big Data to Knowledge or BD2K. This project encourages initiatives to improve healthcare innovation by applying data analytics. The NIH website says, “Overall, the focus of the BD2K program is to support the research and development of innovative and transforming approaches and tools to maximize and accelerate the integration of Big Data and data science into biomedical research.”

A couple of years ago, the CDC used Big Data to track the likely spread of the Ebola virus. It used BigMosaic, a Big Data analytics program that the CDC coupled with HealthMap – a database that maps census data and migration patterns. HealthMap shows where immigrants from various countries are likely to live – right down to the county or even the community level. When the CDC identifies countries where there is a public health problem – like the Ebola virus – it can link the census data showing the distribution of expat communities with airline schedules to determine how the disease is likely to spread in the US – or even other countries. This allows the CDC to track the spread of disease in near real time. In some cases, it can even project how diseases are likely to spread.

These Big Data applications merge data about weather patterns, climate, and even the distribution of poultry and swine, and present it in a graphic form that makes it easier for epidemiologists to visualize how diseases are spreading geographically. The benefit, of course, is that the CDC and the World Health Organization can deploy their scarce resources to the areas where they can do the most good. They can do that because Big Data provides the tools to chart the spread of diseases by international travellers.

The CDC now uses Big Data linked with social media to forecast the spread of communicable diseases. Historically, the CDC tracked the reported spread of diseases after the fact; forecasting how diseases will spread is a new ball game. The CDC ran competitions for research groups to develop Big Data models that accurately forecast the spread of diseases, and received proposals for 28 systems. The two most successful were both submitted by Carnegie Mellon’s Delphi research group. These models are not predetermined but, instead, leverage Machine Learning to develop tailored models to forecast the specific spread of each disease.

The model is by no means perfect. The CDC gave the Carnegie Mellon model a score of .451 where 1.000 would be a perfect model. The average score for all 28 models was .430. That means that the model the CDC will use is the best available and much better than nothing, but still has considerable room for improvement.

The Delphi group is studying the spread of dengue fever. It has plans to study the spread of HIV, Ebola, and Zika.

IBM and Watson Health

IBM is particularly proud of Watson, its artificial intelligence system on steroids. Although Watson has produced some stunning results – such as defeating the two best Jeopardy! contestants on the TV game show – our interest today is in healthcare.

Watson is machine learning at its finest. In the healthcare field, its managers feed it an on-going stream of peer reviewed research papers from medical journals and pharmaceutical data. Given that Big Data knowledge base, Watson applies that knowledge to individual patient records to suggest the most effective treatment programs for cancer patients. Watson’s suggestions are personalized to each patient.

Watson’s handlers don’t program the software to deliver predetermined outcomes. Instead, they apply Big Data algorithms to enable Watson to learn for itself based on the research it reviews as well as the diagnoses, treatment programs, and observed outcomes for individual patients.

IBM is partnering with Apple, Johnson & Johnson, and Medtronic to build and deploy a cloud-based service to provide personalized, tailored guidance to hospitals, insurers, physicians, researchers and even individual patients. This IBM offering is based on Watson – its remarkably successful system that integrates Big Data with machine learning to enable personalized healthcare on a massive scale.

Until now, IBM has used Watson in leading-edge medical centers including the University of Texas MD Anderson Cancer Center, the Cleveland Clinic, and the Memorial Sloan Kettering Cancer Center in New York. Given its successes to date, IBM is now ready to take its system mainstream on a broad basis.

How Medical Mobile Apps are Transforming Healthcare


Medical mobile apps are transforming the Healthcare Industry, promising to improve quality of healthcare while lowering costs.

In 2017, global medical healthcare apps were a $26 billion industry with a global average CAGR of 32.5%. The United States currently has the largest market for mobile medical apps. However, the Asia-Pacific region is showing the fastest growth rate in the world – with an estimated average CAGR of 70.8%. By 2022, the worldwide mobile medical app market is anticipated to reach $102.43 billion.
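As a rough sanity check on those figures: compounding $26 billion at the stated 32.5% CAGR for the five years from 2017 to 2022 gives roughly $106 billion – in the same ballpark as the $102.43 billion projection (the source presumably uses a slightly different base or rate).

```python
# Compound annual growth: value after n years = start * (1 + cagr) ** n.
# Figures in $ billions, taken from the market estimates quoted above.

def project(start, cagr, years):
    return start * (1 + cagr) ** years

projected = project(26, 0.325, 5)  # 2017 base, five years of 32.5% growth
print(round(projected, 1))  # ~106.2
```

The same function shows why the Asia-Pacific region matters: at its estimated 70.8% CAGR, even a small base more than quadruples in under three years.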

As of 2017, mobile healthcare apps have been downloaded over 3.2 billion times – this marks a 25% increase since 2015. In the United States alone, there are over 500 million smartphone users with mobile health-related apps. The greatest growth in mobile medical apps has been in the management of chronic care – particularly diabetes, obesity, high blood pressure, cancer, and cardiac illnesses.

As the prevalence of chronic illnesses worldwide increases, so does the number of medical apps created to help manage them. Nearly half of all Americans, around 133 million individuals, currently live with a chronic illness. Per the Centers for Disease Control and Prevention, seven of the top ten causes of death in the US are now due either directly or partially to chronic illness.

Chronic illness is on the rise globally as well. According to the World Health Organization, as of 2017, over 79% of all deaths related to chronic illness occur in developing countries, and this rate is anticipated to continue to climb. Heart disease and other cardiovascular illnesses will continue to be the major cause of mortality throughout the globe. Asia, in particular, is experiencing the greatest rise in cardiac disease and death due to heart-related complications.

The widespread availability of tablets and smartphones in healthcare today is helping spur the use of mobile healthcare apps by patients and providers alike. According to referralmd, over 80% of physicians in 2017 used their smartphone at the point of care – whether for patient services or for administrative reasons. This wide access to and use of smartphones by providers and patients alike has been the primary driver behind the increasing availability of mobile healthcare apps year-over-year.

How can mobile apps help? What kind of mobile apps do patients want? And which kind do physicians need?

The healthcare industry is filled with opportunities for digitally savvy companies and mobile app developers.

Download and read the full article here.