Posted in Hirepayoff News

The world of employment testing and assessment can be a ‘buyer beware’ marketplace… driving bad hiring investments and creating legal risks.

June 30, 2014 – Over the years, Dr. David Jones has worked extensively with Jeffrey Ross, a Partner with Seyfarth Shaw LLP, one of the largest and most respected employment law practices in the United States. Jones and Ross co-wrote the chapter in Jones’ latest book, Million-Dollar Hire: Build Your Bottom Line, One Employee at a Time, that deals with the kinds of legal challenges faced in candidate recruiting, screening, assessment, and hiring. Recently, the two prepared this summary of what they see as a growing ‘buyer beware’ market for some of today’s employment testing programs.

“Everyone knows the major payoff that comes through screening, testing, and assessing candidates effectively. Nothing we identify here challenges the value of such systems in growing a truly high-performance organization. The challenge, though, is doing it the right way,” says Dave Jones.

“Today, we see growing examples of things employers need to avoid when they shop for employment testing and assessment programs,” says Jeff Ross. “It seems that acquisitions and rollups in the employment testing sector, and everyone’s need to grow sales quickly, are creating sales claims that boil down to ‘trust us, our tests work’… a ‘buyer beware’ sales pitch. Accepting such claims can raise the legal compliance risk of an employer’s hiring process,” Ross adds.

According to Ross, “From a legal point of view, we know employers need to screen and assess candidates. Often, though, we look at a claim about what a vendor’s employment testing tools do, or how they can be defended, and are concerned the employer will face great risk if challenged by a rejected candidate, a government agency, or, worse yet, by a group of candidates in a class action; events that bring great cost, disruption, and exposure to employers.”

To help employers avoid spending money and creating legal risk with poor or inappropriate employment testing and assessment tools, Jones and Ross draw on years of dealing with legal challenges. They suggest questions employers should ask of their testing, and proof they should seek about how vendors’ testing tools actually work. The questions are framed to help employers avoid Sham… Scam… Khazam! claims like the following.

No. 1 – “Trust us, try our test with your top performers… then you can find more employees just like them!”

“When asked to explain how their testing tool will work,” Jones says, “some vendors tell the employer all it needs to do is use the tool to test the organization’s very best performers, and then compare candidates to the resulting ‘top-performer profile.’ The claim sounds convincing when presented by a talented salesperson, but when you think about it, the concept is truly a sham.”

Ross adds, “When I first heard this kind of claim, I asked myself, ‘OK, but how do we know whether the poorest-performing employees score any differently on the test than the top performers?’ After all, don’t we want to be sure the test actually distinguishes between the best versus poorest performers?”

“We were asked to help defend an employer where a vendor made such a claim. When investigated, results showed the employer’s top performers scored no differently on the test than its poorest performers,” reports Jones. The result – wasted money, poor hiring decisions, and an opening to legal challenge when the test produced different selection results for candidates of different ethnicity.

When a vendor poses this ‘top-performer model,’ Jones suggests asking the vendor:

  • OK, but what about our poor performers? Shouldn’t we see how they score on your test, as well?
  • What portion of our candidates can we expect to reject if all we seek are those who match our ‘top performer’ group?
  • When was the last time you used this approach to successfully defend a legal challenge to your employment test?

No. 2 – “Trust us, try our test with a few of your best and a few of your worst performers to see that our test works!”

Sometimes a vendor suggests testing a few of the employer’s best and worst performers. This sounds more sensible than the previous sham, but the claim is almost as bad in its payoff and defensibility.

“Two questions need to be answered,” says Jones. First, just how different are the sampled groups of employees in their on-the-job performance? Are the differences typical of the overall workforce, or are the groups being compared extreme enough from one another to be unrepresentative of actual employee performance? Second, are differences between the groups in how they perform on the assessment tool significant, or do the results reflect nothing more than chance alone?

“Picking just a few employees to compare; choosing extremes from the workforce; or simply finding chance differences instead of statistically reliable ones will all produce the same result… an unacceptable argument that the assessment tool really works if challenged,” explains Jones.
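Jones’ point about chance differences is easy to demonstrate with a quick simulation (an illustrative sketch with made-up score distributions, not any vendor’s data). Here, both groups are drawn from the very same score distribution, so any gap between their average test scores is chance alone; with only a handful of people per group, a “meaningful-looking” gap appears anyway in a large share of trials.

```python
import random
random.seed(0)

def chance_gap(n_per_group, trials=2000):
    """Draw two groups from the SAME score distribution (mean 50, sd 10),
    so any gap between their average scores is chance alone, and report
    how often a 5-point gap still shows up."""
    big_gaps = 0
    for _ in range(trials):
        a = [random.gauss(50, 10) for _ in range(n_per_group)]
        b = [random.gauss(50, 10) for _ in range(n_per_group)]
        if abs(sum(a) / n_per_group - sum(b) / n_per_group) >= 5:
            big_gaps += 1
    return big_gaps / trials

tiny = chance_gap(5)     # a handful of 'top' and 'bottom' performers
large = chance_gap(100)  # a reasonably sized sample
# With 5 per group, a sizable gap shows up in a large fraction of trials;
# with 100 per group it almost never does.
```

That gap in the tiny-sample case reflects nothing about the test at all, which is exactly why a handful of test takers cannot support a validity claim.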

“You can’t just play with small numbers to make something look like it works, and claim you have thereby ‘validated’ the assessment tool – a legal standard that challenges focus on,” says Ross. “There are clear standards that need to be met if the tool is challenged. An argument of validity based on small, atypical groups of employees just won’t stand up,” adds Ross.

When a vendor poses this ‘top-versus-bottom performer’ process, Jones suggests asking the vendor:

  • Just how should we sample the ‘top’ and ‘bottom’ performers who take the assessment tool? Shouldn’t we avoid extreme cases to keep the group representative of our workforce?
  • Don’t we need to assess a large enough group to produce statistically reliable results? What number do you think meets this standard?
  • If we only try out the assessment tool with ‘top and bottom performers,’ how do we decide where to set the standard for candidates who score in the middle of the range?

No. 3 – “Trust us, our tests predict candidate performance in the most important job functions… and we can prove it!”

Start with a basic concept – in preparing legal defensibility, an employer needs to show its testing tool predicts how candidates will perform once on the job; what was referred to earlier as ‘validating the tool.’ “When challenged, this is what an employer needs to show – the tool predicts candidates’ future on-the-job performance. If it does, it’s much more likely to be defensible,” says Ross. “The same concept holds from a business point of view,” says Jones. “If the tool doesn’t predict a candidate’s job performance, why spend the money?”

Another thing to focus on, though, is what a test vendor claims its assessment tool actually predicts. “Sometimes, what’s predicted is not really that important,” Jones adds.

For example, test and assessment vendors sometimes claim that candidate scores on their tool link with the way supervisors evaluate employee performance. This can be useful, provided the evaluations actually reflect important work outcomes. “But sometimes we find these claims simply show that the way candidates describe themselves on a test links to how their supervisors describe them. Is this really useful?” asks Jones. “What about linking test scores with measures such as productivity, hitting performance targets, attendance, absenteeism, etc.?”

“Sometimes, we see even more unusual ‘performance prediction’ claims. We’ve seen cases where a vendor claimed that employee scores on their test linked with employees’ descriptions of their own job performance! The claim – our test predicts job performance. Really; that’s performance?” asks Jones.

The bottom line – to be useful, an assessment tool needs to predict how people perform in areas important to the employer, whether objective measures or solid supervisory evaluations, not simply how employees describe themselves or how supervisors describe them. Employers need to ask a vendor:

  • Exactly what aspects of on-the-job performance can we expect your assessment tool to predict?
  • Are there any aspects of on-the-job performance we should expect your tool not to predict and, if so, why?
  • How accurate will both these predictions be? What level of ‘hits’ versus ‘misses’ can we expect?

No. 4 – “Trust us, our tests work for jobs like yours… we have all sorts of validity proof to show they predict job performance!”

Another frequently heard sales claim is – ‘Trust us, our test has been proven to predict job performance so many times, all you need to do is use it.’ Sometimes vendors refer to ‘validity transportability’ as the ground for making this claim sound believable. Sometimes they cite ‘scientific research’ showing their test works for jobs like those of the employer.

In a recent legal compliance review, Jones saw this claim for a test sold to an employer screening candidates for entry-level factory jobs. The vendor claimed, ‘We’re sure this test will work for your jobs, we’ve proven its validity many times. We can simply transport evidence of validity and defensibility to your organization.’ A deeper dive into the vendor’s materials showed that, indeed, the test had been shown to work… for insurance and financial services representatives.

Unfortunately, the test produced a noteworthy adverse impact on the passing rates of minority candidates. “So, how was the employer going to defend the test’s rejection of minority candidates for factory jobs when the only evidence it worked came from white collar employees?” asks Jones. “There are certain legal standards for making a ‘validity transportability’ argument, but too frequently some or all are not met… other than ‘trust us, the test works’,” adds Ross.

Questions to ask when such sales claims are heard include:

  • Can you produce recommendations about the way we should use this test to arrive at ‘pass vs. fail’ decisions based on other employers’ jobs?
  • If you’re suggesting we ‘transport’ evidence of this test’s validity from other users to defend its use for our jobs, can you produce documentation for us that satisfies federal agency ‘validity transportability’ standards?
  • Once we implement this test, what will you do to verify that it predicts performance in our jobs?

No. 5 – “Trust us, our tests work, and we customize how to score and use them just for you!”

“Next, there are sales strategies we put in the ‘khazam’ category – ones where the vendor claims to produce customized, high-precision tests that, in reality, are nothing more than smoke and mirrors,” says Jones. For example, some vendors claim to customize how an employer should score individual test questions, or weight certain sets of answers in a way tailored to fit the buyer’s unique hiring requirements. “This is where the process takes a dive into statistics, and where statistical mumbo jumbo often comes to the fore,” says Jones.

In brief, a vendor might claim to ‘validate’ current employees’ answers to test questions to frame a scoring key that maximizes a test’s accuracy for the employer. When properly executed, this is a sound approach, but it is one that requires data on relatively large numbers.

The process also demands the vendor follow a structured set of analyses. “Problems arise when basing this work on small numbers of test takers, and failing to confirm, statistically, that the results repeat themselves in separate groups of people. We see the latter flaw all too frequently, because it saves a vendor time and money,” adds Jones.

All too often, vendors play with small sets of data, create unreliable scoring rules, and then claim the results offer a truly unique, highly precise way for a given employer to use a test. “It’s as silly as claiming that, just because you flipped a coin to ‘heads’ two times in a row, it’s guaranteed to do so every time it’s flipped in the future!” says Jones. Small numbers often produce results unlikely to work in the long run.
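The need for cross-validation can be shown in a few lines. The sketch below (hypothetical data, not any vendor’s actual method) builds an “empirically keyed” score from pure noise: item weights fitted to one small group look highly ‘valid’ in that same group, then collapse toward zero in a fresh holdout sample – the coin-flip problem Jones describes.

```python
import random
random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def keyed_validity(n_people, n_items=50):
    """Build an 'empirically keyed' test score from pure noise, then see
    whether the apparent validity survives in a fresh holdout group."""
    def sample(n):
        items = [[random.random() for _ in range(n_items)] for _ in range(n)]
        perf = [random.random() for _ in range(n)]  # performance is noise too
        return items, perf

    train_items, train_perf = sample(n_people)
    # Weight each item by its (purely chance) correlation with performance
    weights = [corr([person[i] for person in train_items], train_perf)
               for i in range(n_items)]

    def score(items):
        return [sum(w * x for w, x in zip(weights, person)) for person in items]

    hold_items, hold_perf = sample(n_people)
    return (corr(score(train_items), train_perf),
            corr(score(hold_items), hold_perf))

in_sample, cross_validated = keyed_validity(30)
# in_sample looks impressive; cross_validated collapses toward zero
```

A vendor who skips the holdout step is, in effect, reporting only the first number – which is why the cross-validation questions below matter.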

Questions to help avoid these ‘khazam’ claims include:

  • If you create a customized scoring process for us, how many people will you need in the study?
  • Once you come up with a customized way for us to score your test, will you use another set of data to ‘cross-validate’ the process?
  • Can we begin using your test defensibly before you complete this cross-validation process?

No. 6 – “Trust us, this test is totally legal”

Unfortunately, there is no such thing as a “lawful” test. Jones explains that “whether a test is lawful or not is totally dependent on how it is used. A test may be perfectly lawful when used in a given way, and totally unlawful when used in another.” “But test vendors are not lawyers. Testing and assessment tools usually are created by psychologists, and then marketed by sales staff. Users generally do not understand the somewhat esoteric legal requirements applicable to testing. And that’s where the real trouble starts,” says Ross.

Jones notes, for example, that some vendors will make bald representations that their test or assessment does not cause adverse impact on any legally protected group. “That might be true, but whether a test will cause adverse impact depends on the composition of the particular candidate population being tested.”

It also depends on the lawful application of the test. “We actually encountered one vendor that engages in blatant reverse discrimination by eliminating the adverse impact on African Americans. How? It (unlawfully) uses a black/white ‘fudge factor’ to alter test scores,” says Ross.

Another test vendor claims to perform adverse impact analyses based on “ethnic background.” But the test’s technical report only analyzes the data as “White v. Nonwhite” or “White v. Minority.” It provides no data breakdown by individual race or ethnicity (e.g., African-Americans, Hispanics, Asians). So there is no way to determine from the report whether the test has an adverse impact on any legally protected ethnic group.
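The per-group breakdown that such reports omit is straightforward to compute. Under the EEOC Uniform Guidelines’ four-fifths rule of thumb, a group whose selection rate falls below 80% of the highest group’s rate shows evidence of adverse impact. The sketch below (with invented candidate counts) shows how a pooled ‘White v. Nonwhite’ comparison can mask impact on a single group.

```python
def impact_ratios(passed, tested):
    """Each group's selection rate divided by the highest group's rate.
    Under the four-fifths rule of thumb, a ratio below 0.80 is evidence
    of adverse impact on that group."""
    rates = {g: passed[g] / tested[g] for g in tested}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

# Invented counts for illustration only
tested = {"White": 200, "African-American": 100, "Hispanic": 100, "Asian": 100}
passed = {"White": 100, "African-American": 40,  "Hispanic": 70,  "Asian": 90}

ratios = impact_ratios(passed, tested)
# Pooled 'White v. Nonwhite' shows a HIGHER nonwhite rate (0.67 vs. 0.50)...
pooled_nonwhite = (40 + 70 + 90) / 300
# ...while the per-group breakdown flags African-American (and Hispanic)
# candidates relative to the highest-scoring group.
```

In these invented numbers, the pooled comparison suggests no problem at all, yet two individual groups fall well below the four-fifths threshold – precisely the gap a “White v. Nonwhite” report cannot reveal.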

“Another testing expert we encountered advised an employer that it was lawful to continue using a vendor’s test as long as it caused no adverse impact, even though the test had no validity. It’s hard to imagine professional advice that is as truthful, but dysfunctional, as that,” Ross states.

Some test vendors are more creative. One vendor of a so-called personality test took liberties with the test validation data to develop marketing materials which blatantly misrepresented the personality traits measured by the test. But if you think that’s bad, the same vendor’s marketing materials also represented, without any basis, that the test would improve tax and regulatory compliance. “You just can’t make up this kind of scam,” says Ross.

“Or consider a vendor who claimed validation studies were being done to bolster the tool’s limited validation evidence in its initial tryout… and then continued to market the test for over ten years without ever completing any of the additional validation studies,” adds Ross.

And some test vendors claim a test is fully lawful, but their so-called validity report and/or sales agreement fine print requires the employer who purchases the test to develop its own validity analyses to confirm the claim.

Employers can avoid many of these problems by asking test vendors the following types of questions:

  • Based on your past work, will you document your claim that it would be lawful for us to use your test for the jobs we have identified?
  • Can you provide an employment law professional who will support your claim?

The Solution – Ask Questions

“Done well, employment testing and assessment can produce great payoff. And many vendors are high-value professionals who would never consider using ‘sham, scam, and khazam’ strategies to sell their tests and assessment tools,” says Jones. “Too often, though, we find an employer misled by claims of research evidence that fall outside their area of expertise. That’s why we suggest putting the salesperson to the test,” Jones adds.

In working with HR leaders and employment counsel, Jones and Ross offer ‘thinking points’ to help ensure an employer’s program increases payoff in hiring decisions, and avoids opening the door to legal challenge. Some of the points:

  • Exactly what will your organization do if we face a legal challenge for using your test?
  • If an advisor says ‘I picked three stocks last year that grew; I’ll really grow your investments, too,’ wouldn’t you ask how many he picked that did not grow?
  • If your physician said ‘Trust me, this drug works; there’s no need for me to examine you,’ would you grab the prescription and head to the pharmacy?
  • If your employer regularly evaluates the quality and consistency of its products and services, shouldn’t you review your employment testing tools to see how they actually work, too?
  • If your employer’s finance department carefully monitors return on its capital investments, shouldn’t you carefully monitor the value of your employment test investments?

“The principle is simple – ask questions, don’t settle for generalities, and find vendors who focus on tracking and continually improving how their product works for you,” says Ross. “As pointed out in Million-Dollar Hire, how you select people needs the same accuracy, monitoring, and continuous improvement you use in growing your organization’s other high-value assets,” Jones adds. Making ‘people decisions’ based on a vendor’s promises of ‘trust me, this will work’ can produce some of the biggest losses you’ll ever see.

About HirePayoff™

David Jones, PhD, is author of Million-Dollar Hire: Build Your Bottom Line, One Employee at a Time. In Million-Dollar Hire he explains how, even in a slow economy, U.S. employers make millions of hiring decisions every month… and in today’s demanding economy, companies no longer have room to get it wrong. With practical, real world illustrations, Jones uses the book to show there are tools to help treat every hiring decision with the same focus a business applies in acquiring other high-value assets.

Jones founded and served as CEO of HRStrategies, a firm listed among the fastest growing HR consulting and employment process outsourcing companies in the U.S. prior to acquisition by today’s Aon-Hewitt. There he designed and implemented employment testing programs for a majority of the Fortune 100, as well as startups, expansions, and public sector agencies.

Today, he is president of Growth Ventures Inc., whose HirePayoff™ division works with employers to tap candidates’ ‘Can Do and Will Do’ competencies. HirePayoff™ helps companies re-invent their online recruiting and hiring processes. The target – find new hires with employee engagement, strong job performance, solid retention, and bottom-line drive. Unlike the examples described in this release, the HirePayoff™ online recruitment and selection tools are supported by sound test validation, and are uniquely driven by Six Sigma continuous improvement. For more information, visit

About Seyfarth Shaw LLP

Jeffrey Ross is a partner at Seyfarth Shaw LLP, a law firm with one of the largest and most respected employment law practices in the United States. Ross represents major employers in complex workplace litigation concerning employment discrimination and other employment matters. He is the Co-Chair of Seyfarth’s Hiring, Testing and Selection Best Practices Team and has extensive experience with the EEOC Uniform Guidelines on Employee Selection Procedures. Ross also contributed to the Million-Dollar Hire chapter on the legal challenges faced in candidate screening and assessment.

Founded in 1945, Seyfarth Shaw was among the earliest exclusive practitioners of what has become labor and employment law. From that start, the firm has been an innovator in the field. Today, Seyfarth has more than 500 workplace lawyers who represent a broad spectrum of employers in virtually every aspect of employment law. For more information, visit

Media Contacts:

Dr. David Jones
President & CEO
HirePayoff™

Brian E. Kiefer
EVP, Business Development
HirePayoff™

Director of Public Relations
Seyfarth Shaw LLP