TO: Those concerned about Validations, Deviations and Discrimination regarding THE PEAC SYSTEM® evaluation.

FROM: George W. Tucker, MS.  THE PEAC SYSTEM®

RE: Initial Studies in Development of THE PEAC SYSTEM®

In the effort to provide accurate and consistent information to those whose interest encompasses the EEOC requirements regarding discrimination factors, as well as our deviation, reliability and validation studies, the following pages are provided. They are copies of the final chapters of our PEAC SYSTEM® Reader's Manual, which, in turn, is a compilation of each stage of the ongoing study notes, Author's comments and suggestions. This manual is the foundation of the information a Reader provides to his or her clients when consulting on selection, training and management issues. It is lengthy. To satisfy due diligence for EEOC requirements, print it and put it in the file for The PEAC SYSTEM, INC. Of course, you might want to read it along the way...

Although the PEAC SYSTEM® Reader's Manual is, for the most part, proprietary, these few chapters are public knowledge and are included here for your benefit. You might feel more comfortable printing this extensive page for the PEAC SYSTEM file.


A reprint with permission

Discrimination, Validations, 
Deviations & Reliability


As in any hiring assistance evaluation, there are other facets to consider. The factors of age, sex, color, race, disability and other potential grounds for discrimination need to be addressed. Obviously, in the case of the use of the PEAC SYSTEM®, where a manager or other hiring authority has met the individual, total objective screening is a moot point. However, one intent of this profile instrument is to enable large personnel departments to rapidly screen by personality and communication factors prior to the interview, a much more efficient system for volume organizations. Though the EEOC would remind us that 'anyone can be trained to do anything,' we posit that the sensible owner/manager will best benefit from knowing just what that training and management might entail, and then be able to select the individual most likely to require the least amount of work on his or her part. Of course, any test or evaluation process is barely 20 to 25% of the selection process. Other considerations remain: background, intelligence, education, interview skills and the like.

Adverse Impact

The most important aspect in any testing service is that of 'Adverse Impact.' This relates to the 'ability' of a test to screen out minorities of any category, essentially having an adverse impact on a particular protected group. This may not even be intentional, and the development of any evaluation system must make certain that this factor does not creep in to the point that it can, indeed, have that impact. This is a very definite flag to catch the eye of the EEOC, and can put you or your company in line for lawsuits or fines, depending on the nature of the offense. However, we, at The PEAC SYSTEM®, have gone to great lengths to build a blind evaluation process, in that the testing is done on individuals we have never met, and only the construction of the individual traits could possibly have that impact, had we allowed it. In our validations, we were interested, first, in establishing a cohesive representation as an average, across all groups, protected or not, and then building profile indicators around that average. Our studies failed to set apart any group, in any of the traits, regardless of race, gender or other factors. What we were after, we have defined: certain communication styles are easier to train, in particular positions and industries, simply because those profiles have certain instincts upon which to develop. Those that do not have the requisite instincts can still 'do the job,' for the most part, but only after much more intense training and, perhaps, a longer learning curve. Simply put, a manager will often lean toward the individual who is most likely to be a fast learner, productive and easier to manage, regardless of race, gender, religion, and so on.


Of course, there are other facets. For example, any evaluation that works off 'benchmarking,' a way to assess your best people as a standard to hire more just like them, and fewer like the others, is technically illegal. In fact, any evaluation provider that suggests you use their profile system for such a test is setting you up for conflict with the EEOC. Though it sounds as if it would make sense, especially in larger companies where, perhaps, three or four hundred individuals might be available, it has a built-in failing. It is, first of all, far too small a group to provide accuracy. Even The PEAC SYSTEM® faced that hurdle in the first two thousand profiles, and we intentionally provided for feedback and a dynamic database, one that adjusted itself as we learned more and more in our developmental stage and, later, in the marketplace. There are now well over 270,000 (circa 2004) units, and the dynamics change very little, as no single individual, or even a small group, can impact that many units. In other words, when we indicate that a profile has a particular ability to learn and be productive in a specific environment, we have begun to come very close, indeed. In fact, our Peacscore© has become so accurate that we are averaging well within 2 points as an indicator of success, even though that was not the original intention of that measurement. More on this, later.


Working closely with Kenneth Stein, PhD., then head of the Psychology Department at the University of California, Berkeley, and Dr. K. Oates, another prominent leader within that department, careful consideration was given to determining the words most likely to ascertain a particular 'construct,' or personality trait, to fit the majority of the 'professional population.' Since the focus of the PEAC SYSTEM® was toward professional-level screening and evaluation, the minimum word selection was based on 1978 high school graduate word familiarity, as determined with the assistance of the University and its ongoing (at the time) studies of competence levels among high school graduates across the US.

The dangers inherent in such a program are twofold: 1) the subject may be less 'educated' in a climate where a 'pass at all costs' philosophy prevails, and 2) many technically oriented people are very poor in English word usage. Their vocabulary may be more rudimentary than the standard word association evaluation would allow.

To address these issues, considerable research was done along the lines of the theoretical works of William Marston, Ph.D., and into the practical work done by others at Columbia University in the 1930's. A significant selection of other word association tests (nine stable, current instruments) was analyzed to determine the most 'standardized' words in use over the past 15 years.

Study 1- word recognition

To avoid the probability of a random deviation detracting from our work (author's 1990 note: best example, Nisbett, et al, 1987), we used as large a sample in every category as could reasonably be managed, using University Seniors and Post Graduate Students as field researchers to gather and handle information. Subsequently, a population drawn from four regional high schools (graduating seniors) and three regional colleges (entry-level freshmen) showed that, with very few exceptions (see below), all of these words were accepted and understood by all of these people (n = 2631). One hundred senior Psychology Students administered this simple comprehension test to 20-21 people each.

It is apparent, then, that, as nearly as possible, the entries on the PEAC SYSTEM® are as easily accepted by any high school graduate as they are by any post graduate collegian. In other words, to the best of our ability, we find that the same words mean the same things to a great majority of the population above high school graduation. However, it is worth noting that 15 subjects (0.57% of our study group) had trouble with the word 'dissenting.' That represents about 1 in 175 people, essentially an insignificant amount. Ongoing studies will assess the relevancy of this word to the general population and, if necessary, it may be replaced. So, in the interests of the intentions of The PEAC SYSTEM®, our choice of words for the evaluation keeps us most useful in the range above high school graduation. Our target is primarily the professional arena, away from blue collar and labor industries, and the acceptance of the word selection makes this more appropriate.


Significantly, the word association test in its standard checklist or forced-choice form, usually yes-or-no, true-or-false items, created the largest amount of deviation in test-retest situations. Several outside studies and opinions of checklist word association evaluations, on file with the Library of Congress, address this as an issue that detracts from the accuracy of such evaluations.

Therefore, the PEAC SYSTEM® has replaced the weaker response forms with a five-level spectrum that provides very accurate readings and additionally provides a scale for consistency to determine when the individual is attempting to skew the results. Providing both avenues for measurement of accuracy has brought our reliability deviation to less than 2% in test-retest situations, and the specifics are covered in the next few pages. We have further limited 'mood orientation' through the most effective word distribution (physical order), preventing 'construct momentum,' where our studies showed that a series of words within a similar trait can influence its outcome.


This is not an evaluation to help determine any form of physical, racial, or faith variations between tested subjects. In our second study, general population (n=1475), where the established word association layout and word choice were very much finalized, there was nothing found in the answers, the results or the Readings to identify anyone of any color, race or religion, out of any group. For the most part, the PEAC SYSTEM® is anticipated to be run on site by remote managers of client companies, and neither the system operators, the Readers nor their agents will be in contact with the evaluatees, so without any other input, no race, color or religious affiliations will be in play at all.

The only area of question resides in the use of an English language evaluation on those who do not speak English, or whose English is limited. These things are, of course, out of our control. If an evaluatee must guess at the meanings, one can expect a serious magnitude of error in the results. And our Consistency Factors will be quite low, a sure sign that other methods of screening are more appropriate.


As of this writing, there are many characteristics of handicap that can affect the PEAC SYSTEM® evaluation, from simply a difficulty in arriving at the testing site, to an inability to read or write, and so on. As time and technology allow, these issues are being addressed. In the case of an inability to see, or to physically write, an evaluation may be filled out by a proctor for the individual based on verbal or some other form of discussion. With few or no exceptions, there is some available method to assess an individual, if the participants are agreeable and willing to be creative. (Note added 1999: The PEAC SYSTEM is available online for those who find it inconvenient or impossible to come to a testing location.)


The major factor for evaluation purposes is the intensity of the personality and how it relates to the 'suggested' profile range that the particular position requires. If one were to dig heavily into this area, one might discover that certain races or genders had differing levels of intensity, as a group, but the secondary analysis, limited to study 2 (n=1,475), found that for every (percentage-based) race/color minority of any particular profile type, there was a corresponding percentage in the Caucasian profiles. The same held true of female and male studies in that group. Although of limited scope, no determination of any physical characteristic of the individual, save energy levels (an observed physical trait, when found to be at the high, or 'visible,' end of the scale), was identifiable. As time and exposure have allowed, no significant study by any group or association has been able to convince us that intensity has anything to do with any discrimination factors. Frankly, a study that did not include every man, woman and child around the world would, by its very nature, be suspect! Interesting database potential, though....


The most argued and unresolved area of personality evaluation or communication style identification is that of word association tests. The layman, or the professional with an often hidden agenda, insists that a test is accurate and reliable only when, in test-retest situations, the results match precisely. In the world of computers (which is growing in technological leaps and bounds) this is possible, but in the real world, it is more difficult. People are not that consistent. Most, if not all, word association evaluations, and certainly most true-false scenario evaluations ("If you were at a party, and...."), require either a check or no check, i.e., responses that are exact opposites of one another, and these very extremes allow the highly changeable mood orientation to influence the outcome.

Though psychologists and psychiatrists can use these varying profiles to determine an overall trend in the response to therapy or treatment, they are of little use to the layman who intends to hire and train an individual based, at least in part, on the profile. And it is so important to understand the inner self of the subject, for we feel that to be successful in the endeavor, one must try to tailor the manager's style as closely as possible to the communication needs of the trainee, thus ensuring better understanding, both ways, a cornerstone of good productivity and growth.

In an instrument with as few as 100 choices, limited to yes/no, check/no check, or even true/false, changing as few as 25% of the answers creates a skew in the results that renders the evaluation irrelevant. This is because the individual changes all too often represent 'all or nothing,' a serious deviation, indeed.

Our response to the above difficulties was to develop a word association evaluation that went beyond the 'gut reaction' to a word, and required a degree of thought. This was brought about by introducing five levels for each word and a strong consistency measure for identifying the attempt, intentional or subconscious, to manipulate the results.

The PEAC SYSTEM® was developed with the theory that someone might be a level '5' today and a '4' tomorrow, but that they should never be a '1,' in total opposition to the initial response. Our studies have shown this to hold true: intensity may vary in particular responses, and this will be reflected in the intensity of the construct, creating wider or narrower profiles according to the sum of the changes. Yet the profile pattern, for all intents and purposes, will remain basically the same. And our Consistency Factors help to indicate these minor changes and adjust accordingly.

Study 2- reliability study

Correlational vs. experimental research. Most empirical research belongs clearly to one of these two general categories. In correlational research we do not (or at least try not to) influence any variables, but only measure them and look for relations (correlations) between some set of variables, such as blood pressure and cholesterol level. These are dependent variables. In experimental research, we manipulate some variables (independent variables) and then measure the effects of this manipulation on other variables; for example, a researcher might artificially increase blood pressure and then record cholesterol level. Data analysis in experimental research also comes down to calculating "correlations" between variables, specifically those manipulated and those affected by the manipulation. However, experimental data may potentially provide qualitatively better information: only experimental data can conclusively demonstrate causal relations between variables. (Author's note 2001: StatSoft Inc, 1984)

Our initial reliability test was carried out on 1,475 relatively new hires (first-year people; fifty Psych Post Graduate Students administering the evaluation to twenty people each) in small to mid-size (under a hundred people) companies that surrounded the University of California, Berkeley. Reliability measures the ability of the evaluation to replicate its results over time. In other words, can it be trusted to represent what it purports to measure? In this study, we were determining, first, whether we could accurately measure an individual's basic personality and/or underlying communication style. Initially, using the Split Half process, our reliability coefficient was over .91. Then, leaving those results aside, we moved to the Test-Retest system. Our retest was conducted at approximately sixty days (within 3 days), and none of the subjects had changed companies. Of these, however, 175 had been promoted at least one level. The results, as expected, indicated a wide variance in Stress profiles, or facades. In fact, documenting them in any recognizable configuration against the Basic Profile soon proved overwhelming. There simply was no way to draw conclusions, due to the extreme variance of facade profiles. This point is critical in selection, hiring and training. Facades are changeable by environment, by responsibility and by relationships. And they change to unpredictable Styles upon the need of the moment.
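The PEAC SYSTEM's exact item groupings and arithmetic are proprietary; for readers unfamiliar with the Split Half process, a conventional version of the calculation, on purely hypothetical data, looks like this:

```python
# Conventional split-half reliability sketch (hypothetical data; the
# PEAC SYSTEM's actual item groupings and computation are proprietary).
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

def split_half_reliability(scores):
    """Correlate odd-item totals with even-item totals across subjects,
    then apply the Spearman-Brown correction for full test length."""
    odd = [sum(s[0::2]) for s in scores]
    even = [sum(s[1::2]) for s in scores]
    r = pearson(odd, even)
    return (2 * r) / (1 + r)  # Spearman-Brown prophecy formula

# Each row: one subject's item responses on the 1-5 scale (hypothetical).
subjects = [
    [5, 4, 5, 5, 4, 4],
    [2, 1, 2, 2, 1, 1],
    [3, 3, 4, 3, 3, 4],
    [4, 5, 4, 4, 5, 5],
    [1, 2, 1, 1, 2, 2],
]
print(f"split-half reliability: {split_half_reliability(subjects):.2f}")
```

A coefficient near 1.0 means the two halves of the instrument rank subjects almost identically, which is what a figure such as .91 on the full study population indicates.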

More important, however, was that of the 1,475 initial tests, there were only 21 that showed a remarkable change from the first profile. That means that 1,454 were consistent (maintained very similar or the same basic patterns) in test-retest. But to go further, we discovered in interview that 19 of these 21 had treated the process as a 'joke,' simply spontaneously selecting any answer and caring little for the results. These were dismissed as invalid, irrelevant to the study and, therefore, unusable. The remaining two, however, were taken very seriously (the respondents professed) and were, indeed, different, especially in intensity. Retaining these two, our reliability coefficient was slightly over .98.

Now, keep in mind, we had only the respondents' word that they had taken this carefully. But, both had low Decision Indices (emotional, as opposed to logical), and in our follow-up interviews, it became apparent that both of these answered in the 'mood of the day,' as opposed to thoughtful logic. Further data mining produced this revelation: The most likely candidate for variance, even small amounts, was from the individual most often in the emotional side of the Decisions category.

This may represent an ongoing, unavoidable problem in the emotional category, but it also represents a statistical error of only 0.14% (2 of 1,475) of the test group. Still, because we thought there might be something of value in investigating it further, we did just that. Back to empirical research: could we change something and affect the outcome?

Study 3- The emotional index study

In the interest of determining the variance in the highly emotional sector of the group, interviews and further studies were conducted on 301 'emotional' people, drawn from our now sizeable database (n=4,106), to determine the nature of what we termed emotional variance, and how best to control it. Without exception, wide variances were produced under differing administration instructions, either as stated (three-part, from simplistic to convoluted) or as understood by the subjects (written paraphrase); when these were optimized, test-retest variance across the board dropped to insignificance. This finding produced the proper administration procedures that are included elsewhere within this manual and forwarded to all clients of the PEAC SYSTEM® evaluation.

Study 4- ERL study

The next level of variance, though not as significant in the early studies, regarded our concept of Energy Reserve Level. (Def: that energy one draws upon to change the profile as needed, or to handle crisis. Invisible below '55,' highly visible at '60.') Although the mean variance was a mere 1.3 points, on a scale of 28 to 62 (lowest to highest found to date), essentially insignificant, there were those who were more noticeable.

Contrary to our original theory, energy levels differed recognizably (more than five points) in test-retest situations in 271 subjects, even with essentially the same identified profile. In our subsequent interviews conducted after each testing phase, all but one of the 271 people had something new (a change) going on in their lives, or with their physical well-being. Those who were high (in relation to their second evaluation) the first time, and much lower the second, were either overworked, stressed or even sick at the second testing phase, while the converse was true for those going from low to high. Therefore, the Energy Reserve Level is not a standardized measure of what the individual is capable of, but an indicator of the energy state at the time of the evaluation. Conclusions drawn upon the immediate assessment of ERL, then, must allow for variance in differing situations.

This led to further interviews with additional people with low Energy Reserve Levels who did not change in the retest study, to see if a statistical pattern or prediction could be made. Even when initial interviews had indicated a potentially higher Energy Reserve Level, while the test provided low levels both times, it was only with deeper probing, and the promise of the utmost confidentiality, that we identified family, career or company problems that contributed to a lower than expected ERL.

Though this may be subjective in nature, it stood the test of numbers and time: a full 78% of that group (unexpectedly low ERL) either 1) left their relationships or 2) left their jobs within six months, while 73% of the group whose mid to high ERL held constant over the six months remained in their current employment. This deserves further study, which will be conducted as time and resources permit. Though not the intent of the initial development of the PEAC SYSTEM®, the author sees a value here yet to be explored.

This revelation helped to develop the next important construct, the Current Fit (CFIT, herein). This was found to be an estimate of the stresses between the basic self and the apparent facade in employment; factoring in the ERL, a low enough CFIT indicates an over-stressed situation, one that will most likely end in termination of employment, whether by firing or resignation. Ongoing studies are currently applying this to the relationship side of the PEAC SYSTEM® for subsequent determination of interrelationship stresses.

So, then, back to the ERL. This was not necessarily a predictor of productivity or success, but a more telling indicator, giving the PEAC SYSTEM® Administrator or client more reason to dig into references, or other proof of whatever the individual evaluatee had promised or stated. So, though subjective, it still could provide positive value to our future customer. Keep in mind that a significant number, nearly a third of the subjects drawn from the database, held their low ERL through the multiple testing, and subsequent re-testing and interviews determined that they were, by communication style standards, at the lower end of the scale. For our general population (now n=3,631) average of '45' to be valid, there had to be, and it was so determined that there was, a like number of people at the higher end of the scale who held constant (range 28 to 62, mean=45.1).

Study 5- Emotional Index in Relation to Interests

In the next study, wherein the PEAC SYSTEM® evaluation was very much locked in as to format and content, we chose, at random, two study groups, one in the Humanities arena and one in the Technical arena, both from the Freshman class at the University of California, Berkeley (n = 700). Each was evaluated in a group session (average 38 to a class) and then individually and extensively interviewed in a Reading. Administration of the evaluation was by written instructions, as standardized and as positive as we could make them in an educational environment. The following significant factors were drawn from this study:

a. Emotional indices indicating 'feeling orientation' significantly affected the intensity of the individual traits, and with it, the ERL, as expected.

b. 'Fact Oriented,' or logical individuals tended to make their selections within the range of 2 to 4, inclusive, on the response sheets, while the emotional group tended to run much wider, from 1 to 5 inclusive.

c. The Humanities group held nearly twice as many Emotional people as the Technical group; by inversion, of course, Logical people in the Technical group outweighed those in the Humanities by nearly two to one. Interestingly, those in the middle third (neither Logical nor Emotional) were distributed almost evenly (46% H to 54% T) between groups.

d. The actual error count discovered in 700 valid evaluated subjects:

                     1. Absolutely wrong: 0

                     2. Somewhat wrong: 30

                     3. Significantly wrong: 21

Of the third group, above, all misunderstood the purpose of the testing stage (a teacher trust problem?), or misunderstood the instructions, and, either way, 'loaded the answers.' Consistency factors eliminated them, as all had consistency factors well below 60%, an unacceptable range. See Dominant vs Yielding, below. Subsequent interviews further supported these dismissals as unacceptable evaluations for the purposes of this study.

Dominant vs Yielding

On a scale of 1 to 5, with 5 being the most likely descriptor, these two words would be 100% in consistency at 1 and 5, or 5 and 1, i.e., opposite one another. If either was chosen as 5 and the other a 2, approximately 75% consistency would result, i.e., not quite opposite. If one was chosen a 5 and the other a 3, approximately 50% consistency would result. And so on. Spread this calculation over all the possible computations in The PEAC SYSTEM® and one can see that lack of knowledge (English language, communication, comprehension), a hidden agenda (attempt to manipulate), or poor attitude (dismissing the evaluation) will quickly be revealed in poor consistency factors in the resulting report.
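The rule just described can be sketched as follows. The exact PEAC SYSTEM formula is not published; this function simply reproduces the worked examples above (5 and 1 yield 100%, 5 and 2 yield 75%, 5 and 3 yield 50%), and the averaging step is an assumption for illustration.

```python
# Sketch of the antonym-pair consistency rule described above (the
# exact PEAC SYSTEM formula is not published; this reproduces the
# worked examples: 5/1 -> 100%, 5/2 -> 75%, 5/3 -> 50%).

def pair_consistency(a, b):
    """Consistency (%) of one antonym pair rated on a 1-5 scale.
    Perfect opposites sum to 6; each point of drift costs 25 points."""
    deviation = abs((a + b) - 6)
    return max(0, 100 - 25 * deviation)

def overall_consistency(pairs):
    """Average consistency over all scored antonym pairs (assumed)."""
    return sum(pair_consistency(a, b) for a, b in pairs) / len(pairs)

# Dominant vs Yielding and other hypothetical antonym pairs.
print(pair_consistency(5, 1))  # exact opposites  -> 100
print(pair_consistency(5, 2))  # nearly opposite  -> 75
print(pair_consistency(5, 3))  # mid-range drift  -> 50
print(overall_consistency([(5, 1), (4, 2), (5, 3)]))
```

Averaged over every antonym pair in an instrument, consistently near-random or manipulated answers pull this figure down quickly, which is how low consistency factors flag a hidden agenda, language difficulty or poor attitude.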

Of the second group, the somewhat wrong, intensity variations seemed to be the main theme. However, 23 of the 30 thought there was more to it than what we said in administration, and were further led astray by information provided by their peers before our arrival. Consistency factors were low, but acceptable (above 70%). All this indicated, though, was that they were intelligent enough to match their skewed answers; the evaluation was still 'played with,' creating an inaccurate result. After two separately conducted interviews with different, uninformed reviewers (no knowledge of the previous interpretations), these 23 were dismissed from the study. Of the remaining seven, subsequent interviews could not determine the reason for the variances between what they expected, or claimed to be, and what the evaluation stated. All but one had low Decision Indices (highly emotional), and the remaining one was mid-level. Therefore, our statistical variation, or factor of deviation, for the profile assessment was 1% (7 out of 700) in this general population.


Armed with this information, under the auspices of a closely held consulting group, and working directly under the supervision of Drs. Stein and Oates of the Psychology Department at the University of California, Berkeley, the PEAC SYSTEM® began a focus on job specifics against communication styles: essentially, determining the so-called 'ideal' profile in a particular position, within a particular industry. In some contrast to the standard psychological practice of establishing a theory and then setting about proving or disproving it, a long and complicated process that need not be either one (author's opinion), we elected to test as many people (within specific positions in their industries) as possible in the final year of the evaluation development, and to run our dynamic database model against the data collected. This model is, of course, proprietary, developed by the author.

The three areas we chose for development were, in chronological order of introduction to the process: 1) personnel services, 2) sales, product and services, and 3) support (secretaries, bookkeepers, programmers, and so on).


1. The personnel arena: primarily search and placement, employment agencies and temporary firms, in and around San Francisco, Cal. This arena, from the Author's direct observation and participation, assisted by statistics from the National Association of Personnel Services (NAPS; the name reflects current, i.e. 1997, usage; it was formerly NAPC), suffers an inordinate level of turnover, where fully 95% of all hires leave the industry within three years, and an average of 80% of those in the first year. This three-phase study was conducted only on first- and second-year people (n = 3617) who had either attempted a career at one of these firms, or were still onboard. Secondary considerations were productivity levels as compared to the others at their (sometimes previous) firm, as determined and stated by their (previous) company's owner or manager. Where possible and available, records, under the strictest of confidentiality agreements, were studied and cross-matched to verify statements, in an effort to eliminate manager subjectivity.

2. The sales arena. Though we eventually had to divide this into two distinct groups (n = 4137) of people, 'retail' and 'other,' sales covered in-store, outside, and the more subjective Real Estate and Insurance sales. The division became necessary as we developed distinct successful profile patterns, some for reactive sales, where the customer comes to or calls into the salesperson, and others for more pro-active, pursuit sales. As in the previous category, secondary considerations were productivity levels as compared to others at their (sometimes previous) firm, as determined and stated by their (previous) company's owner or manager, and verified against records, where possible and available, under the strictest of confidentiality agreements, in an effort to eliminate manager subjectivity.

3. The Support area concentrated on anyone who might be considered more 'behind the scenes,' i.e., their focus was less on people, and more on systems and functions. The largest group, by far, for our validation projects, this was administered and analyzed by a near army of Psychology Graduate Students, drawn from UC, Berkeley, and UC, San Francisco, spanning the Bay Area of California (n = 8000). Again, tenure and productivity (in this case, workload and/or functionality) were the basis of study. The Author believes that much of this study went on to help produce The Berkeley Test, still a standard for general evaluation studies (author's opinion), though the actual numbers they elected to use are their own information and we do not have them here. Much of the results, however, well prior to The Berkeley Test, were folded back into the PEAC SYSTEM, as of 1982.

Consistently, for all three areas, we developed a bell curve for three levels of evaluation results. We applied database models to filter for Tenure (at least a year, which signified at least some degree of success), Productivity within their respective offices (as determined by their manager in a twelve-point proprietary assessment provided by the PEAC SYSTEM®), and the combination of both as a unit category.
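As a concrete sketch of that filtering step, assuming a simple record layout (the actual PEAC database model is proprietary; the field names, cutoffs and records below are hypothetical):

```python
# Hypothetical sketch of the three filters described above; field
# names, thresholds and records are illustrative, not the PEAC model.

records = [
    # (subject_id, tenure_months, productivity)  productivity: the
    # manager's twelve-point assessment, scored here as 1-12.
    ("A01", 26, 11),
    ("A02", 8, 9),
    ("A03", 14, 4),
    ("A04", 31, 10),
    ("A05", 5, 3),
]

# Filter 1: tenure of at least a year (some degree of success).
tenured = [r for r in records if r[1] >= 12]

# Filter 2: productive within their office (assumed cutoff of 8/12).
productive = [r for r in records if r[2] >= 8]

# Filter 3: both criteria together, as a unit category.
both = [r for r in records if r[1] >= 12 and r[2] >= 8]

print([r[0] for r in tenured])     # ['A01', 'A03', 'A04']
print([r[0] for r in productive])  # ['A01', 'A02', 'A04']
print([r[0] for r in both])        # ['A01', 'A04']
```

Each filtered subset then supplies the evaluation results from which a distribution curve for that category can be built.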

In 1985, after approximately two-plus years of market study and semi-commercial introduction, the Author sought out a standardized arena to answer the Equal Employment Opportunity Commission's (EEOC) requirement that the test provide no screening on any discriminatory factor, such as those listed in the Federal Employment guidelines and the Fair Credit Reporting Act of 1967. Part of that EEOC requirement was that the test be validated to show it could predict outcomes for a given body of similar employee positions under standard training and management practices. The EEOC indicated a requirement for a large body, preferably over 200 people, to 'test the test.'

In 1986, the Author accepted a contract with ROMAC® ASSOCIATES, of Portland, Maine, a medium-sized search and placement firm, to help standardize the training and to provide management consulting with a twofold agenda: one, they would benefit from the selection process, if it performed as expected; and two, their managers would receive some much-needed guidance in hiring and retaining quality people. Their turnover for first-year salespeople stood at well over 76%. Not as bad as some search and placement firms, but still expensive and unproductive. They had twelve offices, all in the eastern half of the US (initial personnel, n = 220 people). Their then-current evaluation process, which held less than 20% (estimated) weight, was The Omnia, of Tampa, Fla. The Omnia is a checklist, check and no-check, word-association evaluation. Their secondary evaluation system was Predictive Index, from Arcadia, Cal., a very similar evaluation to The Omnia. All hiring decisions were influenced, at least in part, by one of these two checklists.

YEAR 1- Commercial Validation

In the first year, no decisions were made, in whole or in part, based on The PEAC SYSTEM® results; the managers still used their favorite evaluations of the past, The Omnia or Predictive Index. Each individual hire, however, was tested with The PEAC SYSTEM® and followed. In addition, all current employees were evaluated for the factors described earlier in this validation paper, looking for correlations with stress and work/family issues. All employees, from sales, staffing and recruiting, through management, franchise owners and all support people, were assessed equally. By the close of 1987, turnover for first-year people was still at 75.4%, essentially unchanged.

Secondary Benefits to the study:

1) In the use of Predictive Index and The Omnia, our initial theory that a checklist (check or no-check) evaluation would measure 'mood orientation' held true. The basic self measured by each of these closely matched The PEAC SYSTEM® facade, which is intended to measure how an individual feels he or she must change to fit into an environment or relationship. No manager or hiring authority was given access to this information, and thus the turnover remained unchanged.

2) Where an individual tested 'well' with The PEAC SYSTEM® but did not fare well with either of the other in-use evaluations and was not hired, we were able to track those (n = 67) who were picked up (hired) by other personnel firms in their respective areas. Of these, 78.3% remained employed for over a year and produced enough in sales to warrant keeping their desk. Although the information is not as detailed, due to the lack of relationship with ROMAC®'s competitors and therefore little or no communication with their management, there were 14 hires made by those competitors for which the information we had generated was verified by a third party.

YEAR 2- Commercial Validation

In the second full year, 1987, The PEAC SYSTEM® was adopted as approximately 20% of the selection process, replacing the other evaluations except among a small part of the management group who liked them out of habit and familiarity. Even then, those managers were 'required' by their Franchisor, ROMAC®, Incorporated, to use The PEAC SYSTEM® alongside their other choice. The PEAC SYSTEM® became a distinctly measured and controlled category in a proprietary evaluation form, in which no more than 20% weight was allowed. This still left the regularly established employment practices, such as interviewing, background references, intelligence and prior experience (resume), in control, with an assist from the testing side. Turnover dropped to 45% for first-year people, across the board, from support through sales. But the Author was still training managers and franchise owners toward consistency in management, and the full effects were not yet known. And, in this year, the company had grown to thirty-one offices.

YEAR 3- Commercial Validation

In the third and final year of the study, 1988, turnover dropped to 22.7% for first-year salespeople, and to less than 10% for support staff. The company hovered at or around 40 offices across the country. And by this time, the Author had developed the Peacscore©, an index that was proving accurate within 3 points for predicting the 'trainability' and tenure of each level of employee. (Author's note, 1994: with our working dynamic database constantly accepting new information on hires and terminations, productivity and performance, now well over 100,000 units, the Peacscore© has settled in at less than a 1.6-point variance from actual figures, based on hiring 100 people with the same profiles into a generically managed industry position; i.e., a 70 Peacscore© would indicate 70 out of 100 hires of that particular profile would be in place and reasonably productive at the end of a year. n = 100, S.D. = 2.1, r = Profile Type.)
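The Peacscore© interpretation in the note above (a score of 70 forecasting roughly 70 of 100 same-profile hires retained and productive at one year, within a 1.6-point variance) amounts to simple arithmetic. The sketch below illustrates only that reading of the number; it is not the proprietary scoring model, and the function name is our own.

```python
def retention_forecast(peacscore, hires=100, variance=1.6):
    """Translate a Peacscore(c) into an expected one-year retention band.

    A score of 70 reads as: of 100 same-profile hires in a generically
    managed position, about 70 should remain in place and reasonably
    productive at the end of a year, within the stated variance.
    """
    expected = hires * peacscore / 100.0
    return expected - variance, expected + variance

low, high = retention_forecast(70)
# Expected one-year retention falls between roughly 68 and 72 hires.
```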

The Author's contract with ROMAC® ended in December, 1989, and the final figures had answered the EEOC requirements quite well. While we had not illegally (per the EEOC) ventured into any discrimination areas, we had proved that The PEAC SYSTEM® was a viable, accurate option for selection, and for guidance in managing and directing employees, especially when keeping the standard elements of hiring and screening in place as well. It replaced the 'guesswork' so many managers used, and allowed them to make higher-quality decisions. And we had done it in sales, recruiting and support areas, as required. (Total study 1987-1988: n = 568.)

(Author's note- June 12, 1991)

Each category that had developed its own bell curve (proprietary) was subsequently applied and reapplied to filter to the final three levels: productivity, tenure and a combination of both. As more input arrived from the marketplace, first after the PEAC SYSTEM® went to limited commercial use in 1983, then, subsequently, in a far stronger position after our ROMAC® validation studies ended in 1989, it has been shown that we can predict the potential 'trainability,' and therefore probable tenure, within these categories with high success, but with the following caveat: anyone can be trained to do anything, given enough time and money! But we have also shown that a job-style mismatch results in faster or higher turnover, and can be prevented, or at least muted, by tailoring training and management to the individual's communication style, but, above all, by starting with the best raw material.


(Author's Notes- June, 1994)

As a broad-brush statement, systems people like systems jobs, people-oriented individuals like sales and recruiting, and the like, with many variables in between. Our dynamic database still adjusts, but since the early 1990s the adjustments have been minimal. It is our intent to allow the dynamic construction to continue until there is little or no adjustment over any of our very highly rated validation databases.

(Author's note- January, 2000)

With nearly 200,000 evaluations in the database, outside of the initial studies, the dynamics of the 'ideals' and their subsequent less appropriate profiles, for productivity and tenure, have changed little. There are certain profiles that consistently hold the top positions in these bell curves, and those profiles that vary significantly from these 'ideals,' or top positions, require more attention and work from the trainer/manager. And we have had unparalleled success in helping managers make adjustments in training and management technique to accept and help those profiles less close to the 'ideals.' This, of course, directly impacts turnover, reducing it, in many instances, to less than 20% of first-year people in companies that have habitually suffered 80% or more. However, the dynamic processing continues, in the event we uncover a group or company that can influence the results so far established.


The final step, then, is for the manager to administer the evaluation properly, or, if it is assigned to someone else, to make certain they are trained to administer it properly. Eventually, this will be online, and the evaluation process may be even more critical. One could say that administration is critical to the PEAC SYSTEM®'s accuracy and success, and, by extension, to that of our manager clients. (Author's Note, 1999: it is now fully online.)

The PEAC SYSTEM® is designed to make the manager's life easier. Starting with people they select as most likely suited (or as close as available, taking into consideration the rest of the hiring variables: references, resumes, interviews, etc.) to the tasks at hand, be it people, systems, or something in between, it is invaluable at identifying strengths and weaknesses on day one, and giving the manager a heads-up on what he or she will have to do to make the new hire successful. Like a good close, however, it won't work if it isn't used, and it won't work well if it is used improperly.


Meanwhile, the PEAC SYSTEM® Reader is one of the most important tools in the process. Every evaluation is accompanied by a verbal explanation and suggestions for manager adjustment and advice, as a free service. It is incumbent upon the Reader to oversee the training of management and their appointees, to be certain we can stand behind the validity and the results of the evaluation process. Ongoing informational studies are being conducted in an effort to improve the PEAC SYSTEM® even further, if possible. Suggestions from managers, Readers and the like are appreciated and welcome.

George W. Tucker, MS




The principal traits, as in most evaluations, are Power (control), Extroversion (social), Analysis (information) and Conformance (structure). The constructs were first developed on a flat database, then reconfigured to overlay on a scale from 10 to 80. This provides the Norm line, or center of the population adjustment. All profiles are built around the Norm line.

Traits        Power      Extroversion   Analysis   Conformance

Basic Self
Mean          51.40      49.30          38.70      39.90
Spread        +/- 31.0   +/- 32.1       +/- 28.9   +/- 31.0
Cronbach a    0.93       0.92           0.91       0.94

Facade
Mean          51.25      49.35          38.90      39.95
Spread        +/- 29.9   +/- 31.6       +/- 29.9   +/- 30.1
Cronbach a    0.92       0.93           0.91       0.92

Legend: Mean = center of the scale (raw). Spread = interpreted range.

Although the facade, above, is not as critical in the direct communication-style survey, it provides important added information from which intelligent decisions can be made. As such, it was included in the above table. See text for more information. A Cronbach alpha was developed for each construct. Additional reliability was subsequently obtained in stages with split-half separation, using odd-even series, and with test-retest situations.
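For readers unfamiliar with the statistic, Cronbach's alpha measures internal consistency from the variances of the individual items and the variance of the total score. The sketch below uses the standard textbook formula, not any PEAC SYSTEM® internals, and the sample data is invented for illustration.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of items.

    item_scores: one list per item, each holding the same respondents'
    scores in the same order.
    """
    k = len(item_scores)
    # Total score per respondent (rows of the transposed matrix).
    totals = [sum(row) for row in zip(*item_scores)]
    # Standard formula: alpha = k/(k-1) * (1 - sum(item var) / total var).
    item_variance_sum = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_variance_sum / pvariance(totals))

# Three perfectly consistent items yield an alpha of 1.0; real
# instruments, like the constructs above, land somewhere below that.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Values in the low 0.90s, as reported in the tables here, indicate high internal consistency by this measure.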

The following table continues with those additional constructs important to The PEAC SYSTEM®'s theoretical and planned use. These include ERL (from invisible, low, to visible, high), Decisions (from Feeling, subjective, to Fact, objective) and Energy Type (a scale from type B to type A):

Traits        Energy Reserve Level   Decisions   Energy Type

Basic Self
Mean          47.90                  44.00       36.98
Spread        +/- 14.2               +/- 31.0    +/- 39.9
Cronbach a    0.91                   0.91        0.92