Importance: No single cognitive screen adequately captures all cognitive domains that are important for inpatient occupational therapy treatment planning.

Objective: To quantify the content validity of a novel 22-item cognitive screen, the Gaylord Occupational Therapy Cognitive (GOT–Cog) screen, developed to better inform inpatient occupational therapy treatment planning.

Design: Delphi-style expert panel review.

Setting: Long-term acute care hospital.

Participants: The first panel was attended by four occupational therapists, two speech-language pathologists, one physician assistant, and two neuropsychologists; the second, by four occupational therapists, one speech-language pathologist, and one physician assistant.

Intervention: Each Delphi panel discussed the relevance, essentiality, and clarity of each item. After each discussion, panelists completed a content validity survey to summarize their evaluation of each item.

Outcomes and Measures: On the basis of panelists’ survey responses, item- and scale-level relevance, essentiality, and clarity were quantified by calculating the respective content validity index (CVI), content validity ratio (CVR), and content clarity index (CCI). Universal agreement (UA) and κ statistics were also calculated, as appropriate.

Results: After the initial 23-item instrument covering 10 cognitive domains was presented to the first Delphi panel, several questions were added, removed, or rewritten, resulting in a 22-item instrument representing nine domains. After the second panel, several questions were again rewritten, and the domains were reorganized. All scale-level metrics improved, including the CVI (from 0.87 to 1.0), UA (from 0.52 to 1.0), CVR (from 0.43 to 0.94), and CCI (from 2.26 to 2.92).

Conclusions and Relevance: GOT–Cog displays overall excellent content validity and can proceed to construct validity testing.

Plain-Language Summary: By reporting on the content validity of the Gaylord Occupational Therapy Cognitive screen, this brief report begins the necessary process of evaluating the measure’s overall validity and reliability.

As part of any initial occupational therapy evaluation, cognitive and physical deficits, and how they relate to a patient's functional performance, are evaluated to establish a treatment plan. Occupational therapists accomplish this by evaluating a patient's functional cognition, the interaction of cognitive, self-care, and community living skills; that is, the thinking and processing skills needed to accomplish complex everyday activities, such as household tasks, financial and medication management, volunteer activities, driving, and work (American Occupational Therapy Association [AOTA], 2013; Faul et al., 2010; Okie, 2005; Wolf et al., 2019). Assessing functional cognition is necessary to identify cognitive impairments that may challenge a patient's ability to accomplish real-world tasks (AOTA, 2021). Occupational therapists use everyday activities in familiar contexts when evaluating patient cognition because such activities have a high potential to engage clients (AOTA, 2021). Assessment of functional cognition also assists in discharge planning by helping determine what functional level a patient may need to achieve before discharge, including performance of functional tasks such as activities of daily living (ADLs) and instrumental activities of daily living (IADLs; AOTA, 2021).

To evaluate patient cognition, several standardized cognitive screens have been developed, including the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) and the Saint Louis University Mental Status Examination (SLUMS; Tariq et al., 2006). Our department had previously used the MoCA, which is highly sensitive for detecting mild cognitive impairment and early Alzheimer's disease and which measures executive function along with multiple other cognitive domains (Nasreddine et al., 2005). However, the MoCA now requires a costly certification for use. The MoCA also lacks a functional cognition component: It includes no functional tasks relatable to a person's everyday activities, such as money management or sequencing of an ADL.

The SLUMS is used to identify people who have dementia or mild neurocognitive impairment, and its administration requires annual training by viewing a U.S. Department of Veterans Affairs–produced video available online (Tariq et al., 2006). The SLUMS lacks a functional component except for 1 item, which asks the patient to solve a grocery store arithmetic problem; however, that item relies on the patient's working memory rather than allowing the problem to be solved with pen and paper. Although both tests assess visuospatial abilities, executive function, verbal fluency, attention, orientation, immediate recall, and delayed recall, neither assesses these functions in relation to everyday activities in contexts familiar to patients. Additionally, both the SLUMS and the MoCA were created for the community-dwelling population, which may limit their utility in developing an appropriate inpatient treatment plan.

The objective of this work is to describe the developmental process used to create and quantify the content validity of a new cognitive screen, the Gaylord Occupational Therapy Cognitive (GOT–Cog) screen. On the basis of expert consensus, this instrument includes the cognitive domains determined to be important for screening the occupational therapy patient population treated in the inpatient setting, including long-term acute care hospitals (LTACHs). The purpose of this measure is to screen appropriate patients in areas of cognition as they relate to functional activities in ADLs and IADLs. The results of this screening tool are intended to guide the occupational therapy plan of care; identify areas to assess further, in conjunction with other measures; assist with discharge planning; and flag the need for additional services in the inpatient setting, such as speech therapy or neuropsychology.

Ethics Review

After consultation with the Gaylord Hospital Institutional Review Board (IRB), this work was determined to be exempt from IRB review because it did not directly assess human participants but instead relied on discussion with, and surveying of, the Delphi-style expert panelists. Panelists were not given an honorarium for their participation.

Instrument Construction and Early Revision Process

In the LTACH setting, common admitting diagnoses include stroke, brain injury, ventilator weaning, and other medical complications resulting from prolonged hospitalization (Buczko, 2011). Because the cognitive screens available to occupational therapy departments lack an adequate functional cognition domain, we determined that the needs of these diverse patient populations were not being adequately identified or addressed.

To address this, Emily Meise started an initiative to create a new instrument addressing the areas of cognition that the Gaylord Hospital occupational therapy staff collectively determined to be essential for a comprehensive inpatient occupational therapy cognitive screen. To start, Meise conducted two surveys of the inpatient occupational therapy staff on which areas of cognition were, in their professional judgment, most appropriate to assess. Given the rehabilitation focus and the most prominent populations observed with cognitive impairments in the LTACH setting, the domains of cognition typically affected after brain injury, stroke, and hospitalization were emphasized. Ten domains were selected: orientation, verbal fluency, visuospatial tasks, functional problem-solving, reasoning, sequencing, attention, divided attention, immediate recall, and long-term memory.

Once the domains were identified, a literature review was completed for each to inform the development of items addressing it (Folstein et al., 1975; Jackson et al., 1998; Katzman et al., 1983; Nasreddine et al., 2005; Tariq et al., 2006). The number of items in, and weight of, each domain was then designed to reflect the needs of the occupational therapy inpatient treatment planning process. Given the differing complexity and importance of each domain, some were assigned one item, whereas others were assigned multiple items. An initial 24-item instrument addressing each of these domains was then drafted. In addition to consultation with a speech-language pathologist, the drafted instrument was reviewed and refined within the Gaylord Hospital occupational therapy department for general content clarity and ease of administration, resulting in the removal of one item.

Delphi-Style Expert Panel Review

To quantify the content validity of the 23-item, 10-domain instrument, GOT–Cog was evaluated through two iterative rounds of Delphi-style panel review. The first Delphi panel consisted of 9 content experts: 3 inpatient occupational therapists, 1 outpatient occupational therapist, 2 speech-language pathologists, 2 board-certified neuropsychologists, and 1 physician assistant. Because of unforeseen scheduling conflicts, the second panel consisted of only 6 experts: 3 inpatient occupational therapists, 1 outpatient occupational therapist, 1 speech-language pathologist, and 1 physician assistant. Four of the 6 experts in the second panel were members of the first panel. The panel reviews were moderated by Henry Hrdlicka and Pete Grevelding; Amanda Meyer was a panelist in both discussions. As the primary author of the instrument, Meise observed the panel discussions but abstained from giving input so as not to bias the discussion.

At the start of the first panel review, the moderators gave a brief background on the new comprehensive cognitive screen and the rationale for developing it. The moderators also reviewed the concept of content validity testing and outlined the need for an expert panel review. Each item was then sequentially presented to the panelists. With each item, panelists were given the opportunity to provide verbal feedback on suggested edits and improvements. Once the screen was reviewed in its entirety, panelists were given a content validity survey to evaluate the relevance, essentiality, and clarity of each item and the scale as a whole. To limit interpanelist reproach and to allow for reporting of dissenting opinions, the survey was quasi-anonymized, with only the moderators knowing the panelists’ individual responses.

After the instrument was revised (i.e., the items were revised, added, or removed) on the basis of the feedback from the first panel, a second review panel was convened. At the start of the second session, an anonymized summary of the initial feedback was presented. The second review was then conducted in the same manner as the first. Because changes were recommended after the first discussion, an updated content validity survey reflecting these changes was provided to panelists after the second discussion.

Quantification of Content Validity: Relevancy, Essentiality, and Clarity

On the basis of panelist responses to the content validity survey, several metrics were calculated, including relevancy, essentiality, and clarity (Rodrigues et al., 2017; Yusoff, 2019; Zamanzadeh et al., 2015). The cutoffs described next and the criteria used to interpret each metric are summarized in Table 1.

Relevancy

To determine relevancy, panelists were asked to evaluate each item using the following 4-point Likert scale: 1 = item is not relevant to the domain, 2 = item is somewhat relevant to the domain, 3 = item is quite relevant to the domain, and 4 = item is highly relevant to the domain. On the basis of responses, item-level content validity indices (I–CVIs) and scale-level content validity indices (S–CVIs) were calculated. To address the possibility that chance agreement may inflate I–CVI values, modified κ statistics were also calculated (Zamanzadeh et al., 2015). To capture both average relevance and rater consensus, the S–CVI was calculated using both the I–CVI average (S–CVI/Ave) and the universal agreement of raters (S–CVI/UA) methods (Rodrigues et al., 2017; Yusoff, 2019; Zamanzadeh et al., 2015).

To interpret I–CVI, any item given a value >0.79 was considered to be relevant; items given a value 0.70–0.79, to need revision; and items given a value <0.70, to not be relevant, with elimination of that item to be considered (Rodrigues et al., 2017; Yusoff, 2019; Zamanzadeh et al., 2015).

For κ statistics, any item given a value >.74 was considered to have excellent item validity; items given a value .60–.74, to have good item validity; items given a value .40–.59, to have fair item validity; and items given a value <.40, to have poor item validity, with elimination of that item to be considered (Zamanzadeh et al., 2015). For S–CVI/UA (Rodrigues et al., 2017; Yusoff, 2019; Zamanzadeh et al., 2015) and S–CVI/Ave (Rodrigues et al., 2017), scores ≥0.80 or ≥0.90, respectively, were interpreted as displaying excellent content validity.
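To make these calculations concrete, the following minimal sketch (in Python, using hypothetical ratings rather than actual panel data) computes the I–CVI, the chance-adjusted modified κ described by Zamanzadeh et al. (2015), the S–CVI/Ave, and the S–CVI/UA from raw 4-point relevance ratings.

```python
from math import comb

def i_cvi(ratings):
    """Item-level CVI: proportion of raters scoring the item 3 or 4 (relevant)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def modified_kappa(icvi, n_raters):
    """I-CVI adjusted for chance agreement: kappa = (I-CVI - p_c) / (1 - p_c),
    where p_c = C(N, A) * 0.5^N and A is the number of raters judging the item relevant."""
    a = round(icvi * n_raters)
    p_c = comb(n_raters, a) * 0.5 ** n_raters  # probability of chance agreement
    return (icvi - p_c) / (1 - p_c)

# Hypothetical ratings: 3 items x 9 raters on the 4-point relevance scale
items = [
    [4, 4, 3, 4, 3, 4, 4, 3, 4],  # 9/9 relevant -> I-CVI = 1.00 (relevant)
    [4, 3, 2, 4, 3, 4, 3, 4, 4],  # 8/9 relevant -> I-CVI = 0.89 (relevant)
    [1, 2, 3, 2, 1, 2, 4, 2, 1],  # 2/9 relevant -> I-CVI = 0.22 (consider elimination)
]
cvis = [i_cvi(item) for item in items]
s_cvi_ave = sum(cvis) / len(cvis)                    # average of the I-CVIs
s_cvi_ua = sum(v == 1.0 for v in cvis) / len(cvis)   # share of items with universal agreement
kappas = [modified_kappa(v, 9) for v in cvis]
print([round(v, 2) for v in cvis], [round(k, 2) for k in kappas],
      round(s_cvi_ave, 2), round(s_cvi_ua, 2))
```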

Essentiality

To determine essentiality, panelists were asked to evaluate each item using the following 3-point Likert scale: 1 = not essential, 2 = useful, but not essential, and 3 = essential. These responses were then used to calculate the item-level content validity ratio (I–CVR; Lawshe, 1975; Rodrigues et al., 2017; Zamanzadeh et al., 2015). Once the I–CVR for all items was calculated, a scale-level CVR (S–CVR) was determined by averaging the I–CVR values. CVR cutoffs depend on the number of raters (Lawshe, 1975). For the first panel, which consisted of 9 expert reviewers, I–CVR and S–CVR scores ≥0.78 were interpreted as indicating that an item is valid or essential; for the second panel, consisting of 6 raters, a cutoff of ≥0.99 was used (Lawshe, 1975). If a value fell below the cutoff, the item was interpreted as not essential, with elimination or revision of that item to be considered.
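As a brief illustration of Lawshe's (1975) formula, CVR = (n_e − N/2)/(N/2), where n_e is the number of raters rating an item essential and N is the panel size, the following sketch (again in Python, with hypothetical ratings) computes the I–CVR for panels of the two sizes used here and averages item-level values into an S–CVR.

```python
def i_cvr(ratings, essential=3):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    n = len(ratings)
    n_e = sum(r == essential for r in ratings)  # raters rating the item "essential"
    return (n_e - n / 2) / (n / 2)

# Hypothetical 3-point essentiality ratings for one item per panel
panel_of_9 = [3, 3, 3, 3, 3, 3, 3, 2, 1]  # 7 of 9 essential -> CVR = 0.56 (< 0.78 cutoff)
panel_of_6 = [3, 3, 3, 3, 3, 2]           # 5 of 6 essential -> CVR = 0.67 (< 0.99 cutoff)
print(round(i_cvr(panel_of_9), 2), round(i_cvr(panel_of_6), 2))

# S-CVR: the average of the I-CVR values across all items on the scale
item_cvrs = [1.0, 1.0, 0.67, 0.56]  # hypothetical item-level results
print(round(sum(item_cvrs) / len(item_cvrs), 2))
```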

Clarity

To determine clarity, panelists were asked to evaluate each item using the following 3-point Likert scale: 1 = not clear, 2 = item needs revision, and 3 = very clear. Although clarity has previously been used as a metric for evaluating content validity (Rodrigues et al., 2017; Zamanzadeh et al., 2015), we were unable to find published criteria with which to objectively interpret clarity measurements. We therefore established the following criteria for evaluating the content clarity index (CCI) at the item (I–CCI) and scale (S–CCI) levels.

To determine the I–CCI, the average clarity score was calculated for each item. An average I–CCI of 3.00 indicated that 100% of raters thought the item was very clear and did not need revision. An average I–CCI of 2.50–2.99 indicated that ≥50% but <100% of raters thought the item was clear; such items were interpreted as mostly clear and needing only minor revisions. An average I–CCI <2.50 indicated that <50% of raters thought the item was clearly written; such items were interpreted as not clear and needing major revisions. The S–CCI was then calculated by averaging all I–CCI values and was interpreted using the same criteria.
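A minimal sketch of these CCI criteria (hypothetical ratings, same conventions as the sketches above):

```python
def i_cci(ratings):
    """Item-level content clarity index: mean of the 1-3 clarity ratings."""
    return sum(ratings) / len(ratings)

def interpret_cci(score):
    """Interpretation criteria established in this study."""
    if score == 3.0:
        return "very clear"
    if score >= 2.50:
        return "mostly clear; minor revisions"
    return "not clear; major revisions"

# Hypothetical 3-point clarity ratings from a panel of 6 raters
clarity_ratings = [
    [3, 3, 3, 3, 3, 3],  # I-CCI = 3.00 -> very clear
    [3, 3, 3, 2, 3, 2],  # I-CCI = 2.67 -> mostly clear
    [2, 2, 3, 1, 2, 2],  # I-CCI = 2.00 -> major revisions needed
]
item_ccis = [i_cci(r) for r in clarity_ratings]
s_cci = sum(item_ccis) / len(item_ccis)  # the S-CCI is the average of the I-CCIs
print([interpret_cci(v) for v in item_ccis], round(s_cci, 2))
```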

Delphi-Style Expert Panel Review

Once refined by the Gaylord Hospital Occupational Therapy Department, the 23-item instrument was put through two iterations of a Delphi-style expert panel review. The first panel consisted of 9 raters, and there was a 100% response rate to the content validity survey.

First Panel

Item-level results.

I–CVI results for relevancy indicated that elimination should be considered for 2 of 23 items, 1 item required revision to be relevant, and 20 items were relevant as written (Table 2). Modified κ statistics confirmed these findings, with 2 of 23 items demonstrating poor item validity, 1 item having good item validity, and 20 items having excellent item validity.

For 9 raters, a CVR ≥0.78 is recommended to establish that an item is essential; elimination or revision should be considered for any item with a CVR <0.78. On the basis of this interpretation, 10 of 23 items were found to be essential, with 8 or more of 9 raters in agreement, and elimination or revision was recommended for the remaining 13 items. Of these, 2 items had a CVR of −1.0, indicating 100% rater agreement that the items were not essential, and these items were eliminated. Of the remaining 11 items, 9 had >50% of raters (5 or more) in agreement that the item was essential, and 2 had <50% of raters (4 or fewer) in agreement. Because of their importance, rather than being eliminated outright, these 11 items were revised and reevaluated in the second panel.

I–CCI results indicated that 3 of the 23 items were very clear, 8 items were mostly clear, and the remaining 12 items needed major revisions. The 20 items not rated very clear were revised and reevaluated in the second panel.

Scale-level results.

After the first Delphi panel, scale-level relevancy, essentiality, and clarity were calculated to determine universal agreement and evaluate the content validity of the whole measure. S–CVI/Ave, S–CVI/UA, and average scale-level CVR all indicated the need for revision at a scale level (Table 2). S–CCI calculations indicated that, on average, the measure was mostly clear.

In response to the first Delphi panel, several items were revised, added, or removed, resulting in a revised instrument consisting of nine domains and 22 items. These changes are visualized in Figure 1. In brief, the immediate recall domain was removed in favor of retaining only delayed recall. The screen's original functional problem-solving items were removed, and new, more ADL-focused items were written on the basis of reviewer feedback. The sequencing item was also largely revised to be more ADL focused. Additionally, more specific administration directions were added to assist in standardizing the measure across raters.

Second Panel

Item-level results.

The next iteration of the instrument was presented to a second Delphi-style expert panel of 6 raters. Once again, there was a 100% response rate to the content validity survey.

After the second Delphi panel, both relevancy metrics, I–CVI and the modified κ statistic, indicated that all 22 items were relevant and had excellent item validity (Table 3).

For 6 raters, a CVR ≥0.99 is recommended to establish that an item is essential. After the second panel, 18 of 22 items demonstrated a CVR of 1.00, indicating 100% rater agreement that the items were essential. The remaining 4 items had a CVR of 0.67 (5 of 6 raters in agreement), below the cutoff, indicating that elimination or revision should be considered. After the expert panel's recommended revisions were completed, these 4 items were retained in the final instrument.

For I–CCI, 16 of 22 items had a score of 3.0, indicating 100% rater agreement that the items were very clear. The remaining 6 items were found to be mostly clear, with ≥50% of raters in agreement.

Scale-level results.

After the second Delphi panel, scale-level relevancy and content validity were found to be excellent, with S–CVI/Ave and S–CVI/UA both equal to 1.0 (Table 3). Although the average scale-level CVR of 0.94 was less than the cutoff of ≥0.99 for 6 raters, it corresponds to an average of 97% of raters in agreement (because CVR = 2p − 1, where p is the proportion of raters rating an item essential), and we feel comfortable concluding that the scale demonstrates the necessary essentiality. Finally, the mean S–CCI was 2.92, indicating that the instrument is mostly clear.

Following the second panel, no items were removed; however, several questions were rewritten on the basis of expert feedback, and the domains were reorganized into the final order: Verbal Fluency, Attention, Orientation, Visuospatial, Divided Attention, Auditory Memory, Sequencing, Functional Problem Solving, and Delayed Recall (Figure 1).

To establish the content validity of GOT–Cog, two Delphi-style panels of content experts were surveyed to refine the measure. After the first Delphi panel, several questions were revised, added, or removed, resulting in a revised instrument consisting of nine domains and 22 items. After the second panel, no items were removed; however, several questions were rewritten on the basis of expert feedback, and the domains were reorganized into the final order: Verbal Fluency, Attention, Orientation, Visuospatial, Divided Attention, Auditory Memory, Sequencing, Functional Problem Solving, and Delayed Recall. The group worked to clarify all of the scoring and delivery instructions and to create separate patient and examiner copies, both for ease of administration and to accommodate potential users with visual impairments. Between panels, all scale-level metrics improved, including relevancy (S–CVI/Ave, from 0.87 to 1.0), universal agreement (S–CVI/UA, from 0.52 to 1.0), essentiality (S–CVR, from 0.43 to 0.94), and clarity (S–CCI, from 2.26 to 2.92). The Delphi panels were instrumental in modifying the measure to best suit our patient population and in ensuring that the content represented what it was intended to measure.

To our knowledge, GOT–Cog will be the first standardized cognitive screen that evaluates the domains of Orientation, Verbal Fluency, Visuospatial, Functional Problem Solving, Reasoning, Sequencing, Attention, Divided Attention, and Memory in a way that relates to occupational therapy treatment planning. Occupational therapists look at cognition from a functional point of view. For example, they evaluate ADL sequencing, the number of cues a client may need to complete a task, whether a client can manage their medications, whether a client can sustain attention to participate in a 30-minute session, and whether they can recall information previously taught to them. The Gaylord inpatient occupational therapy department had been using the MoCA for cognitive screening; however, because it required costly certification for staff, the department began administering the SLUMS to evaluate cognition. Both the SLUMS and the MoCA, however, assess memory and attention in a mechanical fashion, and neither assesses functional cognition in a context related to a patient's daily activities. Although the MoCA can indicate whether a client has mild, moderate, or severe cognitive impairment and the SLUMS can identify whether a patient has mild cognitive impairment or dementia, neither provides information on how these scores relate to functional activities in contexts familiar to patients or on how any impairment may affect treatment planning and discharge planning.

By combining all of the pertinent cognitive domains desired by occupational therapists into a comprehensive cognitive screen with an increased focus on functional problem-solving and sequencing, GOT–Cog will allow occupational therapists to better identify patients with functional cognitive deficits that may affect their recovery and treatment regimen. GOT–Cog will also allow therapists to make early, appropriate referrals to specialists in other fields, including neuropsychologists and speech therapists, ensuring that the best resources are brought to patients early in their recovery.

Several limitations of this work need to be addressed. First, neither of the Delphi panels included a former or current patient to represent the target population of the screen. The implication of this is that, although the occupational therapists and other clinicians on the two Delphi panels represent the clinical end users of the instrument, without feedback from a member of the target population, the face validity of the instrument could be considered incomplete. To address this, as construct validity and criterion validity are evaluated going forward, feedback from the target population and a broader sample of end users will be considered. Furthermore, we plan to acquire additional feedback on the measure’s clarity and readability from representatives of the target population and to reassess the content validity if needed.

The second limitation of the study is that the number and diversity of experts decreased between the first and second Delphi panels (9 and 6 panelists, respectively). However, most of the second-panel attendees were at both sessions, ensuring continuity between sessions. Moreover, the second panel consisted primarily of occupational therapists, the planned end users of the instrument. As such, although the decrease in expert panelists was unfortunate, we feel this was not detrimental to our ability to quantify the content validity of GOT–Cog.

The third limitation is that the expert panelists met face to face and discussed each item before completing the content validity survey. This is a limitation because fear of judgment or reproach from their fellow panelists may have dissuaded some participants from voicing a dissenting opinion. The discussion may also have biased panelists’ opinions, resulting in more consensus than may have occurred otherwise. To address this, the surveys were done individually with only the moderators knowing the panelists’ individual responses. Although this quasi-anonymized approach is a modification of the original Delphi protocol, the open discussion allowed panelists to ask clarifying questions and have collegial discussions regarding each item before making a judgment. Furthermore, this approach resulted in a 100% response rate for each survey.

The GOT–Cog was developed with the inpatient setting in mind and is intended to be used as part of a comprehensive occupational therapy–based assessment to identify areas of concern and further inform occupational therapy treatment planning. As such, GOT–Cog is intended to guide the occupational therapy plan of care, assist with discharge planning recommendations, and identify the need for additional services, such as speech therapy or neuropsychology. The results of this study have the following implications for occupational therapy practice:

▪ This screening tool is anticipated to guide clinicians in identifying key problem areas to inform treatment planning and goal setting.

▪ The GOT–Cog is expected to help occupational therapists develop strategies to compensate and adapt to environments to increase patients' independence with ADLs and IADLs and ensure maximal safety in their environments.

To our knowledge, GOT–Cog will be the first standardized cognitive screen specifically designed for the inpatient setting that evaluates cognitive domains relevant and needed for occupational therapy treatment planning. Through a Delphi-style expert panel, this study indicates that the new occupational therapist–developed cognitive screen displays overall excellent content validity. Going forward, we plan to recruit inpatients admitted to our LTACH setting to evaluate the construct validity, interrater reliability, intrarater reliability, and responsiveness of this new screen.

Dr. Henry Hrdlicka and Ms. Emily Meise should be considered as co–first authors of this article. We sincerely thank all of the Delphi panelists for their valuable time and feedback, including occupational therapists Jaclyn Lavigne, Lauren Pocius, Marcia Brassard, and Stephanie McNeil; neuropsychologists Emily Williamson and Anthony Rinaldi; speech-language pathologists Janine Clarkson and Darielle Cooper; and physician assistant Mark Powers. A poster with a similar title, using a subset of the data presented here, was presented at the American Congress of Rehabilitation Medicine 2022 Conference.

References

American Occupational Therapy Association. (2013). Cognition, cognitive rehabilitation, and occupational performance. American Journal of Occupational Therapy, 67(6, Suppl.), S9–S31.

American Occupational Therapy Association. (2021, November 24). Role of occupational therapy in assessing functional cognition. https://www.aota.org/practice/practice-essentials/payment-policy/medicare1/medicare---role-of-ot-in-assessing-functional-cognition

Buczko, W. (2011, March 9). Determining medical necessity and appropriateness of care for Medicare long term care hospitals. Centers for Medicare & Medicaid Services. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Reports/Research-Reports-Items/CMS1247454

Faul, M., Wald, M. M., Xu, L., & Coronado, V. G. (2010). Traumatic brain injury in the United States: Emergency department visits, hospitalizations, and deaths, 2002–2006. Centers for Disease Control and Prevention. https://stacks.cdc.gov/view/cdc/5571

Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). "Mini-mental state": A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198. https://doi.org/10.1016/0022-3956(75)90026-6

Jackson, W. T., Novack, T. A., & Dowler, R. N. (1998). Effective serial measurement of cognitive orientation in rehabilitation: The Orientation Log. Archives of Physical Medicine and Rehabilitation, 79, 718–721. https://doi.org/10.1016/S0003-9993(98)90051-X

Katzman, R., Brown, T., Fuld, P., Peck, A., Schechter, R., & Schimmel, H. (1983). Validation of a short orientation-memory-concentration test of cognitive impairment. American Journal of Psychiatry, 140, 734–739. https://doi.org/10.1176/ajp.140.6.734

Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563–575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., Cummings, J. L., & Chertkow, H. (2005). The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53, 695–699. https://doi.org/10.1111/j.1532-5415.2005.53221.x

Okie, S. (2005). Traumatic brain injury in the war zone. New England Journal of Medicine, 352, 2043–2047. https://doi.org/10.1056/NEJMp058102

Rodrigues, I. B., Adachi, J. D., Beattie, K. A., & MacDermid, J. C. (2017). Development and validation of a new tool to measure the facilitators, barriers and preferences to exercise in people with osteoporosis. BMC Musculoskeletal Disorders, 18, 540. https://doi.org/10.1186/s12891-017-1914-5

Tariq, S. H., Tumosa, N., Chibnall, J. T., Perry, M. H., & Morley, J. E. (2006). Comparison of the Saint Louis University Mental Status Examination and the Mini-Mental State Examination for detecting dementia and mild neurocognitive disorder—A pilot study. American Journal of Geriatric Psychiatry, 14, 900–910. https://doi.org/10.1097/01.JGP.0000221510.33817.86

Wolf, T. J., Barco, P. P., & Giles, G. M. (2019, November 18). Functional cognition: Understanding the importance to occupational therapy. https://www.aota.org/About-Occupational-Therapy/Professionals/PA/Facts/Adult-Cognitive-Disorders.aspx

Yusoff, M. S. B. (2019). ABC of content validation and content validity index calculation. Education in Medicine Journal, 11, 49–54. https://doi.org/10.21315/eimj2019.11.2.6

Zamanzadeh, V., Ghahramanian, A., Rassouli, M., Abbaszadeh, A., Alavi-Majd, H., & Nikanfar, A.-R. (2015). Design and implementation content validity study: Development of an instrument for measuring patient-centered communication. Journal of Caring Sciences, 4, 165–178. https://doi.org/10.15171/jcs.2015.017