Abstract

The American Journal of Occupational Therapy (AJOT) continues to be the most highly ranked occupational therapy journal as measured by its Journal Citation Reports impact factor (its 5-yr impact factor is now 3.325). AJOT’s goal is to remain occupational therapy’s leading research journal with disciplinary and interdisciplinary impact. AJOT instituted measures starting in January 2018 to address increasing concerns about research reproducibility and study reporting methods. Starting in 2019, AJOT will be distributed online only and will offer an annual “Best of” print compendium to AOTA members.

In 2018, the American Journal of Occupational Therapy (AJOT) continued its upward momentum and remains the most highly ranked occupational therapy journal. AJOT now has a 2-yr impact factor of 2.493 and a 5-yr impact factor of 3.325, according to the 2018 Journal Citation Reports (Clarivate Analytics, 2018; Table 1). Among the 69 rehabilitation journals on the Social Sciences Citation Index, AJOT ranks third. Of the 134 journals on both the Social Sciences Citation Index and the Science Citation Index Expanded, it ranks 15th. AJOT’s CiteScore for 2017 (citation count divided by number of documents published in 2014–2016) is 1.71, placing AJOT in the 91st percentile of the General Medicine category of journals, ranking 73rd of 841 journals (Scopus, 2018). AJOT’s Source Normalized Impact per Paper (SNIP) metric, a domain-normalized citation score, decreased slightly this year, from 1.258 to 1.111 (Scopus, 2018).

2018 Article and Readership Statistics

AJOT has a global impact, with readership and submissions from around the world. As of September 5, 2018, manuscripts had been submitted from 27 countries. Fifty-eight percent of accepted articles were from U.S. scholars.

From January 1, 2018, to September 6, 2018, according to Google Analytics, 506,702 users visited the AJOT website, a slight increase of 1.81% from the 497,694 users during the same period in 2017. Perhaps a better measure of usage is that of subscriber behavior: For the period January to August 2018, subscribers and members accessed articles or downloaded pdfs a total of 255,862 times, compared with 238,786 times in 2017, suggesting that our readers may be finding our content more relevant to their needs.

To date in 2018, 201 manuscripts have been submitted to AJOT, a decrease compared with previous years (Table 2). The causes of the decrease are difficult to know. It may reflect a hiatus between study completions, or the publishing lull that might be expected if the American Occupational Therapy Association’s (AOTA’s) calls for Centennial-related papers inflated submissions. No deadlines for special issues occurred in the first half of 2018, and special issues may inspire authors to submit manuscripts that they would otherwise have delayed or sent to other outlets. Authors may also have been unprepared for AJOT’s increased publishing requirements. Last, so many manuscripts were submitted and accepted in 2016–2017 that authors may be choosing to submit elsewhere rather than face AJOT’s current publication delay. AJOT editorial staff expect that with the reduced 2018 submissions, greater dissemination of study reporting requirements, and AJOT’s increased impact factor, submissions will rise in subsequent years.

We have accepted 67 manuscripts for publication this year. The number of research articles published in 2018 increased slightly, because not all 2017 Centennial celebration articles were research articles. Effectiveness research articles, ranging from feasibility studies to randomized controlled trials of interventions that fall within the scope of occupational therapy, remained the most common type (Table 3). Slightly more than half (53.5%) were original Level I evidence studies, an increase over previous years. This good news means either that more occupational therapy intervention research is being conducted at higher evidence levels or that researchers are choosing to publish such work in AJOT more often. An additional 15 systematic reviews, also Level I evidence, focused on interventions. Study funding was similar to that in 2017, with slightly more U.S. federally funded studies but slightly fewer studies with foundation and international funding (Table 4).

Reviewers from around the world, including the United States, Canada, the United Kingdom, Australia, Hong Kong, Taiwan, and India, completed 348 AJOT reviews this year. Review requests were accepted 76.4% of the time, a very good rate considering that reviewers’ workloads are large and that manuscripts are submitted irrespective of holidays, vacation periods, and periods of high academic responsibility. The large pool of official AJOT reviewers supports this high acceptance rate by allowing us to avoid overburdening individual reviewers.

The Cordelia Myers AJOT Best Article Award went to Carrie Gibbons, Nathan Smith, Randy Middleton, John Clack, Bruce Weaver, Sacha Dubois, and Michel Bédard for their article “Using Serial Trichotomization With Common Cognitive Tests to Screen for Fitness to Drive” (Gibbons et al., 2017). A subcommittee of AJOT Associate Editors selected from among all AJOT research articles published in 2017 the article that best met the award criteria: an article that describes high-impact and high-quality research, is timely and highly relevant, and addresses an urgent information need in the field (at least one author must be an occupational therapist).

Changes and Challenges

AJOT’s goal is to remain occupational therapy’s leading research journal with disciplinary and interdisciplinary impact. Therefore, AJOT continues to represent the broad scope of occupational therapy research and to publish research on domains that cross disciplinary boundaries. In their writing, authors need to ensure that their rationale and discussion sections address the following questions:

  • Did the study examine the occupational therapy profession?

  • Did it investigate interventions that are unique to occupational therapy?

  • Did the study test the value that occupational therapy brings to interdisciplinary interventions?

  • Did the study investigate interventions that fall under the scope of several disciplines?

Reproducibility of Results

A relatively recent concern in publishing is research reproducibility. A surprisingly large proportion of published studies, even in well-respected journals, have failed to be reproduced (Begley & Ioannidis, 2015; Open Science Collaboration, 2015). Contributing issues include implementation quality, faulty reporting, and poor understanding of the participant and contextual factors that influence response to study manipulations.

One problem contributing to nonreproducibility is insufficient description of study manipulations, a particularly difficult issue for articles on individualized behavioral interventions such as those often published in AJOT. Such description is often constrained by journal page or word limits. Because these limits result from publication costs, the problem has no easy solution. AJOT editors will instead ask authors to provide an appendix with a fuller description of the study manipulations when one is needed.

A practice that could improve reproducibility is prerecruitment registration for clinical trials (National Institutes of Health, 2004) other than feasibility studies. AJOT continues to require clinical trial registration as well as adherence to study reporting guidelines (e.g., Preferred Reporting Items for Systematic Reviews and Meta-Analyses [PRISMA] for systematic reviews [Moher et al., 2009] and Consolidated Standards of Reporting Trials [CONSORT] for randomized clinical trials [Schulz et al., 2010]; and STrengthening the Reporting of OBservational studies in Epidemiology [STROBE] for observational studies [von Elm et al., 2007]).

Reporting Methods

Because publishers have been called on to improve study reporting methods (Begley & Ioannidis, 2015; Schekman, 2013), AJOT instituted several measures related to this call in January 2018, described in the sections that follow.

Outcomes and Meaningful Change.

Authors should discuss whether participant outcome changes are larger than normal variability in outcome assessment scores. Regardless of statistical significance, changes equal to or smaller than typical assessment score variability cannot be attributed to true participant change, but rather to testing error. Typically, change is considered true change when it is larger than the assessment’s standard error of measurement (SEM). This can be reported by group average or by how many participants in each group showed such change. However, not all assessments have had their SEM determined; for those that have not, the authors should discuss participant changes in light of what the field generally considers meaningful change (e.g., 10% of score). Study interpretation and implications for occupational therapy sections should reflect whether true change occurred.
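The SEM-based true-change check described above can be sketched in a few lines. This is our own illustration, not an AJOT-prescribed procedure: the function names are hypothetical, and we assume the common reliability-based formula SEM = SD × √(1 − r), where r is the assessment’s reliability coefficient.

```python
from math import sqrt

def sem(sd, reliability):
    """Standard error of measurement from an assessment's normative
    standard deviation and its reliability coefficient (SEM = SD * sqrt(1 - r))."""
    return sd * sqrt(1 - reliability)

def is_true_change(pre, post, sd, reliability):
    """Treat a participant's pre-post change as true change only when its
    magnitude exceeds the assessment's SEM, as described in the text."""
    return abs(post - pre) > sem(sd, reliability)

# An assessment with SD = 10 and reliability = .91 has SEM ≈ 3.0 points,
# so a 4-point gain exceeds measurement error but a 2-point gain does not.
print(round(sem(10, 0.91), 2))            # → 3.0
print(is_true_change(50, 54, 10, 0.91))   # → True
print(is_true_change(50, 52, 10, 0.91))   # → False
```

Reporting how many participants in each group cross this threshold, rather than only group means, follows the per-participant reporting option mentioned above.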

Authors should indicate a primary outcome at a particular time point (e.g., postintervention, 6-mo follow-up) that is based on the main research question and the theory or mechanism of change. Such selection allows sample size to be determined through power analysis and the power actually obtained to be calculated after the study, and it helps limit the number of analyses and clarify the interpretation of mixed outcomes. Identifying a primary outcome also allows use of study bias rating scales, such as the Physiotherapy Evidence Database scale (PEDro; Maher et al., 2003; see also http://www.pedro.org), which are necessary in systematic reviews.
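To see how the primary outcome’s expected effect size drives sample size, one well-known shortcut is Lehr’s rule of thumb (n ≈ 16/d² per group for a two-group comparison at two-sided α = .05 and 80% power). This is our illustrative addition, not an AJOT requirement, and a full power analysis should still be run for an actual study:

```python
from math import ceil

def lehr_n_per_group(d):
    """Lehr's rule of thumb: approximate per-group sample size for a
    two-sample comparison at two-sided alpha = .05 with 80% power."""
    return ceil(16 / d ** 2)

# Halving the expected effect size quadruples the required sample:
# a medium effect (d = 0.5) needs ~64 participants per group,
# whereas a large effect (d = 0.8) needs only ~25.
print(lehr_n_per_group(0.5))  # → 64
print(lehr_n_per_group(0.8))  # → 25
```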

Authors should report outcome effect sizes. In addition to aiding interpretation of single trials, the reporting of effect sizes assists in interpreting a body of literature by allowing easier comparison among trials.
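As one common example of such an effect size (our choice for illustration; AJOT does not mandate a particular index), Cohen’s d expresses a between-groups mean difference in pooled standard deviation units, which is what makes comparisons across trials with different outcome scales possible:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled
    (sample-size-weighted) standard deviation as the denominator."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

# Identical groups give d = 0; a mean difference of exactly one
# pooled SD gives d = 1.
print(cohens_d([2, 3, 4], [1, 2, 3]))  # → 1.0
```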

Feasibility studies should report mainly feasibility outcomes (e.g., recruitment rates, retention rates, ability to implement studies as planned) rather than participant outcomes. The goal of a feasibility study is to test the ability to implement the study rather than hypotheses regarding participant change. Inferential statistics are inappropriate in feasibility studies because sample sizes are typically too small to provide sufficient power or to adequately estimate population variability. To report participant responses, authors should use only descriptive statistics.

Fragility Index.

One of the issues in nonmedical health care intervention trials is that these trials usually have relatively small sample sizes. In addition, no clear understanding exists of all the factors influencing response to study manipulations. Within groups, there may be large variation in response to these manipulations.

The Fragility Index (Walsh et al., 2014) allows interpretation of the robustness of a between-groups effect. Similar in spirit to the fail-safe statistic for meta-analyses (Rosenthal, 1979), it asks how many participants’ outcomes would have to flip to overturn the between-groups effect. For example, if a study found Intervention X to be more efficacious than usual care, the Fragility Index would indicate how many Intervention X participants would need to fail to respond to the intervention for usual care to be found as efficacious as Intervention X. A small Fragility Index means that only a few additional participants with the opposite outcome would change the study results, indicating that the findings are not very robust. Such a result need not mean that the studied intervention is inefficacious, but rather that more research is indicated to understand the intervention effect, such as research into what caused responders to respond when others did not. Effect robustness should be reflected in the study’s interpretation.
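For a trial with a dichotomous outcome, the computation can be sketched as follows. This is a minimal illustration in the spirit of Walsh et al. (2014), with our own function names and a standard-library implementation of the two-sided Fisher exact test (summing the probabilities of all same-margin tables no more likely than the observed one); it assumes significance is judged by that test at α = .05.

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p value for the 2x2 table [[a, b], [c, d]]."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def prob(x):  # hypergeometric probability of a table with cell (1,1) = x
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

def fragility_index(resp_tx, n_tx, resp_ctl, n_ctl, alpha=0.05):
    """Fragility Index in the spirit of Walsh et al. (2014): count how many
    intervention-arm responders must be flipped to nonresponders before the
    two-sided Fisher exact test is no longer significant."""
    flips, r = 0, resp_tx
    while r >= 0 and fisher_two_sided(r, n_tx - r,
                                      resp_ctl, n_ctl - resp_ctl) < alpha:
        r -= 1
        flips += 1
    return flips

# A balanced 10/20-vs-10/20 result is not significant to begin with,
# so its Fragility Index is 0.
print(fragility_index(10, 20, 10, 20))  # → 0
```

A large index (relative to the sample size) suggests a robust effect; an index of 1 or 2 means the conclusion hangs on the outcomes of one or two participants.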

Clinical Significance.

AJOT reviewers are giving greater scrutiny to study interpretations regarding clinical significance. Because it is relatively easy to find statistical significance for small effects, authors need to be diligent about honestly and accurately portraying the clinical significance of outcomes. When participant changes are small, authors need to think about whether the clinical significance is great enough to suggest allocation of scarce clinical or educational resources. Small effects that have limited clinical significance might indicate that the intervention is not very efficacious or effective, but they might also indicate the need to study manipulation combinations, change therapy schedules and amounts, alter other manipulation features or add content, or find subgroups who are better responders. Failing to address the important issue of clinical significance or considering all effects clinically meaningful does not help advance the field.

Goals for 2019

AJOT has several goals for the upcoming volume year, including improving the review process and moving to online-only distribution. The following steps are intended to streamline the review process:

  • We are working with reviewers to increase the number of reviews that are returned on time (only 70% of reviews are returned within 4 weeks). Some reviews are never returned. AJOT is asking reviewers to ensure that their email addresses are current at the AJOT manuscript submission site (https://ajot.submit2aota.org/) and that their email spam systems whitelist AJOT emails. Sometimes reviewers underestimate time requirements of other work or obtain unexpected new work after accepting a review; in these situations, reviewers should request an extension or surrender a review so that a new reviewer can be solicited.

  • AJOT will have a new reviewer training module in 2019. This module will cover review ethics (Committee on Publication Ethics, 2017), effective use of the AJOT manuscript review site, and tips for performing a quality critical review. We hope that this module will assist reviewers in providing high-quality reviews.

  • AJOT has created new review checklists that better match reporting guidelines for various article types. The instructions have been reworded for clarity, and some frequently missed sections have been moved for salience. These changes should make it easier to complete reviews.

  • Authors can help speed review of their manuscripts, too, by attending to the Guidelines for Contributors to AJOT (AOTA, 2018): At least 30% of manuscripts are returned to authors because simple requirements, such as line numbering, have not been followed. Even authors who have published frequently in AJOT should review the author guidelines, because changes occur each year; for example, structured abstracts are required beginning in 2019. In addition, authors should always suggest reviewers when submitting a manuscript, particularly for an uncommon topic or a general professional issue.

In an effort to promote scholarly discussion, AJOT will accept occasional Letters to the Editor beginning in 2019. Refer to the 2018 Guidelines for Contributors to AJOT for more information about submission requirements.

Last, AJOT is moving to online-only distribution for 2019. This move is driven by factors ranging from increased print publication costs to the reality that although AJOT looks great on a bookshelf, the majority of readers begin their search for AJOT articles online. AOTA members will receive an annual “Best of” print compendium at the end of each year.

Conclusion

AJOT remains the world’s leading occupational therapy journal. The journal’s prestige continues to grow as the amount and quality of occupational therapy research grow and as AJOT addresses publishing issues related to quality. AJOT will maintain its goal of publishing quality research that speaks to the breadth and types of research related to occupational therapy.

References

American Occupational Therapy Association. (2018). Guidelines for contributors to AJOT. American Journal of Occupational Therapy, 72, 7212430010. https://doi.org/10.5014/ajot.2018.72S221

Begley, C. G., & Ioannidis, J. P. A. (2015). Reproducibility in science: Improving the standard for basic and preclinical research. Circulation Research, 116, 116–126. https://doi.org/10.1161/CIRCRESAHA.114.303819

Clarivate Analytics. (2018). 2017 journal impact factor list. Retrieved from http://mjl.clarivate.com/

Committee on Publication Ethics. (2017). Ethical guidelines for peer reviewers.

Gibbons, C., Smith, N., Middleton, R., Clack, J., Weaver, B., Dubois, S., & Bédard, M. (2017). Using serial trichotomization with common cognitive tests to screen for fitness to drive. American Journal of Occupational Therapy, 71, 7102260010. https://doi.org/10.5014/ajot.2017.019695

Lieberman, D., & Scheer, J. (2002). AOTA’s Evidence-Based Literature Review Project: An overview. American Journal of Occupational Therapy, 56, 344–349. https://doi.org/10.5014/ajot.56.3.344

Maher, C. G., Sherrington, C., Herbert, R. D., Moseley, A. M., & Elkins, M. (2003). Reliability of the PEDro scale for rating quality of randomized controlled trials. Physical Therapy, 83, 713–721.

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G.; PRISMA Group. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA statement. PLoS Medicine, 6(6), e1000097. https://doi.org/10.1371/journal.pmed.1000097

National Institutes of Health. (2004). Notice of revised NIH definition of “clinical trial” (Report No. NOT-OD-15-015). Retrieved from http://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-015.html

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. https://doi.org/10.1126/science.aac4716

Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86, 638–641. https://doi.org/10.1037/0033-2909.86.3.638

Schekman, R. (2013, December 9). How journals like Nature, Cell and Science are damaging science. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science

Schulz, K. F., Altman, D. G., & Moher, D.; CONSORT Group. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMJ, 340, c332. https://doi.org/10.1136/bmj.c332

Scopus. (2018). Source details: American Journal of Occupational Therapy.

von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P.; STROBE Initiative. (2007). Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. BMJ, 335, 806–808. https://doi.org/10.1136/bmj.39335.541782.AD

Walsh, M., Srinathan, S. K., McAuley, D. F., Mrkobrada, M., Levine, O., Ribic, C., . . . Devereaux, P. J. (2014). The statistical significance of randomized controlled trial results is frequently fragile: A case for a Fragility Index. Journal of Clinical Epidemiology, 67, 622–628. https://doi.org/10.1016/j.jclinepi.2013.10.019