
Reliability of a Masseter Muscle Prominence Scale and Lower Facial Shape Classification

Completed
Conditions
Healthy Volunteers
Interventions
Other: No Intervention
Registration Number
NCT01821534
Lead Sponsor
Allergan
Brief Summary

This study will evaluate the inter-rater and intra-rater reliability of a Masseter Muscle Prominence Scale for assessing a patient's masseter muscle prominence and of a Lower Facial Shape Classification for assessing a patient's lower facial shape.

Detailed Description

Not available

Recruitment & Eligibility

Status
COMPLETED
Sex
All
Target Recruitment
201
Inclusion Criteria

  • sufficient visual acuity, without the use of glasses or with contact lenses, to self-assess lower facial shape in a mirror

Exclusion Criteria
  • infection of the mouth or gums, or facial skin infection requiring antibiotics
  • planned dental or facial procedure
  • unwillingness to be photographed and have the photos used for research, training, or educational purposes

Study & Design

Study Type
OBSERVATIONAL
Study Design
Not specified
Arms & Interventions
Group: All Participants
Intervention: No Intervention
Description: Healthy volunteers. No treatment (intervention) was administered.
Primary Outcome Measures
Name: Inter-rater Reliability Using a Masseter Muscle Prominence Scale (MMPS)
Time: Day 1

The MMPS is an ordinal tool for assessing masseter muscle (jaw muscle) prominence on each side of the face, from 1 = minimal to 5 = very marked. Inter-rater (among-rater) reliability was calculated separately for the left and right sides of the face using Kendall's coefficient of concordance (Kendall's W). Overall Kendall's W statistics for the left and right sides were derived from the average of assessment 1 and assessment 2, rounded to the nearest whole integer, for each subject and each clinician. A total of 8 physicians rated each subject. The degree of agreement of the Kendall's W point estimates was interpreted against a pre-defined reference scale: ≤0: poor, >0 to ≤0.2: slight, >0.2 to ≤0.4: fair, >0.4 to ≤0.6: moderate, >0.6 to ≤0.8: substantial, and >0.8 to ≤1.0: almost perfect. The 95% confidence interval for Kendall's W is provided.
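As an illustration only (not part of the registry record), the sketch below shows one way Kendall's W could be computed for a subjects-by-raters matrix of MMPS scores. It uses the basic formula without the tie correction, and the `scores` array is hypothetical.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores: np.ndarray) -> float:
    """Kendall's coefficient of concordance for a (subjects x raters) matrix.

    Basic form without the correction for tied ranks; ranges from 0
    (no agreement) to 1 (perfect agreement).
    """
    n, m = scores.shape                               # n subjects, m raters
    ranks = np.apply_along_axis(rankdata, 0, scores)  # rank subjects within each rater
    rank_sums = ranks.sum(axis=1)                     # summed rank per subject
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()   # spread of the rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: 6 subjects scored 1-5 on the MMPS by 8 physicians.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(6, 8))
print(f"Kendall's W = {kendalls_w(scores):.3f}")
```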

Name: Inter-rater Reliability Using a Lower Facial Shape Classification (LFSC)
Time: Day 1

The LFSC is a qualitative tool that classifies lower facial shape into one of 5 categories (A, B, C, D, and E). Inter-rater (among-rater) reliability was calculated using Kappa statistics, computed for each of the 5 facial categories. A total of 8 physicians rated each subject. The overall inter-rater agreement across all categories was estimated by pooling the per-category Kappa statistics using a chi-square statistic. The degree of agreement of the Kappa point estimates was interpreted against a pre-defined reference scale: ≤0: poor, >0 to ≤0.2: slight, >0.2 to ≤0.4: fair, >0.4 to ≤0.6: moderate, >0.6 to ≤0.8: substantial, and >0.8 to ≤1.0: almost perfect. The 95% confidence interval for the Kappa statistics is provided.
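Purely as an illustrative sketch (not the per-category pooling procedure described above), Fleiss' kappa is one commonly used chance-corrected statistic for agreement among many raters assigning nominal categories. The `ratings` array below is hypothetical, with 0-4 standing in for categories A-E.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical LFSC data: rows are subjects, columns are the 8 physician raters,
# values 0-4 stand in for facial-shape categories A-E.
rng = np.random.default_rng(1)
ratings = rng.integers(0, 5, size=(10, 8))

# Collapse the (subjects x raters) ratings into a (subjects x categories)
# count table, then compute Fleiss' kappa as an overall agreement measure.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(table, method='fleiss'):.3f}")
```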

Name: Intra-rater Reliability Using the MMPS
Time: Day 1

The MMPS is an ordinal tool for assessing masseter muscle (jaw muscle) prominence on each side of the face, from 1 = minimal to 5 = very marked. Intra-rater (within-rater) reliability was calculated separately for the left and right sides of the face using weighted Kappa statistics, computed for each of the 8 physician raters. The overall intra-rater agreement across all raters was estimated by pooling the per-rater Kappa statistics using a chi-square statistic. The degree of agreement of the Kappa point estimates was interpreted against a pre-defined reference scale: ≤0: poor, >0 to ≤0.2: slight, >0.2 to ≤0.4: fair, >0.4 to ≤0.6: moderate, >0.6 to ≤0.8: substantial, and >0.8 to ≤1.0: almost perfect. The 95% confidence interval for the Kappa statistics is provided.
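For illustration, a weighted Cohen's kappa between one rater's two assessments could be computed as sketched below. Linear weights and the toy score lists are assumptions, since the record does not state the weighting scheme used.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical MMPS scores (1-5) from one physician's two assessments of the
# same side of the face for ten subjects.
assessment_1 = [1, 2, 3, 5, 4, 2, 3, 3, 4, 5]
assessment_2 = [1, 3, 3, 5, 4, 2, 2, 3, 4, 4]

# Weighted kappa credits near-misses on the ordinal 1-5 scale; linear weights
# are an assumption here.
kappa = cohen_kappa_score(assessment_1, assessment_2, weights="linear")
print(f"Weighted kappa = {kappa:.3f}")
```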

Name: Intra-rater Reliability Using the LFSC
Time: Day 1

The LFSC is a qualitative tool that classifies lower facial shape into one of 5 categories (A, B, C, D, and E). Intra-rater (within-rater) reliability was calculated using Kappa statistics, computed for each of the 8 physician raters. The overall intra-rater agreement across all raters was estimated by pooling the per-rater Kappa statistics using a chi-square statistic. The degree of agreement of the Kappa point estimates was interpreted against a pre-defined reference scale: ≤0: poor, >0 to ≤0.2: slight, >0.2 to ≤0.4: fair, >0.4 to ≤0.6: moderate, >0.6 to ≤0.8: substantial, and >0.8 to ≤1.0: almost perfect. The 95% confidence interval for the Kappa statistics is provided.
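As a further hedged sketch, an unweighted Cohen's kappa for each rater's pair of LFSC assessments could be computed as below. The raters and category assignments are hypothetical, and the chi-square pooling across raters is not reproduced here.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical LFSC categories (A-E) from two assessments by two of the raters;
# in the study this would be repeated for each of the 8 physicians.
ratings = {
    "rater_1": (["A", "B", "C", "E", "D", "B"], ["A", "B", "B", "E", "D", "B"]),
    "rater_2": (["C", "C", "A", "D", "E", "A"], ["C", "B", "A", "D", "E", "A"]),
}

# Unweighted kappa is used because the LFSC categories are nominal.
for rater, (first, second) in ratings.items():
    print(rater, round(cohen_kappa_score(first, second), 3))
```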

Secondary Outcome Measures
Not specified