Linguistically Tailored Health Messages to Encourage Plant-Based Food Choices in Adolescents
- Conditions
- Food Consumption
- Sustainable Food Consumption
- Sustainable Healthy Eating Behaviour
- Registration Number
- NCT06742346
- Lead Sponsor
- Erasmus University Rotterdam
- Brief Summary
The goal of this clinical trial is to investigate the effectiveness of linguistically tailored messages for promoting plant-based food choices in adolescents. The main question it aims to answer is:
• Are linguistically tailored messages more effective in promoting plant-based eating compared to a) non-tailored messages (active control), and b) not receiving messages at all (passive control)?
Researchers will compare participants exposed to linguistically tailored messages, non-tailored messages, and no messages to determine if linguistic messages are more effective in promoting plant-based food choices. Participants will receive daily messages promoting a plant-based diet from Monday to Friday for two weeks, accompanied by daily and weekend surveys about their food choices and message perception.
- Detailed Description
This study is a five-arm randomized controlled trial targeting adolescents aged 11-15 years in the UK and Ireland. In the intervention arms, participants will receive daily messages promoting a plant-based diet from Monday to Friday for two weeks. The first arm serves as a passive control group, with participants receiving no messages throughout the study. In arm 2, participants will receive daily non-tailored messages for the entire two weeks. In arm 3, participants will receive linguistically tailored messages. Participants in arm 4 will receive both non-tailored and tailored messages for one week each: tailored messages in week 1 and non-tailored in week 2 (arm 4a), and non-tailored in week 1 and tailored in week 2 (arm 4b). This design supports both within-subject comparisons (tailored vs. non-tailored) and between-subject comparisons (e.g., control vs. intervention).
At the start of the program, participants will play a chat game to collect chat data for analyzing their linguistic style. These data will be used to perform a text style transfer on the intervention messages using a Large Language Model (LLM). Following this, the two-week intervention will commence with a baseline survey conducted during the first weekend. During weekdays (Monday to Friday), participants in the intervention arms (arms 2, 3, and 4) will receive daily messages promoting a plant-based diet through the research app Avicenna. Additionally, all participants, including those in the passive control group, will receive daily surveys about their lunchtime food choices. Participants in the intervention arms will also respond to questions about message perception. On the weekends following each intervention week, participants will complete comprehensive surveys (midpoint and final surveys). These surveys, along with the baseline survey, will assess all outcome measures, including dietary behaviors, determinants of behavior, perceptions of the message source, message processing, and text style characteristics.
Sample size
The expected effect size was based on two meta-analyses of online health promotion studies that included general tailoring (Krebs et al., 2010; Lustria et al., 2013). These studies reported an average effect size of 0.08 (Cohen's f) overall and 0.112 for diet-related interventions specifically. Based on an assumed effect size of 0.1 and a power of 80% for between-subjects comparisons, a sample size of 592 participants is required. Since within-subject comparisons typically yield higher statistical power, we aim for a target sample size of 600 participants (150 per arm) with complete responses.
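The power analysis above can be illustrated with a standard calculation. The sketch below uses the normal approximation for a simple two-group contrast at Cohen's f = 0.1 (equivalently d = 0.2 for two groups); the protocol's figure of 592 reflects its specific multi-arm design and test, so this demonstrates the method rather than reproducing that exact number:

```python
# Sketch: a priori sample size for a two-arm between-subjects comparison
# via the normal approximation, assuming alpha = 0.05 (two-sided) and
# power = 0.80. Cohen's f = 0.1 corresponds to d = 2f = 0.2 for two groups.
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_beta = z.inv_cdf(power)            # ~0.84 for power = .80
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# For d = 0.2 this yields roughly 393 participants per group for a simple
# two-group contrast; the protocol's total of 592 follows from its own
# multi-arm design assumptions, which are not spelled out here.
print(n_per_group(0.2))
```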
Recruitment of participants
Participants will be recruited through a research panel (Norstat) in the UK and Ireland. Panel members with eligible children will be invited to participate in the study. As part of the recruitment survey, parents will first answer pre-screener questions to confirm their child meets the inclusion criteria. If eligible, parents will continue in the recruitment survey, which further includes information about the study and the opportunity to sign up their child and provide consent for their child's participation. Once parental consent is obtained, children will be contacted to take part in the study. Participants will receive research panel credits (i.e., financial compensation) for completing each stage of the study: playing the chat game, completing the baseline survey, daily surveys, the midpoint survey, and the final survey.
Randomization
Block randomization will be used to allocate participants across the intervention arms. Arms 4a and 4b will be combined into a single block, since they feature identical conditions that differ only in order (counterbalancing). This results in four blocks in total. With a target sample size of 600, we aim for 150 participants per block. Participants will be assigned to one of the four blocks via the automated block randomization function in Qualtrics, integrated into the recruitment survey.
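The allocation itself is handled by Qualtrics' built-in randomizer; the sketch below only illustrates the principle of block randomization, using hypothetical arm labels:

```python
# Illustrative block randomization across four blocks (labels assumed here
# for illustration: control, non-tailored, tailored, crossover). The study
# uses Qualtrics' automated randomizer, not this code.
import random

ARMS = ["control", "non-tailored", "tailored", "crossover"]

def block_randomize(n_participants: int, block_size: int = 8, seed: int = 42):
    """Assign participants in shuffled fixed-size blocks so that the
    arms remain balanced throughout recruitment."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ARMS * (block_size // len(ARMS))  # equal arm counts per block
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

allocation = block_randomize(600)
# Balance check: 150 participants per arm
print({arm: allocation.count(arm) for arm in ARMS})
```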
Chat Game
We developed a web-based chat game, called BetweenUs, to collect chat data while meeting ethical, safety, and privacy standards. The purpose of the game is to generate a sample of authentic text messages from which we can learn about each participant's individual linguistic style. The game was custom-built (i.e., no third-party software) and is hosted on a Microsoft Azure server at Erasmus University. It can be accessed at https://movez.ecda.ai/chat/test1.
BetweenUs was co-designed by children in a co-creation session. Children provided input on how the game should look, what types of avatars should be used, the length of the game, and which topics they would like to talk about. The game was inspired by popular imposter games, such as Among Us. The game was further piloted among children for final feedback.
In the game, players will be randomly grouped into chatrooms of four participants. At the start of the game, five instruction screens will explain the rules. The player can then start the game, during which they answer two questions: 1) Do they like or dislike the presented topic (e.g., watching football)? 2) Do they feel comfortable playing the imposter? A role (investigator or imposter) will then be randomly assigned to each player. Investigators will chat about their actual like/dislike of the presented topic, while imposters are instructed to argue the opposite of their actual preference (e.g., claiming "I dislike watching football" while actually liking it) without being caught by the other players.
The game is divided into three rounds of one-to-one conversations of 8 minutes, followed by a group conversation of 5 minutes. This way, we can capture the dynamics of one-to-one and group conversations, reflecting real-world chats. The game concludes with voting, asking the investigators to guess the imposter within the group. The investigator wins if they guess the imposter correctly, while the imposter wins if they do not receive the majority vote.
The chat game is entirely anonymous. Players are represented by animal-like avatars. No personal or identifiable information is collected during the game, and players are instructed to refrain from sharing personal details. If any personal or sensitive details (such as names, age, etc.) are shared in the chat, they will be removed/hashed. Additionally, the game topics are carefully chosen by researchers (considering suggestions from the co-creation session) to ensure sensitive subjects are avoided. Players also have the option to indicate their discomfort by taking on the role of an imposter during the game. Finally, the data is accessible only by researchers and saved on university servers.
Intervention messages
Ten intervention messages were developed to promote the benefits of plant-based eating in an autonomy-supportive manner. Each message targets one of the three most mentioned motives for plant-based eating (Miki et al., 2020): health motives, environmental motives, and animal welfare motives. The order of message motives is kept constant throughout the weekdays, with each day addressing one of the three motives across both weeks.
Message creation was guided by adolescents' preferences for intervention content. All messages were designed to be factual (including science-based statistics), concise (4-5 sentences), relevant to teens (e.g., addressing health concerns like acne), and autonomy-supportive (offering suggestions and tips rather than directives) (Hingle et al., 2013). The messages were intentionally written in a neutral linguistic style, avoiding the use of highly emotive language, non-standard syntax, or emojis.
These messages constitute the non-tailored messages and serve as the foundation for developing tailored messages through text style transfer using a Large Language Model (LLM). The non-tailored messages were written in an active, conversational voice to enable a fair comparison with the tailored versions, which are also expected to maintain an active voice following the style transfer.
Message neutrality was validated through both expert human evaluation and the Linguistic Inquiry and Word Count 2022 (LIWC-22) tool, which assessed each message across 22 linguistic dimensions, scored from 0 (neutral) to 100 (extremely high) (Boyd et al., 2022). To ensure the messages were accurate, trustworthy, clear, engaging for adolescents, and primarily factual in style, multiple rounds of proofreading were conducted by senior researchers with expertise in youth communication.
Large Language Model
An LLM will be used to linguistically tailor the intervention messages. The primary task of the LLM is to learn each participant's conversational style using data collected from the chat game, and then adapt the intervention messages accordingly. This process will result in 10 linguistically tailored messages for each participant, while the content and factual information of the messages remain constant across both non-tailored and tailored versions.
The LLM Mistral-large was selected based on its current general performance scores and open-source availability. In a separate study, we evaluated the performance of various LLMs (Mistral and GPT) in tailoring intervention messages, focusing on text style transfer accuracy, content preservation (i.e., meaning similarity), and fluency (i.e., comprehensibility). All evaluated models demonstrated good capabilities in capturing users' conversational styles and scored very high on content preservation and fluency. Mistral-large was ultimately chosen as the most suitable model for this study.
First, Mistral-large will generate style-neutral equivalents of the chat data to create a parallel dataset for each participant. To maintain relevance, only messages exceeding the participant's median word count will be included, thereby excluding brief or non-meaningful messages (e.g., "hi"). This parallel dataset will serve as the training input for the model.
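The median-word-count filter described above can be sketched in a few lines; the function name and example messages are illustrative assumptions, not the study's actual code:

```python
# Sketch of the message-filtering step: keep only chat messages whose word
# count exceeds the participant's median, dropping brief turns like "hi".
from statistics import median

def filter_messages(messages: list[str]) -> list[str]:
    word_counts = [len(m.split()) for m in messages]
    cutoff = median(word_counts)
    return [m for m, n in zip(messages, word_counts) if n > cutoff]

chats = ["hi", "lol ok", "i actually think football is kinda boring tbh",
         "nah", "we always watch the match at my cousin's place on saturdays"]
# Word counts are 1, 2, 8, 1, 10, so the median is 2 and only the two
# longer messages survive the filter.
print(filter_messages(chats))
```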
Next, the LLM will be explicitly prompted to act as a linguistic expert, analyzing the conversational style reflected in the parallel dataset. Specifically, the LLM will be guided to consider: a) the participant's use of function words (e.g., pronouns like "I," "we," "you"), b) their preferred tone (e.g., formal or informal, analytical or narrative), c) the stylistic words they commonly use (e.g., phrases, filler words), and d) the usage of emojis or emoticons in example sentences. The LLM will also provide an explanation of the text style transfer.
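The study's actual prompt is not published; the template below is a hypothetical illustration of how the four analysis dimensions (a-d) described above might be encoded as instructions to the model, with all wording assumed for the example:

```python
# Hypothetical prompt template for the style-transfer step. Every instruction
# below is an illustrative assumption based on dimensions (a)-(d) above; it
# is not the study's actual prompt.
STYLE_PROMPT = """You are a linguistic expert. Below are pairs of a user's
original chat messages and their style-neutral equivalents.

{examples}

Analyze the user's conversational style, considering:
a) function words (e.g., pronouns such as "I", "we", "you"),
b) tone (formal or informal, analytical or narrative),
c) characteristic stylistic words (phrases, filler words),
d) use of emojis or emoticons.

Rewrite the following message in the user's style, preserving its factual
content exactly, and briefly explain the style changes you made:

{message}"""

def build_prompt(pairs, neutral_message):
    examples = "\n".join(f"original: {o}\nneutral: {n}" for o, n in pairs)
    return STYLE_PROMPT.format(examples=examples, message=neutral_message)

demo = build_prompt(
    [("ngl plant food slaps", "Plant-based food tastes good.")],
    "Eating more vegetables supports healthy skin.")
print(demo.splitlines()[0])  # first line of the assembled prompt
```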
Research App
Daily messages and surveys will be delivered via a smartphone research app called Avicenna. Participants are required to download and register for this app using a unique ID prior to the start of the program. This app ensures anonymous participation in the research program.
Each school day, participants will receive a morning push notification prompting them to view a new message by opening the app. Additionally, the app will display a red badge whenever a new message is available. Messages can be accessed until midnight on the same school day, after which they will disappear from the app, and any unread messages will be marked as missing data for that day. Upon reading the message, participants will complete daily measures directly within the app. On weekends, participants will receive a notification reminding them to complete the weekend surveys (i.e., baseline, midpoint, and final surveys).
Recruitment & Eligibility
- Status
- ENROLLING_BY_INVITATION
- Sex
- All
- Target Recruitment
- 600
- Inclusion Criteria
- Aged 11-15 years
- Residing in the UK or Ireland
- Having access to plant-based food options during school hours (e.g., provided by the school, purchased, or brought from home or a store)
- Exclusion Criteria
- Adolescents who follow a strict vegetarian or vegan diet
Study & Design
- Study Type
- INTERVENTIONAL
- Study Design
- PARALLEL
- Primary Outcome Measures
Dietary behavior
Time: From the start of the intervention week (baseline survey) to the end of the intervention, after two weeks.
Method: Participants will be asked to report their dietary behavior from the previous day. They will describe what they ate for lunch at school (open-ended question) and indicate whether the lunch was plant-based. Weekly measures will assess the frequency of plant-based meals and snacks over the past week. Participants will report how many days they consumed a plant-based breakfast, lunch, and dinner (response options: 0-7 days) and the number of plant-based snacks they had per day (response options: 0-6+ servings).
Attitude
Time: During the intervention, at the end of week 1 and week 2.
Method: Attitude is measured by four items: value, usefulness, pleasantness, and interest.
Question: When you think about plant-based eating (vegetarian or vegan), how would you rate plant-based eating on...
1. Value? Response options: 1) Worthless; 2) Somewhat Worthless; 3) Somewhat Valuable; 4) Valuable
2. Usefulness? Response options: 1) Useless; 2) Somewhat Useless; 3) Somewhat Useful; 4) Useful
3. Pleasantness? Response options: 1) Unpleasant; 2) Somewhat Unpleasant; 3) Somewhat Pleasant; 4) Pleasant
4. Interest? Response options: 1) Boring; 2) Somewhat Boring; 3) Somewhat Interesting; 4) Interesting.
The items will be aggregated into a single composite score.

Behavioral intention
Time: During the intervention, at the end of week 1 and week 2.
Method: Question: How often do you plan to eat plant-based (either vegetarian or vegan) within the next week in situations where you can freely choose? Response options: 1) Never; 2) Rarely; 3) Sometimes; 4) Often
- Secondary Outcome Measures
Perceived Source Similarity
Time: During the intervention, at the end of week 1 and week 2.
Method: This measure assesses participants' perception of similarity between themselves and the individual behind the daily messages. Question: How much do you agree with the following statements?
1. The person behind the messages and I are very similar.
2. The person behind the messages and I seem to have the same personal characteristics.
Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.
The items will be aggregated into a single composite score.

Source Liking
Time: During the intervention, at the end of week 1 and week 2.
Method: These items evaluate participants' perception of the individual behind the daily messages. Items:
1. The person behind the messages is... Response options: 1) Very Unlikable; 2) Somewhat unlikable; 3) Somewhat likable; 4) Very Likable
2. The person behind the messages is the type of person with whom I would... Response options: 1) Never spend time with; 2) Rarely spend time with; 3) Occasionally spend time with; 4) Frequently spend time with.
The items will be aggregated into a single composite score.

Source Trust
Time: During the intervention, at the end of week 1 and week 2.
Method: These items assess participants' trust in the individual behind the daily messages. The person behind the messages is...
1. Trustworthy? Response options: 1) Untrustworthy; 2) Somewhat Untrustworthy; 3) Somewhat Trustworthy; 4) Trustworthy
2. Real? Response options: 1) Fake; 2) Somewhat Fake; 3) Somewhat Real; 4) Real
3. Convincing? Response options: 1) Unconvincing; 2) Somewhat Unconvincing; 3) Somewhat Convincing; 4) Convincing
4. Authentic? Response options: 1) Not Authentic; 2) Somewhat Not Authentic; 3) Somewhat Authentic; 4) Authentic.
The items will be aggregated into a single composite score.

Message Liking
Time: During the intervention, daily (Monday-Friday) for two weeks.
Method: Question: If you saw this message in real life, how likely would you be to give it a Like? Response options: 1) I would definitely not give a 'Like'; 2) I would probably not give a 'Like'; 3) I might give a 'Like'; 4) I would probably give a 'Like'; 5) I would definitely give a 'Like'.
Message Shareability
Time: During the intervention, daily (Monday-Friday) for two weeks.
Method: Question: If you saw this message in real life, how likely would you be to share it on your social media (for example, TikTok, Snapchat, or WhatsApp)? Response options: 1) I would definitely not share the post; 2) I would probably not share the post; 3) I might share the post; 4) I would probably share the post; 5) I would definitely share the post.
Perceived Message Relevance
Time: During the intervention, daily (Monday-Friday) for two weeks.
Method: Question: How much do you agree with the following statements?
1. The message I received today is relevant to my life.
2. The message I received today grasped my attention.
3. The messages I received said something important to me. Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.
The items will be aggregated into a single composite score.

Perceived Message Personalization
Time: During the intervention, at the end of week 1 and week 2.
Method: Perceived message personalization concerns the style (not the content) of the messages of the past five days. Question: How much do you agree with the following statement?
1. The way in which the messages were written felt like it was personally written for me.
Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.

Perceived Style Similarity
Time: During the intervention, at the end of week 1 and week 2.
Method: Question: How much do you agree with the following statement?
1. The way in which the messages were written is similar to how I would write messages to a friend.
Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.

Message Processing Depth
Time: During the intervention, at the end of week 1 and week 2.
Method: Question: How much do you agree with the following statements?
1. The messages were interesting to me.
2. I was motivated to read the messages.
Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.
The items will be aggregated into a single composite score.

Message Reactance
Time: During the intervention, at the end of week 1 and week 2.
Method: Question: How much do you agree with the following statements?
1. The messages were trying to manipulate me.
2. The messages' claims about plant-based food were overblown.
3. The messages annoyed me.
Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.
The items will be aggregated into a single composite score.

Message Involvement
Time: During the intervention, at the end of week 1 and week 2.
Method: Question: How much do you agree with the following statements?
1. I paid close attention to process the messages.
2. It was engaging for me to process the messages.
3. It was involving for me to process the messages.
Response options: 1) Strongly Disagree; 2) Disagree; 3) Somewhat Disagree; 4) Somewhat Agree; 5) Agree; 6) Strongly Agree.
The items will be aggregated into a single composite score.
Trial Locations
- Locations (1)
Norstat UK Ltd. (Online Research Panel)
London, United Kingdom