Evidence-Based Guidelines for Fatigue Risk Management in EMS: Formulating Research Questions and Selecting Outcomes (2024)


Background: More than half of Emergency Medical Services (EMS) personnel report work-related fatigue, yet there are no guidelines for the management of fatigue in EMS. A novel process has been established for evidence-based guideline (EBG) development germane to clinical EMS questions. This process has not yet been applied to operational EMS questions such as fatigue risk management. The objective of this study was to develop content-valid research questions in the Population, Intervention, Comparison, and Outcome (PICO) framework and to select outcomes to guide systematic reviews and the development of EBGs for EMS fatigue risk management. Methods: We adopted the National Prehospital EBG Model Process and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework for developing, implementing, and evaluating EBGs in the prehospital care setting. In accordance with steps one and two of the Model Process, we searched for existing EBGs, assembled a multi-disciplinary expert panel, and received external input. Panelists completed an iterative process to formulate research questions. We used the Content Validity Index (CVI) to score the relevance and clarity of candidate PICO questions. The panel completed multiple rounds of question editing and used a CVI benchmark of ≥0.78 to indicate acceptable levels of clarity and relevance. Outcomes for each PICO question were rated from 1 = less important to 9 = critical. Results: Panelists formulated 13 candidate PICO questions, of which six were eliminated or merged with other questions. Panelists reached consensus on seven PICO questions (n = 1 diagnosis and n = 6 intervention). Final CVI scores for relevance ranged from 0.81 to 1.00; final CVI scores for clarity ranged from 0.88 to 1.00. The mean number of outcomes rated as critical, important, and less important per PICO question was 0.7 (SD 0.7), 5.4 (SD 1.4), and 3.6 (SD 1.9), respectively.
Patient and personnel safety were rated as critical for most PICO questions. PICO questions and outcomes were registered with PROSPERO, an international database of prospectively registered systematic reviews. Conclusions: We describe the formulation and refinement of research questions and the selection of outcomes to guide systematic reviews germane to EMS fatigue risk management, and we outline a protocol for applying the Model Process and GRADE framework to create evidence-based guidelines.

Key words:

  • evidence-based guidelines
  • fatigue
  • PICO questions


Reports of fatigued Emergency Medical Services (EMS) workers and adverse events related to fatigue are on the rise.Citation1–3 These events are likely not uncommon. Recent studies show that more than half of EMS clinicians report mental and physical fatigue while at work.Citation4–8 Fatigue has been described as “a subjective, unpleasant symptom, which incorporates total body feelings ranging from tiredness to exhaustion creating an unrelenting overall condition which interferes with an individual's ability to function to their normal capacity.”Citation9 Half of EMS clinicians report fewer than six hours of sleep per night, and half rate their quality of sleep as poor.Citation7 More than one-third report excessive daytime sleepiness, and only half report feeling recovered between scheduled shifts.Citation8,10 Fatigue has been linked to greater odds of injury, medical error, patient adverse events, and safety-compromising behavior in the EMS setting.Citation5 The EMS setting is devoid of guidelines for managing sleep, fatigue, and shift work, yet the need to address fatigue in EMS with guidance based on the best available evidence is compelling.Citation4–8

Evidence-based guidelines (EBGs) are “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.”Citation11 These statements aid decision-making and guide protocols or standard operating procedures, often promoting “best practice.” The National Guideline Clearinghouse (www.guidelines.gov), a repository maintained by the Agency for Healthcare Research and Quality (AHRQ), references thousands of guidelines. The Guidelines International Network, founded in 2002, maintains more than 3,700 guidelines. National societies, governing bodies, professional organizations, insurers, and other influential organizations promote the use of EBGs to reduce variability in practice and improve effectiveness, quality, and outcomes.Citation12

The process for guideline development is complex and involves application of frameworks that lead small groups through a series of steps and judgments. There are numerous techniques for rating evidence quality and developing recommendations.Citation13–22 Key steps include formulating research questions, searching the literature, appraising and rating the quality of literature, and making recommendations. The National Prehospital EBG Model Process (Model Process) was adopted in 2012 as a standard with eight steps for development of EBGs germane to the EMS setting.Citation23

We address the challenge of applying the Model Process to non-clinical EMS questions, and describe a process for formulating clear, relevant, and objectively quantified “consensus-based” research questions. We report on the selection of outcomes for each research question and outline a planned approach for applying the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) framework for evidence appraisal and formulation of recommendations.Citation22 Our work stems from the 2013 Fatigue in EMS advisory adopted by the National EMS Advisory Council, which called for a review of the evidence and guidance to the EMS community to aid in fatigue management.Citation24 This paper is relevant to EMS clinicians, administrators, researchers, and policy makers given the increased emphasis on evidence-based medicine in EMS and recent creation of several robust prehospital clinical care guidelines.Citation12,23,25–30 A detailed description of our approach may be informative to researchers and decision-makers beyond the EMS setting, given the novelty of applying the Model Process and the GRADE framework for EBG development germane to occupational health and operations.Citation31



Methods

We first addressed step one of the Model Process by performing a scoping review of the literature for guidelines or consensus-based protocols for fatigue risk management in the EMS setting. Scoping reviews are commonly used to rapidly survey a body of literature for the key concepts that underpin an area of research and to identify the primary sources of available evidence.Citation32–34 To the best of our knowledge, formal guidance for fatigue risk management in the EMS environment is non-existent.Citation6 We established a project website and a mechanism to receive external input from stakeholders. Stakeholder input was received in person on February 2, April 26, and April 27, 2016, at public meetings at the U.S. Department of Transportation's headquarters in Washington, DC. Public notification of these meetings and the request for external input were submitted to the Office of the Federal Register and made public on the www.federalregister.gov website on January 11, 2016 and April 13, 2016. We will continue to receive external input throughout the life of this EBG effort and will provide a summary of that input in a future report. For the purposes of this paper, we describe our experiences with the development of relevant and clear research questions and the selection of outcomes germane to decision-making for EMS administrators. The University of Pittsburgh Institutional Review Board approved this study.

We addressed step two of the Model Process by forming an 11-person panel of experts and a multi-disciplinary research/project team. Panelists were selected based on evidence of expertise in sleep medicine, fatigue or sleep health related research of public safety occupations, emergency medicine, or EMS (see Table 1). As prescribed by the Institute of Medicine (IOM),Citation11 we have distributed the responsibilities of EBG development between the expert panel and the research team. Responsibilities of the research team include: a) work with the expert panel to formulate research questions and select outcomes; b) evaluate the results of systematic reviews performed by the team's medical research librarian; and c) evaluate the quality of evidence using the GRADE framework and produce evidence profiles for each research question. Responsibilities of the expert panel include: a) formulate research questions and select outcomes; b) evaluate evidence profiles created by the research team; and c) use the GRADE framework to create recommendations for fatigue risk management in the EMS setting. A summary of self-reported disclosures by panelists and members of the research/project team appears in the Online Supplement Appendix A.

Table 1. List of expert panelists


Protocol for Question Development and Outcome Selection

The development of research questions began with panelists and members of the research team (excluding the study's principal investigator [PI]) offering candidate research questions directly to the study's PI. The PI reviewed the questions offered and assembled them into the Problem/Population, Intervention, Comparison, and Outcome (PICO) framework, a framework that is widely used and cited as integral to developing clinical practice guidelines applied in the prehospital environment.Citation23,35 Next, the PI presented panelists with candidate PICO questions for review and editing during a scheduled 1.5-day in-person public meeting. The PI guided panelists through an iterative process of refining PICO question wording and scoring two characteristics of content validity of candidate questions (relevance and clarity). Content validity is the extent to which a research question is brief, clear, easy to understand, has perceived relevance, and is appropriately framed for the targeted audience.Citation36,37 The characteristic of relevance was presented in the form of a question: “is the question connected/germane to the issue at hand and suitable in its current form for purposes of guiding a systematic review?” The characteristic of clarity was also presented in the form of a question: “is the question clear, intelligible, appropriately worded, sharp, and easy to understand by a diverse audience?” Panelists scored content validity on a 4-point Likert scale in real-time during the public meeting and used an established algorithm for quantifying content validity, the Content Validity Index (CVI). A benchmark of <0.78 on a 0–1.00 scale was used to determine the need for additional editing before confirming question wording.Citation37
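To illustrate how the framework's four components frame a reviewable question, a candidate question can be represented as a structured record. The class and field values below are illustrative assumptions of our own, not the panel's actual wording (the final questions appear in Table 2):

```python
# A minimal sketch of how a candidate question maps onto the PICO framework;
# the field values are hypothetical, not the panel's wording.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def as_text(self) -> str:
        # Assemble the four components into a question stem for review.
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome}?")

q = PicoQuestion(
    population="EMS personnel working shifts",
    intervention="on-duty napping",
    comparison="no scheduled rest",
    outcome="fatigue-related safety outcomes",
)
print(q.as_text())
```

Framing each candidate question this way makes the four components explicit, which is what the relevance and clarity scoring described above evaluates.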

Next, the PI gathered recommendations from panelists and the research team regarding plausible outcomes linked to each PICO question and relevant to decision-making for EMS administrators. Outcomes were presented to members of the expert panel and research team with instructions to rate the importance of each outcome on a 9-point Likert scale prescribed by the GRADE methodology.Citation38 A rating of 1–3 implied an outcome was “less important,” a rating of 4–6 implied an outcome was “important,” and a rating of 7–9 implied the outcome was “critical.”
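The three importance bands can be sketched as a simple mapping. This is an illustrative sketch of the rating scale described above; the function name is our own, not part of any GRADE tooling:

```python
# Illustrative sketch of the three GRADE importance bands described above.
def classify_outcome(rating: int) -> str:
    """Map a 1-9 GRADE importance rating to its band."""
    if not 1 <= rating <= 9:
        raise ValueError("GRADE importance ratings run from 1 to 9")
    if rating <= 3:
        return "less important"
    if rating <= 6:
        return "important"
    return "critical"

print(classify_outcome(8))  # a rating of 7-9 falls in the "critical" band
```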

Statistical Analyses

We used the CVI, an established algorithm for quantifying the content validity of candidate PICO questions.Citation37 Ratings of relevance and clarity were recorded on the 4-point Likert scale standard for CVI measurement: 1 = question is not relevant, 2 = question needs major revisions to be relevant, 3 = question needs minor revisions to be relevant, 4 = question is relevant; 1 = question is not clear, 2 = question needs major revisions to be clear, 3 = question needs minor revisions to be clear, 4 = question is clear. As prescribed, we calculated CVI scores for relevance and clarity by summing the number of 3 and 4 ratings and dividing by the total number of raters. The established benchmark of 0.78 on a 0–1.00 scale was used to determine when the panel and research team might consider question wording acceptable and further edits unnecessary.Citation37 We calculated the mean and median rating of outcomes for each PICO question and stratified outcomes into three domains: 1–3 (of limited importance), 4–6 (important, but not critical), and 7–9 (critical for making a decision). We used Microsoft Excel (v.14.6.4) and SAS software (v.9.4; SAS Institute, Cary, NC) for statistical analyses.
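As a concrete sketch of the calculation described above, with hypothetical ratings from an 11-person panel (the actual panel scores appear in Online Supplement Appendix B):

```python
# Sketch of the CVI calculation: CVI = (# of raters scoring 3 or 4) /
# (total # of raters), compared against the 0.78 acceptability benchmark.
def content_validity_index(ratings):
    """Compute the CVI for one question from 4-point Likert ratings."""
    agreeing = sum(1 for r in ratings if r >= 3)  # 3s and 4s count as agreement
    return agreeing / len(ratings)

relevance_ratings = [4, 4, 3, 4, 3, 4, 4, 2, 4, 3, 4]  # hypothetical panel of 11
cvi = content_validity_index(relevance_ratings)
needs_editing = cvi < 0.78  # below the benchmark -> edit and rescore
print(f"CVI = {cvi:.2f}; needs further editing: {needs_editing}")
```

Here 10 of 11 raters score 3 or 4, so the CVI of about 0.91 clears the 0.78 benchmark and no further editing would be required.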


Results

Members of the expert panel and research team proposed 13 candidate research questions. Discussion among panelists led to six candidate questions being eliminated or merged with other PICO questions prior to scoring of content validity. For the remaining seven candidate questions, CVI scores for relevance and clarity obtained during initial scoring exceeded the predefined benchmark of 0.78 for each candidate PICO question (Online Supplement Appendix B). Despite reaching these benchmarks, panelists and members of the research team refined and edited select questions to further improve relevance and clarity. Three of the seven candidate PICO questions (43%) were scored once for relevance and clarity, three questions (43%) were edited and scored twice, and one (14%) underwent editing and scoring a total of three times. Final scores for relevance and clarity exceeded the predefined 0.78 CVI benchmark. See Online Supplement Appendix B for a detailed description of PICO question evolution and CVI scoring.

Panelists and members of the research team suggested four plausible outcomes for PICO question one (the diagnostic question) and 11 plausible outcomes for each of the remaining six intervention-focused PICO questions (Online Supplement Table 3). The mean, median, and IQR of scores for each outcome rating are shown in Online Supplement Table 3. We did not round the mean or median score to the nearest whole number when classifying outcomes. The number of outcomes rated critical, important, or less important varied by PICO question. The median number of outcomes per PICO question rated as critical, important, and less important was 1 (IQR 1), 6 (IQR 2), and 4 (IQR 1.5), respectively.

The final version of each PICO question and the outcomes rated as critical or important were registered with PROSPERO,Citation39 an international database of prospectively registered systematic reviews [PROSPERO 2016 registration numbers: CRD42016040097, CRD42016040099, CRD42016040101, CRD42016040107, CRD42016040110, CRD42016040112, CRD42016040114].


Discussion

We have adhered to the steps outlined in the Model Process and GRADE framework for question development and outcome selection, providing a first example of applying these concepts to answer non-clinical EMS-related questions. Our approach to quantifying the content validity of PICO question wording was not a prescribed element of the Model Process or GRADE and may serve as a model for others.

The significance of developing research questions and selecting outcomes for systematic reviews should not be discounted.Citation40 It is essential that, during the planning phase of systematic reviews, investigators begin with clear and relevant research questions with common elements of the population(s) of interest, intervention(s), comparison group(s), and outcome(s).Citation40 Narrowly framed questions may exclude relevant research (evidence) during the review, constrain the synthesis of evidence to a subgroup-type analysis, and contribute to inferences with limited generalizability to the targeted population.Citation41 Broadly framed questions are often favored so that minor differences in studies, such as poorly defined populations or interventions that are inconsistently labeled across studies, do not lead to exclusion of comparable research.Citation40,42 Although it is widely believed that research questions intended to guide systematic reviews benefit from numerous edits and from the input of stakeholders,Citation40 there is no gold standard for question development nor a prescribed method for quantifying consensus on the final version of research questions.

Our panel and research team reached consensus on seven PICO questions following multiple rounds of proposing, debating, and refining candidate questions. Panelists eliminated questions germane to prognosis, given the inability to offer recommendations for this type of PICO question when following the GRADE framework. One PICO question focuses on the diagnosis of fatigue, whereas six address the impact of interventions (see Table 2 and Online Supplement Appendix B). There is no gold standard definition of fatigue, and numerous fatigue measurement tools exist for different purposes.Citation9,43,44 The second PICO question was framed to explore the evidence behind shift work and shift-scheduling interventions. The duration of shifts and the timing of when a shift begins and ends are provocative issues within the EMS industry.Citation1,6,45,46 A review and synthesis of the best available evidence is needed because, at present, it is unclear which shift-scheduling interventions are associated with reduced fatigue and/or lower fatigue-related risks.Citation6 Questions three and four are framed to survey evidence germane to the use of countermeasures, such as caffeine and sleep/rest strategies, to mitigate fatigue and improve sleep.Citation47–49 The fifth PICO question aims to identify evidence connected to educating and training EMS and related personnel to cope with shift work and mitigate fatigue-related risks. Interest in helping shift workers cope with shift work through education and training is growing, especially in the fire service.Citation50,51 The sixth PICO question seeks to determine whether implementation of statistical modeling of sleep and work-scheduling patterns is helpful in mitigating fatigue-related risks.Citation52–55 The seventh and final PICO question will explore the role of interventions that affect task load or workload during shift work.Citation56–60 A review of the best available evidence germane to each PICO question would benefit the decisions of administrators related to fatigue risk management.

Table 2. Consensus-derived PICO questions used to guide systematic reviews


The process of formulating research questions for the purposes of systematic reviews is an important step in the development of EBGs.Citation38,61 To the best of our knowledge, there is no prescribed method or gold standard approach to quantifying consensus on the wording and framing of research questions. We perceived the absence of objectively measured consensus on question formulation as a limitation and a threat to the overall process of EBG development. We chose to quantify the content validity of candidate questions with the CVI calculation because the calculation is straightforward, the measure is widely used in the evaluation of questions for research purposes, and benchmarks for its interpretation are widely accepted.Citation37 We believe inclusion of this calculation strengthens our project with objective measures of consensus from a multi-disciplinary group of experts. We are among the first to quantify the content validity of candidate research questions informing systematic reviews.

Scoring for most of the proposed outcomes resulted in five or more outcomes per PICO question rated as “important.” We were surprised to learn that, for three of the seven PICO questions, none of the candidate outcomes were rated as “critical.” A likely explanation for these findings is that panelists were instructed to score outcome importance from the viewpoint of the EMS administrator, who is often charged with making decisions relevant to fatigue risk management. This perspective differs from that of other EBG projects, where the guideline is clinical in focus and panelists are often charged with selecting outcomes from the patient's perspective.Citation38 Other potential explanations include concern among panelists about the limited evidence in existence germane to one or more outcomes; the belief that fatigue risk management is a shared responsibility between employers and employees;Citation62 and the diversity of perspectives among panelists.

The act of rating outcomes is an essential element of the GRADE methodology, yet it is one of the most challenging tasks panelists perform.Citation38 Panelists must rank outcomes as critical, important, or less important. The overall quality of evidence for each question is then primarily evaluated based on the total evidence available for the critical outcome(s). If there is high-quality evidence in the form of well-executed clinical trials for the critical outcomes and low-quality evidence for important or less important outcomes, then the overall quality of evidence will be judged as high. If the inverse scenario occurs, the overall quality will likely be judged as low. In some cases, no critical outcomes will be identified, and the overall quality of evidence is then determined based on the important outcome(s). Therefore, the a priori determination of which outcomes are considered critical versus important or less important may have a substantial effect on the overall rating of evidence and the eventual recommendations provided in a guideline.
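The precedence described above can be sketched as a small function. This is our own illustrative reading of the rule, not an implementation of any GRADE tool, and the names are assumptions:

```python
# Sketch of the logic described above: overall evidence quality tracks the
# critical outcomes, falling back to the important outcomes when none are
# rated critical. Labels and names are illustrative.
QUALITY_ORDER = ["very low", "low", "moderate", "high"]

def overall_quality(outcomes):
    """outcomes: list of (importance, quality) pairs, e.g. ("critical", "high")."""
    critical = [q for imp, q in outcomes if imp == "critical"]
    pool = critical or [q for imp, q in outcomes if imp == "important"]
    # The lowest quality among the decision-driving outcomes prevails.
    return min(pool, key=QUALITY_ORDER.index)

# High-quality evidence for the critical outcome dominates lower-quality
# evidence for merely important outcomes:
print(overall_quality([("critical", "high"), ("important", "low")]))
```

This makes the stakes of the a priori importance ratings concrete: reclassifying a single outcome between critical and important can change which evidence pool drives the overall judgment.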

We report our experiences with the Model Process and GRADE relevant to formulating research questions and selecting outcomes for systematic reviews, and we provide a model for other guideline developers. The creation of evidence-based guidelines is a complex and iterative process that requires numerous steps and judgments. The results of these initial steps will guide completion of the next steps, which include multiple systematic reviews and use of the GRADE framework to evaluate the quality of evidence and formulate recommendations.


Limitations

We acknowledge limitations to our approach and have incorporated strategies to lessen the impact of bias and limitations where possible. There are numerous frameworks for evaluating the quality of evidence and developing evidence-based guidelines.Citation13–22 We are following the Model Process and selected GRADE as our framework for evaluating the quality of evidence and formulating recommendations, as supported by the National Prehospital Evidence-Based Guidelines Strategy.Citation63 The GRADE framework is gaining acceptance in the medical and public health fields as a rigorous tool for EBG development,Citation22,25 and it has been applied to clinical questions within the EMS environment,Citation25–28,30 indicating growing acceptance among EMS researchers and other EMS stakeholders.

A significant component of our protocol is the construction of research questions to guide the systematic reviews. We chose the PICO framework to develop research questions; there is no gold standard for developing research questions, and although PICO is widely used, it has limitations.Citation35 We have no reason to believe that PICO is inappropriate for our purposes, and in our opinion the benefits of using PICO outweigh the potential drawbacks. Research questions built on PICO's four component parts are believed to produce comprehensive literature searches, whereas questions that lack this type of framework risk going unanswered or yielding irrelevant results.Citation35,64,65 Our approach to measuring consensus on PICO question wording is novel and introduces a degree of objectivity to the formative steps of EBG development not yet observed in prior EBG projects.

Some may describe our PICO questions as broad, with explicit comparisons implied rather than plainly stated in the question itself. Online Supplement Appendix B provides a detailed list of the multiple possible comparisons within each PICO question. Numerous comparisons within each PICO question present both opportunities and challenges for our team. One opportunity is the option to explore numerous comparisons of diverse interventions and avoid the need to develop and appraise new PICO questions. A challenge common to all systematic reviews is time and resources; systematic reviews require many months, and in some cases years, to complete. We plan to address the multiple comparisons within each PICO question one by one. The decision on which comparison(s) to address first will be guided by feedback from expert panelists and members of the research team.

The development of research questions, the selection and rating of outcomes, and the eventual creation of EBGs are inextricably linked to the composition of the panelists involved.Citation11 We invited 11 experts to serve on our expert panel, based on individual expertise and knowledge of sleep health, fatigue, emergency medicine, and prehospital EMS. The composition of our panel adheres to the Institute of Medicine's (IOM) Standard 3.1 for Guideline Development Group Composition: “The guideline development group should be multi-disciplinary and balanced, comprising a variety of methodological experts and clinicians, and populations expected to be affected by the guidelines.”Citation11 A different panel and research team comprising different individuals may have produced a different set of PICO questions and proposed and rated a different set of outcomes.


Conclusion

We describe a novel approach to the formulation of research questions and the selection of outcomes for the purposes of systematic reviews and the creation of evidence-based guidelines. It is important to disseminate these findings and outline a planned process for EBG development for the following reasons: 1) to establish fatigue risk management in EMS as a priority; 2) to highlight a novel approach to PICO question development that may serve as a model for others who develop guidelines; 3) to promote transparency; 4) to promote stakeholder involvement, which is key to guideline development; 5) to invite scrutiny from the scientific community; and 6) to benefit the wider research community that applies a model process to issues relevant to occupational and environmental health.

Supplemental material


