Critique of Quantitative Article (Maylone Collab 2010)

Attached is an article (Maylone) to be critiqued, along with a format for this paper (paperoutline_skeleton). Examples from the book, referenced as boxes in the paper requirements below, are also attached.

Guidelines for Critique of Quantitative Article
For the quantitative critique, follow Box 5.2 in your text on pages 112 to 114. Some of the questions may not apply to the study; if a question does not apply, note that it is not applicable. When completing the critique, use Level 1 headings to address the main sections of the critique, such as Title, Abstract, Introduction, Method, etc., and Level 2 headings to address each criterion under the main sections, such as statement of the problem and hypothesis or research question under Introduction. Under Method you will use a Level 2 heading for protection of human rights, research design, population and sample, etc. Using Level 3 headings, address each bulleted question that is appropriate. This should not be in narrative form; rather, restate the question (or a few brief words from it) as a Level 3 heading and then provide your answer. This approach helps to prompt you so that you do not miss any content and also shows where you addressed the content.
Refer to the pages in the box for additional information from your text as well as review the guidelines provided below.
Guidelines
What is a summary? For the purpose of this assignment, a summary is a detailed yet concise and accurate description of the information in the article. When summarizing, consider the logic of the presentation of the information, and follow the order of the information, as the author(s) had a purpose in how it is organized. Do not add information. After completing this assignment, you will be able to appraise a work from its purpose all the way through to its findings. Evidence-based practice requires a critical appraisal of all elements of a study; without each part being sound, the overall findings and implications are inaccurate and do not add to the body of knowledge.

Rules for Critique: Use the guide found in Box 5.2 on pages 112 to 114. In the last column of the guide, you are directed to detailed critiquing guidelines in the chapters.
For additional help to critique research articles, read APA chapter 2 (how to write a report of a study) and chapter 28, p. 682-688 starting with Content of Research Reports.
Before summarizing each section, read in the textbook about the purpose of each step and, following the heading (like Problem Statement), state what a problem statement is according to Polit and Beck.
Introduction
Statement of the Problem
Read about problem statements in the text, p. 82, and Boxes 4.1 and 4.2. Consider the following question: What is the purpose of a problem statement? Start the section by defining a problem statement, then summarize the problem statement. For the critique, did the author(s) achieve the purpose? Use the critique criteria to frame your answer.

Research Purpose/Questions/Hypotheses.
Chapter 4 covers the research purpose, questions, and hypotheses. A research report may have all three, two, or one. Many authors use a research purpose or aim rather than a research question. Hypotheses are used for experimental designs. Consider the design when you critique. If there is no experiment, is a hypothesis appropriate? Not all quantitative studies test hypotheses. Some describe, explore, and explain, rather than predict. A hypothesis is a prediction.

Literature Review

The literature review should prepare the reader for the variables that will be included in the study. By presenting literature as a foundation for the study, the researcher justifies the approach to the study and supports the need to conduct the study. The questions in the guide and box 5.4, p. 122 all apply except for #7 in box 5.4.

Conceptual/Theoretical Framework
Box 6.3, p. 145, covers the questions necessary to guide this critique. Pay close attention to the wording. When a question includes a phrase like “if there is an intervention” and your selected study does not have an intervention, write “not applicable” to note that you considered the question but it is not relevant to the study. The mention of theories and frameworks in the literature review may simply be informational, in the same way that references to studies are used. Unless the authors specify that they are using a theoretical framework (usually this is preceded by a heading), state that there is no framework. If there is none, use the information in the text regarding the purpose of a theoretical framework and, in the critique, address whether or not the lack of a framework is justified. Do not offer a framework if one is not identified by the authors.

Method
Protection of Participants’ Rights
The critique should follow the guidelines in Box 7.3, page 170. Following the order of the questions helps to develop a logical critique. Box 7.1 lists potential benefits and risks for participants. Box 7.2 provides examples of questions for building ethics into a study design. In healthcare, HIPAA is the legislation that assures confidentiality to patients. What governmental authority is responsible for assuring ethical conduct in research? What is the role of institutional review boards (IRBs)?

All studies should assure confidentiality. However, some studies collect data that could be very harmful to the subjects if disclosed. Yet the knowledge is very important. For example, how do pedophiles find children?
Note the difference between confidentiality and anonymity.
Minimally, studies should state IRB approval and informed consent. In addition, consider whether the researchers maximized benefit and minimized harm for the participants, protected privacy/confidentiality, and noted whether the population is vulnerable according to your text.

Research Design
Use the questions in Box 9.1, p. 230. Note that many questions start with “if the study was an RCT.” If your study is correlational, write not applicable and note why.

Example of what might be in the summary: A descriptive correlation design was used. Participants filled out several questionnaires (described under data measurement) and also were interviewed.
Critique: This design is appropriate for the study in that there was no intervention and the purpose was to explore relationships; there was no intent to demonstrate causality. The design is described accurately and in detail. Question #6 in Box 9.1 is critical since this is a non-experimental design. Note that it asks whether the design is retrospective; the other choices are concurrent and prospective. Concurrent means that the data are collected in one time frame, capturing the moment (“How are you feeling today?”). Prospective means that collection starts now and continues into the future. For the heart failure study, the data are retrospective in the sense that the interview and questionnaires focus on the past rather than the present. Read about retrospective designs: how might memory lead to bias?

Whether retrospective is “the best design” depends on the problem being studied. To find out what led to someone being hospitalized, you would have to either look at the record or ask them, so sometimes there is no choice. However, if the study were on nurse-patient communication in the ICU, a concurrent study would be more accurate than a retrospective one, where all you have to go on is what was recorded about communication in the patient’s record. Done concurrently, the study could use videotaping, observation, etc. A prospective study might collect data about care from today until one month from now; the information is collected from the medical record, but the data collectors are able to remind individuals (for example, with posters at the nursing station) to record information on a specific topic. If retrospective, the data may be missing even though the care occurred.

Question number seven (longitudinal) means that data is collected over a long time. If satisfaction data is collected monthly for twelve months, this is longitudinal. Measuring the outcome in an experimental study twice (once at end of the study and 30 days later) would not be considered longitudinal. The risk study would have benefitted by being designed as a longitudinal study. Longitudinal designs can be built into both experimental and non-experimental research; usually they are non-experimental.

Question # 9 refers only to experimental designs.

When a question is asked, do not assume that it applies to your study; also, read what it means. Reading about the advantages and disadvantages of retrospective, concurrent, and prospective studies will allow you to consider what type of study you have AND whether the authors could have strengthened the study by conducting it prospectively rather than retrospectively. Some data CANNOT be collected except in one way, so it is not accurate to state that the authors should have collected the data prospectively if that is not possible. Some studies that examine causality can only use correlation (not experiments) because the independent variable cannot be manipulated (like age or gender, which cannot be assigned) or should not be (it is unethical to assign some people to a smoking group and others to a non-smoking group). In a correlational study, you can look at health outcomes for smokers versus non-smokers.

So basically, for design: Did the researchers use the best design possible to answer the question?

If intervention, how well described? Does it have construct validity?

Population and Sample
Use Box 12.1, p. 289. Was the population identified? Was the sampling plan clear, and is it the best (considering feasibility) to obtain a representative sample? Were inclusion criteria clear, and did the researchers follow them? Were there any exclusion criteria and, if so, were these justified by the author(s)? Was the procedure to obtain the sample adequately described? Was the sample adequately described (how many total, age, gender, etc.)? You can also describe the sample under Findings, since the description of the sample is data, and critique that information there.

Data Collection and Measurement
Boxes 13.3 and 13.4, p. 323, and Box 14.1, p. 347. Among the three boxes, there are plenty of questions to guide the critique. What is important to remember is that you have to select the questions carefully. For example, question #7 in Box 13.3 asks if data collectors were carefully selected for appropriate traits; unless you know what this means (from reading the text), you may not understand the question. If the only data collectors are the researchers, this question becomes irrelevant. For some studies where surveys are done at multiple sites, individuals are hired to do the data collection. Traits include being of the same race as those being surveyed, not only having a pleasant, friendly, safe appearance. Researchers use the best method to assure cooperation; this is not equal opportunity hiring.

The ultimate question is whether the data collected represents the truth or error (bias). The quality of the instruments (validity, reliability, precision, sensitivity etc.) and the quality of the data collection procedures contribute to internal validity.

Strategies to Assure Internal Validity
Use Box 10.1, p. 254. Basically, this section is not a summary but a critique as to whether or not the researcher(s) used adequate strategies to assure internal validity. It touches upon all other sections in the design of the study. When inferential tests are used (tests yielding probability statistics) a power analysis should be done to assure that the sample size is adequate to prevent a Type II error. For experiments, increasing the effect size (making sure that the intervention is strong enough to produce an effect) is a strategy to assure power and therefore affect internal validity.
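As an illustration only (not from the Maylone study or the text), the sketch below shows what an a-priori power analysis computes: the sample size per group needed to detect a given effect. It uses the standard normal approximation for a two-sided, two-group comparison; the effect sizes and function name are hypothetical.

```python
# Hypothetical sketch of an a-priori power analysis (stdlib only).
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample test,
    using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for power = .80
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs about 63 participants per group; a
# small effect (d = 0.2) needs far more, which is why underpowered
# studies risk Type II errors.
print(n_per_group(0.5), n_per_group(0.2))  # 63 393
```

This also illustrates the point about effect size: strengthening the intervention (a larger d) lowers the sample size needed to maintain power.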
Again, pay attention to the questions. Some only apply to intervention studies (experiments). Selection biases affect all studies. The characteristics of the sample affect external validity but the way in which the sample is selected affects self-selection biases which affect internal validity.
If the researchers intend to demonstrate causality, were threats to internal validity controlled by the design? In correlation designs where researchers do intend to demonstrate causality, how did the authors address plausible alternative explanations? This can be done by a logical explanation or by controlling via statistical procedures.
Validity of a study is affected by the validity and reliability of instruments and any bias contributed by the researchers when collecting data.
Results
Data Analysis
The questions in the critique guide and Boxes 16.1 and 17.1 (p. 400, p. 429) should both be used. The research question (aim/purpose) identifies the major relationships that should be analyzed in a study. Usually the data analysis plan will include only those statistical procedures planned at the beginning of the study.

Additional statistical procedures are done to follow up on interesting or unexpected findings or to control for variables. “Appropriate statistical procedure” refers to the level of measurement: nominal, ordinal, etc. For example, a t-test requires one nominal and one interval/ratio variable. If a single item is used (instead of a summed score), the variable is ordinal and a t-test is not appropriate. Pearson correlation requires variables at the interval or ratio level; again, single items on a questionnaire (scored from strongly agree to strongly disagree, or words to that effect) can never be higher than ordinal. Interestingly, multiple regression allows the independent variables to be at any level, though the dependent variable must be interval or ratio.

As for the most powerful analytic method, unless the authors describe why a test was selected, at this point you would not likely know this. For an example of authors who do describe it, see Appendix H, p. 286; on page 301 there is a critique of the data analysis. Those authors presented a less powerful analytic method in order to use change scores, so readers could easily see the difference between groups; however, they used a more powerful test first to be sure that the relationship was present, and they explained this in great detail. If there is no explanation, you will not know unless you check with someone with expertise in statistics. Were Type I and Type II errors minimized? This is all about control and minimizing error. Excellent tools that measure concepts precisely minimize Type II errors. Type I errors are minimized by controlling for alternative explanations: the relationship is due to the independent variable (the intervention in an experimental design, or “causative” variables in correlation). Minimizing selection bias is another way to minimize Type I errors. If the authors did a power analysis, they were attempting to decrease Type II errors.

Intent-to-treat designs are considered the gold standard for randomized trials; basically, they deal with the issue of attrition. In most experimental designs, if someone drops out, their data are ignored. The sample decreases in size and researchers attempt to explain away the biases that may occur due to attrition; each person lost erodes the benefits of randomization. Intent-to-treat designs keep all original participants and, through a complex statistical procedure, create “imputed” data for those individuals. In other words, the data are estimated based on the probability of what they would have been. This is considered preferable because in real life there will always be people who do not finish the treatment; this way, the results look more like the effect you could expect if the treatment were used in practice.
Missing values occur when some participants leave some questions unanswered. The researchers may describe how they handled missing values. As long as there is minimal missing data, the authors may replace a missing answer with the mean for the group. When more than a minimal amount of information is missing, a code is entered to indicate that the subject did not answer the question, and for that analysis the participant is dropped. If you ever see a table where the n (number of subjects) changes from variable to variable, this means that some cases have been dropped for missing information. For question 5 in Box 16.1, if the study used a risk index, include it (i.e., false positives versus false negatives); if not, note that it is not applicable. From Box 17.1, p. 429, address each question; if one is not applicable, note this in your critique.
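The two common handling strategies described above can be sketched with a small made-up example (the response values are hypothetical, not from the study): mean substitution for minimal missing data, and listwise deletion, which is why n shrinks from variable to variable in a table.

```python
# Hypothetical sketch: two ways of handling missing questionnaire data.
from statistics import mean

responses = [4, 5, None, 3, 4, None, 5]  # None marks an unanswered item

# Mean substitution: replace each gap with the mean of the observed answers.
observed = [r for r in responses if r is not None]
imputed = [r if r is not None else mean(observed) for r in responses]

# Listwise deletion: analyze only complete responses, so the n for this
# variable drops from 7 to 5 -- the reason n can vary across a table.
n_complete = len(observed)

print(n_complete, round(mean(observed), 1))  # 5 4.2
```

Mean substitution keeps the sample size intact but shrinks variability, which is why it is defensible only when the amount of missing data is minimal.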
Findings
From Box 5.2: basically, did the authors present the major findings for the study? These are the findings that reflect the purpose of the study. Did they use tables to help you understand the description in the text of the article? Did they make it clear when they did additional analyses?
Regarding the question about meta-analysis. Meta-analysis is when fairly complex statistics are used to analyze the results of many studies based solely on the findings. In other words, what is entered into the formula are the findings from many studies found via literature review. There is no access to the original data.
The findings should be clear and interesting to read. There should be clear organization so you can follow. If they provide too much information without providing some explanation or reminder as to how it fits (the model, for example), it can be overwhelming and confusing. You want to “have a sense for the whole” picture – did the findings help the reader to achieve this?
Interpretation of Findings
The major challenge in this section will be to clearly separate interpretation from implications and recommendations. Interpretation means that the authors discuss their findings in light of the theory/model used (if there is one) AND prior findings. Most often, the articles used in the literature review reappear; however, it is also common to bring in new literature. Interpretation asks: what do the findings mean? This is clearest when the findings are not what was expected, and the authors then attempt to explain why: perhaps a faulty instrument, an instrument that did not apply to the age group, an intervention that was not strong enough, etc. The questions in the basic guide and the boxes are fairly self-evident.

Part of interpretation is identifying the limitations of the study. If the expected relationship(s) is not significant, the authors may state that this is likely due to the small sample size; the sample size is then a limitation. Limitations are also related to external validity: the findings (even if strong) can only be generalized to individuals similar to those in the study. So your critique should focus on whether or not the authors adequately explained the findings, both expected and unexpected. Did they identify problems with the study? Box 19.1 includes both interpretation and implications/recommendations (p. 482). The last question under interpretation can be placed either in interpretation or in implications/recommendations; it is related to external validity.
Did the researchers accurately identify the limitations and provide implications and recommendations within those constraints?

Implications and Recommendations
Implications are logical deductions from the interpretation of findings: if this, then that. For example, in a study exploring women’s experiences of infertility where the findings note that women report feeling hollow and empty, and that people are judgmental and insensitive to their feelings regarding children, the authors can state that, based on these findings, health professionals are challenged to review their attitudes about patients. They are not recommending this; they are pointing out that if this is true, then it should be considered. Recommendations are much stronger statements. Recommendations can be for practice, education, or research; usually there are at least recommendations for research. If the authors have a specific section called implications/recommendations, it makes your work easier. However, implications may also be threaded through the interpretation. You do not have to pick them all out; you could state that although there is a section for implications, others were interspersed in the interpretation that were not brought up in the final section.
The strongest criticism is if the authors state implications and recommendations not warranted by the findings. Consider a study in which the findings are very weak, though the literature review led to expectations that the intervention would have a much more powerful effect. If the authors then make practice recommendations based on the strength of the literature review in spite of the weak findings of their own study, this is not sound. It is important that the implications flow from the findings, which flow from the study design; weaknesses or a lack of sound approaches in design or validity will impact the quality of the overall study.
Global Issues
Presentation
Researcher Credibility
Summary Assessment
References (use APA format)
Up to 15% of the grade can be deducted for APA and grammar. For the purpose of this assignment, provide a title page per APA guidelines (running head, page numbers, and headers on each page) and use Level 1-3 headings. Each bulleted question or group of questions is worth 1 pt.; the paper will be scored out of 49, converted out of 100, and entered in the grade book. For example, the Introduction section is worth 12 of the 49 points.
Try to address each bulleted question concisely and directly, referencing the study as needed. Be thorough but not excessively wordy; provide your answer and support it with material from your textbook or the study when appropriate. Please do not submit 20-page narratives without the three levels of headings – they will be returned with a 48 request for re-submission – please aim for a 12-page maximum.
Please use the template for the quantitative assignment to submit and complete your work.
