After the trial, what next?
Working with children and families experiencing difficulty and disadvantage is extremely challenging, most often because the problems and the behaviours we try to change are caused by many different interacting factors. These problems should be addressed by interventions with proven effectiveness, but collecting evidence of this effectiveness is not easy.
Usually, interventions in healthcare are evaluated by randomised controlled trial. This approach compares the most important outcome or outcomes in people randomly allocated to receive the intervention with those in people who were not. It is the best way to see whether the intervention has any impact on these outcomes, and to make sure that any changes are not due to natural change over time or to something else we may not have measured.
One of the problems with evaluating the effectiveness of these interventions is that they are as complex as the behaviours they are trying to change; in each intervention there are often many moving parts, and a myriad of pressures, which interact to affect all the outcomes.
For example, the process of changing eating patterns so that people have a healthier diet depends on many things. It depends on having the right information delivered in the right way by the right person so that it is acceptable. It depends on contextual factors that shape behaviour, such as the habits of family and peers and the external influences of advertising and media. It depends on the environment we live in, for example the availability of and access to healthy foods. An intervention might try to affect one or more of these contributing mediators of healthy eating, but usually we only measure the outcome – whether or not healthy eating has been achieved.
While outcome data is essential to understanding how valuable an intervention might be, it tells only part of the story we need in order to understand what works for whom, where and why. This understanding is crucial to finding out why an intervention worked, if it did, and, just as importantly, why not if it didn't. An intervention that showed no positive effect could be genuinely ineffective, or it could have been delivered poorly – for instance, to the wrong population or for the wrong length of time.
Additionally, outcome data alone will not tell us how the people receiving the intervention coped with, or resisted, the influence of their peers, their environment and other outside pressures while changing their behaviour. We need this information to understand how the intervention might be helpful in other circumstances or for different groups of people, and perhaps to evolve the intervention into something even more effective.
To understand what works for whom, where and why, researchers often use qualitative approaches, inviting participants to an interview in which they discuss their experience in their own words. Researchers might also record how much of the intervention was delivered, in terms of content or duration, and find a way to capture how closely what was delivered resembled the original design (fidelity) – and, if it deviated, how and why.
The randomised controlled trial of FNP in England has clearly told us that for mothers aged 19 or under at their last menstrual period, FNP has no effect on birth-weight, smoking cessation in pregnancy, the number of emergency admissions to A&E, or the prevalence of subsequent pregnancies 24 months after the birth of the first child. It has also told us that there are no differences in the effect depending on whether mothers: are younger or older than 16; are in education, employment or training (or not); have better or worse adaptive function; or are experiencing higher or lower levels of poverty. We also know that the intervention was delivered well in terms of content and duration, and that what was delivered closely resembled the original model.
This tells us that the model as it stands is not working as anticipated for these outcomes, but it does not tell us why. We do not know whether there are some components of FNP that were working well. For example, how important is client engagement and the therapeutic relationship? How do FNP clients interact with the FNP programme materials?
We know from the trial that FNP was appreciated and enjoyed by the young mothers who received it, but it appears to have had no impact on parenting quality outcomes such as anticipatory parenting, attachment, parental strain, maternal sensitivity, or child mood and responsiveness. Yet the group receiving FNP did show improvement in children's language development at 12 and 18 months, and in cognitive development at 24 months. How can parenting not be improved while children's language and cognitive development show benefit? The trial does not tell us how or why these results occurred – where is this benefit coming from?
While this trial is well placed to tell us that FNP does not have the effects we thought it might, it does not help us to understand why. Without such data, the mechanisms through which the intervention achieves its effects remain encased in a black box, and planning its next steps poses a considerable challenge.