
On-farm research: Trial demonstrates importance of a good design

John Thomas, University of Nebraska-Lincoln; Sara Berg, South Dakota State University; Josh Coltrain, Kansas State University; Lizabeth Stahl, University of Minnesota

Figure 1. Pinto bean planting method comparison: 90,000 plants/acre in 30-inch rows (left) vs. 120,000 plants/acre in 7.5-inch rows (right).
The crop season is upon us, and producers across the state have been planting and getting their crops established. Farmers are interested in knowing what works best, yields the most, and, especially, what is most profitable during these tight economic times. Some may want to compare products or practices on their own farm or look at information from other farms or industry studies.

Tips for conducting on-farm research are outlined in the U of MN Extension fact sheet “How to do research on your farm”. The following is a real-life example that highlights the importance of two key factors (randomization and replication) in conducting useful on-farm research.

How should a basic study be set up and laid out in the field? One very common approach is to divide a field in half and compare the halves, or to compare two fields in close proximity, and see which variety or practice yields highest. This approach can produce very misleading results because of the variability that exists across a field, or between fields, due to many factors. Sources of variability include the following:
  • Soil type
  • Topography
  • Management practices
  • Drainage
  • Pesticide residues
  • Disease pressure
  • Compaction
  • Weather events

Just as you can count on yield monitor results not remaining constant across a field, you can count on sources of variability affecting study results if you simply split a field in half or compare fields across the road from each other.

A better approach, which provides a better estimate of a treatment's future performance, is to put out replicated studies with random placement of treatments within each replication. This simply means that the same treatment is put out more than once across the study area, so you can be sure treatment performance is not driven by location in the field. Three to six replications are common in most agricultural studies; the more replications, the more reliable the results of a given comparison will be. Repeating the replicated comparison for more than one year is also a good idea, since testing performance over more environments leads to stronger conclusions about real differences between treatments.

As an example, an on-farm trial completed in 2016 is described below, showing how replication affected the results. This study compared two systems commonly used for planting pinto beans in Nebraska. The treatments were replicated six times with random placement. One treatment was pinto beans planted in 30-inch rows at a population of 90,000 plants per acre; the second was pinto beans planted in 7.5-inch rows at 120,000 plants per acre (Figure 1). This was a large field trial, with each treatment plot measuring 60 feet wide by 1,400 feet long. The randomization was as follows:

Rep 1: 7.5" | 30"
Rep 2: 30" | 7.5"
Rep 3: 7.5" | 30"
Rep 4: 7.5" | 30"
Rep 5: 30" | 7.5"
Rep 6: 30" | 7.5"

The average yields from the six replications were as follows: the 7.5-inch treatment at 120,000 plants per acre yielded 52 bu/ac, and the 30-inch treatment at 90,000 plants per acre yielded 44 bu/ac, so the 7.5-inch treatment yielded 8 bu/ac more than the 30-inch treatment. When the yield data were analyzed statistically (at the 0.05 probability level), the yields were significantly different, with a least significant difference (LSD) of 2 bu/ac. This means that, given the variability within the study, a yield difference of less than 2 bu/ac would not be considered evidence of a real treatment difference.
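As a rough sketch of how a least significant difference is calculated for a two-treatment trial like this, the Python example below runs the standard paired comparison on hypothetical per-replication yields. The article reports only the treatment means and the 2 bu/ac LSD, so the individual yields here are invented placeholders chosen to be roughly consistent with those reported values:

    import numpy as np
    from scipy import stats

    # Hypothetical per-replication yields (bu/ac) -- placeholders for illustration only
    narrow = np.array([50.2, 53.8, 51.5, 53.0, 52.0, 51.5])  # 7.5-inch rows, mean 52
    wide   = np.array([45.0, 43.0, 44.5, 44.0, 44.0, 43.5])  # 30-inch rows, mean 44
    reps = 6

    # With two treatments in a randomized complete block design, the analysis
    # reduces to a paired comparison of within-replication differences
    diff = narrow - wide
    mean_diff = diff.mean()

    # The error mean square equals half the variance of the paired differences
    mse = diff.var(ddof=1) / 2
    df_error = (2 - 1) * (reps - 1)  # (treatments - 1) x (replications - 1) = 5

    # LSD at the 0.05 level: t * sqrt(2 * MSE / replications)
    t_crit = stats.t.ppf(0.975, df_error)
    lsd = t_crit * np.sqrt(2 * mse / reps)

    print(f'Difference between treatment means: {mean_diff:.1f} bu/ac')
    print(f'LSD (0.05): {lsd:.1f} bu/ac')
    print('Significant' if abs(mean_diff) > lsd else 'Not significant')

Because the 8 bu/ac difference between the treatment means is larger than the 2 bu/ac LSD, the difference is judged to be real rather than an artifact of field variability.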

Figure 2. Change in the yield advantage of the 7.5-inch treatment when compared in a split-field layout versus a replicated, randomized field layout. An early August hail storm caused greater damage on one half of the field. For the split-field comparisons, like treatments were averaged together on the heavily hailed half and on the lightly hailed half to obtain the yield averages above.
During early August a hail storm damaged the field, with the most significant damage occurring on the half of the field containing replications 4, 5, and 6. If the field had simply been split, with one treatment on each side, the results would have looked quite different. If we lump together the 7.5-inch treatments from the heavily hailed side of the field, we find an average yield of 49 bu/ac; lumping together the 30-inch treatments on the side with minimal hail gives an average yield of 45 bu/ac. That is a difference between treatments of 4 bu/ac, half the difference detected by the full, replicated trial. Conversely, if the 30-inch treatments had been on the side of the field that received the most hail, yield for that treatment would have been 43 bu/ac, while yield for the 7.5-inch treatment on the side receiving minimal hail would have been 54 bu/ac, a difference of 11 bu/ac (Figure 2).
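For readers who want to trace the arithmetic, the short Python sketch below reproduces these comparisons from the half-field averages reported above and in Figure 2 (the variable names are mine; the numbers are the article's):

    # Treatment means and half-field averages reported in the article (bu/ac)
    narrow_mean, wide_mean = 52, 44        # replicated, randomized trial means
    narrow_hailed, narrow_light = 49, 54   # 7.5-inch rows on the heavily / lightly hailed half
    wide_hailed, wide_light = 43, 45       # 30-inch rows on the heavily / lightly hailed half

    # Replicated design: each treatment mean includes plots from both halves of the field
    replicated_diff = narrow_mean - wide_mean   # 8 bu/ac

    # Split-field scenario A: narrow rows happen to sit on the heavily hailed half
    split_a = narrow_hailed - wide_light        # 49 - 45 = 4 bu/ac

    # Split-field scenario B: wide rows happen to sit on the heavily hailed half
    split_b = narrow_light - wide_hailed        # 54 - 43 = 11 bu/ac

    print(f'Replicated design:                    {replicated_diff} bu/ac advantage for 7.5-inch rows')
    print(f'Split field, 7.5-inch on hailed half: {split_a} bu/ac advantage')
    print(f'Split field, 30-inch on hailed half:  {split_b} bu/ac advantage')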

Clearly, spreading the six replications across the field gave a more accurate estimate of the impact of these systems on yield than splitting the field in half would have. In all three layouts the 7.5-inch treatment yielded the most, but the split-field design either exaggerated or diminished its yield advantage depending on which treatment was exposed to the heavier hail damage (Figure 2). Poorly laid out field studies can generate misleading data and lead to incorrect conclusions; keep this in mind when you are looking at data from other studies you encounter. With modern GPS guidance, it is relatively easy to put in replicated, randomized studies, even for large field-scale comparisons.