Below is some information about different research projects I've worked on. An * indicates a project to which I contributed as a statistician but on which I am not an author.
An Empirical Study of Punitive Damages*
University of Oxford, United Kingdom
Authors: James Goudkamp and Eleni Katsampouka
Citation: Goudkamp, J. and Katsampouka, E. (2017), An Empirical Study of Punitive Damages. Oxford Journal of Legal Studies: 1-33. doi: 10.1093/ojls/gqx013
This article reports and discusses the results of an empirical study of punitive damages. It examines 146 claims that were decided in all parts of the UK (save for Scotland, which does not recognise punitive damages) by first instance courts in the first 16 years of the twenty-first century. The study is the first of its kind to be conducted in the UK. In the morass of data, important evidence is uncovered regarding punitive damages. Our most significant findings include (1) that punitive damages (when claimed) are awarded reasonably regularly, (2) that the average award of punitive damages is relatively modest, (3) that there is considerable uniformity in terms of the size of punitive damages awards, and (4) that actions for defamation are unlikely to constitute an important source of punitive damages awards. These (and other) findings cast considerable doubt upon widely held views regarding punitive damages.
Contributory Negligence in the Twenty-First Century: An Empirical Study of First Instance Decisions*
University of Oxford, United Kingdom
Authors: James Goudkamp and Donal Nolan (Faculty of Law)
Citation: Goudkamp, J. and Nolan, D. (2016), Contributory Negligence in the Twenty-First Century: An Empirical Study of First Instance Decisions. The Modern Law Review, 79: 575–622. doi: 10.1111/1468-2230.12202
In this article we report the results of an empirical study of 368 first instance decisions on the contributory negligence doctrine handed down in England and Wales between 2000 and 2014. The two central questions at which we looked were: how often a defendant's plea of contributory negligence was successful; and by how much a claimant's damages were reduced when a finding of contributory negligence was made. We also considered the extent to which the answers to these questions depended on the following variables: the claimant's age; the claimant's gender; the type of damage suffered by the claimant; the contextual setting of the claim; and the year of the decision. Our study uncovered several important truths about the contributory negligence doctrine hidden in this mass of case law, some of which cast significant doubt on the accuracy of widely held views about the doctrine's operation.
Dynamic time warping for assessment of inter-rater reliability for annotations of wearable camera data
University of Oxford, United Kingdom
Mentor: Órlaith Burke (Nuffield Department of Population Health)
Wearable cameras are increasingly being used in research as a means to gather more robust information about people's daily activity levels. However, the quantity and complexity of the data produced by these cameras require an image annotation scheme that is sufficiently descriptive yet can be implemented consistently by multiple researchers. To assess whether the image annotation protocol produces consistent annotations, inter-rater reliability (IRR) needs to be measured. Traditional methods of assessing IRR, such as Cohen's kappa, do not exploit the unique features of the image annotation data, so a new method is needed.
In this report we implement a dynamic time warping (DTW) algorithm with the longest common prefix (LCP) distance metric to assess IRR for annotations of images produced by wearable cameras. Two raters each produced image annotations for twelve study participants. We use DTW with four different step patterns to align the two time series and calculate the normalized distance for each warping path. This normalized distance is used as the new metric of inter-rater reliability. We also implement DTW for randomly simulated annotations in order to provide a baseline normalized distance to compare against the normalized distances for our observed data. We conclude that DTW with the LCP distance metric and Rabiner-Juang type 5 step pattern is the most appropriate method to analyze IRR for this type of data. It represents an improvement upon traditional measures of IRR because it utilizes the time series characteristics of the data and the hierarchical nature of the categories in the image annotation protocol.
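The alignment step can be sketched in a few lines of Python. This is a minimal illustration rather than the report's implementation: it uses the basic symmetric step pattern instead of the Rabiner-Juang type 5 pattern, and the dot-separated annotation codes are hypothetical stand-ins for the hierarchical categories of the actual protocol.

```python
import math

def lcp_distance(a, b):
    """Distance between two hierarchical annotation codes written as
    dot-separated paths (hypothetical codes, e.g. "indoor.kitchen.cooking").
    The longer the shared prefix, the smaller the distance."""
    pa, pb = a.split("."), b.split(".")
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return 1.0 - common / max(len(pa), len(pb))

def dtw_normalized(seq1, seq2, dist=lcp_distance):
    """DTW with the basic symmetric step pattern; returns the accumulated
    cost along the optimal warping path, normalized by the sum of the
    two sequence lengths."""
    n, m = len(seq1), len(seq2)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq1[i - 1], seq2[j - 1])
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m] / (n + m)
```

Two identical annotation sequences give a normalized distance of 0; a distance approaching the baseline computed from randomly simulated annotations indicates poor agreement between raters.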
Modeling feral cat dynamics in Knox County, TN
National Institute of Mathematical and Biological Synthesis, Knoxville, TN
Mentors: Suzanne Lenhart, John C. New (University of Tennessee)
Additional co-authors: Nick Robl, Alice M. Bugman, An T.N. Nguyen, Bridgid Lammers, Teresa L. Fisher, Heidi Weimer
Feral cats (Felis catus) are recognized as a problem internationally due to their negative impact on wildlife and their potential to spread infectious disease to people and other animals. Trap-neuter-return (TNR) programs are employed in many areas as a humane method of controlling feral cat populations, and this approach is used on a limited basis in Knox County, Tennessee. Despite the frequent use of TNR as a strategy, its effectiveness remains controversial. The objective of this mathematical model is to predict the impact of selected strategies on the population of feral cats. The model, with three age classes, predicts the population over a period of five years in one-month time steps. TNR rates are varied to investigate the effects of targeting spay/neuter programs seasonally, and such targeting predicts a measurable decline in feral cat population growth over a five-year period. Targeting TNR intervention at adult females during the time prior to mating season in highly populated feral colonies may further decrease the population. These results suggest a more efficacious strategy than non-targeted TNR programs.
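A stage-structured projection of this kind can be sketched as follows. All rates below are hypothetical placeholders (the actual model's parameters were estimated for Knox County), births are confined to an assumed spring breeding window, and the TNR rate is treated simply as the fraction of that month's births prevented, which is a simplification of how sterilization actually accumulates in a colony.

```python
def step(pop, month, tnr_rate):
    """Advance (kittens, juveniles, adults) by one month.
    Survival, maturation, and litter rates are hypothetical."""
    kittens, juveniles, adults = pop
    breeding = month in (3, 4, 5)            # assumed spring breeding window
    litter_rate = 1.5 if breeding else 0.0   # kittens per adult female per month
    births = adults * 0.5 * litter_rate * (1.0 - tnr_rate)  # 0.5 = female fraction
    return (births,                            # new kittens
            kittens * 0.5,                     # kittens surviving to juvenile
            juveniles * 0.8 + adults * 0.95)   # maturation + adult survival

def project(pop, months, tnr_schedule):
    """Run the model for `months` one-month steps; tnr_schedule maps a
    calendar month (1-12) to that month's TNR intensity."""
    for t in range(months):
        month = t % 12 + 1
        pop = step(pop, month, tnr_schedule(month))
    return pop
```

Comparing a flat schedule (`lambda m: 0.15`) against one that concentrates the same annual effort in the breeding months (`lambda m: 0.6 if m in (3, 4, 5) else 0.0`) over a five-year horizon illustrates the targeting effect: the seasonal schedule suppresses more births and leaves a smaller projected population.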
Event detection using natural language processing
Vanderbilt University Medical Center, Nashville, TN
Mentors: Jesse Ehrenfeld, Paul St. Jacques (VUMC)
Integrated into every patient's chart are checklists that anesthesiologists use to indicate specific events within a surgery, including adverse events. Studies show that in anesthesiology 27% of cases involve a non-routine event, but voluntary reporting of such events is low (Weinger et al. 2003). Voluntary incident reporting detects only 1.5% of adverse events and only 6% of adverse drug events (Murff 2003). Anesthesiologists can also enter free text on a patient's chart post-operatively. We hypothesize that we can detect a greater number of adverse events by using natural language processing tools in Python that search for specific key phrases and language structures indicating that a certain event has occurred. We aim to discover the true rate of occurrence of a specific adverse event in anesthesiology at Vanderbilt University and to quantify the effectiveness of the checklist. We began this research by looking for instances of difficult intubation, retrospectively analyzing the comment fields of patient charts at Vanderbilt University for two sample weeks in June 2012, totaling approximately 5,500 case comments. On a sample of 1,800 comments, the Python program identified all 18 comments discussing difficult intubation. On larger samples, the program both misses some relevant comments and returns some comments that do not concern difficult intubation. Our analysis supports the original hypothesis that difficult intubation is underreported: although the NLP program needs improvement to remain accurate on larger datasets, the majority of cases that mentioned difficult intubation in the case comments did not report it explicitly in the structured checkboxes.
Despite 88.9% of these cases explicitly reporting multiple attempts at intubation, only 16.7% were listed as “difficult,” and 44.4% were listed as “easy.” The information from this report could be used to implement a system for the operating room that reminds medical personnel to report difficult intubations in the checkboxes.
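The core of such a detector can be sketched with simple pattern matching. This is only an illustration: the phrases below are hypothetical stand-ins for the tuned phrase list used in the study, and the `comment`/`checked` record fields are invented names for the free-text and checkbox data.

```python
import re

# Hypothetical key phrases suggesting a difficult intubation event.
DIFFICULT_INTUBATION_PATTERNS = [
    r"difficult\s+(airway|intubation)",
    r"multiple\s+attempts?\s+(at\s+)?intubat",
    r"(unable|failed)\s+to\s+intubate",
]
_compiled = [re.compile(p, re.IGNORECASE) for p in DIFFICULT_INTUBATION_PATTERNS]

def flags_difficult_intubation(comment):
    """Return True if any key phrase in the free text suggests the event."""
    return any(p.search(comment) for p in _compiled)

def underreported_cases(cases):
    """Cases whose free text flags the event but whose checkbox does not.
    Each case is a dict with hypothetical keys 'comment' and 'checked'."""
    return [c for c in cases
            if flags_difficult_intubation(c["comment"]) and not c["checked"]]
```

Running `underreported_cases` over the case comments and comparing the hits against the structured checkbox field gives a direct estimate of the underreporting rate.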
Fitting the eigensolution of compartment models
Oak Ridge National Laboratory, Oak Ridge, TN
Mentors: Richard C. Ward, Keith F. Eckerman (ORNL)
The radiation dosimetry community uses models to describe the fate of inhaled or ingested radionuclides. Such exposures can result in cell death and cell transformations that depend strongly on the behavior of the radionuclide within the body. In some instances, medical attention requires that such analyses be performed quickly. An example of such software is Integrated Modules for Bioassay (IMBA), developed in the United Kingdom to interpret urine and fecal bioassay samples. The software lacks a differential equation solver and instead uses a function to fit solutions to the equations. Eigenanalysis software produced by Killough and Eckerman can be used to find the exact solution, but solutions often include ten or more exponential terms in linear combination. The IMBA software can handle only up to ten terms, so solutions with more terms must be reduced for use in IMBA. The purpose of this research is to develop a method to solve compartment models describing physiological and chemical processes and to fit the numerical solution with a reduced number of exponential terms. This will enable the models to be computed faster than with the original approach. The exact solution of a plutonium biokinetic compartment model, which contains up to eighteen terms, will be reduced to determine the optimum functional fit for the IMBA program. The ten-term solution set, determined by an automated non-linear curve-fitting algorithm in Mathematica, will fit the exact solution curves as closely as possible. We found that Mathematica provides a simpler data-fitting method than the trial-and-error approach previously used, but several problems were encountered. The difficulty of determining which fitting method Mathematica selects leads to poor fits that have yet to be explained, and the time required to run the fitting algorithm on some compartments is too long to be efficient. These problems may make the program unsuitable for its original purpose.
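One piece of structure in this fitting problem is worth noting: once the decay rates of an exponential sum are fixed, the best-fit amplitudes solve a purely linear least-squares problem. The sketch below (pure Python, not the Mathematica workflow used in the project) illustrates that linear sub-step; the rate and amplitude values in the example are hypothetical, not taken from the plutonium model.

```python
import math

def exp_sum(amps, rates, t):
    """Evaluate a_1*exp(-k_1 t) + ... + a_n*exp(-k_n t)."""
    return sum(a * math.exp(-k * t) for a, k in zip(amps, rates))

def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_amplitudes(rates, ts, ys):
    """With the decay rates held fixed, find amplitudes minimizing the
    squared error via the normal equations (A^T A) a = A^T y."""
    A = [[math.exp(-k * t) for k in rates] for t in ts]
    n = len(rates)
    AtA = [[sum(A[i][r] * A[i][c] for i in range(len(ts)))
            for c in range(n)] for r in range(n)]
    Aty = [sum(A[i][r] * ys[i] for i in range(len(ts))) for r in range(n)]
    return _solve(AtA, Aty)
```

Reducing an eighteen-term solution to ten terms then only requires a non-linear search over the ten retained rates, with the amplitudes recovered linearly at each step, which is the idea behind variable-projection-style exponential fitting.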
Patterns at the smallest scale: fractal analysis of the lung and modeling of nanoparticle clearance
Oak Ridge National Laboratory, Oak Ridge, TN
Mentor: Richard C. Ward (ORNL)
This paper explores the accuracy of a fractal-based model for the airway system of the human lung and the clearance of nanoparticles within that system. The purpose of this research is to create realistic lung models based on fractals and inhalation kinetics so that nanoparticle deposition and clearance can be visualized and understood. Lung diameter data are analyzed using surface-area-maximization rules from previous lung-branching models. The diameter data are then displayed on a lung branching tree generated with the Lindenmayer-system method of producing fractals. The particle clearance model derived by the International Commission on Radiological Protection is built in compartment form in JDesigner from the Systems Biology Workbench and then analyzed for its possible application to modeling particles at the nano-scale. This research has the potential to aid the development of inhaled nanoparticle drug-delivery systems, which could target diseased regions by exploiting regional deposition modeling and nanoparticles' natural ability to diffuse through a variety of membranes.
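The string-rewriting core of a Lindenmayer system is compact enough to sketch directly. The grammar below is a hypothetical dichotomous-branching rule, not the paper's actual production set, and the 2^(-1/3) diameter scaling per generation is the classic surface-area-motivated ratio, assumed here purely for illustration.

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite the axiom string with the production rules;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical grammar: F = growing tip, G = completed airway segment,
# [ ] = push/pop a branch, +/- = rotations (for a turtle renderer, not shown).
AIRWAY_RULES = {"F": "G[+F][-F]"}

def airway_diameter(generation, d0=1.8):
    """Airway diameter (cm) at a given generation under the assumed
    2**(-1/3) per-generation scaling; d0 is a nominal tracheal diameter."""
    return d0 * 2 ** (-generation / 3)
```

Each rewrite doubles the number of growing tips, so three rewrites of the axiom `"F"` yield 2^3 = 8 tips, mirroring the dichotomous branching of the airway tree; a turtle-graphics pass over the final string, with segment widths taken from `airway_diameter`, produces the rendered branching model.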