Lindsay Lee

Policy || Statistics || Disability

Welcome to my site! I am currently working with the World Health Organization in the Blindness and Deafness Prevention, Disability and Rehabilitation Unit. I completed my Master of Public Policy and MSc in Applied Statistics at the University of Oxford as a Rhodes Scholar. I have mostly been involved with disability advocacy and mathematical analysis in public policy and law. Ultimately, I want to use the power of mathematics to analyze and advocate for public policies that benefit the most marginalized among us.

Research

Below is some information about different research projects I've worked on. An * indicates a project to which I contributed as a statistician but am not a named author.


Contributory Negligence in the Twenty-First Century: An Empirical Study of First Instance Decisions*

University of Oxford, United Kingdom

Authors: James Goudkamp and Donal Nolan (Faculty of Law)

Citation: Goudkamp, J. and Nolan, D. (2016), Contributory Negligence in the Twenty-First Century: An Empirical Study of First Instance Decisions. The Modern Law Review, 79: 575–622. doi: 10.1111/1468-2230.12202

Abstract:

In this article we report the results of an empirical study of 368 first instance decisions on the contributory negligence doctrine handed down in England and Wales between 2000 and 2014. The two central questions at which we looked were: how often a defendant's plea of contributory negligence was successful; and by how much a claimant's damages were reduced when a finding of contributory negligence was made. We also considered the extent to which the answers to these questions depended on the following variables: the claimant's age; the claimant's gender; the type of damage suffered by the claimant; the contextual setting of the claim; and the year of the decision. Our study uncovered several important truths about the contributory negligence doctrine hidden in this mass of case law, some of which cast significant doubt on the accuracy of widely held views about the doctrine's operation.
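For readers curious what the statistical side of such a study looks like, below is a minimal sketch (not the study's actual code) of the kind of tabulation the abstract describes, assuming a hypothetical CSV of coded first instance decisions with invented column names.

```python
# Minimal sketch of the study's central tabulations, assuming a
# hypothetical file "contributory_negligence_decisions.csv" with columns:
# plea_successful (bool), reduction_pct (float), claim_context (str).
import pandas as pd

cases = pd.read_csv("contributory_negligence_decisions.csv")

# How often was the contributory negligence plea successful?
success_rate = cases["plea_successful"].mean()
print(f"Plea success rate: {success_rate:.1%}")

# Among successful pleas, how much were damages reduced, by claim context?
reductions = (
    cases[cases["plea_successful"]]
    .groupby("claim_context")["reduction_pct"]
    .agg(["count", "mean", "median"])
)
print(reductions)
```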

Dynamic time warping for assessment of inter-rater reliability for annotations of wearable camera data

University of Oxford, United Kingdom

Mentor: Órlaith Burke (Nuffield Department of Population Health)

Abstract:

Wearable cameras are increasingly being used in research as a means to gather more robust information about people’s daily activity levels. However, the quantity and complexity of the data produced by these cameras require the development of an image annotation scheme that is sufficiently descriptive yet can be implemented consistently by multiple researchers. To analyze the ability of the image annotation protocol to produce consistent annotations, inter-rater reliability (IRR) needs to be assessed. Traditional methods of assessing IRR, like Cohen’s kappa, do not utilize the unique features of the image annotation data, so a new method is needed.

In this report we implement a dynamic time warping (DTW) algorithm with the longest common prefix (LCP) distance metric to assess IRR for annotations of images produced by wearable cameras. Two raters each produced image annotations for twelve study participants. We use DTW with four different step patterns to align the two time series and calculate the normalized distance for each warping path. This normalized distance is used as the new metric of inter-rater reliability. We also implement DTW for randomly simulated annotations in order to provide a baseline normalized distance to compare against the normalized distances for our observed data. We conclude that DTW with the LCP distance metric and Rabiner-Juang type 5 step pattern is the most appropriate method to analyze IRR for this type of data. It represents an improvement upon traditional measures of IRR because it utilizes the time series characteristics of the data and the hierarchical nature of the categories in the image annotation protocol. 
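As a rough illustration of the approach, here is a minimal sketch of DTW with an LCP-style distance over hierarchical annotation codes. It uses the basic symmetric step pattern rather than the Rabiner-Juang type 5 pattern from the report, and the annotation codes and rater sequences are invented.

```python
# DTW with a longest-common-prefix (LCP) distance, assuming annotations
# are hierarchical codes such as "7;3;1" (level; sub-level; sub-sub-level).
import numpy as np

def lcp_distance(a, b):
    """1 minus the fraction of leading hierarchy levels the codes share."""
    xs, ys = a.split(";"), b.split(";")
    depth = max(len(xs), len(ys))
    shared = 0
    for x, y in zip(xs, ys):
        if x != y:
            break
        shared += 1
    return 1.0 - shared / depth

def dtw_normalized_distance(s, t, dist=lcp_distance):
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(s[i - 1], t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Normalize by combined length so raters with longer annotation
    # streams remain comparable.
    return D[n, m] / (n + m)

rater1 = ["1;2", "1;2", "3;1;4", "3;1;4", "2;2"]
rater2 = ["1;2", "3;1;4", "3;1;4", "3;1;2", "2;2"]
print(dtw_normalized_distance(rater1, rater2))
```

A smaller normalized distance indicates closer agreement between the two raters; the LCP distance gives partial credit when raters agree on the higher levels of the annotation hierarchy but diverge at finer ones.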

Modeling feral cat dynamics in Knox County, TN

National Institute of Mathematical and Biological Synthesis, Knoxville, TN

Mentors: Suzanne Lenhart, John C. New (University of Tennessee)

Additional co-authors: Nick Robl, Alice M. Bugman, An T.N. Nguyen, Bridgid Lammers, Teresa L. Fisher, Heidi Weimer

Abstract: 

Feral cats (Felis catus) are recognized as a problem internationally due to their negative impact on wildlife and their potential to spread infectious disease to people and other animals. Trap-neuter-return (TNR) programs are employed in many areas as a humane method of controlling feral cat populations, and this approach is used on a limited basis in Knox County, Tennessee. Despite the frequent use of TNR as a strategy, its effectiveness remains controversial. The objective of this mathematical model is to predict the impact of selected strategies on the population of feral cats. The model, which has three age classes, predicts the population over a period of five years in one-month time steps. TNR rates are varied to investigate the effects of targeting spay/neuter programs seasonally, and such targeting predicts a measurable decline in feral cat population growth over a five-year period. Targeting TNR intervention at adult females during the time prior to mating season in highly populated feral colonies may further decrease the population. These results suggest a more efficacious strategy than non-targeted TNR programs.
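To give a flavor of the model structure, below is a minimal sketch of a three-age-class monthly projection with a seasonally targeted TNR rate. All parameter values are invented for illustration and are not those of the actual model.

```python
# Three-age-class (kitten/juvenile/adult) monthly projection with a
# seasonally targeted TNR rate; all rates are hypothetical placeholders.
MONTHS = 60                      # five years in one-month steps
birth_rate = 0.8                 # kittens per intact adult female per month, in season
kitten_survival = 0.5
juvenile_survival = 0.7
adult_survival = 0.95
breeding_months = {2, 3, 4, 5, 6, 7, 8}   # rough Feb-Aug mating season

def tnr_rate(month):
    # Concentrate TNR just before mating season (hypothetical schedule).
    return 0.10 if month % 12 in {0, 1} else 0.02

kittens, juveniles, adults, neutered = 100.0, 80.0, 200.0, 0.0
for t in range(MONTHS):
    in_season = (t % 12) in breeding_months
    intact_females = adults / 2.0            # assumes a 1:1 sex ratio
    births = birth_rate * intact_females if in_season else 0.0
    newly_neutered = tnr_rate(t) * adults
    # Age classes advance each month (a simplification for the sketch).
    kittens, juveniles, adults = (
        births,
        kitten_survival * kittens,
        adult_survival * (adults - newly_neutered) + juvenile_survival * juveniles,
    )
    neutered = adult_survival * (neutered + newly_neutered)

print(f"Intact population after 5 years: {kittens + juveniles + adults:.0f}")
```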

Event detection using natural language processing

Vanderbilt University Medical Center, Nashville, TN

Mentors: Jesse Ehrenfeld, Paul St. Jacques (VUMC)

Abstract:

Integrated into every patient’s chart are checklists that anesthesiologists use to indicate specific events within a surgery, including adverse events. Studies show that in anesthesiology 27% of cases have a non-routine event, but voluntary reporting of such events is low (Weinger et al. 2003). Voluntary incident reporting detects only 1.5% of adverse events and only 6% of adverse drug events (Murff 2003). Anesthesiologists also have fields where they can enter free text on a patient’s chart post-operatively. We hypothesize that we will be able to detect a greater number of adverse events through natural language processing tools in Python that search for specific key phrases and language structures indicating that a certain event has occurred. We aim to discover the true rate of occurrence of a specific adverse event in anesthesiology at Vanderbilt University and to calculate the effectiveness of the checklist. We began this research looking for instances of difficult intubation, retrospectively analyzing the comment fields of patient charts at Vanderbilt University for two sample weeks in June 2012, totaling approximately 5,500 case comments. On a sample of 1,800 comments, the Python program found all 18 comments discussing difficult intubation. On larger samples, the program misses some relevant comments and flags some that do not involve difficult intubation. Our analysis supports our original hypothesis that difficult intubation is underreported. Though the NLP program needs improvement to remain accurate on larger data sets, the majority of the cases that mentioned difficult intubation in the comments did not report it explicitly in the structured checkboxes. Despite 88.9% of these cases explicitly reporting multiple attempts at intubation, only 16.7% were listed as “difficult,” and 44.4% were listed as “easy.” The information from this report could be used to implement a system for the operating room that reminds medical personnel to report difficult intubations in the checkboxes.
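The detection step can be illustrated with a minimal phrase-matching sketch in the spirit of the Python tooling described above; the phrase patterns and example comments are invented, not taken from patient data.

```python
# Keyword/phrase-based event detection over free-text case comments.
# Patterns and comments are illustrative only.
import re

DIFFICULT_INTUBATION_PATTERNS = [
    r"difficult\s+(?:to\s+)?intubat\w*",
    r"intubation\s+(?:was\s+)?difficult",
    r"multiple\s+attempts\s+at\s+intubation",
]
pattern = re.compile("|".join(DIFFICULT_INTUBATION_PATTERNS), re.IGNORECASE)

comments = [
    "Patient was difficult to intubate; required video laryngoscope.",
    "Uneventful induction, easy intubation.",
    "Multiple attempts at intubation before success.",
]

flagged = [c for c in comments if pattern.search(c)]
for c in flagged:
    print("possible difficult intubation:", c)
```

Matched comments can then be compared against the structured checkboxes for the same cases to quantify underreporting.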

Fitting the eigensolution of compartment models

Oak Ridge National Laboratory, Oak Ridge, TN

Mentors: Richard C. Ward, Keith F. Eckerman (ORNL)

Abstract:

The radiation dosimetry community uses models to describe the fate of inhaled or ingested radionuclides. Such exposures can result in cell death and cell transformations that depend strongly on the behavior of the radionuclide within the body. In some instances, medical attention requires that such analyses be performed quickly. Software tools exist for this purpose; an example is Integrated Modules for Bioassay (IMBA), developed in the United Kingdom to interpret urine and fecal bioassay samples. The software lacks a differential equation solver and instead uses a function to fit solutions to the equations. Eigenanalysis software produced by Killough and Eckerman can be used to find the exact solution, but solutions often include ten or more exponential terms in linear combination. The IMBA software can only handle up to ten terms, so solutions with more terms must be reduced for use in IMBA. The purpose of this research is to develop a method to solve compartment models describing physiological and chemical processes and to fit the numerical solution with a reduced number of exponential terms. This will allow the models to be computed faster than with the original approach. The exact solution of a plutonium biokinetic compartment model, which contains up to eighteen terms, will be reduced to determine the optimum functional fit for the IMBA program. The ten-term solution set will fit the exact solution curves as closely as possible and will be determined by an automated non-linear curve-fitting algorithm in Mathematica. It was found that Mathematica provides a simpler data-fitting method than the trial-and-error method previously used, but many problems were encountered. The difficulty of determining which fitting method Mathematica uses leads to poor fits that have yet to be explained, and the time to run the fitting algorithm on some compartments is too long to be efficient. These problems may make the program less than ideal for its original purpose.
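The core fitting task, approximating a many-term exponential solution with fewer terms, can be sketched as follows. This example uses Python/SciPy rather than the Mathematica workflow from the project, and all rates and coefficients are synthetic.

```python
# Fit a reduced sum of exponentials to a synthetic "exact" retention
# curve; the 3-term curve and 2-term model below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 100, 400)
exact = 0.5 * np.exp(-0.30 * t) + 0.3 * np.exp(-0.05 * t) + 0.2 * np.exp(-0.01 * t)

def two_term(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

p0 = [0.5, 0.1, 0.5, 0.01]       # starting guesses matter for exponential fits
params, _ = curve_fit(two_term, t, exact, p0=p0, maxfev=10000)
residual = np.max(np.abs(two_term(t, *params) - exact))
print("fitted parameters:", params)
print("max absolute error:", residual)
```

As the abstract notes, this kind of non-linear fit is sensitive to starting values and the choice of fitting method, which is precisely where the project ran into difficulty.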

Patterns at the smallest scale: fractal analysis of the lung and modeling of nanoparticle clearance

Oak Ridge National Laboratory, Oak Ridge, TN

Mentor: Richard C. Ward (ORNL)

Abstract:

This paper explores the accuracy of a fractal-based model for the airway system of the human lung and the clearance of nanoparticles within that system. The purpose of this research is to create realistic lung models based on fractals and inhalation kinetics so that nanoparticle deposition and clearance can be visualized and understood. Lung diameter data is analyzed using the surface-area-maximization rules of previous lung branching models. The diameter data is then displayed on a model of a lung branching tree based on the Lindenmayer-system method of producing fractals. The particle clearance model derived by the International Commission on Radiological Protection is built in compartment form in JDesigner, part of the Systems Biology Workbench, and then analyzed for its possible application to modeling particles at the nanoscale. This research has the potential to aid the development of nanoparticle drug-delivery systems that work through inhalation. Such systems could target diseases by combining regional deposition modeling with nanoparticles’ natural ability to diffuse through a variety of membranes.
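The Lindenmayer-system idea behind the branching tree can be illustrated with a few lines of string rewriting; the axiom, rule, and generation count below are illustrative, not the paper's actual parameters.

```python
# L-system string rewriting for a simple bifurcating tree.
def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# F = draw a branch segment, [ and ] = push/pop the drawing state,
# + / - = rotate; each generation replaces every branch with a bifurcation.
rules = {"F": "F[+F][-F]"}
tree = lsystem("F", rules, 3)
print(tree)
print("branch segments:", tree.count("F"))
```

Interpreting the resulting string with turtle graphics, and scaling segment widths from the measured diameter data, yields the kind of fractal branching tree the paper describes.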