
Algorithm

Description

Many disease-outbreak detection algorithms, such as control-chart methods, use frequentist statistical techniques. We describe a Bayesian algorithm that uses data D consisting of current-day counts of some event (e.g., emergency department (ED) chief complaints of respiratory disease), tallied by demographic area (e.g., zip code).

Objective

We introduce a disease-outbreak detection algorithm that performs complete Bayesian Model Averaging (BMA) over all possible spatial distributions of disease, yet runs in polynomial time.
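As a toy illustration of the idea of Bayesian model averaging over outbreak hypotheses, the sketch below averages over single-region hypotheses in which an outbreak doubles a region's Poisson rate. All priors and effect sizes here are assumptions, and this enumeration of n+1 hypotheses is only illustrative; the paper's contribution is performing BMA over all possible spatial distributions, which is an exponentially larger space, in polynomial time.

```python
import math

def bma_outbreak_posterior(counts, baselines, prior_outbreak=0.01):
    """Toy Bayesian model averaging over single-region outbreak hypotheses.

    H0: no outbreak, count in region i ~ Poisson(baseline_i).
    H_i: outbreak in region i doubles its rate (an assumed effect size).
    Returns P(some outbreak | D), averaging over all H_i.
    """
    def log_poisson(k, lam):
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    log_h0 = sum(log_poisson(k, b) for k, b in zip(counts, baselines))
    n = len(counts)
    terms = [math.log(1.0 - prior_outbreak) + log_h0]    # null hypothesis
    for i in range(n):
        ll = (log_h0
              - log_poisson(counts[i], baselines[i])
              + log_poisson(counts[i], 2.0 * baselines[i]))
        terms.append(math.log(prior_outbreak / n) + ll)  # equal prior per region
    # Log-sum-exp normalization of the posterior.
    m = max(terms)
    total = sum(math.exp(t - m) for t in terms)
    return 1.0 - math.exp(terms[0] - m) / total
```

A spike in one region's count raises the posterior probability of an outbreak relative to flat counts at baseline.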

Submitted by elamb
Description

As major disease outbreaks are rare, empirical evaluation of statistical methods for outbreak detection requires the use of modified or completely simulated health event data in addition to real data. Comparisons of different techniques will be more reliable when they are evaluated on the same sets of artificial and real data. To this end, we are developing a toolkit for implementing and evaluating outbreak detection methods and exposing this framework via a web services interface.
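The evaluation idea can be sketched as follows: inject a synthetic outbreak into a baseline series and measure a detector's sensitivity and false-alarm rate over many trials. The injection scheme, detector interface, and all parameters below are illustrative assumptions, not the toolkit's actual API.

```python
import random

def inject_outbreak(baseline, start, duration, size):
    """Return a copy of the daily series with `size` extra cases per outbreak day."""
    series = list(baseline)
    for t in range(start, min(start + duration, len(series))):
        series[t] += size
    return series

def evaluate(detector, baseline, n_trials=200, seed=0):
    """Estimate sensitivity and false-alarm rate of a day-level detector on
    semi-synthetic data: a noisy copy of a baseline plus an injected
    outbreak at a random start day.  `detector(series, t)` should return
    True if it alarms on day t."""
    rng = random.Random(seed)
    hits, false_alarms, alarm_days = 0, 0, 0
    for _ in range(n_trials):
        noisy = [max(0, b + rng.randint(-2, 2)) for b in baseline]
        start = rng.randrange(10, len(noisy) - 5)
        series = inject_outbreak(noisy, start, duration=3, size=8)
        hits += any(detector(series, t) for t in range(start, start + 3))
        for t in range(10, start):          # pre-outbreak days only
            alarm_days += 1
            false_alarms += bool(detector(series, t))
    return hits / n_trials, false_alarms / max(alarm_days, 1)
```

Because different methods can be passed in as `detector`, the same simulated data sets support head-to-head comparisons, which is the reliability point made above.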

Submitted by elamb
Description

A comprehensive definition of a syndrome is composed of direct evidence (911 calls, emergency department, primary care provider, sensor, veterinary, agricultural and animal data) and indirect evidence (data from schools, drug stores, weather, etc.). Syndromic surveillance will benefit from quickly integrating such data. Three critical areas must be addressed to build an effective syndromic surveillance system that is dynamic, organic and alert, capable of continuous growth, adaptability and vigilance: (1) timely collection of high-quality data; (2) timely integration and analysis of information (data in context); and (3) applying innovative thinking and deriving deep insights from information analysis. In our view there is excessive emphasis on algorithms and applications that work on the collected data and insufficient emphasis on solving the integration challenges. This paper therefore focuses on information integration.

Objective

Enterprise information integration (EII) is the virtual consolidation of data from multiple systems into a unified, consistent and accurate representation. An analyst working in an EII environment can simultaneously view and analyze data from multiple sources as if it all came from one large local data warehouse. This paper posits that EII is a viable way to implement a syndromic surveillance system covering large areas and disparate data sources, and discusses case studies from environments outside health.
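As a minimal sketch of virtual consolidation, the snippet below uses two SQLite in-memory databases as stand-ins for independent source systems (an ED feed and a drug-store feed; all table and column names are invented). A single query spans both sources without copying either into a warehouse, which is the analyst experience described above.

```python
import sqlite3

# One connection, two attached "source systems" kept physically separate.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS ed")      # emergency-department source
conn.execute("ATTACH DATABASE ':memory:' AS pharma")  # drug-store source

conn.execute("CREATE TABLE ed.visits (zip TEXT, respiratory INTEGER)")
conn.execute("CREATE TABLE pharma.sales (zip TEXT, cough_otc INTEGER)")
conn.executemany("INSERT INTO ed.visits VALUES (?, ?)",
                 [("15213", 4), ("15232", 2)])
conn.executemany("INSERT INTO pharma.sales VALUES (?, ?)",
                 [("15213", 10), ("15232", 3)])

# The analyst's unified view: one query across both sources, by zip code.
rows = conn.execute("""
    SELECT v.zip, v.respiratory, s.cough_otc
    FROM ed.visits AS v JOIN pharma.sales AS s ON v.zip = s.zip
    ORDER BY v.zip
""").fetchall()
```

In a real EII deployment the attached schemas would be remote, heterogeneous systems behind a federation layer rather than local SQLite files; the query-side experience is the same.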

Submitted by elamb
Description

Ideal anomaly detection algorithms should detect both sudden and gradual changes while keeping the background false-positive alert rate at a tolerable level. The algorithms should also be easy to use. Our objective was to develop an anomaly detection algorithm that adapts to the time series being analyzed and reduces false-positive signals.
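One common adaptive scheme of this kind is an exponentially weighted moving average whose alarm threshold tracks the series' own local scale. The sketch below illustrates the general idea only; it is not the authors' algorithm, and all parameters are assumptions.

```python
def ewma_detector(series, alpha=0.3, k=3.0, warmup=7):
    """Adaptive anomaly detection sketch: an EWMA tracks the local mean and
    an exponentially weighted absolute deviation tracks local scale, so the
    alarm threshold adapts to the series being analyzed.
    Returns the indices where the count exceeds mean + k * scale."""
    alarms = []
    mean, scale = float(series[0]), 1.0
    for t, x in enumerate(series[1:], start=1):
        if t >= warmup and x > mean + k * scale:
            alarms.append(t)
        else:
            # Update only on non-alarm days so outbreaks don't inflate the baseline.
            err = abs(x - mean)
            mean = alpha * x + (1 - alpha) * mean
            scale = alpha * err + (1 - alpha) * scale
    return alarms
```

Skipping the baseline update on alarm days is one simple way to keep a sustained outbreak from raising the threshold and masking itself.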

Submitted by elamb
Description

In this paper we investigate the use of the CUSUM algorithm on retrospective MMR and Pentacel (DTaP-IPV-Hib) immunization data to determine if this type of surveillance tool is useful for measuring changes in immunization rates.
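A one-sided CUSUM of the kind investigated here can be sketched as follows; the direction (a drop in rates) and all parameter values are illustrative assumptions, not the study's settings.

```python
def cusum(rates, target, k=0.5, h=4.0, sd=1.0):
    """One-sided CUSUM sketch for detecting a drop in immunization rates.
    `target` is the in-control mean rate; k is the allowance and h the
    decision threshold, both in standard-deviation units.
    Returns the index of the first signal, or None if none occurs."""
    s = 0.0
    for t, r in enumerate(rates):
        z = (target - r) / sd          # positive when the rate falls below target
        s = max(0.0, s + z - k)        # accumulate evidence of a sustained drop
        if s > h:
            return t
    return None
```

Because the statistic accumulates small deviations over time, CUSUM is sensitive to gradual declines that a single-day threshold would miss.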

Submitted by elamb
Description

The purpose of this study is to describe a local county health department's algorithm for analyzing surveillance system (SS) aberrations (alarms) and disseminating them to designated stakeholders within the community.

Submitted by elamb
Description

This paper describes a Bayesian algorithm for diagnosing the CDC Category A diseases, namely, anthrax, smallpox, tularemia, botulism and hemorrhagic fever, using emergency department chief complaints. The algorithm was evaluated on real data and on semi-synthetic data, and this paper summarizes the results of that evaluation.
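A minimal sketch of Bayesian diagnosis from chief-complaint keywords is shown below. The disease names come from the abstract (two of the five shown for brevity, plus a catch-all), but every prior and likelihood is invented for illustration and carries no clinical meaning; this is not the paper's evaluated algorithm.

```python
import math

PRIOR = {"anthrax": 0.01, "smallpox": 0.01, "other": 0.98}
P_KEYWORD = {  # assumed P(keyword appears in chief complaint | disease)
    "anthrax":  {"cough": 0.7, "fever": 0.8, "rash": 0.05},
    "smallpox": {"cough": 0.2, "fever": 0.9, "rash": 0.90},
    "other":    {"cough": 0.3, "fever": 0.3, "rash": 0.10},
}

def diagnose(complaint):
    """Posterior over diseases given a free-text chief complaint, assuming
    keyword occurrences are conditionally independent (naive Bayes)."""
    words = set(complaint.lower().split())
    log_post = {}
    for disease, prior in PRIOR.items():
        lp = math.log(prior)
        for kw, p in P_KEYWORD[disease].items():
            lp += math.log(p if kw in words else 1.0 - p)
        log_post[disease] = lp
    # Normalize in log space to a proper posterior.
    m = max(log_post.values())
    total = sum(math.exp(v - m) for v in log_post.values())
    return {d: math.exp(v - m) / total for d, v in log_post.items()}
```

For example, `diagnose("fever and rash")` shifts posterior mass toward the rash-associated disease relative to the others.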

Submitted by elamb
Description

The traditional SaTScan algorithm [1],[2] uses the Euclidean distance between centroids of the regions in a map to assemble connected sets of regions (two regions are connected when they share a physical border). According to the value of the corresponding logarithm of the likelihood ratio (LLR), a connected set of regions can be classified as a statistically significant detected cluster. For the study of events such as contagious diseases or homicides, we consider using the flow of people between two regions to build up a set of regions (a zone) with a high incidence of cases of the event. In this sense, the greater the flow of people between two regions, the closer they are. In a cluster of regions formed according to this flow-based proximity criterion, the regions are not necessarily connected to each other.
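The LLR for a candidate zone can be computed with the standard Kulldorff-style Poisson statistic, and the statistic itself does not depend on how the zone was assembled, so it applies equally to Euclidean-distance zones and flow-of-people zones. A sketch using the usual notation:

```python
import math

def poisson_llr(c, n, C, N):
    """Kulldorff-style Poisson log-likelihood ratio for a candidate zone:
    c cases observed inside the zone, n population inside,
    C total cases, N total population on the map.
    Returns 0 unless the zone shows excess incidence."""
    expected = C * n / N               # expected cases inside under H0
    if c <= expected:
        return 0.0                     # only high-incidence zones are of interest
    out, e_out = C - c, C - expected
    inside = c * math.log(c / expected)
    outside = out * math.log(out / e_out) if out > 0 else 0.0
    return inside + outside
```

In the scan procedure, the zone with the maximum LLR is the candidate cluster, and its significance is assessed by Monte Carlo replication under the null.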


Objective

We present a new approach to the circular scan method [1] that uses the flow of people to detect and infer clusters of regions with a high incidence of some event randomly distributed on a map. We use a real database of homicide cases in Minas Gerais state, in southeast Brazil, to compare our proposed method with the original circular scan method on both simulated clusters and the real situation.

Submitted by dbedford
Description

Population surges or large events may cause shifts in the data collected by biosurveillance systems [1]. For example, the Cherry Blossom Festival brings hundreds of thousands of people to DC every year, resulting in simultaneous elevations in multiple data streams (Fig. 1). In this paper, we propose an MGD model to accommodate such baseline shifts.

Objective

Outbreak detection algorithms monitoring only disease-relevant data streams may be prone to false alarms due to baseline shifts. In this paper, we propose a Multinomial-Generalized-Dirichlet (MGD) model to adjust for baseline shifts.
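The underlying intuition is that monitoring each stream's share of the daily total, rather than its raw count, ignores surges that lift all streams together. The sketch below uses a simple z-test on proportions to illustrate that intuition; it is not the authors' Multinomial-Generalized-Dirichlet model, and all thresholds are assumptions.

```python
def proportion_anomaly(history, today, k=3.0):
    """Baseline-shift-robust sketch: flag streams whose *share* of the daily
    total is anomalously high.  A population surge that scales every stream
    equally leaves the shares unchanged and so raises no alarm.
    history: list of past daily count vectors; today: today's count vector.
    Returns the indices of anomalous streams."""
    n_streams = len(today)
    shares = [[c / sum(day) for c in day] for day in history]
    mean = [sum(s[i] for s in shares) / len(shares) for i in range(n_streams)]
    sd = [max(1e-6, (sum((s[i] - mean[i]) ** 2 for s in shares)
                     / len(shares)) ** 0.5)
          for i in range(n_streams)]
    tot = sum(today)
    return [i for i in range(n_streams)
            if (today[i] / tot - mean[i]) / sd[i] > k]
```

A uniform doubling of all streams (e.g., a festival crowd) produces no alarms, while a rise concentrated in one stream still does.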


Submitted by Magou