Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2009.
Includes bibliographical references (p. 163-174).
This research develops a systematic approach to analyzing the computational performance of Dynamic Traffic Assignment (DTA) models and provides solution techniques that improve their scalability for on-line application to large-scale networks. DTA models for real-time use provide short-term predictions of network conditions and generate route guidance for travelers, so the computational performance of such systems is a critical concern. Existing methodologies have limited capabilities for on-line large-scale applications: they run on single-processor configurations that scale poorly and rely primarily on trade-offs that sacrifice accuracy for computational efficiency. In the proposed scalable methodology, algorithmic analyses are first used to identify the system bottlenecks for large-scale problems. These analyses show that the computation time of a DTA system for a given time interval depends largely on a small set of parameters, notably the number of origin-destination (OD) pairs, the number of sensors, the number of vehicles, the size of the network, and the number of time steps used by the simulator. Scalable approaches are then developed to address these bottlenecks. A constrained generalized least-squares solution that exploits the sparsity of the problem is applied to dynamic OD estimation, replacing the Kalman-filter solution and other full-matrix algorithms. Parallel simulation with an adaptive network-decomposition framework is proposed to achieve better load balancing and improved efficiency, and a synchronization-feedback mechanism is designed to keep traffic dynamics consistent across processors while keeping communication overhead minimal. The proposed methodology is implemented in DynaMIT, a state-of-the-art DTA system. Profiling studies are used to validate the algorithmic analysis of the system bottlenecks.
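To make the constrained generalized least-squares idea concrete, the sketch below sets up a bounded, sparsity-exploiting least-squares problem for OD estimation. It is only an illustration under simplifying assumptions (diagonal weights, a single estimation interval); the names (A, y, x_prior, w_obs, w_prior, estimate_od_flows) are hypothetical and do not correspond to the thesis' data structures or to the DynaMIT implementation.

```python
# Minimal sketch of constrained GLS OD estimation with a sparse assignment matrix.
# All names and the diagonal-weight assumption are illustrative, not the thesis' estimator.
import numpy as np
import scipy.sparse as sp
from scipy.optimize import lsq_linear

def estimate_od_flows(A, y, x_prior, w_obs, w_prior):
    """Solve  min_x ||W1^(1/2)(A x - y)||^2 + ||W2^(1/2)(x - x_prior)||^2  s.t. x >= 0."""
    n = x_prior.size
    # Whiten both terms and stack them into one sparse least-squares system.
    obs_block = sp.diags(np.sqrt(w_obs)) @ A               # measurement equations
    prior_block = sp.diags(np.sqrt(w_prior)) @ sp.eye(n)   # prior (historical) equations
    lhs = sp.vstack([obs_block, prior_block]).tocsr()
    rhs = np.concatenate([np.sqrt(w_obs) * y, np.sqrt(w_prior) * x_prior])
    # Bounded sparse least squares: non-negativity is enforced directly, and no
    # dense covariance matrices need to be maintained as in a Kalman filter.
    res = lsq_linear(lhs, rhs, bounds=(0.0, np.inf))
    return res.x

# Toy usage: 3 sensors, 4 OD pairs, mostly-zero assignment matrix.
A = sp.csr_matrix([[0.6, 0.0, 0.4, 0.0],
                   [0.0, 0.9, 0.0, 0.0],
                   [0.3, 0.0, 0.0, 0.7]])
x_hat = estimate_od_flows(A, y=np.array([120.0, 80.0, 95.0]),
                          x_prior=np.array([100.0, 90.0, 110.0, 70.0]),
                          w_obs=np.ones(3), w_prior=0.5 * np.ones(4))
```

Because the assignment matrix stays sparse throughout, the cost of this formulation grows with the number of non-zero entries rather than with the full dimensions of the OD-pair and sensor spaces, which is the property the abstract attributes to the proposed estimator.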
The new system is evaluated on two real-world networks under various scenarios. Empirical results from the case studies show that the proposed OD estimation algorithm is insensitive to increases in the number of OD pairs or sensors, reducing the computation time from minutes to a few seconds. The parallel simulation maintains output accuracy comparable to the sequential simulation and, with adaptive load balancing, considerably speeds up the network models even under non-recurrent incident scenarios. The results demonstrate the practical nature of the methodology and its scalability to large-scale real-world problems.
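The adaptive load-balancing behavior referred to above can be pictured with a stylized sketch: partition boundaries are redrawn using the computation time observed in the previous interval, so that a congested (expensive) part of the network does not stay pinned to a single processor after an incident. The contiguous-segment ordering and the function rebalance are simplifying assumptions for illustration, not the decomposition algorithm developed in the thesis.

```python
# Stylized measurement-driven load balancing: reassign segments so that each
# partition carries roughly equal measured cost from the previous interval.
# The linear ordering of segments is an illustrative simplification.
from typing import List

def rebalance(segment_costs: List[float], n_procs: int) -> List[List[int]]:
    """Split an ordered list of segments into n_procs contiguous partitions
    with approximately equal total measured cost."""
    total = sum(segment_costs)
    target = total / n_procs
    partitions: List[List[int]] = [[] for _ in range(n_procs)]
    proc, acc = 0, 0.0
    for seg_id, cost in enumerate(segment_costs):
        # Advance to the next processor once the current one reaches its share,
        # keeping at least one processor available for the remaining segments.
        if acc >= target and proc < n_procs - 1:
            proc += 1
            acc = 0.0
        partitions[proc].append(seg_id)
        acc += cost
    return partitions

# Toy usage: per-segment costs measured during the last estimation-prediction interval.
print(rebalance([5.0, 1.0, 1.0, 4.0, 2.0, 3.0, 2.0, 2.0], n_procs=3))
```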
by Yang Wen.
Ph.D.