Seminar: Methodological Innovations
-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 01i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Flexible models for analysing route and departure time choice
BATLEY, R, University of Leeds, UK
DALY, A, University of Leeds and RAND Europe, UK
FOWKES, T, University of Leeds, UK
WHELAN, G, University of Leeds, UK
The representation of the interrelated choice of route and departure time within a discrete choice model
presents important theoretical and practical problems for modellers. The model must be able to
represent the nature of the choice process and it must be able to take account of the complex patterns
of correlations between the available options. Such correlations involve overlapping routes and
substitutability between departure times that are close together.

Recent developments in behavioural modelling have established the practicality of a wide range of
model forms. Several of these are based within the family of generalised logit models, such as nested
logit, paired comparisons, cross-nested and ordered logit models. Other models are based on the
normal distribution, such as the multinomial probit model, or on ‘hybrid’ forms, such as the error
components logit (mixed logit) model. It is the aim of this paper to examine how this range of model
forms is able to deal with the combined route and departure time problem in the context of a simulated
data set. The benefit of the research lies in an improved ability to explain and predict the impact of
network travel time changes and variability. Travel time variability is increasingly recognised as a key
influence on both departure time and route choices.
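
As a flavour of the error components approach discussed above, the following minimal sketch (with invented utilities and loadings, not the paper's specification) simulates choice probabilities for four options in which two overlapping routes and two adjacent departure times each share a normally distributed error component on top of logit noise:

    # Sketch: Monte Carlo choice probabilities for an error-components (mixed)
    # logit where overlapping routes and adjacent departure times share
    # error components -- an illustrative structure only.
    import numpy as np

    rng = np.random.default_rng(0)
    V = np.array([-1.0, -1.2, -0.9, -1.1])   # systematic utilities of 4 options
    # loadings: component 0 is shared by the two overlapping routes (options 0, 1);
    # component 1 is shared by the two adjacent departure times (options 2, 3)
    F = np.array([[1.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 1.0]])
    sigma = np.array([0.8, 0.5])             # component standard deviations

    R = 10_000                               # number of simulation draws
    draws = rng.standard_normal((R, 2)) * sigma
    U = V + draws @ F.T                      # utility of each option per draw
    expU = np.exp(U)
    P = (expU / expU.sum(axis=1, keepdims=True)).mean(axis=0)
    print(P.round(3))                        # options sharing a component substitute closely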

This paper is the fourth in a series of papers arising from research conducted as part of a Research
Council-funded project investigating the potential of the error component logit model for the study of
drivers' route and departure time choice.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 01ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Evaluation of mixed logit as a practical modelling alternative

ALVAREZ-DAZIANO, R, University of Chile, Chile
MUNIZAGA, M A, University of Chile, Chile
Mixed Logit, Error Components and Kernel Logit are different names for a model whose idea dates
from the beginning of the eighties, but which has become popular only in the last few years. It appears as an
alternative to Multinomial and Nested Logit that can accommodate more flexible covariance structures
of the error term. In that sense it is a competitor to Probit, which has been only tentatively incorporated
into common practice.

We have studied, both theoretically and empirically, the use and potential of Mixed Logit models. We
discuss their characteristics, properties and estimability. We compare Mixed Logit with Multinomial and
Nested Logit, and also with Probit and other alternative models. The theoretical comparisons focus
on the structure of the covariance matrix of the error term, as this has been the traditional way to look at
correlation and/or heteroscedasticity (which is the flexibility that modellers have been looking
for). The estimation of Mixed Logit models requires the estimation of additional parameters (compared
to MNL), and the question of how many, and which ones, are identifiable does not have a
straightforward answer. Checking identifiability requires a deep analysis of the covariance matrix of
utility differences. We discuss identifiability issues and the limitations that they impose on flexibility.
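
To illustrate the kind of check involved (a sketch with an invented three-alternative error-components specification, not the authors' procedure), the number of separately identifiable covariance parameters can be counted as the rank of the Jacobian of the distinct elements of the differenced covariance matrix:

    # Sketch: counting identifiable covariance parameters by examining the
    # covariance matrix of utility DIFFERENCES (illustrative example only).
    import numpy as np

    def utility_cov(theta):
        """Covariance of utilities for a hypothetical 3-alternative model:
        one error component shared by alternatives 1 and 2 (variance theta[0]),
        one specific to alternative 3 (variance theta[1]), plus iid noise."""
        s12, s3 = theta
        base = np.eye(3)                  # iid part, variance normalised to 1
        comp = np.zeros((3, 3))
        comp[:2, :2] += s12               # shared component on alternatives 1, 2
        comp[2, 2] += s3                  # component on alternative 3
        return base + comp

    def differenced_cov(theta, ref=2):
        """Covariance of (U_i - U_ref), the quantity choice data identify."""
        omega = utility_cov(theta)
        D = np.zeros((2, 3))
        for r, i in enumerate(j for j in range(3) if j != ref):
            D[r, i], D[r, ref] = 1.0, -1.0
        return D @ omega @ D.T

    def n_identifiable(theta0, eps=1e-6):
        """Rank of the Jacobian of the distinct differenced-covariance
        elements w.r.t. theta = number of identifiable parameters."""
        iu = np.triu_indices(2)
        f0 = differenced_cov(theta0)[iu]
        cols = []
        for k in range(len(theta0)):
            t = np.array(theta0, dtype=float)
            t[k] += eps
            cols.append((differenced_cov(t)[iu] - f0) / eps)
        return np.linalg.matrix_rank(np.column_stack(cols))

    print(n_identifiable([0.5, 0.5]))  # prints 1: the two variances are confounded

Here both structural variances enter the differenced covariance only through their sum, so a model with both parameters free could still be "estimated" without either being identified.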

The dimension of the integral that has to be evaluated numerically in Mixed Logit and
Probit models is the key aspect of estimability; we examine this in particular, evaluating different simulation
procedures in terms of computational efficiency (CPU time, number of iterations to convergence).
We apply Mixed Logit and its competitors to real data and evaluate their behaviour. Our main
conclusions are that Mixed Logit models are indeed a powerful tool, comparable in most respects to
Probit, and that the most important requirement for a successful implementation is an adequate
justification of the error structure. A warning must be made on identifiability: an unidentifiable model
can still be estimated, and the usual statistics give no clear signal that would allow the problem to be
detected. We therefore stress that several key aspects must be evaluated through analysis of the
covariance matrix in any particular case under study.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 01iii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Error components in demand estimation

VILDRIK SORENSEN, M, CTT, Technical University of Denmark, Denmark
Models including error components are today at the frontier of application and development in
transport modelling. The name Error Component Model is often used interchangeably with Random
Parameters Logit, Models with Stochastic (or Distributed) Preferences (or Coefficients), Logit Kernel
Models or Mixed Logit Models, for models where the error components are added to the traditional
(linear) utility function in the following way: Ui = (B + K)Xi + E, where B denotes the
preferences and E the unexplained part of the variation, while K is a vector of error components.

At present the use of such models is growing rapidly, due to increased access to computing power. In
general the method of Maximum Simulated Likelihood (MSL) is applied, although this only optimises
within a given a priori distribution of the error components. Only a few of the analyses so far have dealt
with the interesting question of correlation between these error components.

An alternative method to determine the distributions was suggested in Sørensen & Nielsen (2001) and
more thoroughly described in Sørensen & Nielsen (2002). That paper uncovers the empirical
distribution of the data by repeated estimations. The purpose of the method is to determine the type of
distribution, though (for some distributions) it can also determine the parameters of the distribution. The
authors found that the error components (random coefficients) are likely to be lognormally
distributed and, perhaps more interestingly, that the correlation between the error components is pronounced.


This paper builds on Sørensen & Nielsen (2002) with a more comprehensive test of how
to incorporate ECs into traffic models (the construction of the utility function).

A general EC utility function can be written as

Ui = BXi + F(K) + E,

where B denotes the preferences and E the unexplained part of the variation, while F(K) is
a function of the matrix of error components.

The paper compares logit models built assuming:
- a traditional utility function (linear, without ECs)
- random coefficients (linear, with ECs added to the coefficients)
- ECs on orthogonal elements (principal components), without linear utility (primarily for reference)
- linear utility with ECs on orthogonal elements (whereby independent distributions of the ECs can be
expected)

All utility functions with ECs are set up twice, specified once with independent distributions and once
with simultaneous (joint) distributions.
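
As an illustration of the orthogonal variant (a sketch with invented data and dimensions; the paper's actual construction may differ), error components can be attached to principal-component scores of the attribute matrix, so that treating the EC distributions as independent becomes plausible:

    # Sketch: attaching error components to orthogonal (principal) components
    # of the attributes; data and dimensions are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 500
    X = rng.normal(size=(N, 3))              # attribute matrix (3 attributes)
    B = np.array([0.5, -1.0, 0.3])           # fixed preference parameters

    # orthogonal elements: principal-component scores of the attributes
    Xc = X - X.mean(axis=0)
    _, _, Wt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Wt.T                            # mutually orthogonal columns

    sigma = np.array([0.4, 0.2, 0.1])        # EC standard deviation per component
    K = rng.normal(size=(N, 3)) * sigma      # independent error components

    U = X @ B + (Z * K).sum(axis=1) + rng.gumbel(size=N)   # U = BX + F(K) + E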

This paper makes a full comparison of models based on the above seven different utility functions
(everything else being equal), with a focus on model fit as well as on the (potentially different)
distributions of the error components depending on the functional form of the utility function.

The paper will conclude with guidelines on how to include error components in demand estimation.
-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 02i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Mode choice of complex tours
AXHAUSEN, K, ETHZ, Switzerland
CIRILLO, C, FUNDP-GRT, Belgium
On the basis of the six-week travel diary Mobidrive, this paper studies mode choice at the level of the tour,
i.e. the trips are amalgamated into journeys. In 86.1% of the cases these tours involve only one mode,
which justifies the focus on the main mode of a tour.

The survey is discussed and a detailed descriptive analysis of the tours and the activity patterns is
provided. The choice modelling involves both simple multinomial logit models and mixed logit models
accounting for within-group error correlations. The models are specified very richly, including in
particular a detailed characterisation of the choice situation, but also of the long-term commitments of
the persons and households.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 02ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Non-linearities in the valuations of time estimates

FONTAN, C, University of Cergy-Pontoise, France
LAPPARENT, M de, Universite Paris 1 Pantheon Sorbonne, France
PALMA, A de, University of Cergy-Pontoise, France
The valuations of prices and costs of travel time are key parameters in the calibration of any travel
demand simulation system; for the UK, for example, Wardman (2000) proposes a complete
review of the empirical estimates. However, it seems that practical requirements are obscuring the real
nature of these concepts, which should be strictly related to individual tastes and behaviours, but which
also depend on the travel alternatives and the effective levels of the attributes they supply.
Thus, we should find individual-specific values of travel time, changing with the attribute levels and
with mode-specific tastes and behavioural considerations. Early theoretical approaches
to the topic considered the time component as a scarce resource. It is thus crucial to avoid its
irrational and senseless spending (Becker 1965), but also to seek a minimal time allocation, all
other things being equal, in boring but necessary activities, particularly in travel; hence time may have an
implicit price, at least a resource price. De Serpa (1971) noticed that this is not exactly the case
when we consider that the consumer is not always the producer of his or her own activities, particularly in
travel. In this case, the user is a time-taker and rationally knows that he or she has to account for a
minimal amount of time to be spent. Time resources then differ from travel time, and de Serpa defines
the value of travel time as the implicit transfer value from travel time to resource time. Truong and
Hensher (1985) generalised this approach by splitting travel time into different components -
access/egress times, waiting times, commuting times and riding/on-board times - and found
significant values of travel time savings. This has become a usual approach in practical modelling (Bhat
1998a and 1998b, Horowitz et al. 1980, Quinet 1998).

Another important standpoint is to differentiate the several reasons why travel time has to be
allocated. The basic idea is that the activity to be carried out at the destination does not improve the
well-being of the individual in the same way in every situation.

Thus the travel time needed to access it does not take the same importance in the mind
of the individual. This implies different preferences for different situations, and has led to activity-based
travel demand systems (Domencich and McFadden 1975, Train and McFadden 1978, Small 1992,
Ben-Akiva and de Palma 1996). We focus in this paper on the regular journey-to-work (JTW)
framework, particularly the transportation mode choice issue. MVA (1987) and McFadden (2000)
have noticed, in practical discrete choice modelling, that several possible econometric specifications
correspond to different theoretical approaches (McFadden 1973, Ben-Akiva and Lerman 1985). We
have also noticed that most empirical applications use functional forms with pre-specified
properties for the implicit valuations of the attributes of travel portfolios, such as income effects (see for
instance de Palma and Kilani 1999, de Palma and Fontan 2001). MVA (1987) and McFadden (2000)
have also highlighted the fact that travel time may not be quantified fully rationally. It is possible to
account for these effects using simple transformations of the corresponding variables, but we argue that
this is a misleading approach, since it is based on a priori behavioural considerations that simply
lead to pre-specified results. Thus, we develop in a first section a regular journey-to-work
mode choice model in which the traveller faces mutually exclusive alternatives, each
supplying a distinct travel portfolio. We describe the decision process of a rational traveller, and hence
derive definitions for the concepts of prices and values of travel times. We analyse their response
functions under general assumptions. We show that different situations emerge according to
the capacity to endogenise leisure patterns during the OD trip, defining behavioural profiles for the
traveller, so that the effective travel time may be distorted by an individual who often does not strictly
quantify it. We also allow for an income effect entering the willingness to pay for savings in the time
attributes. Each price of time is specific to the individual and to the alternative. We develop a binomial
Box-Cox logit model (Gaudry 1978, Gaudry 1981, Gaudry et al. 1996), particularly adapted for the
joint estimation of taste and behaviour parameters, as will be motivated. We propose estimates
of the parameters of the theoretical VOT functions for the Paris region using an updated 1998
sub-sample of the large regional travel survey.
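
For reference, the Box-Cox transformation of an attribute and the resulting binomial choice probability take the following general form (our notation, not necessarily the authors' exact specification):

    \[
      x^{(\lambda)} = \begin{cases} \dfrac{x^{\lambda}-1}{\lambda}, & \lambda \neq 0 \\ \ln x, & \lambda = 0 \end{cases}
      \qquad
      P(1) = \frac{1}{1+\exp\!\left[-(V_1 - V_2)\right]}, \quad
      V_i = \alpha_i + \beta_t\, t_i^{(\lambda_t)} + \beta_c\, c_i^{(\lambda_c)}
    \]

with the transformation parameters \(\lambda\) estimated jointly with the taste parameters \(\beta\), so that the degree of non-linearity is inferred from the data rather than imposed a priori.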

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 02iii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Modelling mode choice behaviour for extra-urban journeys

CANTARELLA, G E, University of Salerno, Italy
LUCA, S de, University of Salerno, Italy
This paper deals with the simulation of mode choice behaviour for extra-urban journeys. Such journeys are
generally home-based, and mode choice behaviour is greatly affected by user socio-economic
characteristics, whilst the effect of level-of-service (LoS) attributes can be significantly non-linear, and a
relevant fraction of users may be captive to a specific mode. Moreover, the zoning as well as the access
and egress to the transit system can often be modelled only at an aggregate level, due to data
availability. Finally, the structure of the dispersion matrix among the perceived utilities may be quite
complex, also due to unusual modes, such as car-pool, dial-a-ride, ... These considerations count
against models developed for urban journeys, which turn out to be quite rigid with respect to LoS attributes
and are often largely determined by alternative-specific attributes (ASA) when applied to extra-urban
journeys.

This paper follows a random utility approach and focuses on the effects of hypotheses and data on the
efficiency of the resulting model (how satisfactorily the real phenomenon is simulated) as well as its
effectiveness (how easy the model is to use, and which data it requires). The Dogit model will be used
to simulate user captivity to given alternatives, and Box-Cox transformations to test non-linearity of utility
with respect to LoS attributes, whilst Cross-Nested Logit or Probit models allow complex
structures of the dispersion matrix. Spatial ASA will be adopted to avoid detailed zoning and access
and egress representation.
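
For reference, the Dogit model (Gaudry and Dagenais, 1979) gives alternative i the choice probability (notation ours):

    \[
      P_i = \frac{e^{V_i} + \theta_i \sum_j e^{V_j}}{\left(1 + \sum_k \theta_k\right) \sum_j e^{V_j}},
      \qquad \theta_i \ge 0,
    \]

so that \(\theta_i / (1 + \sum_k \theta_k)\) is the captive share of alternative i, independent of the LoS attributes, while the remaining probability mass is allocated as in a standard logit.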

Some indices will also be presented and discussed to compare model effectiveness. The effectiveness
will also be tested against artificial neural network models as a benchmark.

Results of applications to real test-sites will be presented to support general considerations.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 03i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Combined use of stated choice, transfer price, and Frisch Method in a valuation survey

FEARNLEY, N, Institute of Transport Economics, Oslo, Norway
SAELENSMINDE, K, Institute of Transport Economics, Oslo, Norway
A good Stated Preference survey helps respondents to make real trade-offs between the attributes of the
alternatives presented, and to state their real preferences. For all valuation methods this means
that the scope for protest and strategic answers is minimised. For Stated Choice
(SC) it means that attribute levels must be well balanced and that the choice situation must not be too
complex; as a result, the proportion of lexicographic choices is reduced.
This paper presents a survey among public transport users in Oslo, in which six different questionnaire
designs were used. Three different methods were used and combined:
- Stated Choice (SC)
- Transfer Price (TP)
- the 'Frisch Method' (FM), named after the 1969 Nobel Prize laureate Ragnar Frisch, who launched a
new and simple way to home in on respondents' trade-offs between attributes through a sequence of questions.


Overlapping tests have been run in order to check respondents for consistency across the methods. We
found relatively good correspondence between individual respondents' preferences and valuations
across the methods. For example, respondents who sort their answers lexicographically according to
one attribute in SC also tend towards a higher WTP for the same attribute in TP.

However, while the valuations of travel time factors are relatively stable, the proportions of
zero-valuations and lexicography vary to a great degree between the methods. The major problem of
zero-valuations in TP is reduced significantly with the new FM approach. In many respects, therefore,
we regard FM as a superior method to TP.

In order to make the SC design more flexible, the variation of the price level was adjusted
according to the WTP obtained in the preceding TP questions. Although not significant, there is a clear
trend towards lower proportions of lexicographic answers in SC with this method of fine-tuning the
price level variations.

Combined use of valuation methods is therefore found to (i) improve the flexibility of the SC design;
(ii) determine the degree to which respondents answer consistently; and (iii) allow the
performance of each method to be compared.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 03ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Simultaneous analysis of choice and transfer price data

DALY, A, RAND Europe, The Netherlands
GUNN, H, RAND Europe, The Netherlands
Estimation of the relative importance of components of travel disutility (or generalised cost) is of
fundamental importance in transportation planning, whether modelling the choices of individual travellers or
assessing the value they attach to travel time or other components for evaluation purposes. The most common
way in which the values of these journey attributes are estimated is through the use of choice models, in which the
choice is interpreted as an observation that the traveller has preferred one
combination of journey attributes to the other available combinations, i.e. that the utility of the chosen alternative
is greater than the utility of the available non-chosen alternatives.

An alternative data form that has looked attractive in principle for many years is ‘transfer price’ data, in
which respondents are asked how much better their choice is than a specified alternative. Such data has
also been called Contingent Valuation and a substantial literature exists documenting its advantages and
disadvantages. The key aspect of this data which makes it attractive is that the amount of utility
difference is collected, rather than, as with choice data, simply asking which alternative has the greater
utility. The increased information content given by transfer price data can greatly increase the estimation
accuracy and potentially help to reduce biases arising from the use of SP data.

The theoretical framework of utility maximisation used in choice modelling is also applicable to transfer
price data, which raises the possibility of analysing both types of data together. In separate analysis of
transfer price data, which has been used hitherto, the magnitude of the utility difference expressed by
the transfer price is regressed, usually in a simple linear regression, on the explanatory time and cost
variables. However, this analysis ignores the fact that we know absolutely, from the choice that is also
observed, what the sign of that utility difference is. It is in principle possible to use the choice and
transfer price data together, using the information on both the size and sign of the utility difference. The
simplest formulation of this simultaneous estimation would imply models of the ‘Tobit’ type, or close
relatives to that form. The key characteristic of these models is that they recognise that the traveller's
unmeasured preferences - i.e., what is represented by the error term - are effectively the same, or at
least highly correlated, when he or she makes a choice as when he or she responds to the transfer
price question.
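
A minimal sketch of this structure, in our own notation rather than the paper's specification: let a latent utility difference be shared by the choice and the transfer price response, so that the choice contributes the sign of the difference and the transfer price its stated magnitude:

    \[
      y^{*} = \Delta V + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2);
      \qquad
      L_n = \begin{cases}
        \Phi\!\left(\Delta V_n / \sigma\right) & \text{choice only observed} \\[4pt]
        \dfrac{1}{\sigma}\,\phi\!\left(\dfrac{TP_n - \Delta V_n}{\sigma}\right) & \text{transfer price } TP_n \text{ also observed}
      \end{cases}
    \]

Rounding and other response biases of the kind discussed below would then enter as modifications of the transfer price density.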

Further sophistication of such approaches involves extending the analysis to considering Stated
Preference as well as Revealed Preference data and incorporating the correlations of the error terms in
these data types with that of Transfer Price. Response biases of various types in the data, including the
rounding which is characteristic of Transfer Price data, can also be included in the most sophisticated
analyses.

The paper discusses the assumptions and analysis methods which allow simultaneous use of these data
types and draws conclusions for their appropriate use in a range of circumstances.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 04i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Multimodal assignment to congested networks: fixed-point models and algorithms
D'ACIERNO, L, Università degli Studi di Napoli "Federico II", Italy
GALLO, M, Università degli Studi di Napoli "Federico II", Italy
MONTELLA, B, Università degli Studi di Napoli "Federico II", Italy
This paper presents an elastic-demand multimodal assignment model with transit costs depending on car
flows in the case of non-exclusive bus lanes (shared lanes). This assumption implies "non-separable"
cost functions, and therefore it is not generally possible to demonstrate the uniqueness of the equilibrium
solution and the convergence of solution algorithms using only the conditions proposed in the literature.

The model is multimodal because mode choice depends on congested costs for both modes.
In this paper a condition on demand models for uniqueness of equilibrium solutions is stated; that
condition implies some hypotheses on transit cost functions. At this time it has not yet been proved that the
Logit models adopted in the proposed multimodal formulation satisfy this condition, except for some test
networks, as reported in the paper. However, tests on a real network showed the convergence of the
algorithms and the uniqueness of the solution. Moreover, a comparison of the performance of the solution
algorithms is reported. Future research will address proving in a general way the
applicability of the proposed condition for the transit-congested multimodal assignment model.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 04ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
An efficient design for very large transport models on PCs

LINDSAY, C, ME&P, WSP, UK
WILLIAMS, I, ME&P, WSP, UK
As part of the development of the highway assignment module for the pilot version of the UK
DTLR's National Passenger Transport Model, ME&P created a 10,000-zone assignment model which
was run on a PC (though not without tedious complications). Subsequently, there was interest in
upgrading the model so that it could be iterated to a converged equilibrium solution at the 10,000-zone
level while still using a detailed road network for all of GB.

This paper outlines the theoretical foundations and the application in practice of the sampling approach
that was designed for this purpose. Through appropriate sampling procedures, computational
power and detail are focused on zone pairs in proportion to the level of traffic that they generate, while
avoiding bias in the results. The approach involves:
- designing a suitable algorithmic structure and order for the main operations in the transport model
software
- designing a suitable theoretical sampling and permutation structure to avoid bias
- implementing the approach in practice for a large model and reviewing the performance.

It is demonstrated that major gains in computational efficiency can be achieved with minimal reductions
in the accuracy of the local mode split and the assignment of flows to the network. The net result is that the
computational requirement for large models increases roughly in proportion to the number of zones, rather
than to its square.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 04iii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Day-to-day approach for path choice models in the transit system: analytical evaluations and
simulation results

RUSSO, F, University of Reggio Calabria, Italy
VELONA, P, University of Reggio Calabria, Italy
The aim of this paper is to evaluate link flows and network performance, such as travel times and
levels of service, in day-to-day dynamics. The formulation of the modal split model is
recalled, stressing its dependence on the within-day and day-to-day dynamics of the transit service. The
modal split is influenced by day-to-day learning components, as in private transportation
systems. Once some components subject to day-to-day modification have been identified, some examples of
system dynamics under different supply evolution typologies are introduced. An analysis of the
weight of the control parameters in the dynamic process is developed, considering the stability
conditions of the system.

A simultaneous path choice model is considered, taking into account the dependence of some attributes
in the systematic utility function on the congestion caused by the flow of the users of the
service and on the congestion arising on the shared lanes in which the public transport vehicles
move.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 05i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Pitfalls and solutions for spatial general equilibrium models used for transport appraisal

MUSKENS, A C, TNO Inro, The Netherlands
OOSTERHAVEN, J, University of Groningen, The Netherlands
TAVASSZY, L A, TNO Inro, The Netherlands
THISSEN, M J P M, University of Groningen, The Netherlands
The use of spatial equilibrium models for assessing the economic impacts of transport projects is one of
the key items on the research agenda for project appraisal in the Netherlands. These models are
particularly suitable to analyse indirect effects of transport projects through linkages between the
transport sector and the wider economy (i.e. the transport-using sectors). Potentially, according to the
literature, these impacts can amount to as much as 40% of the direct impacts. There is,
however, no general indication that indirect effects are always of this magnitude - this has to be proven
on a case-by-case basis. After two years of applications of SCGE models for transport appraisal, we
found that the conventional specification of spatial equilibrium models can lead to problems in project
appraisal in terms of inaccuracies in the assessment of impacts. This paper discusses how to fine-tune
these models to allow an accurate assessment of these indirect effects. These ideas should be of value
for those practitioners or researchers who are developing SCGE applications for use in transport
appraisal.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 05ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Cost-benefit evaluation of infrastructure: doing it the Hedonic way
ANKER NIELSEN, O, CTT / Danish Technical University, Denmark
HUSTED RICH, J, CTT / Danish Technical University, Denmark
An increased awareness of the cost of negative externalities such as noise, air pollution, and human
exposure, combined with the fact that urban structures are becoming denser, has led to an increasing
need for reliable cost-benefit evaluations of infrastructure projects. In addition, there has been an
increased focus on public consumption, which has required a more careful planning phase to reduce
wasteful expenditure.

The hedonic pricing method as launched by Rosen (1974) constitutes one of the most
promising tools to evaluate negative and positive benefits from changes in infrastructure, by utilising the
hedonic pricing process observed in the housing market. Houses are valued according to specific
attributes and surroundings, which can be used to calculate the overall benefits of infrastructure projects
and to derive implicit prices of environmental loads such as noise and the deterioration of amenities. The
paper is organised into two parts: the first part aims to establish a data foundation by means of
GIS and Internet technology; the second part focuses on econometric
issues and includes an application in which the benefits from a newly finished metro project in the
Copenhagen region are calculated.

The greatest difficulty in generating data suitable for the hedonic model is linking houses and their
prices with the proximity to attributes that are likely to affect prices negatively as well as positively. This is done by
relating addresses to a number of thematic GIS layers used to describe residential quality.

The different themes include recreational grounds, seaside and lakes, parks, sports facilities (golf
courses, football fields, sports arenas, etc.) and local taxation levels, to mention a few. The
uncontroversial (negative) perception of traffic noise is handled through noise buffers, whereas the
positive effect of being close to certain amenities is calculated by weighting the share of such amenities within
different distance bands of the address. Zone-based commuter and shopping accessibility is
implemented by means of logsums from an external demand model. House prices and house
characteristics are extracted directly from the Internet using Java scripts that automatically search the
web sites of real-estate agencies. Selling prices tend to be a fairly good reflection of the
willingness to pay for housing in the market. Furthermore, the way the data are generated makes the method
independent of expensive external data sources and portable to other regions and countries. The
validity of the Internet data is analysed in a separate comparison with actual trade prices from the central
register.

In order to test the applicability of the hedonic methodology, an evaluation of the Copenhagen metro
project is presented. The hedonic model estimation is carried out in a mixed regression setup, in which
model parameters are allowed to follow pre-defined distributions. The mixed regression model is
conceptually comparable to mixed models in discrete choice analysis (also known as error component
models, or mixed logit), which are recognised as successful in accounting for taste variation. This
property is especially important in the hedonic model since, as in stochastic assignment models, we do
not observe individual characteristics such as income, age, and sex. The application, which calculates the
overall benefit of the project as well as an implicit price of noise, illustrates the strength and the cost
efficiency of the method compared to traditional cost-benefit analysis tools.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 06i
-------------------------------------------------------------------------------------------------------------------------------------------------------
A dynamic O-D matrix estimator using analytic derivatives

LINDVELD, C, Delft University of Technology, The Netherlands
Static O-D matrix estimation can be formulated as a bilevel programming problem. The upper-level
problem tries to make the estimated matrix satisfy certain constraints, such as traffic counts, row and
column totals, and similarity to an a priori matrix. The lower-level problem typically stipulates an
equilibrium assignment of the matrix to a network.

This framework can be extended to the dynamic case. However, as is shown in the literature,
computational issues such as convergence and efficiency of the solution algorithms become
serious.

One of the central issues is the calculation of a convergent series of descent steps in the upper level of
the problem. Recently a solution based on the concept of subgradients was presented by Codina and
Montero. Although the use of subgradients is computationally expensive, Codina and Montero
report improvements in convergence and in the speed of convergence.

In this paper we propose to alleviate the problem of finding convergent descent steps for the upper-level
problem. Denoting the objective function of the O-D matrix T in the upper level by z(T), we can
split z(T) into a part z1(q_obs, q_model(T)) related to the assignment results and a part z2(T_apriori, T)
related to the a priori matrix, so that z(T) = z1 + z2.

We will explicitly calculate the derivatives of the upper-level objective z(T) with respect to the matrix
cells, using the analytic stochastic dynamic traffic assignment (DTA) method recently proposed by
Ismail Chabini. We will use the derivatives to investigate the convergence of the algorithm in a simple test
case.
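
A minimal sketch of how such derivatives are used (with an invented, linearised assignment map; in the actual method the sensitivities dq/dT would come from the analytic DTA rather than a fixed matrix A):

    # Sketch: gradient descent on the upper-level objective z(T) of a bilevel
    # O-D estimation problem, using analytic derivatives of the assignment map.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.random((6, 4))                   # assumed link-flow sensitivities dq/dT
    T_apriori = np.full(4, 100.0)
    q_obs = A @ np.array([90.0, 120.0, 80.0, 110.0])   # synthetic observed counts

    def z_and_grad(T, w=1.0):
        """z(T) = z1(q_obs, q_model(T)) + z2(T_apriori, T), least-squares form."""
        r = A @ T - q_obs
        z = 0.5 * r @ r + 0.5 * w * np.sum((T - T_apriori) ** 2)
        grad = A.T @ r + w * (T - T_apriori)   # analytic derivative w.r.t. cells
        return z, grad

    T = T_apriori.copy()
    for _ in range(200):
        _, g = z_and_grad(T)
        T = np.maximum(T - 0.01 * g, 0.0)      # projected (non-negative) descent step
    z_final, _ = z_and_grad(T)
    print(round(z_final, 3), T.round(1))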

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 06ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Estimation of origin-destination matrices from link counts
GORDON, A, Mott MacDonald, UK
HAZELTON, M, University of Western Australia, Australia
WATLING, D, University of Leeds, UK
An accurate origin-destination trip matrix is an essential part of a successful traffic assignment model.
Typical methods for building a matrix involve expensive surveys which often have low sample rates and
rarely observe all possible OD movements. Although there are well-established techniques for
estimating trip matrices from link counts, they depend on having a prior trip matrix, the accuracy of
which is a key determinant of the accuracy of the estimated matrix.

This paper describes a method developed as part of a project for the UK Highways Agency which can
estimate a trip matrix from link counts alone, without the need for a prior matrix. It does this by making
greater use of the information provided by counts collected over a number of days. Rather than
working with just mean counts, the variance and the covariance of counts are also used. In many cases
this provides enough information to estimate route flows using a least squares estimator, from which OD
flows are readily obtained. The information from the link counts can be supplemented with estimates of
route choice proportions.
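
A sketch of the underlying idea in our own notation (not the project's code; independent Poisson day-to-day route flows and a known link-route incidence matrix are assumptions here): if daily route flows have means x, the link counts y = A x satisfy E[y] = A x and Cov(y) = A diag(x) Aᵀ, so stacking first and second sample moments gives an overdetermined linear system solvable for x by least squares:

    # Sketch: estimating route flows from the mean AND covariance of repeated
    # daily link counts (illustrative data; Poisson day-to-day variation assumed).
    import numpy as np

    rng = np.random.default_rng(3)
    A = np.array([[1, 0, 1],                # assumed link-route incidence matrix
                  [0, 1, 1],
                  [1, 1, 0]], dtype=float)
    x_true = np.array([40.0, 60.0, 25.0])   # true mean route flows

    days = rng.poisson(x_true, size=(200, 3)) @ A.T   # 200 days of link counts
    m = days.mean(axis=0)                   # sample mean of the counts
    S = np.cov(days, rowvar=False)          # sample covariance of the counts

    # Stack the moment equations m = A x and vech(S) = vech(A diag(x) A')
    iu = np.triu_indices(3)
    G = np.stack([np.outer(A[:, r], A[:, r])[iu] for r in range(3)], axis=1)
    lhs = np.vstack([A, G])
    rhs = np.concatenate([m, S[iu]])
    x_hat, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    print(x_hat.round(1))                   # close to x_true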

The paper describes the theory of the method in detail and presents the results from two applications to
UK road networks. The first uses simulated data on a section of the M25. The second involves real
data from automatic traffic counts on the Kent trunk road network. These and other results lead to
conclusions about the optimal application of the method, including issues such as the number of days of
count data required.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 06iii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Methods for the estimation of time-dependent origin-destination matrices using traffic flow
data on road links

MAHER, M, School of Built Environment, Napier University, UK
MUSTARD, D, TRL Limited, UK
ZHANG, X, TRL Limited, UK
An origin-destination (O-D) matrix is a table containing traffic demands from each origin to each
destination in a road network. It is an essential source of information in every aspect of the transport
planning process. Estimating O-D matrices from traffic flow data collected on road links is a tool
commonly used to produce or estimate O-D matrices when existing information is incomplete or
out-of-date. It is also less labour-intensive than traditional methods for deriving O-D matrices, such as
the O-D survey method using home or roadside interviews.

Estimating O-D matrices for a general network requires information on route choices in terms of the
proportions of each O-D flow using each route or link in the road network. In general, route choice proportions are
dependent on congestion levels, which, in turn, depend on traffic demands. They are
normally obtained by a traffic assignment model given an O-D matrix. Therefore, a matrix estimation
method for congested networks is normally combined with a traffic assignment model. Almost all
assignment-based estimation methods developed so far are static: the input flow data consists of total
traffic volumes on road links over a single (and normally long) period of time and an average O-D
matrix is estimated. Time-variations in traffic are not considered. A dynamic method uses time series
data of traffic flows and so it is possible to estimate time-dependent O-D matrices. To date, only
non-assignment-based dynamic methods have been developed in the literature. In these methods, it is
assumed that the route choice proportions are constants and are determined separately from the matrix
estimation process. This assumption may be justified only in non-congested networks and may lead to
inconsistencies between the results of matrix estimation and traffic assignment.

The estimation method presented in this paper is developed in an on-going research project. The
principle of a static estimation method will be extended so as to develop an assignment-based dynamic
estimation method. The estimation problem is formulated as a mathematical programming problem
based on the entropy-maximisation principle, although other formulations may also be used, such as the
least squares principle. The traditional entropy-maximisation method (ME2 method) for matrix
estimation assumes that traffic-flow data are error-free. This assumption will not be made in this paper.
The programming problem has a hierarchical structure: at the top level an O-D matrix is estimated given
route choice proportions; and, at the lower level, route choice proportions are determined by an
assignment model given an O-D matrix defined at the top level. A heuristic solution algorithm is
proposed for solving the programming problem, using the dynamic traffic assignment package
CONTRAM. The algorithms will be tested using both simulated data and data collected on a real road
network. It will be shown that the algorithms are efficient and are applicable to practical networks.
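
For reference, the entropy-maximising (ME2) objective underlying the approach can be written in standard form (notation ours), with the count constraints relaxed to allow for errors in the flow data as the paper proposes:

    \[
      \max_{t \ge 0} \; -\sum_{ij} t_{ij}\left(\ln \frac{t_{ij}}{\hat{t}_{ij}} - 1\right)
      \quad \text{s.t.} \quad \sum_{ij} p^{a}_{ij}\, t_{ij} \approx v_a
      \quad \text{for each counted link } a,
    \]

where \(\hat{t}\) is the prior matrix, \(p^{a}_{ij}\) the proportion of O-D flow ij using link a (supplied by the lower-level assignment model) and \(v_a\) the observed counts.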

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                    Session MI 06iv
-------------------------------------------------------------------------------------------------------------------------------------------------------
A generation constrained approach for the estimation of O/D trip matrices from traffic counts

IANNO, D, ATAM (Public Transport Company), Italy
POSTORINO, M N, University of Reggio Calabria, Italy
In recent years increasing attention has been devoted to methods for the estimation of O/D matrices
from traffic counts, referred to in the following as O/D count-based estimation (ODCBE). These methods
efficiently use traffic flow measurements, combining them with other available information, in order to correct and
improve an initial estimate of the O/D trip matrix.

All the methods proposed to solve the ODCBE problem need an initial estimate of the O/D matrix (target
matrix) and a set of link traffic flows measured on the transportation network. Virtually all of them are
formulated as optimisation or mathematical programming problems with an objective function and a set
of constraints. ODCBE models also need an estimate of the assignment matrix, i.e. of the percentages
of each O/D flow that use each link of the network for which traffic counts are available. These percentages
depend on the generalised costs of all the links of the network; but for congested networks these costs
are generally not known by the analyst, and they are computed on the basis of the link flows
forecast by the assignment model, which in turn depend on the O/D matrix to be assigned. In the
literature, this circular dependence between O/D matrix estimation and traffic flow assignment
has been studied by different authors as a bi-level mathematical programming problem for
deterministic user equilibrium assignment models (Fisk, 1988, 1989; Bell, 1991; Chen and Florian,
1994; Yang, 1995; Cascetta and Postorino, 2001).

In this paper a generalised least squares (GLS) model and a stochastic assignment model are used to
solve the ODCBE problem. In particular, the objective function is specified so as to take into
account generation constraints that prevent the estimated trips generated by each origin zone from being
greater than the actual ones, as could otherwise happen if no such constraints were introduced.

In the proposed formulation a trip generation constraint is explicitly considered in the model, taking into
account the fact that the estimated trips generated by each zone cannot be greater than the maximum
actual generation of the zone itself. It is well known that, given a set of counted traffic flows, there are
many O/D matrices that, when assigned to the network, reproduce the same observed flow values. The
introduction of the trip generation constraint allows one to select, among the possible solutions of the
problem, the matrix that satisfies the generation constraint and is therefore more reliable.

The results obtained on a test network are very satisfactory, and they show a general improvement with
respect to the usual formulation both in the reproduction of the counted flows and of the uncounted
flows, i.e. over the whole network.
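
In our notation (a sketch of the structure, not the authors' exact specification), the resulting estimator has the form:

    \[
      \min_{x \ge 0} \; (x - \hat{x})^{\top} Z^{-1} (x - \hat{x}) + (f - Mx)^{\top} W^{-1} (f - Mx)
      \quad \text{s.t.} \quad \sum_{d} x_{od} \le G_o \quad \text{for every origin } o,
    \]

where \(\hat{x}\) is the target matrix, f the counted link flows, M the assignment matrix, and \(G_o\) the maximum actual generation of zone o.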

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 07ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Inside the queue: hypercongestion and road pricing in a continuous time/continuous place
model of traffic congestion

VERHOEF, E, Free University, Amsterdam, The Netherlands
This paper develops a continuous time - continuous place model of road traffic congestion, based on
car-following theory. The model fully integrates two archetype representations of traffic congestion
technology, namely "flow congestion", originating in the works of Pigou, and "vertical queuing" models,
pioneered by Vickrey. Because a closed-form analytical solution of the formal model does not exist, its
behaviour is explored in a numerical exercise. In a setting with endogenous departure time choice and
with a bottleneck halfway the route, it is shown that "hypercongestion" can arise as a dynamic
"transitional and local" equilibrium phenomenon. Dynamic toll schedules are also explored. It is found
that a toll rule based on an intuitive dynamic and space-varying generalisation of the standard Pigouvian
tax rule can hardly be improved upon. A naïve application of a toll schedule based on Vickrey's
bottleneck model, in contrast, performs much worse and actually even reduces welfare.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 07iii
-------------------------------------------------------------------------------------------------------------------------------------------------------
A genetic algorithms based approach for solving optimal toll location problems

SHEPHERD, S P, ITS, University of Leeds, UK
SUMALEE, A, ITS, University of Leeds, UK
A mathematical approach to second-best tolling and to identifying the optimal toll locations for a general
network is categorised as a mathematical program with equilibrium constraints (MPEC). This class of
optimisation problem is NP-hard, meaning that no algorithm is known that can find the exact
global optimum efficiently. A derivative-based approach to solving the optimal toll problem (named
CORDON) is demonstrated in this paper for a medium-scale network. However, as the properties of
MPECs suggest that the derivative-based approach is unlikely to find the global optimal solution,
an alternative genetic algorithm (GA) based approach for finding optimal toll levels for a given set of
chargeable links (named GA-CHARGE) is developed to tackle this problem. The GA approach does
not stop at a local optimum or rely on the use of derivatives. A variation on the GA-based approach is
also used to identify the best toll locations (GA-LOCATE), making use of the "location indices" suggested
by Verhoef (2000). However, the location indices adopted often overestimate the benefits, and this
causes a problem when the implementation costs are included in the analysis. Thus, an alternative
method based on the idea of Parallel Genetic Algorithms (PGA) is developed. The PGA-based method
(named PGA-ALL) is designed to solve the problems of optimal toll location and optimal toll levels
simultaneously. These methods are tested on a medium-scale network of Leeds to compare their
performance.
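
A minimal sketch of the GA idea for toll levels (a generic GA with an invented surrogate objective; in GA-CHARGE each candidate toll vector would instead be evaluated by solving the lower-level network equilibrium):

    # Sketch: genetic algorithm searching toll levels on a fixed set of links.
    import numpy as np

    rng = np.random.default_rng(4)
    n_links, pop_size, max_toll = 3, 30, 5.0

    def welfare(tolls):
        """Invented smooth stand-in objective; a real evaluation would run an
        equilibrium assignment and compute social welfare for these tolls."""
        return -np.sum((tolls - np.array([1.5, 0.5, 3.0])) ** 2)

    pop = rng.uniform(0, max_toll, (pop_size, n_links))
    for gen in range(100):
        fit = np.array([welfare(p) for p in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]       # truncation selection
        pairs = rng.integers(0, len(parents), (pop_size, 2))  # pick parent pairs
        alpha = rng.random((pop_size, 1))                     # blend crossover
        pop = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
        pop += rng.normal(0, 0.1, pop.shape)                  # mutation
        pop = np.clip(pop, 0, max_toll)                       # feasible toll range
    best = pop[np.argmax([welfare(p) for p in pop])]
    print(best.round(2))                    # near the surrogate optimum

Because each generation only ranks and recombines candidate toll vectors, the search does not need derivatives of the equilibrium map and is not trapped by the first local optimum it encounters.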

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 08i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Person-specific models in SP analysis
DALY, A, ITS, University of Leeds and RAND Europe, UK
SOLA CONDE, P, ITS, University of Leeds, UK
Given that Stated Preference surveys based on choice experiments typically collect multiple responses
from each individual contacted in the survey, it is natural to ask whether it is useful to develop choice
models for each respondent. Person-specific models are often used in general marketing studies. These
models could be used in the transport context either as final products to understand and predict
behaviour or as an intermediate stage in the development of more sophisticated models of population
behaviour.
Study of person-specific models has been inhibited by the results of work by Morikawa (ref. about
1990), who found that person-specific models performed much less well than models based on pooling
the data from all respondents. However, recent work by Kroes and Cirillo (1999) suggests that
Morikawa's findings may apply to a narrower range of circumstances than had been thought, thus
opening the possibility that person-specific models could be useful in some circumstances.

The paper reports the results of a study undertaken at ITS Leeds in which simulated data is used to
investigate the circumstances in which person-specific models could be useful. By varying the
assumptions underlying simulated behaviour, it was possible to represent a range of circumstances and
to investigate the success of person-specific models in each case. Some areas for further research are
identified. An important step in the investigation is the specification of appropriate ways to measure the
success of models in reproducing the simulation assumptions and two separate measures have been
defined and applied.

The results indicate that in most circumstances person-specific models are not as successful as models
based on pooled data, but when inter-personal variation is high, in ways that are described in the paper,
person-specific models might indeed be useful. Thus in extreme cases it may be useful to investigate
person-specific models as an intermediate step to developing a population model, but in most cases
Morikawa’s conclusions remain valid.

The paper will be of value to SP researchers in developing methodology by indicating the limits of
applicability of simple analysis methods and how to proceed when those limits are exceeded. The new
aspect of this work is that it goes beyond previous studies in terms of the realism of the error
structures considered, therefore giving better insight into the true extent of applicability of
person-specific modelling.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 08ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Best practice in SP design

DALY, A, RAND Europe, The Netherlands
KROES, E, RAND Europe, The Netherlands
SANKO, N, Ecole Nationale des Ponts et Chaussees, France
Stated Preference surveys have now been used for up to twenty years for looking at transport policy
issues and a number of schools of practice have developed. Within each school, practitioners improve
their methodology but they rarely if ever take account of developments in competing schools. The
result is that particular aspects of SP practice are done better in one school than another.

A key stage in conducting an SP study is the experimental design of the survey; this is also one of the
areas where there are substantial differences of approach between the schools. A researcher trying to
set up a study is therefore faced with a series of practical issues and can have great difficulty in finding
his or her way through the jungle of the literature.

The paper reports the findings of a study conducted by RAND Europe to look at the state of practice
in experimental design for SP surveys. Comparing the methods used by different schools - in the US,
Australia and Japan, as well as in Europe - practical conclusions are drawn as to the best procedures to
follow in the present state of understanding of design procedures. Areas are identified where research
is clearly necessary, but in many cases it is possible to make clear recommendations.

The original aspect of the paper, other than comparing the work of the competing SP schools, is in
deriving practical step-by-step procedures for choosing the best available design for a given survey.

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 09i
-------------------------------------------------------------------------------------------------------------------------------------------------------
Uncertainties in modelling time and cost trading in travel: temporal and sequencing issues
GUNN, H, RAND Europe, The Netherlands
BURGE, P, RAND Europe, UK

ABSTRACT NOT AVAILABLE

-------------------------------------------------------------------------------------------------------------------------------------------------------
                                                     Session MI 09ii
-------------------------------------------------------------------------------------------------------------------------------------------------------
Understanding and valuing journey time variability
COPLEY, G, FaberMaunsell, UK
MURPHY, P, FaberMaunsell, UK
PEARCE, D, Highways Agency, UK
Congestion is an increasingly important issue for road users, and the Highways Agency has a specific
objective to 'take action to reduce congestion and increase the reliability of journey times'.

Reducing travel times is a key issue in economic appraisal. Reducing the variability in travel times has
not been treated with the same importance, despite evidence from previous studies that variability in
journey times is valued more highly by some people than journey time itself.

Valuing journey time variability for road users is very difficult given the complexity of the subject.
Traditionally, researchers have estimated the ratio of the parameters for mean journey time and its standard
deviation, the 'reliability ratio'. More recent work carried out in the US has adopted a more
behaviourally sound 'scheduling approach', which estimates early and late scheduled delay time and the
probability of being late explicitly. This study builds on that work and has undertaken a detailed
qualitative assessment of what travel time variability means to people, in order to identify the most
meaningful method of presenting journey time information. Its methodology has been innovative in the
presentation of journey time information and also in including the optimisation of departure time choice in the SP
experiment.
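
For reference, the scheduling approach typically estimates a utility of the general form (after Small, 1982; notation ours):

    \[
      U = \alpha\,T + \beta\,\mathrm{SDE} + \gamma\,\mathrm{SDL} + \theta\,D_L,
    \]

where T is the journey time, SDE and SDL the expected schedule delays early and late, and \(D_L\) the probability of being late; the reliability ratio, by contrast, is simply the ratio of the standard-deviation parameter to the journey-time parameter.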

The study had two key objectives, which were addressed in two phases:
- To explore, using qualitative research, what travel time variability means to people and to gain an
understanding of the best methods of representing it.
- To measure the value that people place on journey time variability using stated preference techniques.

The qualitative phase consisted of in-depth interviews and focus groups. Travel diaries were also used to
collect detailed information about the journeys made. The main purpose of this phase was to gain an
understanding of what journey time variability means to people and to explore how it impacts on
journey planning. Different types of presentational techniques (linear and clockface, based on previous
research, and histogram, developed in this study) were explored to derive a preferred method of
presentation for the stated preference experiment. The criteria used by respondents (e.g. journey time
mean, minimum, maximum, range, standard deviation) when faced with a choice between two journey
time distributions with different characteristics were explored.

The key findings from the qualitative research were that journey planning does not allow for 'extreme'
incidents. Considerable buffers are built into schedules in order to avoid being late. However, many
business appointments are made with a degree of flexibility - the 'ish' - which recognises the difficulty of
predicting journey times and the acceptability of being late for particular appointments. The different
presentational methods were generally understood, but the importance of the different criteria used by
travellers varied considerably.

The second, quantitative phase consisted of an exploratory stated preference survey with 200 travellers,
using the histogram method of presentation. The first stage of the interview consisted of a travel diary.
This was followed by a computer-aided interview in which respondents were presented with choices
between different customised journey time distributions. A key aspect of the computer interview was
that respondents were asked to optimise their departure times in the light of the distributions presented.
They were then asked to choose between different optimised distributions.

The analysis will derive parameters for mean journey time, scheduled delay (early and late time), the
probability of being late and also the standard deviation. It will be possible to produce separate models for
different types of commuter. This will enable analyses using reliability ratios and the more behaviourally
sound scheduling approach to be compared.

The results of the exploratory SP survey will provide an insight into how different types of commuters
value journey time variability.

				