AI and The Environment
AI Blueprints for 16 Environmental Projects Pioneering
Sustainability
By Cindy Mason

This Book in A Nutshell
If You Don’t Have Time to Read The Whole Book
READ THIS
Chapter 1 Fire Fighting
Combining Human Assessment and Reasoning Aids for Decision-making
in Planning Forest Fire Fighting
This chapter addresses fire fighting resource planning with a
hybrid AI system that automatically plans first attacks on a
forest fire, based on the work organization of an Italian
provincial center. The complexity of fire fighting is typical of
environmental problems. Fire is a dynamic phenomenon whose
evolution is determined by weather conditions such as wind direction
and intensity, by humidity, and by fuel type; it changes rapidly
and sometimes unpredictably, requiring fast decisions.
Data about these operating conditions are often uncertain,
incomplete, and in some cases totally absent. Automating the
decision making is further complicated because relevant fire events
evolve on different temporal and spatial scales. Planning for
a fire emergency, like many environmental emergencies, is
complicated because multiple organizations share decision making
over the fire territory, and cooperation is needed to choose good
strategies for fighting the fire: which fire front to attack, where
to locate resources, and what needs the most attention (e.g.
railways, houses). There are also decisions about the order in
which to act, where past experience is very important. The approach
uses multiple AI techniques: lazy learning, case-based reasoning,
and constraint reasoning. The system is aimed at supporting the
user in the whole process of forest fire management, and the user
always remains active in the ultimate decisions.
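As a rough flavor of the case-based reasoning ingredient, the sketch below retrieves the most similar past fires by weighted nearest-neighbor matching. The features, weights, and plans are invented for illustration, not the chapter's actual representation.

```python
# A minimal sketch of case-based retrieval for fire-fighting plans.
# Features, weights, and plans are hypothetical illustrations.
from math import sqrt

CASE_LIBRARY = [
    # (wind km/h, humidity %, slope deg, fuel class 0-3), past plan
    ((30.0, 20.0, 15.0, 2), "two ground crews on the north front, one helicopter"),
    ((10.0, 60.0, 5.0, 0),  "single ground crew, monitor only"),
    ((45.0, 15.0, 25.0, 3), "air tankers first, evacuate dwellings near the front"),
]

WEIGHTS = (1.0, 0.5, 0.8, 2.0)  # assumed relative importance of each feature

def distance(a, b):
    """Weighted Euclidean distance between two fire descriptions."""
    return sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(WEIGHTS, a, b)))

def retrieve(new_fire, k=1):
    """Return the k most similar past cases; their plans seed the new plan,
    which the human planner then adapts (the user keeps the final decision)."""
    return sorted(CASE_LIBRARY, key=lambda case: distance(case[0], new_fire))[:k]

if __name__ == "__main__":
    for features, plan in retrieve((40.0, 18.0, 20.0, 3), k=2):
        print(features, "->", plan)
```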
This work was conducted at IRST in Italy by Paolo Avesani, Francesco
Ricci, and Anna Perini.
Chapter 2 Flood Prediction
Introducing Boundary Conditions in Semi-quantitative Simulation
The chapter addresses flood prediction and water supply control and
describes a hybrid AI method that addresses the problem of
incomplete and imprecise information that generally plagues
many environmental simulation systems. Predictions with standard
numerical simulations for boundary value problems* can be error
prone because they require precise inputs, while in reality the
available information is imprecise and incomplete. For example, when flood conditions
approach, empirical data on the level/flow-rate curve for rivers
becomes less and less accurate. In general, the precise shape
and size of a body of water is rarely known. The task of flood
control and water supply prediction is both difficult and vitally
important. For example, a lake has a dam with floodgates that
can be opened or closed to regulate the water flow through power
generating turbines, the water level (stage) of the lake, and the
downstream flow. The goal of a controller is to provide adequate
reservoir capacity for power generation, consumption, industrial
use, and recreation, as well as downstream flow. In exceptional
circumstances, the controller must also work to minimize or avoid
flooding both above and below the dam. The conceptual and practical
aspects addressed by the AI system include the ontology (actions vs.
measurements), the temporal scale (instantaneous vs. extended
changes), the impact of discontinuity on model structure and the
consequences of incompleteness in predictions. Environmental
simulation tools are useful for carefully evaluating the effects of
actions in critical and dynamically changing situations. They
evaluate empirically derived models and parameters, and help
to forewarn of possible undesired future situations. The hybrid AI
simulation method extends qualitative AI modelling methods to the
simulation of dynamic systems and the handling of boundary
conditions*.
*In a simulation system, boundary conditions specify how
external influences on a dynamic system vary over time.
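A minimal sketch of the semi-quantitative idea: state and boundary conditions are intervals rather than point values, so the simulation propagates imprecision instead of hiding it. The reservoir equation and all numbers below are invented for illustration, not the chapter's model.

```python
# A minimal sketch of semi-quantitative simulation with interval values.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_scale(k, a):
    lo, hi = k * a[0], k * a[1]
    return (min(lo, hi), max(lo, hi))

def step(stage, inflow, outflow, dt, area=2.0e6):
    """One Euler step of d(stage)/dt = (inflow - outflow) / area,
    with stage (m) and flows (m^3/s) given as [lower, upper] intervals."""
    net = interval_add(inflow, interval_scale(-1.0, outflow))
    return interval_add(stage, interval_scale(dt / area, net))

stage = (101.0, 101.2)     # lake level known only to within 20 cm
inflow = (400.0, 900.0)    # boundary condition: imprecise flood inflow
outflow = (500.0, 500.0)   # controlled release through the floodgates
for hour in range(1, 4):
    stage = step(stage, inflow, outflow, dt=3600.0)
    print(f"hour {hour}: stage in [{stage[0]:.2f}, {stage[1]:.2f}] m")
```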
This work was conducted at the Università di Udine in Italy by
Giorgio Brajnik.
Chapter 3 Sewage and Pollution
Integrating General Expert Knowledge and Specific
Experimental Knowledge in Waste Water Treatment Plant
The chapter addresses pollution level control for waste water
treatment plants using a hybrid AI system that combines both
learning from past experience and from domain knowledge for
real-time control of a wastewater treatment plant (WWTP). The main
goal of a wastewater treatment plant is to reduce the pollution
level of the wastewater at the lowest cost, that is, to remove,
to the extent possible, foreign compounds (pollutants) from the
water flowing into the plant prior to discharge to the environment,
so that the effluent water has the lowest possible levels of
pollutants (in any case, lower than the maximums allowed by law).
The plants taken as models in this study are based on the main
biological technology in common use: the activated sludge process.
The target wastewater plant studied is located in Manresa, near
Barcelona (Catalonia). This plant receives about 30,000 m³/day of
inflow from 75,000 inhabitants. The automated solution to this real-time
control problem is a multi-paradigm reasoning architecture able to
input and process the different elements of the knowledge learning
process, to learn from past experience (specific experimental
knowledge) and to acquire the domain knowledge (general expert
knowledge). These are the key problems in the design of real-time
AI control systems, and they grow when the process belongs to
an ill-structured domain and is composed of several complex
operational units. Therefore, an integrated AI methodology that
combines learning from past experience with learning from domain
knowledge is proposed. This multi-paradigm reasoning gives the
target system, a wastewater treatment plant (WWTP), advantages
over other approaches applied to real-world systems. Because of
its dynamic learning environment, the system can adapt itself to
different wastewater treatment plants, making it exportable to any
plant with minor changes: one need only fill the case library with
an initial set of specific cases (operating situations of the
particular WWTP), which can be obtained semi-automatically from
real operational data. All of this makes it more powerful than
single technologies applied to wastewater treatment plants, such
as knowledge-based approaches, statistical process control
techniques, or fuzzy controller methods, and applicable to other
complex, ill-structured domains as well. With this approach, the
plant can be controlled in normal situations (mathematical
control), in abnormal but usual situations (expert control), and
in abnormal, unusual situations (experimental control).
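A minimal sketch of that three-regime dispatch in Python. The sensor names, thresholds, and actions are hypothetical placeholders, not the Manresa plant's actual parameters.

```python
# Three-regime control: numerical control for normal operation, expert
# rules for known abnormal situations, case-based reasoning for unusual ones.

EXPERT_RULES = [
    # (condition, action) pairs encoding general domain knowledge
    (lambda s: s["influent_load"] > 1.5, {"recycle_rate": "increase"}),
    (lambda s: s["sludge_bulking"], {"chlorinate_recycle": True}),
]

CASE_LIBRARY = [
    # past (situation, successful action) pairs for this particular plant
    ({"influent_load": 1.2, "sludge_bulking": False, "foaming": True},
     {"add_antifoam": True, "reduce_aeration": 0.8}),
]

def mathematical_control(state):
    # e.g., adjust aeration toward a dissolved-oxygen setpoint of 2.0 mg/L
    return {"aeration": 1.0 + 0.5 * (2.0 - state["dissolved_oxygen"])}

def control(state):
    anomalous = (state["influent_load"] > 1.5 or state["sludge_bulking"]
                 or state["foaming"])
    if not anomalous:                            # normal situation
        return mathematical_control(state)
    for condition, action in EXPERT_RULES:       # abnormal but usual
        if condition(state):
            return action
    # abnormal and unusual: reuse the most similar past case
    return min(CASE_LIBRARY,
               key=lambda c: abs(c[0]["influent_load"] - state["influent_load"]))[1]

state = {"influent_load": 1.1, "sludge_bulking": False, "foaming": True,
         "dissolved_oxygen": 1.6}
print(control(state))  # foaming matches no rule -> experimental (case) control
```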
The chapter is based on work conducted in Spain at the Technical
University of Catalonia by Miquel Sànchez and Ulises Cortés, and at
the Universitat de Girona by Ignasi R. Roda and Manel Poch.
Chapter 4 Sustainable Forests/Timber Harvesting
Planning with Agents in Intelligent Data Management for
Forestry
The chapter concerns the sustainability of our forests. Forests are
the largest vegetation systems on Earth, and we place a great
demand on them to provide us with wood products, both industrially
and as consumers. Making sustainable timber harvesting decisions
requires knowing the state of growth and health of forests over
large areas of the Earth. The data arrives at about 1 Tb/day; it
is diverse in format and media, exists on multiple computing
platforms with different access and use policies, and includes
topographic, soils, hydrology, geology, remote sensing, and forest
cover descriptions over large areas of the Earth. The AI system
helps manage this hairy data problem. It supports the human
decision-making process by automatically monitoring data and
detecting changes and trends in the state of the biosphere and
vegetation. The hybrid-AI system uses software agents that combine
both learning from past experience and from knowledge. One of the
main problems that the AI system must tackle is the update of forest
cover maps stored in digital form in geographical information
systems (GIS) by processing remotely sensed imagery in order to
detect changes in the state of the forest. The agents learn to do
this automatically through the use of a training interface that
allows human experts to describe how each task is performed.
Hand coding the agents is therefore not required, since they are
generated automatically by the training interface. These agents can
each give a description of the task they perform. These descriptions
are then used by a problem solving system that integrates the use of
search based planning, case-based reasoning, derivational analogy
and machine learning. The software agent systems learn by
unobtrusively observing the manner in which they are used, adapt to
the tasks for which they are used, and learn from the circumstances
of their use.
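As a rough illustration of the change-detection task the agents learn, the sketch below compares two classified forest-cover rasters and flags cells whose class changed; the grids and class codes are invented, not the project's actual imagery pipeline.

```python
# A minimal sketch of change detection between two classified rasters,
# flagging candidate updates to the forest cover maps stored in a GIS.

FOREST, CUT, WATER = 0, 1, 2

def detect_changes(before, after):
    """Yield (row, col, old, new) for every cell whose cover class changed."""
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if b != a:
                yield (r, c, b, a)

before = [[FOREST, FOREST, WATER],
          [FOREST, FOREST, WATER]]
after  = [[FOREST, CUT,    WATER],
          [CUT,    CUT,    WATER]]

changes = list(detect_changes(before, after))
print(f"{len(changes)} changed cells -> candidate GIS map updates")
for r, c, old, new in changes:
    print(f"cell ({r},{c}): class {old} -> {new}")
```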
This work was conducted in Canada by David G. Goodenough at the
Pacific Forestry Centre and by Daniel Charlebois and Stan Matwin at
the University of Ottawa.
Chapter 5 Water Pollution Prediction
Water Pollution Prediction With Evolutionary Neural Trees
This chapter addresses water pollution prediction using an
evolutionary neural learning method for time series data. The
task studied here is to predict nitrate levels a week ahead in the
watersheds of the Sangamon River in Illinois, USA, from the previous
values. The AI method of evolutionary learning networks is
generally used for the modeling and prediction of complex systems.
In contrast to conventional neural learning methods, genetic
learning makes relatively few assumptions about the models of
data. The method is effective in identifying important
structures and variables in systems whose functional structures are
unknown or ill-defined. It uses tree-structured neural
networks whose node type, weight, size and topology are dynamically
adapted by genetic algorithms. Since the genetic algorithm used for
training does not require error derivatives, a wide range of neural
models can be identified. The performance results compare favorably
with those achieved by well-engineered, conventional
system-identification methods. The study also
aims at giving some indication of the biochemical and physical
relationships among the variables and of the controllability of the
system. Application areas for this approach include but are
not limited to prediction, monitoring, and diagnosis of complex
systems, such as environmental processes.
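A toy sketch of the evolutionary idea, assuming a made-up series (the real study used weekly nitrate measurements): a population of tree-structured models over lagged inputs is evolved by selection and mutation, with no error derivatives required. Real neural trees also adapt node types, weights, size, and topology; this sketch keeps only the essentials.

```python
# A toy evolutionary search over tree-structured models for
# one-step-ahead time-series prediction. All numbers are invented.
import random
random.seed(0)

LAGS = 3  # a model sees x[t-1], x[t-2], x[t-3]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return ("lag", random.randrange(LAGS))   # terminal: a lagged input
    op = random.choice(["add", "mul"])           # internal node type
    w = random.uniform(-1, 1)                    # connection weight
    return (op, w, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, lags):
    if tree[0] == "lag":
        return lags[tree[1]]
    _, w, left, right = tree
    a, b = evaluate(left, lags), evaluate(right, lags)
    return w * (a + b) if tree[0] == "add" else w * a * b

def fitness(tree, series):
    """Sum of squared one-step prediction errors (lower is better)."""
    return sum((evaluate(tree, series[t - 1::-1][:LAGS]) - series[t]) ** 2
               for t in range(LAGS, len(series)))

def mutate(tree):
    return random_tree(2) if random.random() < 0.3 else tree

# toy 'nitrate' series standing in for weekly watershed measurements
series = [0.5, 0.6, 0.55, 0.7, 0.65, 0.8, 0.75, 0.9, 0.85, 1.0]
pop = [random_tree() for _ in range(30)]
for generation in range(20):
    pop.sort(key=lambda tr: fitness(tr, series))                  # select
    pop = pop[:10] + [mutate(t) for t in pop[:10]] \
        + [random_tree() for _ in range(10)]                      # vary
best = min(pop, key=lambda tr: fitness(tr, series))
print("best squared error:", round(fitness(best, series), 4))
```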
The chapter is based on work conducted in Germany at the German
National Research Center for Computer Science by pioneers Byoung-Tak
Zhang, Peter Ohm, and Heinz Mühlenbein.
Chapter 6 Toxic Algae Blooms
A Qualitative Modeling Approach to Algal Bloom Prediction
This AI project concerns the problem of toxic algae blooms and
is a collaboration between researchers from Brazil, Germany and
France. The approach is an intelligent model-based system that
supports decision making concerning the many environmental factors
of an algal bloom. It is discussed in the context of an algal bloom
in the Rio Guaíba in southern Brazil. Knowledge-based systems
support analysis and
decision making using a representation of our human knowledge about
the algal bloom processes involved. Because of the very nature of
these algae and bloom processes, our knowledge about them, and the
information available, this is a great challenge for standard
qualitative modeling. The chapter presents preliminary results of
the work, including AI modeling of the algal bloom phenomena and an
intelligent, process-oriented description of some of the essential
mechanisms contributing to a bloom. In particular, two
problems have to be addressed that are typical for modeling
ecological systems. First, the spatial distribution of parameters
and processes relevant to algal blooms has to be taken into account,
which leads us to locate processes in or between water body
compartments, the elements of a topological partitioning of the
area. Second, the various processes involved in an algae bloom
development act with speeds of different orders of magnitude (e.g.
chemical reactions vs. changes in fish population), which requires
AI techniques of time-scale abstraction. The approach to
modeling the interactions involved in the bloom phenomenon uses a
language called QPC, which allows the direct expression and
representation of physical models of compartments and their
interactions, and the application of time-scale abstraction in
composing a scenario model.
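A minimal sketch of time-scale abstraction: when simulating the slow process (algal biomass over days), the fast process (nutrient chemistry, minutes) is treated as settling instantly to equilibrium at each slow step. The equations and constants below are invented, not the Rio Guaíba model.

```python
# Time-scale abstraction: the fast process is replaced by its equilibrium.

def fast_equilibrium(load):
    """Dissolved nutrient level the fast chemistry settles to (assumed form)."""
    return load / (1.0 + load)

def simulate(days, load, algae=0.1, growth=0.8, decay=0.1):
    for day in range(days):
        nutrient = fast_equilibrium(load)             # fast scale, abstracted
        algae += (growth * nutrient - decay) * algae  # slow scale, daily step
        print(f"day {day + 1}: nutrient={nutrient:.2f}, algae={algae:.3f}")

simulate(days=5, load=3.0)
```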
This work was carried out in Brazil, France, and Germany by
pioneers Waldir Roque at the Federal University of Rio Grande do
Sul, Ulrich Heller and Peter Struss at the Technical University
of Munich, and François Guerrin at INRA Toulouse.
Chapter 7 Recycling and Resource Use in Product Life Cycle
The Green Browser
The chapter is about revealing product life cycle information
from the raw material stage through use and eventual disposal or
recycling. Unlike a general purpose browser, the Green Browser uses
AI methods to focus selectively on environmental (green) product
information extracted from the net. Applying AI methods to general
browser technology quickly reveals products that bear positively on
green production and environmental protection. This builds public
literacy and informs market selection by exposing a product's
potential impacts, from resource usage and extraction to disposal
and dispersal. A focused browser finds green product information
faster than a normal browser, and faster, easier access to such
information supports informed corporate and public decisions,
enabling stakeholders (e.g., employees, shareholders, consumers,
regulators, NGOs, etc.) to have automated, focused access to all
available environmental information. The chapter proposes two AI
information representation and design schemes for this purpose: the
green life cycle model and green life cycle design. The green life
cycle model is a representational scheme that organizes corporate
information for the Green Browser; to support designing for the
life cycle of green products (green life cycle design), the scheme
is built to illustrate a product's potential impacts from the raw
material stage through use and eventual disposal or recycling.
Firms are encouraged to organize their firm-specific information
according to the scheme. The chapter also discusses how the Green
Browser can support information sharing to give stakeholders a
detailed picture of products.
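As a rough flavor of the "focused" part, the sketch below scores fetched pages against a green life-cycle vocabulary and surfaces only strong matches. The terms, weights, threshold, and URLs are invented placeholders, not the Green Browser's actual design.

```python
# A minimal sketch of focused filtering for green product information.

GREEN_TERMS = {
    "recycled": 2, "recyclable": 2, "biodegradable": 2, "energy efficient": 2,
    "raw material": 1, "disposal": 1, "emissions": 1, "life cycle": 3,
}

def green_score(text):
    text = text.lower()
    return sum(weight * text.count(term) for term, weight in GREEN_TERMS.items())

def filter_pages(pages, threshold=3):
    """Keep only pages whose green life-cycle content clears the threshold."""
    scored = [(green_score(body), url) for url, body in pages]
    return sorted((s, u) for s, u in scored if s >= threshold)[::-1]

pages = [
    ("example.org/widget", "Our widget uses recycled aluminum; full life cycle data."),
    ("example.org/news",   "Quarterly earnings were announced today."),
]
for score, url in filter_pages(pages):
    print(score, url)
```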
This chapter is based on work in Japan, with pioneering scientists
from the University of Tokyo: Yasushi Umeda, Tetsuo Tomiyama,
Takashi Kiriyama, and Yasunori Baba.
Chapter 8 People arguing and making decisions
Support for Argumentation in Natural Resource Management
In this chapter AI helps resolve arguments about natural
resources among differently interested parties. When decisions
to be made involve changes to natural resources such as oceans,
forests or the atmosphere, the interests of various stakeholders
need to be taken into account, including scientists from different
disciplines and local stakeholders with different goals and
priorities. Software methods to support participants in these
discussions are now widely believed to help make wiser and more
sustainable management decisions by more easily weighing up the
views of all relevant parties. These parties include land-owners,
residents, environmental pressure groups, wildlife biologists and
other scientists, governmental bodies and industries. When there is
disagreement, people require ways to explore the reasons for the
different viewpoints and to seek out areas of consensus which can be
built upon. This work uses an AI method based on meta-level
representation of argumentation frameworks to explore multiple
knowledge bases in which conflicting opinions about environmental
change are expressed. A formal meta-language is defined for
articulating the relationships between, and arguments for,
propositions in knowledge bases independently of their particular
object-level representation. A prototype system has been implemented
to evaluate the usefulness of this framework and to assess its
computational feasibility. The results so far are promising.
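The chapter's framework is a meta-level representation over multiple knowledge bases; as a simpler relative that gives the computational flavor, the sketch below computes the grounded extension of a Dung-style attack graph. The arguments and attacks are invented for illustration.

```python
# Grounded-extension sketch: iteratively accept arguments whose every
# attacker has already been defeated; what survives is a consensus core.

ATTACKS = {
    ("logging_ok", "protect_owls"),   # land-owner argument attacks biologists'
    ("protect_owls", "logging_ok"),   # ...and vice versa: genuine disagreement
    ("treaty_rights", "logging_ok"),  # residents' unattacked argument
}
ARGS = {"logging_ok", "protect_owls", "treaty_rights"}

def grounded_extension(args, attacks):
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args - accepted - defeated:
            attackers = {x for x, y in attacks if y == a}
            if attackers <= defeated:  # all attackers defeated: accept a
                accepted.add(a)
                defeated |= {y for x, y in attacks if x == a}
                changed = True
    return accepted

print(grounded_extension(ARGS, ATTACKS))  # positions that survive the conflict
```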
This chapter comes from the Scottish pioneer Mandy Haggith at the
University of Edinburgh.
Chapter 9 Underground Nuclear Testing
An Intelligent Assistant for Nuclear Test Ban Treaty
Verification.
This chapter addresses treaty verification for underground
nuclear test ban agreements using a hybrid-AI software agent
assistant that classifies and filters seismic data from Norway's
regional seismic array, NORESS. Verification of a
Comprehensive Test Ban Treaty (zero testing) has driven the
development of enhanced seismic verification technology with lower
detection levels and better noise-reduction and signal-extraction
algorithms. However, each detected event must be analyzed to
determine whether it is a clandestine nuclear test. Lowering the
detection threshold causes an exponential increase in the number of
events detected, and the volume of events to be analyzed and
classified overwhelms human analysts. The agent assistant, SEA, was
developed in the Treaty Verification Research Group at Lawrence
Livermore National Laboratory. The overall system is hybrid: it contains
hardware and many kinds of software, such as advanced signal
processing algorithms that work with SEA. The agent
architecture supports a pattern-driven application of
computationally expensive numerical analysis. Three important
aspects of the intelligent software assistant SEA are: (1) its user
interface permits interactive, human-agent analysis; (2) it reduces
the workload of the human analyst by filtering and classifying the
large volume of continuously arriving data, presenting “interesting”
events for human review along with an explanation of its analysis;
and (3) it emulates the common-sense problem-solving behavior and
explanation capability of the human seismic analyst by using a
multi-context Assumption-Based Truth Maintenance System.
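As a rough flavor of the filtering step, the sketch below scores each detected event against simple discriminants and queues only "interesting" ones for the analyst, keeping the reasons for explanation. The features and thresholds are invented, not NORESS's actual discriminants.

```python
# A minimal sketch of event filtering with retained reasons for explanation.

def assess(event):
    """Return (interesting?, reasons) for one detected seismic event."""
    reasons = []
    if event["magnitude"] >= 2.5:
        reasons.append("magnitude above screening threshold")
    if event["depth_km"] < 1.0:
        reasons.append("shallow source, inconsistent with natural earthquake")
    if event["p_to_s_ratio"] > 1.5:
        reasons.append("P/S amplitude ratio suggests explosion-like source")
    return (len(reasons) >= 2, reasons)

stream = [
    {"id": 1, "magnitude": 3.1, "depth_km": 0.4, "p_to_s_ratio": 1.8},
    {"id": 2, "magnitude": 1.2, "depth_km": 12.0, "p_to_s_ratio": 0.7},
]
for event in stream:
    interesting, reasons = assess(event)
    if interesting:  # everything else is filtered from the analyst's queue
        print(f"event {event['id']} for review:", "; ".join(reasons))
```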
The work was conducted in the Treaty Verification Program at
Lawrence Livermore National Laboratory in the USA by Cindy Mason. Dr.
Mason authored the proposal for the first international AI and
Environment workshop while on a joint appointment with
Stanford University and NASA Ames Research Center.
Chapter 10 Assembling Satellite Data
The COLLAGE/KHOROS Link: Planning for Image Processing Tasks
This chapter looks at the assembly of satellite data. It
is an overarching and pervasive issue in environmental computer
systems. Challenges include how to represent and partition
information in a way that fosters extensibility and flexibility and
how to do this across many kinds of satellite data and analysis
products that are often changing and growing. To solve this
problem we use a branch of AI known as planning. AI Planning
allows us to automatically generate the necessary sequence of image
processing steps for examining satellite remote sensing data.
Several obvious issues arise when integrating a variety of data and
products for viewing/analysis: low-level connection tasks;
representation translation tasks; the need to present different
kinds of users with a suitably coherent combined architecture.
To make the system open to future media and data products we are
interested in how to represent and partition information in a way
that fosters extensibility and flexibility. We describe work that
does this by linking two existing systems at NASA, COLLAGE and
KHOROS, which give access to a suite of image processing
algorithms that is constantly changing and growing. The
challenge for the planning system is to make the assembly and
viewing of these data and data products usable by a variety
of users with different skill levels. These kinds of issues, of
course, are common among many software engineering enterprises.
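A minimal sketch of planning for image processing: operators are declared by the data state they consume and produce, and the planner searches for a sequence that turns raw data into the requested product. The operator names are invented, not the actual COLLAGE or KHOROS libraries.

```python
# A toy planner that chains image-processing operators by matching the
# data state each operator consumes (pre) to the state it produces (post).
from collections import deque

OPERATORS = [
    ("decode_telemetry",    "raw_downlink",     "raw_image"),
    ("radiometric_correct", "raw_image",        "calibrated_image"),
    ("georegister",         "calibrated_image", "registered_image"),
    ("ndvi",                "registered_image", "vegetation_map"),
]

def plan(start, goal):
    """Breadth-first search over data states; returns the operator sequence."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for name, pre, post in OPERATORS:
            if pre == state and post not in seen:
                seen.add(post)
                queue.append((post, steps + [name]))
    return None

print(plan("raw_downlink", "vegetation_map"))
# -> ['decode_telemetry', 'radiometric_correct', 'georegister', 'ndvi']
```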
This is a NASA project from the USA. Pioneers Scott Schmidler
and Nick Short from NASA Goddard, and Amy Lansky, Mark Friedman, and
Lise Getoor from NASA Ames created the work in this chapter.
Chapter 11 Forest Ecosystem Simulation
KBLIMS For Forested Ecosystem Simulation Management
This chapter addresses forested ecosystem management with
hybrid-AI question answering simulation systems. Answering
questions like “what is the effect of clearcutting on
watersheds in the Turkey Lakes region of Ontario, Canada?”
involves complex interactions between simulations and specialized
tools about climatic, topographic, hydrologic, pedological and
ecological processes. The manual process of answering a question is
a laborious task in which simulations, tools, and modeling systems
from multiple disciplines are run as batch processes, generating
many cumbersome files. The questions also involve the complex
interactions between fundamentally different kinds of data from
geographic information systems (GIS) and ecosystem simulation
modeling. For example, geographic information systems typically
represent information as points, polygons, lines and layers, whereas
simulation systems use system state, mass and energy flux, and the
interaction and dynamics of species or individuals. Automating this
process relieves the tedium and speeds each question-and-answer
cycle so that many more queries can be completed. To manage
aggregation and integration across these fundamentally different
disciplines, data types and systems, the AI systems use a
multi-layered ontology across conceptually different systems with an
architecture based on the notion of a query model that executes a
set of user-defined queries. The system allows many kinds of queries
over many combinations, levels of aggregation, and scales,
including simulation queries, spatial data queries, deduction
queries, and aggregations of these. The system can run on
either user-defined or system-defined queries. Typical use of the
system is automatic, so the user, e.g. an ecologist, need
not explicitly parameterize and run simulation models. The system's
use of the simulation models is managed by the knowledge base
through its meta-knowledge about the tools, models, etc., which
allows the integration of either tightly-coupled or loosely-coupled
systems. Users express a simulation experiment by first
identifying a set of high-level concepts/objects via a spatial
query, then specifying some action to be performed on those
concepts/objects, such as a combined simulation query and
aggregation query. The AI explanation system is based on the same
ontological concepts.
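As a rough flavor of the query model, the sketch below decomposes one question into a spatial query, simulation queries, and an aggregation query, each handled by a different subsystem behind a shared vocabulary. The functions and data are stand-ins, not KBLIMS components.

```python
# A minimal sketch of composing spatial, simulation, and aggregation queries.

def spatial_query(region, concept):
    """GIS layer lookup: return the spatial units matching the concept."""
    return [f"{region}:watershed_{i}" for i in (1, 2)]  # stub data

def simulation_query(unit, scenario):
    """Run (or pretend to run) the ecosystem model for one spatial unit."""
    return {"unit": unit, "runoff_change": 0.12 if scenario == "clearcut" else 0.0}

def aggregation_query(results):
    return sum(r["runoff_change"] for r in results) / len(results)

# "What is the effect of clearcutting on watersheds in the Turkey Lakes region?"
units = spatial_query("turkey_lakes", "watershed")
results = [simulation_query(u, "clearcut") for u in units]
print("mean runoff change:", aggregation_query(results))
```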
This chapter is based on work conducted in Canada by pioneers
Vincent B. Robinson and D. Scott Mackay while at the University of
Toronto.
Chapter 12 Weather Bulletins
SCRIBE: An Interactive System for Composition of
Meteorological Forecasts
This chapter describes work by the Canadian Meteorological
Center to interactively generate public weather forecast
bulletins from weather element matrices gathered from sensors and
people across Canada. The system, SCRIBE, uses hybrid-AI or
ensemble methods to generate plain language public forecast
bulletins in French or English from a set of stations or sample
points prepared at a three-hour time resolution over regions of
Canada. Although the system was created for automation, it can
also run in manual mode, and all processing can be monitored and
modified by human users. A semantic numerical analysis processes
the weather element matrices according to standards of
codification. The resulting content is described with
more than 40 precipitation concepts (rain, rain heavy at
times, ...), including three types of concepts applicable to
thunderstorms (risk, possibility, a few) at up to three levels at
the same time (e.g., rain and snow possibly mixed with ice
pellets). It can also produce two types of concepts applicable to
precipitation accumulation (liquid and frozen), six classes of
probability-of-precipitation concepts, 13 sky cover concepts (11
stationary states and two evolving states), 14 classes of wind speed
with eight directions, two types of visibility concepts (blowing
snow and fog), and ten types of maximum/minimum temperature concepts.
By using the standards of codification the AI system provides a
simple way to display the content of the weather element matrices
for human editing rather than displaying the raw numbers. Once the
editing task is complete at the interface level, the modified
concept file is quality controlled before being fed to the knowledge
base system again to generate the plain language bulletin. The
knowledge base system creates a basic sentence structure that can be
matched into different structures representing different semantics
expressing the same content, following a case-based reasoning
approach. The knowledge base system uses approximately
600 rules to generate the standardized ontological weather
concepts. It uses approximately 1000 rules to generate the
plain language bulletins. The use of rules supports the ability to
explain the steps of the automation process.
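A toy sketch of the two rule layers: numeric matrix values are first codified into standard concepts, then rendered as plain language. The thresholds and phrasings below are invented and vastly simpler than SCRIBE's roughly 600 codification rules and 1000 text-generation rules.

```python
# Layer 1: codify numbers into concepts. Layer 2: render concepts as text.

def codify(matrix_row):
    """Map one station's numbers onto standardized weather concepts."""
    concepts = []
    if matrix_row["pop"] >= 70:
        concepts.append(("precip", "rain"))
    elif matrix_row["pop"] >= 30:
        concepts.append(("precip", "chance of rain"))
    if matrix_row["cloud_tenths"] >= 9:
        concepts.append(("sky", "cloudy"))
    if matrix_row["wind_kmh"] >= 30:
        concepts.append(("wind",
                         f"wind {matrix_row['wind_dir']} {matrix_row['wind_kmh']} km/h"))
    return concepts

PHRASES = {"sky": "{}.", "precip": "{} today.", "wind": "{}."}

def render(concepts):
    return " ".join(PHRASES[kind].format(text.capitalize())
                    for kind, text in concepts)

row = {"pop": 80, "cloud_tenths": 10, "wind_kmh": 35, "wind_dir": "west"}
concepts = codify(row)   # editable at the interface before generation
print(render(concepts))  # "Rain today. Cloudy. Wind west 35 km/h."
```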
The Canadian pioneers for this work conducted at the Canadian
Meteorological Center are R. Verret, G. Babin, D. Vigneux, J.
Marcoux, J. Boulais, R. Parent, S. Payer, and F. Petrucci.
Chapter 13 Weather forecasting
Retrieving Structured Spatial Information from Large
Databases
This chapter addresses weather forecasting with an intelligent
software agent assistant. The agent acts as a “memory
amplifier” for meteorologists, assisting weather forecasting by
rapidly locating and analyzing similar kinds of past weather. Much
environmental data, including meteorological data, covers a large
region of the earth and so is organized spatially. A challenge is
that historical multimedia meteorological data includes audio,
text, satellite images, laser disks, etc. The chapter
presents the first AI method to intelligently retrieve spatially
organized data, using a technique known as case-based reasoning
for the rapid display of historical meteorological data.
Case-based analysis compares a new situation against similar past
instances, or ‘cases’. This work is distinguished
by the large size of its case base, by its need to represent
structured spatial information, and by its use of a relational
database to store spatial data cases. The chapter briefly describes
some of the technical issues that follow from these design
considerations, focusing on the role of the relational
database. The system is called the MetVUW Workbench.
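A minimal sketch of case retrieval through a relational database: a bounding-box SQL prefilter narrows the large case base, then a similarity ranking picks the best matches. The schema and features are invented for illustration, not the MetVUW Workbench's actual design.

```python
# A minimal sketch of spatial case retrieval over a relational database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cases (id INTEGER, lat REAL, lon REAL, pressure REAL)")
db.executemany("INSERT INTO cases VALUES (?, ?, ?, ?)", [
    (1, -41.3, 174.8, 996.0),   # past lows near Wellington
    (2, -41.0, 175.2, 1002.0),
    (3, -36.8, 174.7, 1013.0),  # outside the query window
])

def retrieve(lat, lon, pressure, window=2.0, k=2):
    # cheap SQL prefilter: only cases inside the bounding box
    rows = db.execute(
        "SELECT id, lat, lon, pressure FROM cases "
        "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
        (lat - window, lat + window, lon - window, lon + window)).fetchall()
    # rank the survivors by similarity to the current situation
    return sorted(rows, key=lambda r: abs(r[3] - pressure))[:k]

print(retrieve(lat=-41.2, lon=174.9, pressure=998.0))
```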
This work was conducted in New Zealand by pioneers Eric K. Jones and
Aaron Roydhouse while at the Victoria University of Wellington.
Chapter 14 Sharing Digital Environmental Resources
Environmental Information Mall
The chapter addresses sharing environmental tools and data
products across government agencies, institutions and other large
organizations. The key to the Environmental Information Mall
project is an environmental concepts ontology. It supports the
interoperation of data and analytical tools from a variety of
independent sources. The chapter describes AI tools for the
creation and maintenance of the ontology, and then shows how it can
be used: information sources advertise their capabilities;
mediators combine analytical tools with the data on which they
operate to share data products; end users locate relevant
information; and intelligent agents or intelligent user interfaces
fuse information from several sources.
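As a rough flavor of ontology-mediated matchmaking, the sketch below lets sources advertise what they provide in shared terms, and a mediator answer a request directly or by composing a tool with its input data. All names and concepts are invented for illustration.

```python
# A minimal sketch of capability advertisement and mediation.

SOURCES = {
    "epa_air_db": {"ozone_levels"},   # advertised data concepts
    "texas_gis":  {"land_use"},
}
TOOLS = {
    "ozone_forecaster": {"computes": "ozone_forecast", "needs": {"ozone_levels"}},
}

def find_sources(concept):
    return [name for name, concepts in SOURCES.items() if concept in concepts]

def mediate(request):
    """Answer from a source directly, or compose a tool with its input data."""
    direct = find_sources(request)
    if direct:
        return direct
    return [(tool, [find_sources(n)[0] for n in spec["needs"]])
            for tool, spec in TOOLS.items()
            if spec["computes"] == request
            and all(find_sources(n) for n in spec["needs"])]

print(mediate("ozone_levels"))    # -> ['epa_air_db']
print(mediate("ozone_forecast"))  # -> [('ozone_forecaster', ['epa_air_db'])]
```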
This chapter comes from Texas, USA. It is based on the work of
pioneers Michael Huhns, Munindar P. Singh and Gregory E. Pitts who
were working at MCC.
Chapter 15 Biodiversity and ecosystem catalogues
BENE - Biodiversity and Ecosystems Network Environment
The chapter addresses biodiversity and ecosystem cataloging in
global collaboratives for biodiversity conservation and ecosystem
protection, restoration, and management communities. BENE
(Biodiversity and Ecosystems Network Environment) fosters enhanced
communications and collaborations through the intelligent sharing of
networks of biodiversity and ecosystem data and collections. Current
estimates of the diversity of life (plants, animals, microorganisms)
on Earth (biodiversity) range beyond the ~1.5 million species
described to date up to perhaps as high as ~130 million species. The
number of entries in such a collective is vast, contributed by a
wide range of social, political, and economic members: governments,
corporations, academia, private foundations, individual citizens,
and so on. The data types are also diverse and include the samples
or specimens themselves, field notes of scientists and taxonomists,
museum repositories, geographic information systems and spatial
data, genome data, education and television data, etc. The BENE
project a) points users to new networks of biodiversity data
collections using intelligent search and b) provides web-based user
access to an integrated network of collectives using search agents
that rely on ontologies and metadata. At present, there are nodes
in Australia (4), Brazil (the
BIN21 Secretariat resides at the Base de Dados Tropical), Costa
Rica, Ecuador, Finland (2), Italy, Japan, United Kingdom and the
United States (BENE is the only BIN21 node in the U.S.A.).
This chapter represents a joint US effort by pioneering scientist
Steve Young at the Smithsonian Institution and the US Environmental
Protection Agency, and by Leland Ellis and Andrew Jackson at Texas
A&M University.
Chapter 16 Plant physiology and climate change modeling
Automated Modeling of Complex Biological and Ecological
Systems
This chapter concerns plant physiology and climate change
modeling, specifically the automatic generation of simulation models
that answer prediction questions and explain how climate change may
affect plant physiology. It is particularly useful to
predict the effects of global climate changes on plants and animals
in specific regions. In general, answering climate change
questions takes vast amounts of knowledge, time, and people with
special expertise, and the process is error prone. Automating the
prediction process speeds it up, allowing consideration of many
different scenarios and assumptions. Equally important to the
answer of a prediction question is the reason for that
answer: any system like this must also cough up an
explanation. But the automation tools themselves choke on the
vast knowledge during computation. To answer climate
prediction questions we need not only general principles of plant
and animal physiology but also species interactions and specific
data on individual species, climatic events, and geologic formations.
The central issue in automatically answering prediction questions is
constructing a model from this wealth of information that captures
the important aspects of the scenario and their relationships to the
variables of interest. This avoids the problem of working with
a sea of knowledge, much of which is irrelevant to a particular
question. The novel approach taken to solving this
problem makes Siri and Watson look like kindergarten
tools. Spoiler: the AI system automatically generates code on
the fly for each prediction question. The key to this approach
lies in building a meta-model of the knowledge, causal relations,
and tools. Using the meta-knowledge, a predictive
question-answering system is coded on the fly from the causal
relationship elements needed for each question, using only the
simulations and knowledge elements relevant to the question. The
causal information is also the basis for the explanation
facility. Consider the general form of a prediction
question in plant physiology: “How would decreasing soil moisture
affect a plant’s transpiration rate?” A prediction question poses a
hypothetical scenario (e.g., a plant whose soil moisture is
decreasing) and asks for the resulting behavior of specified
variables of interest (e.g., the plant’s transpiration rate). Using
detailed knowledge of plant physiology and other physical systems,
the q/a system is generated by parsing the prediction question and
determining the relevant elements and factors needed to answer the
question. The authors introduce a modeling program
called TRIPEL for answering prediction questions based on causal
influences. It defines the modeling task, criteria for
distinguishing relevant aspects of the scenario from irrelevant
ones, and the algorithm that uses these criteria to automatically
construct the simplest adequate model for answering a
question. The chain of causal influences provides the basis
for an explanation facility, and representing the information at
multiple levels of abstraction supports explanation styles matched
to the kind of user (scientist, decision maker, etc.). The system
can be used on any body of knowledge to automatically generate
predictive q/a systems. In biology and ecology, such questions are
important for predicting the consequences of natural conditions and
management policies as well as for teaching biological and
ecological principles.
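A minimal sketch of the relevance step, assuming an invented fragment of plant physiology (not TRIPEL's knowledge base): keep only the variables lying on causal paths from the scenario's driving variable to the variable of interest, and build the model from that fragment.

```python
# Prune a causal-influence graph to the fragment relevant to one question.

INFLUENCES = {  # cause -> set of directly influenced variables
    "soil_moisture":        {"root_water_uptake"},
    "root_water_uptake":    {"leaf_water_potential"},
    "leaf_water_potential": {"stomatal_opening"},
    "stomatal_opening":     {"transpiration_rate", "co2_intake"},
    "air_temperature":      {"transpiration_rate"},
    "soil_nitrogen":        {"leaf_growth"},  # real but irrelevant here
}

def reachable(start, graph):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

def relevant_model(driver, interest):
    """Variables both downstream of the driver and upstream of the interest."""
    downstream = reachable(driver, INFLUENCES)
    reverse = {}
    for cause, effects in INFLUENCES.items():
        for e in effects:
            reverse.setdefault(e, set()).add(cause)
    upstream = reachable(interest, reverse)
    return downstream & upstream

# "How would decreasing soil moisture affect a plant's transpiration rate?"
print(sorted(relevant_model("soil_moisture", "transpiration_rate")))
```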
This chapter represents the work of Texan pioneers Jeff Rickel and
Bruce Porter at the University of Texas, USA.
This “Book in a Nutshell” was composed by the editor and
contributor, Cindy Mason, and combines facts found in the chapters
with her own writing. Any errors in the
representation of the ideas in the chapters are the sole
responsibility of the editor.