
CT-220 Multiagent Systems

Introduction

Prof. Paulo André Castro | [email protected] | www.comp.ita.br/~pauloac | Room 110, IEC-ITA

New Course

• CT-220 - Multiagent Systems. Recommended prerequisite: CT-234 or equivalent. Required prerequisite: none. Weekly hours: 3-0-0-6.

Concepts of autonomous agents and multiagent systems. Communication between agents. Multiagent programming languages and platforms. Agent-oriented software engineering: methodologies and techniques for agent-oriented analysis and design. Introduction to game theory. Distributed learning. Multiagent interaction: auctions and negotiation protocols. Modal logic for multiagent systems. BDI (Belief-Desire-Intention) architectures. Applications of multiagent systems: finance, social simulation, and defense systems, among others.

Course Plan

• Ch. 0. Course overview
• Ch. 1. Concepts of autonomous agents and multiagent systems
• Ch. 2. Introduction to utility theory and game theory
• Ch. 4. Multiagent interaction: auctions and negotiation protocols
• Ch. 5. Communication between agents; multiagent programming languages and platforms
  • 5.1. Communication between agents (Wooldridge, ch. 8)
  • 5.2. Multiagent languages and programming (Wooldridge, ch. 9)
  • 5.3. Platforms
  • Semaninha

Course Plan - 2

• Ch. 6. Agent-oriented software engineering: methodologies and techniques for agent-oriented analysis and design
• Ch. 7. Distributed learning
• Ch. 8. Modal logic for multiagent systems and BDI (Belief-Desire-Intention) architectures
• Ch. 9. Applications of multiagent systems: finance, social simulation, and defense systems, among others (includes presentations of projects developed by the students)

Assessment

• 1st bimester
  • 1 exam
• 2nd bimester
  • Exam + 1 project proposal
• Final exam
• Project: paper + presentation

References

• Wooldridge, M. An Introduction to Multiagent Systems. John Wiley and Sons, 2002.
• Bellifemine, F.; Caire, G.; Greenwood, D. Developing Multi-Agent Systems with JADE. Wiley, 2007.
• Shoham, Y.; Leyton-Brown, K. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. New York: Cambridge University Press, 2009.
• Lecture notes

Further References

• Franklin, S.; Graesser, A. Is it an agent, or just a program? A taxonomy for autonomous agents. In: International Workshop on Agent Theories, Architectures, and Languages, 3. Berlin; New York: Springer-Verlag, 1996. p. 21-35.
• Bordini, R. H.; Hübner, J. F.; Wooldridge, M. Programming Multi-Agent Systems in AgentSpeak Using Jason. London: Wiley, 2007.
• Weiss, G. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. Cambridge, MA: The MIT Press, 1999.
• Bradshaw, J. M. Software Agents. Cambridge, MA: The MIT Press, 1997.
• Rosenschein, J. S.; Zlotkin, G. Rules of Encounter: Designing Conventions for Automated Negotiation among Computers. Boston: MIT Press, 1994.
• FIPA: The Foundation for Intelligent Physical Agents. Available at: <http://www.fipa.org>.

Ch. 1. Concepts of Agents and Multiagent Systems

Overview

• Five ongoing trends have marked the history of computing:
  • ubiquity;
  • interconnection;
  • intelligence;
  • delegation; and
  • human-orientation

Ubiquity

• The continual reduction in the cost of computing capability has made it possible to introduce processing power into places and devices that would have once been uneconomic
• As processing capability spreads, sophistication (and intelligence of a sort) becomes ubiquitous
• What could benefit from having a processor embedded in it…?

Interconnection

• Computer systems today no longer stand alone, but are networked into large distributed systems
• The Internet is an obvious example, but networking is spreading its ever-growing tentacles…
• Since distributed and concurrent systems have become the norm, some researchers are putting forward theoretical models that portray computing as primarily a process of interaction

Intelligence

• The complexity of tasks that we are capable of automating and delegating to computers has grown steadily
• If you don’t feel comfortable with this definition of “intelligence”, it’s probably because you are a human

Delegation

• Computers are doing more for us – without our intervention
• We are giving control to computers, even in safety-critical tasks
• One example: fly-by-wire aircraft, where the machine’s judgment may be trusted more than an experienced pilot
• Next on the agenda: fly-by-wire cars, intelligent braking systems, cruise control that maintains distance from the car in front…

Human Orientation

• The movement away from machine-oriented views of programming toward concepts and metaphors that more closely reflect the way we ourselves understand the world
• Programmers (and users!) relate to the machine differently
• Programmers conceptualize and implement software in terms of higher-level – more human-oriented – abstractions

Programming progression…

• Programming has progressed through:
  • machine code;
  • assembly language;
  • machine-independent programming languages;
  • sub-routines;
  • procedures & functions;
  • abstract data types;
  • objects;
  to agents.

Global Computing

• What techniques might be needed to deal with systems composed of 10^10 processors?
• Don’t be deterred by its seeming to be “science fiction”
• Hundreds of millions of people connected by email once seemed to be “science fiction”…
• Let’s assume that current software development models can’t handle this…

Where does this bring us?

• Delegation and Intelligence imply the need to build computer systems that can act effectively on our behalf
• This implies:
  • The ability of computer systems to act independently
  • The ability of computer systems to act in a way that represents our best interests while interacting with other humans or systems

Interconnection and Distribution

• Interconnection and Distribution have become core goals in Computer Science
• But Interconnection and Distribution, coupled with the need for systems to represent our best interests, imply systems that can cooperate and reach agreements (or even compete) with other systems that have different interests (much as we do with other people)

So Computer Science expands…

• These issues were not studied in Computer Science until recently
• All of these trends have led to the emergence of a new field in Computer Science: multiagent systems

Agents, a First Definition

• An agent is a computer system that is capable of independent action on behalf of its user or owner (figuring out what needs to be done to satisfy design objectives, rather than constantly being told)
• We will soon see other definitions and a deeper discussion of what an agent really is

Multiagent Systems, a Definition

• A multiagent system is one that consists of a number of agents, which interact with one another
• In the most general case, agents will be acting on behalf of users with different goals and motivations
• To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other, much as people do

Agent Design, Society Design

• The course covers two key problems:
  • How do we build agents capable of independent, autonomous action, so that they can successfully carry out tasks we delegate to them?
  • How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out those delegated tasks, especially when the other agents cannot be assumed to share the same interests/goals?
• The first problem is agent design; the second is society design (micro/macro)

Multiagent Systems

• In Multiagent Systems, we address questions such as:
  • How can cooperation emerge in societies of self-interested agents?
  • What kinds of languages can agents use to communicate?
  • How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement?
  • How can autonomous agents coordinate their activities so as to cooperatively achieve goals?

Multiagent Systems

• While these questions are all addressed in part by other disciplines (notably economics and the social sciences), what makes the multiagent systems field unique is that it emphasizes that the agents in question are computational, information-processing entities.

Limits

• Multiagent systems are not a panacea
• It is not always a good idea to model systems with several entities as a MAS. For instance, it may be a bad idea to use a MAS if…
  • the entities aren’t really autonomous
  • communication among entities is not relevant
  • there is a simpler, good-enough model for the problem

Spacecraft Control

• When a space probe makes its long flight from Earth to the outer planets, a ground crew is usually required to continually track its progress and decide how to deal with unexpected eventualities. This is costly and, if decisions are required quickly, it is simply not practicable. For these reasons, organizations like NASA are seriously investigating the possibility of making probes more autonomous – giving them richer decision-making capabilities and responsibilities.
• NASA’s DS1 (Deep Space 1) has done it!
• http://nmp.jpl.nasa.gov/ds1/

Autonomous Agents for Specialized Tasks

• The DS1 example is one of a generic class
• Agents (and their physical instantiation in robots) have a role to play in high-risk situations that are unsuitable or impossible for humans
• The degree of autonomy will differ depending on the situation (remote human control may be an alternative, but not always)

Air Traffic Control

• “A key air-traffic control system… suddenly fails, leaving flights in the vicinity of the airport with no air-traffic control support. Fortunately, autonomous air-traffic control systems in nearby airports recognize the failure of their peer, and cooperate to track and deal with all affected flights.”
• Systems taking the initiative when necessary
• Agents cooperating to solve problems beyond the capabilities of any individual agent

Internet Agents

• Searching the Internet for the answer to a specific query can be a long and tedious process. So, why not allow a computer program – an agent – to do the searches for us? The agent would typically be given a query that would require synthesizing pieces of information from various different Internet information sources. Failure would occur when a particular resource was unavailable (perhaps due to network failure), or where results could not be obtained.

What if the agents become better?

• Internet agents need not simply search
• They can plan, arrange, buy, negotiate – carry out arrangements of all sorts that would normally be done by their human user
• As more can be done electronically, software agents theoretically have more access to systems that affect the real world
• But new research problems arise just as quickly…

Research Issues

• How do you state your preferences to your agent?
• How can your agent compare different deals from different vendors? What if there are many different parameters?
• What algorithms can your agent use to negotiate with other agents (to make sure you get a good deal)?
• These issues aren’t frivolous – automated procurement could be used massively by (for example) government agencies
• The Trading Agent Competitions…
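One standard answer to the deal-comparison question is a weighted additive utility over the deal's parameters. The sketch below is purely illustrative: the attributes, weights, and normalization scales are invented, not course material.

```python
# Sketch: comparing multi-parameter deals with a weighted additive utility.
# Weights and scoring scales are toy values standing in for user preferences.

weights = {"price": 0.6, "delivery_days": 0.3, "warranty_years": 0.1}

def utility(deal):
    # Normalize each attribute to [0, 1] so that higher is always better.
    scores = {
        "price": 1 - deal["price"] / 100,                  # cheaper is better
        "delivery_days": 1 - deal["delivery_days"] / 30,   # sooner is better
        "warranty_years": deal["warranty_years"] / 5,      # longer is better
    }
    return sum(weights[k] * scores[k] for k in weights)

offers = [
    {"vendor": "A", "price": 80, "delivery_days": 2, "warranty_years": 1},
    {"vendor": "B", "price": 60, "delivery_days": 10, "warranty_years": 3},
]
print(max(offers, key=utility)["vendor"])  # the deal the agent should prefer
```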

Multiagent Systems is Interdisciplinary

• The field of Multiagent Systems is influenced and inspired by many other fields:
  • Economics
  • Philosophy
  • Game Theory
  • Logic
  • Ecology
  • Social Sciences
• This can be both a strength (infusing well-founded methodologies into the field) and a weakness (there are many different views as to what the field is about)
• This has analogies with artificial intelligence itself

Some Views of the Field

• Agents as a paradigm for software engineering: Software engineers have derived a progressively better understanding of the characteristics of complexity in software. It is now widely recognized that interaction is probably the most important single characteristic of complex software
• Over the last two decades, a major Computer Science research topic has been the development of tools and techniques to model, understand, and implement systems in which interaction is the norm

Some Views of the Field

• Agents as a tool for understanding human societies: Multiagent systems provide a novel tool for simulating societies, which may help shed some light on various kinds of social processes
• This has analogies with the interest in “theories of the mind” explored by some artificial intelligence researchers

Some Views of the Field

• Multiagent Systems is primarily a search for appropriate theoretical foundations: We want to build systems of interacting, autonomous agents, but we don’t yet know what these systems should look like
• You can take a “neat” or “scruffy” approach to the problem, seeing it as a problem of theory or a problem of engineering
• This, too, has analogies with artificial intelligence research

Objections to MAS

• Isn’t it all just Distributed/Concurrent Systems? There is much to learn from this community, but:
  • Agents are assumed to be autonomous, capable of making independent decisions – so they need mechanisms to synchronize and coordinate their activities at run time
  • Agents are (can be) self-interested, so their interactions are “economic” encounters

Objections to MAS

• Isn’t it all just AI?
  • We don’t need to solve all the problems of artificial intelligence (i.e., all the components of intelligence) in order to build really useful agents
  • Classical AI ignored social aspects of agency. These are important parts of intelligent activity in real-world settings

Objections to MAS

• Isn’t it all just Economics/Game Theory? These fields also have a lot to teach us in multiagent systems, but:
  • While game theory provides descriptive concepts, it doesn’t always tell us how to compute solutions; we’re concerned with computational, resource-bounded agents
  • Some assumptions in economics/game theory (such as the rational agent) may not be valid or useful in building artificial agents

Objections to MAS

• Isn’t it all just Social Science?
  • We can draw insights from the study of human societies, but there is no particular reason to believe that artificial societies will be constructed in the same way
  • Again, we have inspiration and cross-fertilization, but hardly subsumption

What is an Agent?

• The main point about agents is that they are autonomous: capable of acting independently, exhibiting control over their internal state
• Thus: an agent is a computer system capable of autonomous action in some environment in order to meet its design objectives

[Figure: an agent (SYSTEM) coupled to its ENVIRONMENT through sensor input and action output]
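To make the coupling concrete, here is a minimal sketch of the sense-act loop the figure depicts (the class names and the thermostat rule are illustrative, not course material):

```python
# Minimal sketch of the agent-environment coupling shown in the figure:
# the agent receives percepts (input) and produces actions (output).

class Environment:
    def __init__(self):
        self.temperature = 15            # a toy state variable

    def percept(self):
        return {"temperature": self.temperature}

    def apply(self, action):
        if action == "heat_on":
            self.temperature += 1
        elif action == "heat_off":
            self.temperature -= 1

class ThermostatAgent:
    """A trivial agent (see next slide): a fixed percept-to-action mapping."""
    def act(self, percept):
        return "heat_on" if percept["temperature"] < 20 else "heat_off"

def run(agent, env, steps=10):
    for _ in range(steps):               # the ongoing input/output loop
        env.apply(agent.act(env.percept()))

env = Environment()
run(ThermostatAgent(), env)
print(env.temperature)   # oscillates around the 20-degree setpoint
```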

What is an Agent?

• Trivial (non-interesting) agents:
  • thermostat
  • UNIX daemon (e.g., biff)
• An intelligent agent is a computer system capable of flexible autonomous action in some environment
• By flexible, we mean:
  • reactive
  • pro-active
  • social


Reactivity

• If a program’s environment is guaranteed to be fixed, the program need never worry about its own success or failure – the program just executes blindly
• Example of a fixed environment: a compiler
• The real world is not like that: things change, information is incomplete. Many (most?) interesting environments are dynamic
• Software is hard to build for dynamic domains: the program must take into account the possibility of failure – ask itself whether it is worth executing!
• A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful)


Proactiveness

• Reacting to an environment is easy (e.g., stimulus → response rules)
• But we generally want agents to do things for us
• Hence: goal-directed behavior
• Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative
• Recognizing opportunities

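As a toy illustration of the stimulus → response idea (entirely my own sketch, not from the course), purely reactive behavior can be written as an ordered list of condition-action rules; everything the agent does is driven by the current percept:

```python
# Sketch of a purely reactive agent: stimulus -> response rules,
# with no goals and no internal state. All names are illustrative.

rules = [
    (lambda p: p["obstacle_ahead"], "turn_left"),
    (lambda p: p["battery"] < 0.2, "return_to_dock"),
    (lambda p: True, "move_forward"),    # default rule: always applicable
]

def react(percept):
    # Fire the first rule whose condition matches the current percept.
    for condition, action in rules:
        if condition(percept):
            return action

print(react({"obstacle_ahead": False, "battery": 0.9}))  # move_forward
```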

Balancing Reactive and Goal-Oriented Behavior

• We want our agents to be reactive, responding to changing conditions in an appropriate (timely) fashion
• We want our agents to systematically work towards long-term goals
• These two considerations can be at odds with one another
• Designing an agent that can balance the two remains an open research problem

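One common way of framing this balance (a sketch of my own, not the course's prescribed solution) is a control loop that executes plan steps toward a goal (pro-active) but replans whenever the environment changes unexpectedly (reactive):

```python
# Sketch: interleaving goal-directed plan execution with reactive replanning.
# The environment, planner, and goal are toy stand-ins.

import random

def plan_for(goal, position):
    # Trivial "planner": unit steps from the current position to the goal.
    step = 1 if goal > position else -1
    return [step] * abs(goal - position)

def control_loop(goal, position, steps=50):
    plan = plan_for(goal, position)          # deliberate once
    for _ in range(steps):
        if not plan:
            return position                  # goal reached
        if random.random() < 0.2:            # the world changed...
            position += random.choice([-1, 1])
            plan = plan_for(goal, position)  # ...react by replanning in time
        position += plan.pop(0)              # pro-active: execute a plan step
    return position

print(control_loop(goal=5, position=0))      # usually 5, if replanning keeps up
```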

Social Ability

• The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account
• Some goals can only be achieved with the cooperation of others
• Similarly for many computer environments: witness the Internet
• Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others

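To give a flavor of what an agent-communication-language message carries, here is a sketch loosely modeled on FIPA-ACL message fields (the Python dict encoding, agent names, and content expression are illustrative, not the normative FIPA syntax):

```python
# Sketch of a FIPA-ACL-style message: the fields below mirror the kind of
# information ACL messages carry (performative, participants, content, ...).

message = {
    "performative": "request",        # the communicative act being performed
    "sender": "buyer@host1",
    "receiver": "seller@host2",
    "content": "(sell (book mas-intro) (price 40))",
    "language": "fipa-sl",            # how the content is expressed
    "ontology": "bookstore",          # the vocabulary the content draws on
    "conversation-id": "order-42",
}

def handle(msg):
    # A socially able agent decides how to reply to each performative.
    if msg["performative"] == "request":
        return {"performative": "agree",
                "sender": msg["receiver"], "receiver": msg["sender"],
                "conversation-id": msg["conversation-id"]}
    return {"performative": "not-understood",
            "sender": msg["receiver"], "receiver": msg["sender"]}

print(handle(message)["performative"])   # agree
```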

Other Properties

• Other properties, sometimes discussed in the context of agency:
  • mobility: the ability of an agent to move around an electronic network
  • veracity: an agent will not knowingly communicate false information
  • benevolence: agents do not have conflicting goals, and every agent will therefore always try to do what is asked of it
  • rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved — at least insofar as its beliefs permit
  • learning/adaptation: agents improve performance over time


Agents and Objects

• Are agents just objects by another name?
• Object:
  • encapsulates some state
  • communicates via message passing
  • has methods, corresponding to operations that may be performed on this state


Agents and Objects

• Main differences:
  • agents are autonomous: agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent
  • agents are smart: capable of flexible (reactive, pro-active, social) behavior, whereas the standard object model has nothing to say about such types of behavior
  • agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control

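A small sketch of the autonomy difference (my illustration only): an object's public method simply runs whenever it is called, while an agent evaluates a request against its own policies and may refuse:

```python
# Sketch: an object performs a requested operation unconditionally;
# an agent decides for itself whether to perform it.

class AccountObject:
    def __init__(self):
        self.balance = 100

    def withdraw(self, amount):
        self.balance -= amount                 # always obeys the caller

class AccountAgent:
    def __init__(self):
        self.balance = 100

    def request_withdraw(self, amount, requester):
        # Autonomy: the request is weighed against the agent's own policy.
        if requester == "owner" and amount <= self.balance:
            self.balance -= amount
            return "agree"
        return "refuse"                        # the agent may say no

agent = AccountAgent()
print(agent.request_withdraw(50, "owner"))     # agree
print(agent.request_withdraw(500, "owner"))    # refuse
```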

Objects do it for free…

• agents do it because they want to
• agents do it for money


Agents and Expert Systems

• Aren’t agents just expert systems by another name?
• Expert systems are typically disembodied ‘expertise’ about some (abstract) domain of discourse (e.g., blood diseases)
• Example: MYCIN knows about blood diseases in humans
  • It has a wealth of knowledge about blood diseases, in the form of rules
  • A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries


Agents and Expert Systems

• Main differences:
  • agents are situated in an environment: MYCIN is not aware of the world — the only information it obtains is by asking the user questions
  • agents act: MYCIN does not operate on patients
• Some real-time (typically process control) expert systems are agents


Intelligent Agents and AI

• Aren’t agents just the AI project? Isn’t building an agent what AI is all about?
• AI aims to build systems that can (ultimately) understand natural language, recognize and understand scenes, use common sense, think creatively, etc. — all of which are very hard
• So, don’t we need to solve all of AI to build an agent…?


Intelligent Agents and AI

• When building an agent, we simply want a system that can choose the right action to perform, typically in a limited domain
• We do not have to solve all the problems of AI to build a useful agent: a little intelligence goes a long way!
• Oren Etzioni, speaking about the commercial experience of NETBOT, Inc.: “We made our agents dumber and dumber and dumber… until finally they made money.”


Environments – Accessible vs. Inaccessible

• An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment’s state
• Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible
• The more accessible an environment is, the simpler it is to build agents to operate in it


Environments – Deterministic vs. Non-deterministic

• A deterministic environment is one in which any action has a single guaranteed effect — there is no uncertainty about the state that will result from performing an action
• The physical world can, to all intents and purposes, be regarded as non-deterministic
• Non-deterministic environments present greater problems for the agent designer


Environments – Episodic vs. Non-episodic

• In an episodic environment, the performance of an agent is dependent on a number of discrete episodes, with no link between the performance of the agent in different scenarios
• Episodic environments are simpler from the agent developer’s perspective because the agent can decide what action to perform based only on the current episode — it need not reason about the interactions between this and future episodes


Environments – Static vs. Dynamic

• A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent
• A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent’s control
• Other processes can interfere with the agent’s actions (as in concurrent systems theory)
• The physical world is a highly dynamic environment


Environments – Discrete vs. Continuous

• An environment is discrete if there is a fixed, finite number of actions and percepts in it
• Russell and Norvig give a chess game as an example of a discrete environment, and taxi driving as an example of a continuous one
• Continuous environments have a certain level of mismatch with computer systems
• Discrete environments could in principle be handled by a kind of “lookup table”


Environment classification

[Table summarizing the five dimensions above: accessible vs. inaccessible, deterministic vs. non-deterministic, episodic vs. non-episodic, static vs. dynamic, discrete vs. continuous]
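To keep the taxonomy concrete, here is a small sketch that encodes the same five dimensions in code (the EnvProfile class is my own; the chess and taxi-driving profiles follow one common reading of the Russell and Norvig examples cited above):

```python
# Sketch: an environment profile along the five dimensions just discussed.
# The chess/taxi values follow a common textbook classification.

from dataclasses import dataclass

@dataclass
class EnvProfile:
    accessible: bool     # complete, accurate, up-to-date state information?
    deterministic: bool  # single guaranteed effect for every action?
    episodic: bool       # episodes independent of one another?
    static: bool         # changes only through the agent's own actions?
    discrete: bool       # fixed, finite set of actions and percepts?

    def difficulty(self):
        # Each "hard" value (False) makes the agent designer's job harder.
        return sum(not v for v in vars(self).values())

chess = EnvProfile(accessible=True, deterministic=True,
                   episodic=False, static=True, discrete=True)
taxi = EnvProfile(accessible=False, deterministic=False,
                  episodic=False, static=False, discrete=False)

print(chess.difficulty(), taxi.difficulty())  # 1 5
```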

Agents as Intentional Systems

• When explaining human activity, it is often useful to make statements such as the following:
  “Janine took her umbrella because she believed it was going to rain.”
  “Michael worked hard because he wanted to possess a PhD.”
• These statements make use of a folk psychology, by which human behavior is predicted and explained through the attribution of attitudes, such as believing and wanting (as in the above examples), hoping, fearing, and so on
• The attitudes employed in such folk psychological descriptions are called the intentional notions


Agents as Intentional Systems

• The philosopher Daniel Dennett coined the term intentional system to describe entities ‘whose behavior can be predicted by the method of attributing belief, desires and rational acumen’
• Dennett identifies different ‘grades’ of intentional system:
  ‘A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires. … A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) — both those of others and its own’


Agents as Intentional Systems

• Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?


Agents as Intentional Systems

• McCarthy argued that there are occasions when the intentional stance is appropriate:
  ‘To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known.’

Agents as Intentional Systems

• What objects can be described by the intentional stance?
• As it turns out, more or less anything can… consider a light switch:
  ‘It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires.’ (Yoav Shoham)
• But most adults would find such a description absurd! Why is this?

Agents as Intentional Systems

• The answer seems to be that while the intentional stance description is consistent,
  ‘…it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behavior.’ (Yoav Shoham)
• Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior
• But with very complex systems, a mechanistic explanation of behavior may not be practicable
• As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation — low-level explanations become impractical. The intentional stance is such an abstraction


Agents as Intentional Systems

• The intentional notions are thus abstraction tools, which provide us with a convenient and familiar way of describing, explaining, and predicting the behavior of complex systems
• Remember: most important developments in computing are based on new abstractions:
  • procedural abstraction
  • abstract data types
  • objects
• Agents, and agents as intentional systems, represent a further, and increasingly powerful, abstraction
• So agent theorists start from the (strong) view of agents as intentional systems: one whose simplest consistent description requires the intentional stance


Agents as Intentional Systems

• This intentional stance is an abstraction tool — a convenient way of talking about complex systems, which allows us to predict and explain their behavior without having to understand how the mechanism actually works
• Now, much of computer science is concerned with looking for abstraction mechanisms (witness procedural abstraction, ADTs, objects, …). So why not use the intentional stance as an abstraction tool in computing — to explain, understand, and, crucially, program computer systems?
• This is an important argument in favor of agents


Agents as Intentional Systems

• Three other points in favor of this idea:
  • Characterizing Agents: it provides us with a familiar, non-technical way of understanding & explaining agents
  • Nested Representations: it gives us the potential to specify systems that include representations of other systems; it is widely accepted that such nested representations are essential for agents that must cooperate with other agents (see the sketch below)

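As a toy illustration of nested representations (entirely my own encoding, not course material), beliefs about other agents' beliefs can be represented as nested structures and queried for their order, in Dennett's sense:

```python
# Sketch: first- and second-order intentional states as nested structures.
# The ("believes", agent, proposition) encoding is illustrative.

b1 = ("believes", "i", ("open", "door"))                     # first-order
b2 = ("believes", "i", ("believes", "j", ("open", "door")))  # second-order

def order(belief):
    # Count how deeply "believes" is nested: Dennett's 'grades'.
    tag, _agent, content = belief
    assert tag == "believes"
    return 1 + (order(content) if content[0] == "believes" else 0)

print(order(b1), order(b2))  # 1 2
```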

Agents as Intentional Systems

• Post-Declarative Systems:
  • This view of agents leads to a kind of post-declarative programming:
    • In procedural programming, we say exactly what a system should do
    • In declarative programming, we state something that we want to achieve, give the system general info about the relationships between objects, and let a built-in control mechanism (e.g., goal-directed theorem proving) figure out what to do
    • With agents, we give a very abstract specification of the system, and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of agency (e.g., the well-known Cohen-Levesque model of intention)


An aside…

• We find that researchers from a more mainstream computing discipline have adopted a similar set of ideas…
• In distributed systems theory, logics of knowledge are used in the development of knowledge-based protocols
• The rationale is that when constructing protocols, one often encounters reasoning such as the following:
  IF process i knows process j has received message m1
  THEN process i should send process j the message m2
• In DS theory, knowledge is grounded — given a precise interpretation in terms of the states of a process; we’ll examine this point in detail later
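A minimal sketch of such a knowledge-based protocol rule (illustrative only; in DS theory proper, "knows" is grounded in indistinguishability over global states, whereas here it is crudely grounded in received acknowledgements):

```python
# Sketch of the rule quoted above: IF process i knows process j has
# received m1, THEN i sends j the message m2. "Knows" is grounded here
# as "has seen an acknowledgement", a deliberate simplification.

class Process:
    def __init__(self, name):
        self.name = name
        self.acked = set()            # messages known to have been received

    def on_ack(self, message):
        self.acked.add(message)

    def knows_received(self, message):
        return message in self.acked

    def step(self, peer, network):
        if self.knows_received("m1"):
            network.append((self.name, peer, "m2"))

i, network = Process("i"), []
i.step("j", network); print(network)   # [] -- i does not yet know
i.on_ack("m1")
i.step("j", network); print(network)   # [('i', 'j', 'm2')]
```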
