
Transcript
Page 1: LECTURE FIVE

ARTIFICIAL NEURAL NETWORKS AND NEUROSEMANTICS

人工神经元网络以及神经语义学

Page 2: LECTURE FIVE

If eliminativism is right, then we cannot reduce types of mental states to anything else. That means that no type of mental state, taken as a natural kind, can survive a strict implementation of the eliminativist program.

The case is somewhat similar to this one: from an eliminativist perspective, "the Great French Revolution" is not a legitimate label that picks out a single historical event. Rather, it should be viewed as a label attached to a loose collection of the behaviors of numerous individuals. Historians need to rely on this label as long as the behaviors of those individuals are epistemically inaccessible to them; but they also need to be prepared to abandon it when new data become available.

Page 3: LECTURE FIVE

Philosophers of mind and cognitive scientists need to be prepared to abandon the mental vocabulary when new data about the human neural system become available.

Page 4: LECTURE FIVE

1. To learn something from neuroscience;
2. To seek some possibility of making the neurological story more universal (with, say, the help of AI);
3. To try to reconstruct the mental architecture out of the findings of neuroscience and AI.

Page 5: LECTURE FIVE

By definition, “Neurons are the basic signaling units of the nervous system of a living being, in which each neuron is a discrete cell whose several processes arise from its cell body”.

The biological neuron has four main regions to its structure. The cell body, or soma, has two kinds of offshoots: the dendrites (树突) and the axon (轴突), which ends in pre-synaptic terminals (突触前末端). The cell body is the heart of the cell; it contains the nucleus (细胞核) and maintains protein synthesis (蛋白质合成). A neuron has many dendrites, which look like a tree structure and receive signals from other neurons.

Page 6: LECTURE FIVE

A single neuron usually has one axon, which extends from a part of the cell body called the axon hillock (轴丘). The axon's main purpose is to conduct electrical signals generated at the axon hillock down its length. These signals are called action potentials (动作电位).

The other end of the axon may split into several branches, each ending in a pre-synaptic terminal. The electrical signals (action potentials) that neurons use to convey information in the brain are all identical. The brain determines which type of information is being received based on the path the signal travels.

A similar case: if I send the same message to different media outlets, the authority of each outlet will change, from the audience's perspective, the weight of what I said.

Page 7: LECTURE FIVE

When modeling an artificial functional neuron on the biological one, we must take into account three basic components. First, the synapses of the biological neuron are modeled as weights. Recall that the synapse of the biological neuron is what interconnects the neurons of the network and gives the strength of the connection. For an artificial neuron, the weight is a number that represents the synapse. A negative weight reflects an inhibitory connection, while positive values designate excitatory connections. The next component of the model represents the actual activity of the neuron cell: all inputs are summed together, each modified by its weight. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude (幅值) of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be -1 and 1.
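
A minimal sketch of these three components in Python (the function names, the choice of a logistic squashing function, and the example numbers are illustrative assumptions, not part of the lecture):

```python
import math

def sigmoid(v):
    # A squashing (activation) function: output lies between 0 and 1.
    return 1.0 / (1.0 + math.exp(-v))

def artificial_neuron(inputs, weights, activation=sigmoid):
    # Weights model the synapses: negative = inhibitory, positive = excitatory.
    # All inputs are summed together, each modified by its weight (a linear combination).
    net = sum(x * w for x, w in zip(inputs, weights))
    # The activation function controls the amplitude of the output.
    return activation(net)

# Example: two excitatory connections and one inhibitory connection.
print(artificial_neuron([1.0, 0.5, 1.0], [0.8, 0.4, -0.6]))
```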

Page 8: LECTURE FIVE

As mentioned previously, the activation function acts as a squashing function (压缩函数), such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by Φ(·).

Page 9: LECTURE FIVE

First, there is the Threshold Function which takes on a value of 0 if the summed input is less than a certain threshold value (v), and the value 1 if the summed input is greater than or equal to the threshold value.
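
Written out (a standard way to state this, with s for the summed input and v for the threshold value, following the lecture's wording):

$$
\Phi(s) = \begin{cases} 1, & s \ge v \\ 0, & s < v \end{cases}
$$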

Page 10: LECTURE FIVE

Secondly, there is the Piecewise-Linear function. This function again can take on the values of 0 or 1, but it can also take on values in between, depending on the amplification factor in a certain region of linear operation.
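
A common textbook form of this function (a sketch; the unit amplification factor and the linear region between -1/2 and 1/2 are standard conventions, not given in the lecture):

$$
\Phi(s) = \begin{cases} 1, & s \ge \tfrac{1}{2} \\ s + \tfrac{1}{2}, & -\tfrac{1}{2} < s < \tfrac{1}{2} \\ 0, & s \le -\tfrac{1}{2} \end{cases}
$$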

Page 11: LECTURE FIVE

Thirdly, there is the Sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of a sigmoid function is the hyperbolic tangent function (双曲正切函数).
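
Two standard examples, one for each range (the logistic form with slope parameter a is the usual textbook choice, not specified in the lecture):

$$
\Phi(s) = \frac{1}{1 + e^{-as}} \in (0,1), \qquad \Phi(s) = \tanh(s) = \frac{e^{s} - e^{-s}}{e^{s} + e^{-s}} \in (-1,1)
$$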

Page 12: LECTURE FIVE
Page 13: LECTURE FIVE

Within neural systems it is useful to distinguish three types of units: input units (indicated by an index i) which receive data from outside the neural network, output units (indicated by an index o) which send data out of the neural network, and hidden units (indicated by an index h) whose input and output signals remain within the neural network.
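
A small sketch of these three kinds of units in Python, using the lecture's index letters i, h, and o (the two-layer weight layout and all the numbers are illustrative assumptions, not an example from the lecture):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Input units (index i) receive data from outside the neural network.
x_i = [0.2, 0.9]

# Weights from input units to hidden units (index h), and from hidden to output units (index o).
w_ih = [[0.5, -0.3], [0.1, 0.8]]   # w_ih[h][i]
w_ho = [[0.7, -0.4]]               # w_ho[o][h]

# Hidden units: their input and output signals remain within the network.
y_h = [sigmoid(sum(w * x for w, x in zip(row, x_i))) for row in w_ih]

# Output units send data out of the neural network.
y_o = [sigmoid(sum(w * y for w, y in zip(row, y_h))) for row in w_ho]
print(y_o)
```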

Page 14: LECTURE FIVE

Each unit performs a relatively simple job: receive input from neighbours or external sources and use this to compute an output signal which is propagated to other units.

Apart from this processing, a second task is the adjustment of the weights.

The system is inherently parallel in the sense that many units can carry out their computations at the same time.

During operation, units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously; with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a time t, and usually only one unit will be able to do this at a time. In some cases the latter model has some advantages.
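
A sketch of the two updating schemes in Python (the helper new_activation and its weighted-sum rule are hypothetical, chosen only to illustrate the difference):

```python
import random

def new_activation(unit, activations, weights):
    # Hypothetical update rule: weighted sum of the current activations.
    return sum(weights[unit][j] * a for j, a in enumerate(activations))

def synchronous_update(activations, weights):
    # All units update their activation simultaneously, from the same snapshot.
    return [new_activation(u, activations, weights) for u in range(len(activations))]

def asynchronous_update(activations, weights):
    # Only one (randomly chosen) unit updates its activation at a time t.
    u = random.randrange(len(activations))
    activations[u] = new_activation(u, activations, weights)
    return activations
```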

Page 15: LECTURE FIVE

Semantic content is distributed in a huge network whose topological structure evolves when new inputs come in, rather than being stored in a fixed location in the brain.

To put it the other way around: your belief-token of something is not encoded by this neuron or that one, but by a huge network!

Page 16: LECTURE FIVE

What if the brain could be scanned and mathematically re-modeled? That would mean we might be able to download your thought and re-implement it in another brain!

Page 17: LECTURE FIVE

Avatars are Na'vi-human hybrids which are operated by genetically matched humans. So if the human operator is Jack, his Avatar will share the same mental states as Jack while being operated. Avatar-Jack looks like a mental duplicate of Jack.

Page 18: LECTURE FIVE

What is meaning now? Answer: Structured Activation Spaces as Conceptual Frameworks!

Page 19: LECTURE FIVE
Page 20: LECTURE FIVE
Page 21: LECTURE FIVE

Dr. Ernest Lepore, Acting Director of the Rutgers Center for Cognitive Science (RuCCS)

Page 22: LECTURE FIVE

Fodor has made many and varied criticisms of holism. He identifies the central problem with all the different notions of holism as the idea that the determining factor in semantic evaluation is the notion of an "epistemic bond". Briefly, P is an epistemic bond of Q if the meaning of P is considered by someone to be relevant for the determination of the meaning of Q. Meaning holism strongly depends on this notion. The identity of the content of a mental state, under holism, can only be determined by the totality of its epistemic bonds. And this makes the realism of mental states an impossibility:

"If people differ in an absolutely general way in their estimations of epistemic relevance, and if we follow the holism of meaning and individuate intentional states by way of the totality of their epistemic bonds, the consequence will be that two people (or, for that matter, two temporal sections of the same person) will never be in the same intentional state. Therefore, two people can never be subsumed under the same intentional generalizations. And, therefore, intentional generalization can never be successful. And, therefore again, there is no hope for an intentional psychology."

Page 23: LECTURE FIVE

http://plato.stanford.edu/entries/connectionism/

Neurophilosophy at Work, Paul Churchland, University of California, San Diego. Chapter 8: Neurosemantics.

I have sent the PDF of the entire book to the public e-mail box!

Page 24: LECTURE FIVE