Why Lexical Semantics?

July 28, 2011

The purpose of syntax is to determine why some sentences are grammatical sentences of a language and others are not. However, lexical semantics often has to step in where syntax cannot. For example, in the previous post, syntactic rules could not explain why some verbs allow only certain argument structures. And this makes some sense: it is clearly not on the basis of structure (syntax) alone that we exclude these sentences, because they are perfectly viable with other verbs. Computer science makes heavy use of this kind of selection. An integer plus operation specifies that it requires two integers, and will not, for example, add two rational non-integers. More broadly, in many programming languages, 2+yellow will not process (unless yellow has been separately defined as a numeric object).
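The analogy can be made concrete. Here is a minimal Python sketch (the verb/argument framing in the comments is my own gloss, not a linguistic formalism): the `+` operator, like a verb, selects for certain argument types, and an operand of the wrong type yields an ungrammatical "sentence."

```python
# Like a verb selecting for certain kinds of arguments, Python's +
# operator constrains what its operands may be: two numbers compose,
# but a number and a bare word do not.
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))  # well-formed: 5

try:
    2 + "yellow"  # ill-formed: the "arguments" violate the operator's selection
except TypeError as err:
    print("rejected:", err)

# As noted above, naming 'yellow' as a numeric object repairs the sentence:
yellow = 7
print(2 + yellow)  # now well-formed: 9
```

The parallel is loose, of course: type checkers enforce their constraints by explicit rule, whereas the verb restrictions above fall out of meaning.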

To go deeper down the rabbit hole, following Jackendoff, Beavers, Levin, Pinker, and others, I posit that human language is deeply grounded in a mental physics. For example, we can see that non-agentive verbs do not allow agentive adverbs (28), that language can tell when an object is affected in some way (29), and even to what degree objects are affected (30). In the first case, adverbs like ‘intentionally’ are fine with some verbs and not with others. In the second case, verbs that entail a change of state in their object (like kick) can be preceded by ‘what happened to the O is that’, whereas verbs that do not entail a change of state in their object (like run) cannot. Finally, in the third case, verbs which entail a change from state not-X to state X cannot take conative constructions (via the at particle), whereas verbs which simply entail a potentially repeatable change of state can. The first two examples yield the semantic notions of agency and patienthood respectively, while the third adds more nuance (John Beavers categorizes break as taking a totally affected object, and cut as taking a merely affected object).

(28a) I intentionally left my phone on during the movie.
(28b) I (*intentionally) noticed the fly in the corner of the room.
(29a) What happened to the puppy is that Sven kicked it.
(29b) *What happened to five miles is that I ran it.
(30a) I broke (*at) the bread.
(30b) I cut (at) the bread.

As well, another key notion in language is direct causation. For example, we can say that Jane broke a window if she threw a baseball at it, but not if she failed to catch a baseball thrown at a window. Similarly, we say that Jeff dimmed the lights when he turns a switch down, but not when he draws power away to another device (say, by turning on a microwave). Our linguistic notion of causation is grounded not in actual physics but in our mental physics, the same faculty people often use to determine guilt (the difference between murder and manslaughter, for example).

Now of course, one could argue that this is not a function of language but of cognition. However, these distinctions are made on the fly in language all the time, with very little error, and as early as age two, suggesting that this is a natural rather than a learned function. From this, we can begin to conclude that our syntax is constrained by semantics, a semantics rooted in a conceptual physics. By investigating the mysteries of semantics, we discover how the human mind sees the world.


The role of lexical semantics in argument selection, Part 1

July 13, 2011

As observed in Syntax, Part 3: Argument Structure, there are a number of potential complements for verbs, and not all seem to be available for every verb; in Syntax, Part 4: Thematic Roles, I presented the idea of thematic roles. So how are a verb's argument structure (what complements it can take) and the thematic roles it assigns determined? One relatively common approach is to state that argument structure is essentially arbitrary and learned on a word-by-word basis. After all, meaning does not seem to constrain argument structure very well, as there are words with very similar meanings yet different argument structures. For example, we can see that eat and cook can be either transitive or intransitive, whereas devour and microwave are both obligatorily transitive.

(25a) I ate (the meatballs)
(25b) I devoured the meatballs

(26a) I cooked (my dinner)
(26b) I microwaved my dinner

However, there is an increasingly popular theory of lexical information which says that the meaning of a word determines its argument structure. This seems to me intuitive and potentially more economical. First, it seems that some verbs simply cannot take certain arguments, based on their meanings. To show this, I will look at just a subset of all verbs: those where the subject does some action which causes a change of state in the object, such as kill. It is easy to imagine English also allowing an intransitive use of these verbs, creating another subset: verbs where the subject (potentially habitually) does some action which causes a change of state in objects, but not one specified by the sentence. In fact, English does allow this construction sometimes, as the following sentences show:

(27a) He kills for money.
(27b) Smoking kills.

At the other end of the spectrum, we have the subset of verbs which appear only in intransitive sentences, taking subjects but no possible objects, such as die. It is hard to imagine die being used transitively (unless we use die to mean ’cause to die’, like kill, which some languages do allow, but that is a separate issue). So it certainly seems that semantic constraints on argument structure exist.

Linguists such as Jackendoff, Beavers, Levin, and Pinker have all argued that verb meaning is decompositional, that is, that there are primitive meanings which, when combined, make up all of the meanings of verbs. What these primitives are, I will save for a later post, but causation (mentioned above in the ’cause to die’ example) is one of them, as is change of state. This point of view is attractive for the following reason: languages have tens of thousands of words, and if the human brain stored each of these as a primitive concept, that would be less economical than building them from a smaller set of basic concepts. As well, our dictionary definitions of words do not always seem adequate. For example, the dictionary defines paint as ‘to coat, cover, or decorate (something) with paint’. But, as Jackendoff notes in the introduction to Lexical and Conceptual Semantics, we would not want to consider dipping a paintbrush into a can of paint to be painting it. Our notions of words seem to be very sensitive to the physical manner of an action, even if our definitions are not.
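The decompositional idea can be sketched computationally. Below is a toy Python illustration, my own hypothetical encoding rather than any of these authors' actual formalisms: kill and die share a 'become dead' core, kill adds a CAUSE primitive, and a verb's available frames fall out of how many open participant slots its decomposition contains.

```python
# Toy decompositions: lists of (primitive, participants) pairs.
# 'x' marks a causer slot, 'y' an undergoer slot.
# This encoding is illustrative only, not an established formalism.
LEXICON = {
    "kill":  [("CAUSE", ["x"]), ("BECOME", ["y", "dead"])],
    "die":   [("BECOME", ["y", "dead"])],
    "break": [("CAUSE", ["x"]), ("BECOME", ["y", "broken"])],
}

def participants(verb):
    """Collect the open participant slots ('x', 'y') in a verb's decomposition."""
    slots = []
    for _, args in LEXICON[verb]:
        for a in args:
            if a in ("x", "y") and a not in slots:
                slots.append(a)
    return slots

# A verb with both a causer and an undergoer supports a transitive frame;
# one with only an undergoer, like die, is intransitive-only.
for verb in LEXICON:
    slots = participants(verb)
    frame = "transitive" if set(slots) >= {"x", "y"} else "intransitive only"
    print(verb, "->", slots, "->", frame)
```

On this sketch, kill and break come out transitive while die comes out intransitive-only, mirroring the contrast above; a transitive die would require adding a CAUSE primitive, which is exactly the 'cause to die' reading some languages allow.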

So if we consider the semantics of a verb to be decompositional, and to determine its argument structure, we are left with some important questions:

What are the primitives of verb meaning?
How do they determine argument structure?
If they do not, what does?
These are the topics that interest me; they were the basis of my thesis and will be the subject of the next few posts.

Note: I am still in the midst of getting into a potential job, so I may or may not be as regular a poster as I should like.