Busy

October 22, 2011

My job search has turned into a job, leaving me with even less time than I had expected. While I do intend to get back on a regular posting schedule soon, as this is my first real (40-hour/week) job, I have yet to learn how to manage my time. Once I get into the swing of things, I hope to start writing (and posting) more often. However, I imagine that will not happen until mid-December, unless my projects get a lot lighter. On the plus side, my job is very involved with language, so I’m never bored.


Language and Steampunk

August 3, 2011

I have not posted on Ercethi or the novel in a good amount of time, for what I think is a good reason. Some features of Ercethi, namely those concerning affectedness, were waiting in drafts to be published once edited. Unfortunately, my undergraduate thesis was not only successful, it was too successful: while my Ercethi verbs are consistent with the universal rules of grammar, they do not cover the full range of verb types the theory predicts. As such, I will be adding a third class of verbs. When this is done, I will finally write about what my thesis said, as well as how I have chosen to apply it to Ercethi.
As well, I am starting to get back into the swing of writing The Edge of the World. Unfortunately, a lot of what I have been doing is rewriting. Much of what I have feels both rushed and derivative, so my next large update will up the word count significantly without actually adding any more chapters. This should hopefully happen by the end of September. I would like to have it up before then, but I have grad school applications to work on.


Event Structure

August 3, 2011

What is an event? An event is a conceptual happening corresponding to the meaning of a sentence. For example, in the sentence ‘I ran five miles’, the event begins when I begin running, and ends when I have traversed five miles from my starting point. Events can have properties, much as nouns can have adjectives. So you can have a tall [giraffe], a giraffe that is taller than the average giraffe, and you can have a quick [me-running of five miles] (I ran five miles quickly), a running of five miles that was quick for me. Some of these properties, like the ones just mentioned, are not syntactically relevant. That is, ‘tall giraffe’ can be used anywhere ‘giraffe’ can be, and ‘die on Thursday’ or ‘die quickly’ can be used anywhere ‘die’ can, albeit occasionally redundantly. This is a natural consequence of our recursive grammar. On the other hand, as we have observed, there are some such properties which are restricted in their application. Consider the following:

a plane
planes

‘A plane’ is singular, whereas ‘planes’ is plural. First, as we can see in the following sentences, ‘a plane’ takes the singular auxiliary verb ‘was’, whereas ‘planes’ takes the plural auxiliary verb ‘were’:

(31a) The plane was/*were flying.
(31b) The planes *was/were flying.

Moreover, conceptually, ‘a dog’ refers to one entity*, whereas ‘dogs’ refers to multiple entities, so it is clear that one should be singular and the other plural. This has interesting implications. Plural objects can occupy a number of different locations (being made up of multiple things), whereas singular objects (without being broken down) cannot.

(32a) *The plane was scattered across the world.
(32b) Planes were scattered across the world.

Finally, durative modifiers like ‘all night’ or ‘for five minutes’ can occur with verbs that imply a change of state only if the object is plural, not singular.

(33a) *The saboteur broke the plane all night.
(33b) The saboteur broke planes all night.

However, there are some types of entities that, while appearing to denote multiple entities, are syntactically singular, yet behave inconsistently on the semantic tests above.

(31c) The fleet (of planes) was/*were flying.
(32c) The fleet of planes was scattered across the world.
(33c) *The saboteur broke the fleet of planes all night.

(31d) The water was/*were wet.
(32d) Water was scattered across the floor.
(33d) The thirsty man drank water all night.

This leaves us with four categories of nouns. As good linguists, whenever we see four categories, a thought that must come to mind is that these four categories could be decomposed into two independent binary properties. In this case, I follow Jackendoff (for now) in calling these qualities boundedness and internal structure. Something is bounded iff dividing it up does not yield something of the same type, and something has internal structure iff it has meaningful subparts. I posit the same four categories that he does:

Individual – [+b -i] – A dog
Group – [+b +i] – A pack of dogs
Substance – [-b -i] – Dog (as in there was dog all over the road)
Aggregate – [-b +i] – Dogs

As we can see, most nouns can be made into any of the four types, even where it seems semantically strange. Notably, these noun properties also apply to events (this makes some sense, as we can treat events like nouns, such as with the word ‘earthquake’, or any gerund).
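To make the feature system concrete, here is a small sketch in Python (my own illustration; the class and method names are hypothetical, and this is not Jackendoff’s formalism) deriving the three diagnostics in (31)–(33) from the two binary features:

from dataclasses import dataclass

@dataclass
class NounType:
    bounded: bool   # +b: dividing it does not yield something of the same type
    internal: bool  # +i: it has meaningful subparts

    def category(self):
        return {(True, False): "Individual", (True, True): "Group",
                (False, False): "Substance", (False, True): "Aggregate"
                }[(self.bounded, self.internal)]

    def takes_singular_auxiliary(self):  # diagnostic (31): only Aggregates take 'were'
        return self.bounded or not self.internal

    def can_be_scattered(self):  # diagnostic (32): Individuals have no subparts to scatter
        return self.internal or not self.bounded

    def allows_durative(self):  # diagnostic (33): 'broke X all night' wants unboundedness
        return not self.bounded

fleet = NounType(bounded=True, internal=True)
print(fleet.category())                  # Group
print(fleet.takes_singular_auxiliary())  # True, cf. (31c)
print(fleet.allows_durative())           # False, cf. (33c)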

However, instead of being bounded and internally structured in space, events are so in time: the subject matter of my next post.

*Remember, these entities’ classifications are determined by our mental physics, not real-world physics. A dog is made up of many things, but the human brain sees it as one thing.


Why Lexical Semantics?

July 28, 2011

The purpose of syntax is to determine why some sentences are grammatical sentences of a language, and why others are not. However, lexical semantics often has to step in where syntax cannot. For example, in the previous post, syntactic rules could not explain why some verbs only allowed certain argument structures. And this makes some sense. After all, it is clearly not on the basis of structure (syntax) alone that we exclude these sentences, because they are perfectly viable with other verbs. An analogous division of labor appears in computer science. An integer plus operation specifies that it requires two integers, and will not, for example, add two rational non-integers. More broadly, in many programming languages, 2+yellow will not evaluate (unless yellow has been defined separately as a numeric object).
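As a quick illustration (a Python sketch of my own, with ‘yellow’ standing in as a non-numeric value), the same surface structure succeeds or fails depending on what its pieces mean:

print(2 + 3)  # fine: the frame 'a + b' with two integers

try:
    2 + "yellow"  # same frame, semantically incompatible pieces
except TypeError as err:
    print(err)  # unsupported operand type(s) for +: 'int' and 'str'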

To go deeper down the rabbit hole, following Jackendoff, Beavers, Levin, Pinker, and others, I posit that human language is deeply grounded in a mental physics. For example, we can see that non-agentive verbs do not allow agentive adverbs (28), that language can tell when an object is affected in some way (29), and even to what degree objects are affected (30). In the first case, adverbs like ‘intentionally’ are fine with some verbs, and not with others. In the second case, verbs that entail a change of state in their object (like kick) can be preceded by ‘what happened to the O is that’, whereas verbs that do not entail a change of state in their object (like run) cannot. Finally, in the third case, verbs which entail a change from state not-X to state X cannot take conative constructions (via the at particle), whereas verbs which simply entail a potentially repeatable change of state can. The first two examples yield the semantic notions of agency and patienthood respectively, while the third adds more nuance (John Beavers categorizes break as taking a totally affected object, and cut as taking a merely affected object).

(28a) I intentionally left my phone on during the movie.
(28b) I (*intentionally) noticed the fly in the corner of the room.
(29a) What happened to the puppy is that Sven kicked it.
(29b) *What happened to five miles is that I ran it.
(30a) I broke (*at) the bread.
(30b) I cut (at) the bread.
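Here is a toy sketch in Python (the feature encoding is entirely my own, chosen to match the judgments in (28)–(30), not Beavers’ actual formalization) of how these diagnostics could be read off semantic features stored in a verb’s lexical entry:

VERBS = {
    # verb: agentive? entails a change of state in its object?
    #       is that object *totally* affected?
    "notice": dict(agentive=False, change_of_state=False, total=False),
    "run":    dict(agentive=True,  change_of_state=False, total=False),
    "kick":   dict(agentive=True,  change_of_state=True,  total=False),
    "cut":    dict(agentive=True,  change_of_state=True,  total=False),
    "break":  dict(agentive=True,  change_of_state=True,  total=True),
}

def allows_intentionally(verb):  # diagnostic (28): agentive adverbs need agents
    return VERBS[verb]["agentive"]

def allows_what_happened_to(verb):  # diagnostic (29): needs an affected object
    return VERBS[verb]["change_of_state"]

def allows_conative_at(verb):  # diagnostic (30): 'V at O' resists total affectedness
    return VERBS[verb]["change_of_state"] and not VERBS[verb]["total"]

print(allows_intentionally("notice"))  # False, cf. (28b)
print(allows_what_happened_to("run"))  # False, cf. (29b)
print(allows_conative_at("break"))     # False, cf. (30a)
print(allows_conative_at("cut"))       # True, cf. (30b)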

As well, another key notion in language is direct causation. For example, we can say that Jane broke a window if she threw a baseball at it, but not if she failed to catch a baseball thrown at a window. Moreover, we say Jeff dimmed the lights if he turns a switch down, but not if he draws power away to another device (say, by turning a microwave on). Our linguistic notion of causation is not one fully grounded in actual physics, but in our mental physics, the same faculty people often use to determine guilt (the difference between murder and manslaughter, for example).

Now of course, one could argue that this is not a function of language, but rather a function of cognition. However, these distinctions are made on the fly in language all the time, with very little error, and as early as age two, suggesting that they are natural, not developed, functions. From this, we can begin to conclude that our syntax is constrained by semantics, a semantics rooted in a conceptual physics. By investigating the mysteries of semantics, we discover how the human mind sees the world.


The role of lexical semantics in argument selection, Part 1

July 13, 2011

As observed in Syntax, Part 3: Argument Structure, there are a number of potential complements for verbs, and not all seem to be available for all verbs; in Syntax, Part 4: Thematic Roles, I presented the idea of thematic roles. So how are a verb’s argument structure (what complements it can take) and the thematic roles it has to assign determined? One relatively common approach is to state that argument structure is essentially arbitrary, and learned on a word-by-word basis. After all, it does not seem that meaning constrains argument structure very well, as there are words with very similar meanings, yet different argument structures. For example, we can see that eat and cook can be either transitive or intransitive, whereas devour and microwave are both obligatorily transitive.

(25a) I ate (the meatballs)
(25b) I devoured the meatballs

(26a) I cooked (my dinner)
(26b) I microwaved my dinner

However, there is an increasingly popular theory of lexical information that says that the meaning of a word determines its argument structure. This seems to me intuitive and potentially more economical. First, it seems that some verbs simply cannot take certain arguments, given their meanings. To show this, I will look at just a subset of all verbs: those verbs where the subject does some action which causes a change of state in the object, such as kill. It is very easy to imagine that English could also allow an intransitive use of these verbs, creating another subset: verbs where the subject (potentially habitually) does some action which causes a change of state in objects, but not one specified by the sentence. In fact, English does allow this construction sometimes, as shown in the following sentences:

(27a) He kills for money.
(27b) Smoking kills.

At the other end of the spectrum, we have the subset of verbs that appear only in intransitive sentences, taking a subject but no possible object, such as die. It is hard to imagine die being used transitively (unless we use die to mean ’cause to die’, like kill, which some languages do allow, but that’s a separate issue). So it certainly seems that certain semantic constraints exist on argument structure.

Moreover, linguists such as Jackendoff, Beavers, Levin, and Pinker have all argued that verb meaning is decompositional, that is, that there are primitive meanings which, when combined, make up all of the meanings of verbs. What these primitives are, I will save for a later post, but causation (mentioned above in the ’cause to die’ example) is one of them, as is change of state. This point of view is attractive for the following reason: languages have tens of thousands of words. If the human brain simply stored each of these as a primitive concept, that would be less economical than composing them from a smaller set of basic concepts. As well, our provided definitions of words do not always seem adequate. For example, the dictionary provides as a definition of paint ‘to coat, cover, or decorate (something) with paint’. But, as Jackendoff notes in the introduction to Lexical and Conceptual Semantics, we would not want to consider dipping a paintbrush into a can of paint to be painting it. Our notions of words seem to be very sensitive to the physical manner of an action, even if our definitions are not.
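To give a flavor of what a decomposition might look like, here is a minimal sketch in Python (my own toy, not any of these authors’ systems; the primitives CAUSE and BECOME are standard in this literature, but the argument-counting rule is deliberately simplistic):

from dataclasses import dataclass

@dataclass
class Become:  # a change of state: not-X -> X
    state: str

@dataclass
class Cause:  # direct causation of some effect
    effect: object

DIE = Become("dead")  # die  = BECOME(dead)
KILL = Cause(DIE)     # kill = CAUSE(BECOME(dead))

def argument_slots(meaning):
    # toy rule: the innermost change of state needs one argument (the thing
    # that changes); every CAUSE layer adds one more (the causer)
    if isinstance(meaning, Cause):
        return 1 + argument_slots(meaning.effect)
    return 1

print(argument_slots(DIE))   # 1 -> intransitive: 'The plant died.'
print(argument_slots(KILL))  # 2 -> transitive: 'The frost killed the plant.'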

So if we take the semantics of a verb to be decompositional, and to be what determines its argument structure, we are left with some important questions:

What are the primitives of verb meaning?
How do they determine argument structure?
If they do not, what does?
These are the topics that interest me and were the basis for my thesis, and will be the subject of the next few posts.

Note: I am still in the midst of getting into a potential job, so I may or may not be as regular a poster as I should like.


Syntax, pt. 7: Recap

July 5, 2011

Since it’s been a while, I thought I would recap, in part simply to regain my bearings.

First, I spelled out the basic principles of the syntactic enterprise: A syntax should be as general as possible, as economical as possible, and psychologically plausible:

a system which simply lists the grammatical sentences of English is not general, and will not handle new sentences.
a system which has more rules is less scientific than one with fewer.
a system which posits psychic understanding of language via intent would not be psychologically plausible.

Finally, we posit this inherent grammar because certain languages seem not to be possible. And while the evidence for these sorts of claims was once based on the lack of languages with these qualities, a recent study at Johns Hopkins suggests that hypothetical languages that violate so-called language universals are actually significantly harder for people to learn. So at least at some level, it seems that the idea of an inherent grammar has some traction.

As for the syntax of English, we posited that sentences can be divided (rather intuitively) into constituents. For example, in the following sentence, we would want to say that ‘his’ and ‘teeth’ are connected in some way that ‘teeth’ and ‘in’ are not, despite the latter pair’s adjacency:

(11) The Duke of Earl will brush his teeth in the airplane.

We posit this for three reasons. First, as above, we feel that some constituents are more closely connected. Second, we can move constituents together in a way we cannot move non-constituents. For example, (11c) is grammatical, albeit stilted, whereas (11d) is not a sentence. Third, constituents can be replaced by wh-phrases in ways non-constituents cannot. For the full explanation, see Syntax, pt. 1: Constituents.

From there, we concluded that (most) English sentences have some necessary constituents, namely a subject and an attached verb phrase (which may or may not have verb arguments). I stated earlier that I believed that the argument structure of verbs could be derived from their meanings, at least in the case of ambitransitive verbs (verbs that can be either transitive or intransitive, such as eat or break):
I ate lutfisk. -> I ate
I broke the microwave. -> The microwave broke

I will reserve this explanation for a later post, as I cannot do it justice as a footnote.

In the remaining two posts, I showed that any account of language must be recursive. So for my next post, I will assume the following phrase structure rules:

S -> NP VP
NP -> Det N’
N’ -> AP N’*
N’ -> N PP**
AP -> Adv A’
A’ -> A PP
PP -> P NP
VP -> Adv V’
V’ -> V’ PP*
V’ -> V PP/NP**

*The phrase rules marked with ‘*’ are Adjunct phrase rules; the modifiers added at this level adjoin to the phrase, but are not part of its inherent structure.
**The phrase rules marked with ‘**’ are Complement phrase rules; the modifiers added at this level are the complements of the head, and are part of its inherent structure.
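To underline that recursion, here is a toy random-sentence generator in Python (my own sketch; the grammar is a simplified, adjunct-light version of the rules above, and the little lexicon is invented):

import random

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N'"]],
    "N'": [["N"], ["AP", "N'"], ["N", "PP"]],  # recursion: N' -> AP N'
    "AP": [["A"]],
    "PP": [["P", "NP"]],
    "VP": [["V'"]],
    "V'": [["V"], ["V", "NP"], ["V'", "PP"]],  # recursion: V' -> V' PP
}

LEXICON = {
    "Det": ["the", "a"],
    "N":   ["duke", "airplane", "giraffe"],
    "A":   ["tall", "quick"],
    "P":   ["in", "on"],
    "V":   ["flew", "brushed", "ran"],
}

def expand(category, depth=0):
    # words are the base case; categories expand recursively
    if category in LEXICON:
        return [random.choice(LEXICON[category])]
    options = RULES[category]
    if depth > 4:  # past a small depth, force the shortest rule so expansion terminates
        options = [min(options, key=len)]
    return [w for part in random.choice(options) for w in expand(part, depth + 1)]

print(" ".join(expand("S")))  # e.g. 'the tall giraffe brushed a duke in the airplane'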


Return, pt. 2

June 28, 2011

Even a casual reader will have noticed that it’s been a while since I’ve written anything here. The past year was my senior year at Reed College, and as of May 19th (or 20th, or 21st, or some time around then…I’m sure it says on my diploma, but that isn’t the important part), I am a Reed College graduate! For those of you who are familiar with colleges and standardized testing, Reed College : academic rigor :: UCSB : partying. For those of you who aren’t familiar with colleges, despite being ridiculously small (student body of 1400), Reed College has produced the second-highest number of Rhodes scholars. Moreover, Reed College is third in the percentage of its graduates who go on to get Ph.D.s (and I hope to be one of them). Unlike most colleges, Reed College has experienced almost no grade inflation, with an average GPA of 3.05, despite an average high school GPA of 4.034 for its entering students. Finally, in emulation of most graduate schools, Reed College requires the completion of a thesis. It is this thesis that has taken up most of my time since my last post, as the rest of the summer was spent preparing and choosing a topic. So as an explanation of my long absence, I would point you to my thesis. Unfortunately, an important point from that body of work is the basis for a paper I hope to submit to a Linguistics conference, and as such, it will stay close to my chest for now. However, the conclusions of my thesis are most relevant to my constructed language (and ensure it follows the rules for human language), so you will see tastes of it in upcoming posts.

Then, having graduated, I intended to begin posting again. But after so much work, I found it easier to turn off my brain when I wasn’t searching for jobs. As such, I have made a (rather unsuccessful) attempt at semi-pro gaming in both Starcraft 2 and League of Legends, although my real intent was to spend some time relaxing. Now that that momentary diversion has passed, I plan to return to writing next week on the same schedule I had previously established. As well, I have already written reviews of some fantasy/sci-fi authors, but I had forgotten my WordPress password; those will be posted to the blog, backdated, as soon as I get around to editing them. I hope you will continue to read as I construct a language, tell a story, and delve into the secrets of language. As well, there will be no break next year: due to some issues with my ID, I will be applying to grad schools for Fall 2012, not 2011.