Research (Cem Bozsahin)

(a page for the light-hearted and forgiving; slightly more serious things, e.g. papers, tools and talks, are linked below)

topics: ( language | cogsci | computing ) | papers | tools | talks


Short summary:

I'm trying to understand the scientific and philosophical implications of three ideas. (And no, I am not a pancomputationalist or born-again computationalist, since you asked. I wrote this about that.)

Somewhat surprisingly, for some, all three ideas go back to one chap: Alan Turing. The first idea is Turing's thesis (not the Church-Turing thesis), which is essentially the beginning of cogsci. The second idea is the Turing Machine. The third idea was Turing's ACE, which is essentially an electronic universal TM (let's face it; the other `first computers' were engineering kludges).

Slightly longer:

I am a grammarian. I look at the morphology-phonology-lexicon-syntax-semantics relation(s) as a cognitive scientist and try to understand their computational mechanisms. It's easier to call all that just 'grammar'. Grammars are not only for language; plans, audio systems and visual systems (and buildings!) have grammars too. The theory of grammar looks at these problems as the indirect association of forms and meanings, with explicit arguments about the resource boundedness of their computation.

I study the linguistic, computational, cognitive and philosophical aspects of grammars, more or less in this order of involvement. I also look at how perception can give rise to knowledge in a haphazard and bumpy sort of way. Nowadays this is called `dynamics'. I prefer to think of it as clumsy computing of the discrete variety. (The clumsiness comes from complexity in nature, not from bungling agents.) Computationalism of the discrete kind makes a narrower claim over a wider range of cognitive problems than cognitivism, connectionism or dynamical systems. I also happen to believe that it is more testable.

Once we strictly or radically lexicalise a grammar, word learning and language acquisition converge onto the same problem, whatever that is. It is an indirect way of attempting to understand the nature of the problem. We are empirically well-grounded in this affair, because human languages are provably non-context-free, and we can strictly lexicalise the most restrictive superclass of context-free grammars we have found so far, the linear indexed grammars. The curious thing is that, once we do that, i.e. strictly lexicalise a descriptively adequate grammar, we end up with limited kinds of semantic dependencies, although syntax seems to be so, ehm, infinite. Understanding the limited nature of this problem forces 'infinity' to play second fiddle in linguistics.

Kindly note that I am not promoting finitism, a school of mathematics that uses some ideas of constructive mathematics (a finite number of operations). I am suggesting that explaining why an infinite space has a finite number of properties is more exciting. So I assume from the beginning that there can be potential infinity. (The potential/actual infinity distinction goes back to Aristotle, and comes back in linguistics to haunt us.) Saying that that's because we have a finite number of rules is not very exciting (not to me, at least), until we say something more about these rules. Once we understand that, we might say, "oh by the way, it's potentially infinite."

Let me exemplify. Take the four words of the set W = {I, you, think, like}. (Actually that's nine word forms, but let's not nitpick like a morphologist.) We can create an infinite language from W: I like you. I think I like you. You think I like you. I think you think I like you. You think I think you think I like you, etc., etc. We can say that two rules are at work here, one for transitives like `like' and one for complement-takers like `think'. Right. Then why don't we get: I like I think I like you? Easy: `like' is not a complement-taker like `think'. So the difference between the two rules explains the odd case. Now consider why `think' is capable of doing this while `like' is not: it takes a complement that can take its own arguments. If you cannot do that, you're stuck with words like `like'. Now, that's *one* explanation for two rules.
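
For the concretely minded, here is a minimal sketch of the W-experiment in Python (my own toy encoding, not anything from the literature): each verb is listed with the category of complement it takes, and the recursion falls out of `think' taking a whole clause.

    # A toy recogniser for the W-experiment. The encoding (a dict from
    # verbs to the category of their complement) is hypothetical.
    COMPLEMENT = {"like": "NP", "think": "S"}   # what each verb takes
    PRONOUNS = {"I", "you"}

    def is_sentence(words):
        """True iff words form Subject Verb Complement, recursively."""
        if len(words) < 3 or words[0] not in PRONOUNS:
            return False
        verb, rest = words[1], words[2:]
        if COMPLEMENT.get(verb) == "NP":        # transitive: one pronoun object
            return len(rest) == 1 and rest[0] in PRONOUNS
        if COMPLEMENT.get(verb) == "S":         # complement-taker: a clause
            return is_sentence(rest)            # the recursion lives here
        return False

    print(is_sentence("I think you think I like you".split()))  # True
    print(is_sentence("I like I think I like you".split()))     # False

The odd string fails not because of a third rule but because `like' is listed with an NP complement; the single lexical difference does all the work.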

Now assume you want to take on board another process that is commonly unbounded: relativisation. If you go the 'finite rules' way of explaining infinity, there is one more rule to explain, that of relativisation. If you go the finite-dependencies way, you can use the unique explanation above. That's one explanation for three rules. This is not a meaning-determines-form explanation; it is about the codetermination of semantics and constituency. And you can check the syntactic reflex of that in all languages. (What we have done is a mini practice of contrasting von Humboldtian (or generative) `infinity-in-finiteness' explanations, which presume infinity, with Schonfinkelian (or combinatory) explanations of infinity.)

Notice that in the W-experiment we get infinity too: any argument-taking argument can do this, and no argument without arguments can.

One of my greatest teachers (Aryeh Faltz) once told me that anything finite is inherently boring. It is parochial. Why would anyone write a grammar to understand a finite number of dependencies? Just list them all and be done with it. I believed in that dictum for a long time, but now I'm having second thoughts. It all depends on how we deal with finiteness. In this business, numbers can be everything, contra popular belief.

A finite but very large language, say one with 10^200 sentences, needs a search algorithm to see which structures are realised, even though, after that search, the meaning seems like mere retrieval, not composition. Search is needed for an infinite language too, minus the semantic retrieval bit, which one must compose. If we know how to compose meanings, we can do that for finite systems too, and use it as an argument to show that their structures are made up of simple and finitely many primitives. The more interesting point is that infinite languages appear to have finite dependencies in them. So it seems that things can be finite and interesting. After all, the universe appears to be finite, but we wouldn't take all the atoms in the universe as its explanation. Enter boundedness.


The Language bit

The interesting bit in linguistic theorising is that human languages exhibit limited semantic dependencies in their syntax. We would like to know why. A strong hypothesis in this respect is that languages differ only in their lexicons, and an invariant combinatorics gives semantics to order, and order alone, leading to limited constituency and dependency. Common dependencies need not be stipulated in grammars, only the language-specific ones. From this perspective, the infinity of languages (therefore recursion) is of secondary interest. I am beginning to think this is a stronger hypothesis than infinity (in the sense that it takes more burden of proof on its shoulders). I am in favour of testing strong hypotheses before we entertain the weaker ones. My high school best buddy told me to do that. I usually don't do what I'm told, but I make exceptions.

Lexicalising a grammar is crucial for this hypothesis. The underlying idea in fully lexicalising a grammar is the notion of "possible lexical category," as a model of what Edmund Husserl called "sensibly distinct representations in the mind." Many of us (radical lexicalists) believe categories need explanation rather than stipulation. NB: these kinds of categories are knife edges: one side is syntactic, the other semantic. Any lexicalised grammar must do justice to both, unless we start believing in one-edged knives. (The so-called one-edged knives, kard, culter, facon etc., are knives with one edge sharpened, since you asked.)

I try to work towards a theory of grammar. When we radically lexicalise a grammar, something weird happens to words. By definition they are exceptional, because they are all different, but they begin to bear combinatory categories that must be all over the grammar as a recycled resource. This resource and its recycling need a theory. Naturally, something 'idiosyncratic' does not need a theory, so the theory we need must be about words' possible use in syntactic contexts, i.e. it must be about constituency. Language is then the closure of the lexicon with respect to an invariant (and finite) combinatory system. (Having said that, I do believe language is a kludge; if I wanted perfection in nature, I'd study sharks.) Lexicon is language with a small el, l, and its combinatorial theory is language with a big el, L. I guess what I'm saying is that a theory of kludge is a kludge too; it's turtles all the way down. Schonfinkel called them combinators. They are nifty kludges to give semantics to order.
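
To make "semantics from order" concrete, here is a minimal sketch in Python (my own toy encoding, not this page's formalism): Schonfinkel's composition combinator B, with adjacency read as function application.

    # Schonfinkel's B combinator: B f g x = f (g x). The toy semantics
    # for a clause-taker and a transitive is hypothetical, for illustration.
    B = lambda f: lambda g: lambda x: f(g(x))

    think = lambda p: f"think({p})"     # clause-taking, like 'think'
    like  = lambda y: f"like({y})"      # NP-taking, like 'like'

    # Adjacency as application: 'think' applied to a finished clause...
    print(think(like("you")))           # think(like(you))
    # ...or 'think' composed with 'like' before any argument shows up:
    print(B(think)(like)("you"))        # think(like(you)): same dependency

The point of the sketch: whether the pieces combine by application or by composition, the semantic dependencies come out the same; the combinators do nothing but give meaning to the order of combination.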

We can conceive of the lexicon as something shaped by the invariant. The invariant can be studied semiotically (extensionally) and psychologically (intensionally). The same goes for the lexicon. But first we must account for Merrill Garrett's insight that "parsing is a reflex" (try turning it off if you're a skeptic). Part and parcel of the strong hypothesis is that this is due to having a combinatory system that is purely computational and oblivious to world matters. No movement, no ghost in the machine, no checking, no caching, no tampering, no tinkering. What you do with that computation in real life is, ehem, the real meaning of life. (What about the mind, you say. I don't know. These hefty global questions usually emanate from certain parts of the American East Coast. Ask them.) I am more interested in ecology-minded cogsci than in mind-centered cogsci.

More specifically, I am interested in how combinatory and substantive constraints shape surface syntax, and in the lexical reflex of that effect. I am also interested in interactions among the components--functionally speaking--of a language system: morphology, syntax, semantics, prosody, information structure, what have you. Recently, I have been studying grammatical relations, word order, directionality and categorisation in the lexicon, intonation in grammar, Bayesian sorcery for choosing categories, and morphosyntax, based on a theory of syntax-semantics called Combinatory Categorial Grammar (CCG).

On the applied side, I am interested in parsers for categorial grammars, and in modeling multidomain interactions, such as the generation of contextually appropriate discourse entities, and the syntax-morphology-phonology trio in parsing. Some public tools from applied research are available below, at our lab, and at the Edinburgh-born openCCG.


The Cogsci bit

I believe grammar can be one of the most productive and creative tools in model construction for cognitive science, once it's taken off the dusty shelves of high school and out of the linguist's ivory tower. It relates directly to computation as we know it. But first a bit of how we might have gotten to that point (all guesswork, of course).

The major difference between trees and animals is that animals move and trees don't. Everything that moves has a nervous system. (Not necessarily a central nervous system, but a nervous system.) It seems that the whole need for a central nervous system arose because things that move must coordinate their movement and actions. (If you are in doubt, try tying your shoelaces as you run.) Or it could just be a serendipitous accident that gave us the mother of all neurons in a single thunderstrike (or astrocytes, if you like), in which case I will close shop and worship Taranis.

The point of cognitive science is to make sense of how coordinated activity can take place with what little perceptive ability a species has, and how task-specific knowledge can give rise to something more than the token experience. That's what David Hume suggested---well, not in these words, and I'm a bit old-fashioned in this matter, so I leave the good words to their owners.

When things move, they must track other objects and coordinate their actions. (An inquiring mind might estimate the potential lifetime of a mouse that totally ignores a curious or hungry cat.) A simple hypothesis, a.k.a. the computationalist hypothesis, is that all kinds of coordinated action are more of the same stuff. What distinguishes the species is their resource endowment and life training (i.e. exposure to data). So maybe, just maybe, the most uniquely human cognitive trait, language, is more of the same stuff, with more resources and less training, rather than a gift to mankind or some kind of miracle. (Read: the only miracle I believe in is the national lottery.) A bit of evolutionary patience might give us wonders, if you pardon the pun.

Here's a consensus list of the top hundred works in cogsci: Top 100

Just to exploit the benefits of a non-representative democracy, a.k.a. the web, I publicise my own top ten+ list, for whatever it's worth (a): my cogsci top ten


The computation bit

There seems to be a lot of confusion about what computation can do and must do in cognition. Myself being one of the confused, I try to convince students (and usually fail) that just because you use a computer to model does not mean you are a computationalist. Just because you don't use a computer does not mean you are not a computationalist. (I tend to think Panini was a computationalist, and ACT-R is not. ACT-R is software engineering for cogsci.) Just because you think symbols are natural representations for the mind does not mean that we've got a Turing machine running in our heads. Some psychologists might think that's computationalism, but it isn't. (Trust me, not all of them think like that.)

Computationalism is a style of thinking which suggests that computational principles (discreteness, complexity, resource boundedness) carve the hypothesis space of higher-level cognitive processes. Easier solutions appear earlier than more difficult ones. But easy solutions may not be enough, because we face multiple constraints in a complex life. In other words, the problem space could be general, perhaps divided into classes of problems according to their demands, but the solution is task-specific. Computational ease, difficulty and comprehensiveness are measured by complexity in time and space, automata-theoretic demands, frequency (for biased search), completeness, and decidability. We've got theories about these things, which go under the names Complexity Theory, Algorithms, Automata Theory and Logic.

Two examples: 1) Suppose we have a string of n words. Suppose also that the problem is figuring out which parts of the string mean what. In Quine's sense, there are infinitely many possibilities. In Siskind's sense, the possibilities are reduced to likelihoods by parsimony, e.g. exclusivity of potential meanings, cross-situational inference, etc. We must also allow for the possibility that a sequence of words in the string could mean one thing, as in an idiom. Without constraints, there are O(2^n) possibilities to look at. (That is the size of the power set of n elements.) With Zettlemoyer and Collins's constraint that only contiguous substrings can have a pindownable meaning, the possibilities reduce to O(n^2). Any n larger than 4 can tell you why we must do something like this. Then we begin to worry about what this contiguity assumption brings to cognition, and whether it is attested elsewhere in the cognitive world. A cognitivist theory might start with assumptions like: nouns are learned first because they stand for objects and there are lots of objects around. Computationalists would say short, frequent, unambiguous, perhaps long but repetitive words are learned first, because we know that these properties make the problem computationally easier. You decide.
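
To see the size of the gap, here is a back-of-the-envelope check in Python (my sketch; the O(n^2) figure is just the number of contiguous substrings, n(n+1)/2):

    # Counting candidate meaning-bearing units in an n-word string:
    # arbitrary subsets of words vs. contiguous substrings only.
    def n_subsets(n):
        return 2 ** n                 # any subset could mean something: 2^n

    def n_contiguous(n):
        return n * (n + 1) // 2       # substrings from position i to j: n(n+1)/2

    for n in (4, 10, 20):
        print(n, n_subsets(n), n_contiguous(n))
    # 4 16 10
    # 10 1024 55
    # 20 1048576 210

At twenty words the unconstrained learner is staring at about a million candidate units; the contiguity-constrained one, at 210.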

2) Think of word learning again, this time as a language game, as in Luc Steels's Talking Heads. (OK, the real ones are more talkative, and much better in music.) A group of agents try to learn communication, which we measure by the success of building a common vocabulary in one-on-one speaker-hearer role play. Some cognitivist assumptions could be "avoid homonymy, avoid synonymy". To a computationalist, that puts the cart before the horse. We can see through simulations that homonymy and synonymy will cause unstable systems, or late convergence to a common vocabulary at best. They are effects, not causes. If people want to communicate (now that's a cause), and figure out that agreeing on the meanings of labels is a simple way to do that, we get limited homonymy and synonymy, and convergence. Do you know of a language in which homonymy and synonymy are completely absent? Why don't we avoid them completely while we're at it? Maybe that's not what we're doing, but that's what we are getting.
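
Here is a minimal naming-game sketch in Python (my assumptions, not Steels's actual setup): one object, random speaker-hearer pairs, hearers adopt unknown words, and a success makes both parties drop their competitors. Synonymy shrinks as an effect of the alignment strategy, not because anyone legislated against it.

    # A minimal naming game for a single object. All parameters hypothetical.
    import random

    N_AGENTS, N_GAMES = 20, 4000
    agents = [set() for _ in range(N_AGENTS)]    # each agent's words for the object

    for _ in range(N_GAMES):
        speaker, hearer = random.sample(range(N_AGENTS), 2)
        if not agents[speaker]:                  # invent a word if you have none
            agents[speaker].add(f"w{random.randrange(10**6)}")
        word = random.choice(sorted(agents[speaker]))
        if word in agents[hearer]:               # success: both align on the word
            agents[speaker] = {word}
            agents[hearer] = {word}
        else:                                    # failure: hearer adopts the word
            agents[hearer].add(word)

    print("distinct words left:", len(set().union(*agents)))  # tends towards 1

Runs typically end with a single surviving word: convergence, with transient synonymy along the way, which is exactly the effect-not-cause point above.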

I guess what I seem to be saying is that functionalism is a ghost that every self-respecting scientist must try to exorcise.

In sum, if I were a rich man, I would take AMT as my main inspiration; Allen Newell as my friendly psychologist (yes, I chose a computer guy as my psychic guide, so what?); Chomsky as the scientist, with care, minus the romanticism, much less the baroque nativism (as opposed to the more sensible variety), for what seems like an inner drive for explanations, but also for public relations (he insists on computation, although in a weird way, but is good for business); Montague and Curry as design engineers (nb. rigorous models); Hume as the boss (now that's a big wig, because it has to cover a large territory); Husserl as the heavyweight philosopher (he's got a big beard to prove it too, unlike the islander variety, or the wrong side of the Atlantic variety--a great one: Fodor, and it's great in this business if half of what you say makes sense); Wittgenstein as the master spoiler for that dormant cognitivist lurking in all of us, and for fun (my cats talk to me; I would like to return the favour); Darwin as my gentle naysayer without academic quibbles (the man almost gave up his life's work upon a single letter from New Zealand); Lamarck as my constant reminder (he was wrong, as we will all be one day, perhaps even now, but he was a bloody good scientist); and Haj Ross as the linguist (name one syntactic nut to crack that you cannot trace back to JRR or NC). I am guessing a person of these likenesses might materialise in the 23rd century.

To the (potential) students, friends, fiends or foes who ask me whether I am an internalist, externalist, realist, mentalist, instrumentalist, eliminativist, empiricist, rationalist, physicalist, materialist, idealist, functionalist and what not, I can say this much (sorry for the presumption; I've been asked this question too many times, and I don't think my answers have been cohesive or consistent): it seems that finding an interesting AND workable subset of these principles and ideas to everyone's satisfaction is an admirable task (assuming that finding an interesting OR workable subset is boring). But life's short. I wouldn't mind if the label comes later. I thought I was safe with the labels `realist', `naturalist' and `computationalist', but life's full of surprises. So I try to follow the pillow-book approach for some scientific fun. If a book from the 10th century can tell us so much about life just by listing people, birds, dogs, trees, rain, snow etc., I'd like some of that.

I blame Beckett for legitimising schoolboy humour in public places (and these guys). Actually I blame Sam for everything, because he's dead.


Some papers and books


All comments welcome. I used to maintain this part of the page regularly, but now most updates and new uploads are at Academia.edu.

Bozsahin, Cem (DRAFT 1.0). Grammars, Programs and the Chinese Room. (pdf, for comments)
(A longer version appeared in the 2006 International European Conference on Computing and Philosophy, ECAP; a much longer version appeared in the 2012 book below.)
Bozsahin, Cem (DRAFT v2.0).
Word Order, Word Order Flexibility and the Lexicon (was `Lexical Origins of Word Order and Word Order Flexibility.')
In preparation for a chapter in Theoretical Issues in Word Order, S. Ozsoy (ed.), Kluwer. For comments. (pdf)
Bozsahin, Cem (DRAFT 1.0).
Directionality and the Lexicon: Evidence From Gapping. For comments. (.pdf).

Bozsahin, Cem (2012). Combinatory Linguistics. Berlin/Boston: Mouton de Gruyter.

Uyumaz, Begum, Cem Bozsahin and Deniz Zeyrek (2014).
Turkish resources for visual word recognition. To appear in LREC 2014, Reykjavik. pdf
Cem Bozsahin (2014).
What's a computational constraint? In Annual meeting of Int Asn for Computing and Philosophy (IACAP), Thessaloniki, July 2014. pdf
Enes Yuncu, Huseyin Hacihabiboglu and Cem Bozsahin (2014).
Automatic Speech Emotion Recognition using Auditory Models with Binary Decision Tree and SVM. In 22nd Int Conf on Pattern Recognition (ICPR), Stockholm. pdf

Cem Bozsahin (2013). Natural Recursion doesn't work that way: automata in planning and syntax. (pdf| slides)
Philosophy and Theory of Artificial Intelligence, PT-AI 2013, Oxford. comments most welcome (final version to appear in a book by Springer)
Isin Demirsahin, Adnan Ozturel, Cem Bozsahin and Deniz Zeyrek (2013). Applicative Structures and Immediate Discourse in the Turkish Discourse Bank. (pdf|slides)
Language Annotation and Interoperability in Discourse Workshop, LAW VII & ID, ACL 51, Sofia, Bulgaria.
Kilic, Ozkan and Cem Bozsahin (2013). Selection of Linker Type in Emphatic Reduplication: Speaker's Intuition meets Corpus Statistics. (pdf)
35th Annual Meeting of the Cognitive Science Society (COGSCI 2013), Berlin, Germany. To appear.

Bozsahin, Cem (2012). Properties as anaphors. (pdf)
Workshop on Altaic Formal Linguistics (WAFL 8), Stuttgart, May. To appear in a MITWPL 2012 volume.
Ozturel, I. Adnan and C. Bozsahin (2012). Musical Agreement via Social Dynamics Can Self-Organize a Closed Community of Music: A Computational Model (pdf).
In 12th International Conference on Music Perception and Cognition (ICMPC) and 8th Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM). Thessaloniki, Greece, July.
Eryilmaz, Kerem and C. Bozsahin (2012). Lexical Redundancy, Naming Game and Self-constrained Synonymy (pdf).
In 34th Annual Meeting of the Cognitive Science Society, Sapporo, Japan, August.

Kilic, Ozkan and C. Bozsahin (2012). Semi-supervised morpheme segmentation without morphological analysis (pdf).
In LREC 2012, Istanbul, May.
Bozsahin, Cem (2011). Morphological preprocessing or parsing: where does semantics meet computation? (pdf)
(this is my take on Turkish CL/NLP, which was submitted as a position paper to a national meeting of Turkish NLP researchers at Gebze, October 2011.)
Bozsahin Cem (2011). Serialization and the verb in Turkish coordinate reduction.
Journal of Linguistic Research 2011/1 [Dilbilim Arastirmalari], pp. 51-67. [local version]

Berfin Aktas, Cem Bozsahin and Deniz Zeyrek (2010). Discourse relation configurations in Turkish and an annotation environment.
ACL 2010, LAW IV workshop. (pdf)
Ozge, Umut and Cem Bozsahin (2010). Intonation in the grammar of Turkish. Lingua 120:132-175. pdf
Zeyrek, Deniz, Umit Turan and Cem Bozsahin (2008). The role of annotation in understanding discourse.
ICTL 2008 Proceedings. (pdf)

Coltekin, Cagri and Bozsahin, Cem (2007). Syllables, Morphemes and Bayesian Computational Models of Acquiring a Word Grammar.
Proc. of 29th Annual Meeting of Cognitive Science Society, Nashville. (pdf)
Bozsahin, Cem, Asli Goksel (2007). Turkce'de Ezgi: Sozdizim ve Edimle Iliskisi [Intonation in Turkish: Its Relation to Syntax and Pragmatics]. 21. Dilbilim Kurultayi [21st National Linguistics Conference], Mersin. (doc)
Bozsahin, Cem (2004).
On the Turkish Controllee.
To appear in ICTL 2004 Proceedings (pdf). For comments.

Tutar, Sercan, Cem Bozsahin, and Halit Oguztuzun (2003).
TPD: An Educational Programming Language Based on Turkish Syntax.
The First Balkan Conference in Informatics, Thessaloniki, November. (pdf)
Bozsahin, Cem (2002).
The Combinatory Morphemic Lexicon. Computational Linguistics, 28(2):145-186. (pdf)
Yuksel, Ozgur, and Cem Bozsahin (2002)
Contextually Appropriate Reference Generation. Natural Language Engineering, 8(1):69-89. (pdf)

Bozsahin, Cem (2000).
Gapping and Word Order In Turkish. Proc. of 10th Int. Conf. on Turkish Linguistics, Istanbul, August. (pdf)
Bozsahin, Cem, and Deniz Zeyrek. (2000).
Dilbilgisi, bilisim ve bilissel bilim [Grammar, Computation and Cognitive Science]. Dilbilim Arastirmalari 2000 [Research in Linguistics, vol.11]. (pdf)
Sehitoglu, Onur, and Cem Bozsahin. (1999).
Lexical Rules and Lexical Organization. In Breadth and Depth of Semantic Lexicons, Evelyn Viegas (ed.), Kluwer. (pdf)

Bozsahin, Cem (1998).
Deriving the Predicate-Argument Structure for a Free Word Order Language. Proceedings of COLING-ACL'98, pp. 167-173, Montreal. (pdf)
Bozsahin, Cem (1997).
Combinatory Logic and Natural Language Parsing. Elektrik, Turkish J. of EE and CS, 5(3), 347-357. (pdf)
Bozsahin, Cem (1996).
Ulamsal dilbilgisi ve Turkce [Categorial Grammar and Turkish]. Dilbilim Arastirmalari 1996 [Research in Linguistics] 7:230-244. (pdf)

Bozsahin, Cem and Elvan Gocmen (1995).
A Categorial Framework for Composition in Multiple Linguistic Domains. Proc. of the 4th Int Conf on Cognitive Science of NLP, Dublin (CSNLP'95). (pdf)
Oflazer, Kemal, and Cem Bozsahin. (1994).
Turkce Dogal Dil Isleme [Turkish NLP]. Proc. of Turkish Informatics Society TBD'94. (ps)
Bozsahin, Cem, and Nicholas V. Findler. (1992).
Memory-based Hypothesis Formation. Cognitive Science, 16(4):431-454. (pdf)

Some public-domain tools

Some talks

  1. Can computation give rise to meaning? (Bogazici Philosophy 2014, and German-Turkish Science Year celebrations at ODTU in Ankara) (pdf) (same talk for ODTU DAS, 2015 pdf)
  2. METU Remembers Turing: October 16 Meeting Proceedings for Turing Centenary (mpg videos of all talks, mostly in Turkish except Prof. M. Akhmet's)
  3. Alan Turing: A century of computing. (Yeditepe Univ. Turing Conference, and METU Economics departmental colloq., May 2012). English|Turkish
  4. Some dependency nuts to crack computationally. (Istanbul Technical University, NLP Colloq. 9.5.2011) pdf
  5. Why is computationalism relevant to language acquisition? (Psycholinguistics and Cognitive Science Meeting, September 21-22, 2010, at METU Informatics)(pdf)
  6. (Same talk above, with slightly different material, given to cognitive scientists at Bogazici University, and to psychologists at Koc University, Jan-Feb. 2011)
  7. Darwingiller [Darwin & Co.] (Darwin Day at METU Cognitive Science, Nov. 21, 2009) (pdf)
  8. Schonfinkel'den Dilbilime Anlam ve Dizim (Semantics and Syntax from Schonfinkel to Linguistics). ODTU Felsefe Bol. 25. Yil 'Anlam' Kongresi [METU Philosophy Dept. 25th Anniversary 'Meaning' Congress]. 19.12.2008
  9. Meaning, form and adjacency: Schonfinkel's legacy (Ankara Linguistic Circle, 10.10.2008) (pdf)
  10. What do we parse when we parse? (Bogazici Univ. Linguistics Colloq., 3.4.2008) (pdf)
  11. Computationalism as a philosophy of science in cognitive science. (METU Philosophy and Cognition Workshop, 8.3.2008) ( pdf)
  12. Turkce'de Ezgi: Sozdizim ve Edimle Iliskisi (Turkish Intonation: Its relations to syntax and pragmatics--with Umut Ozge and Asli Goksel). Mersin XXI. Dilbilim Kurultayi [21st National Linguistics Conference], 10 May 2007
  13. Dil ne degildir? (What language is not) (Abant Izzet Baysal University, Psychology, 23.3.2007)
  14. Type-dependence of Language (Ankara Linguistic Circle, March 2007)
  15. Two notions of category in linguistics: Some (really naive) Algebra (METU Applied Mathematics Colloq. March 2007)
  16. Lexical Integrity and Lexical Organisation (CL and Phonology Colloquium, Saarbrücken, 19.1.2006)
  17. Language from the lexicon (Cognitive Science Colloquium, METU Ankara, 21.10.2005)
  18. Kultur oncesi Dil: Niye cocuklarin bazi 'yanlislari' baska bir dilde 'dogru' cikiyor, bazi 'yanlislari' da hic yapmiyorlar? [Pre-cultural Language: Why do some of children's 'errors' come out 'correct' in another language, while other 'errors' they never make at all?] (Gercek Seminerleri [Gercek Seminars], Ankara 8.12.2004) (duyuru | sunum [announcement | slides])
  19. Zihinsel Sozlukte Dilbilgisi [Grammar in the Mental Lexicon] (Hacettepe Linguistics, Ankara, 5.11.2004) (ps)
  20. What's in a Lexicon ? (METU CS Colloq., March 2004) (pdf)
  21. Control and Grammatical Relations (Paris, October 2003) (pdf, some material outdated, see Ed'04)
  22. Lexical Origins of Word Order and Word Order Flexibility (Edinburgh Linguistic Circle, February 2003/Antwerp Typology Sem. March 2003) (pdf)
  23. Inflectional Morphology as Syntax (Edinburgh ICCS/HCRC, October 2002) (pdf, similar in material to METU CS'04)
  24. (Yapay) Zeka ve Dil [(Artificial) Intelligence and Language] (ODTU, 9.11.2001) (doc)
  25. Berim [computing] (pdf) Transcript of my talk at the 1998 Mersin Linguistics Conference [Dilbilim Kurultayi], with one or two small corrections from me. Thanks to Mustafa Aksan for the transcription, and to our teacher Tahsin Yucel for the moral support. (The word is Onur Sehitoglu's. It was first used in 1996 in the METU Computer Engineering tea room. Halit Oguztuzun and I are its matchmakers.)
a. OK, I confess: I was asked by a student.
Cognitive Science Department
Informatics Institute
Middle East Technical University
06800 Ankara, Turkey